Many believe that agentic skills are exclusively for Claude Code. This is not true.
Each skill below is written as a Markdown (.md) file, the standard format for Claude skills. You can still copy these documents and paste them into ChatGPT or Gemini whenever you need the skill.
I've grouped the 20 skills into 4 categories to boost your productivity.
✍️ Writing & Content Skills
SCQA Writing Framework 🟢
SCQA is a framework for structuring communication, especially for writing or presentations. It stands for:
- S = Situation → Set the context. Explain the current state or background so the audience understands the starting point.
- C = Complication → Introduce a problem, tension, or challenge that disrupts the situation. This is what makes the story or argument interesting.
- Q = Question → Pose the key question that naturally arises from the complication. What does the audience want answered?
- A = Answer → Provide the solution, insight, or recommendation. This resolves the question logically.
Think of it as storytelling with logic. It's widely used in consulting, business writing, and content creation because it makes complex ideas easy to follow.
---
name: scqa-writing-framework
description: Structures any piece of writing — LinkedIn posts, exec memos, consulting decks, tweet threads, blog intros — using McKinsey's Situation-Complication-Question-Answer framework. Use whenever the user wants to "make this more persuasive", "tighten the argument", "structure this explanation", or shares a rambling draft that lacks a clear arc, even if they don't mention SCQA by name.
---
# SCQA Writing Framework
## When to use
- User pastes an unstructured paragraph and asks for feedback, restructuring, or "make this better"
- User is drafting an announcement, memo, investor update, or post intro
- User struggles with a "so what?" problem — writing states facts without tension
- Anyone asks for a clearer version of a rambling explanation, even without naming SCQA
## Why this works
Readers need a contract in the first three sentences: *here's where we are → here's what broke → here's the question you're now asking → here's my answer*. Without the Complication, there's no reason to keep reading. Without the Question, the Answer feels like a non-sequitur. SCQA enforces that contract.
## The four moves
**Situation** — the status quo the reader already accepts. One sentence. If the reader would nod, it's right.
**Complication** — what changed, broke, or is at stake. Tension enters here. If the reader thinks "huh, really?" — it's right.
**Question** — the question the Complication forces. Don't skip this; implicit questions confuse readers.
**Answer** — your thesis, delivered with specifics.
## Example
Input: "We need to rethink our hiring pipeline. It's been broken for a while. I think we should automate screening."
Output:
> **S** — We've been running the same hiring pipeline for three years: job post → recruiter screen → hiring manager loop → offer.
> **C** — But time-to-hire has doubled since Q1, and 40% of offers are turned down at the final stage.
> **Q** — Where's the breakdown actually happening?
> **A** — The data shows it's the recruiter screen — 60% of the dropoff is there. Automating that stage with structured async interviews cuts time-to-hire by ~11 days without losing quality signal.
## Failure modes
- **Missing Complication** → writing sounds like a Wikipedia entry
- **Weak Question** → Answer feels like it came from nowhere
- **Answer leaked into Situation** → no tension, reader bounces
- **All four moves in one sentence** → logic collapses into a summary
## Output shape
Always label the four sections (S/C/Q/A) in draft form. Strip labels only when the user asks for final prose. Keep each move to 1–3 sentences — if any section balloons, two moves got compressed into one.
Content Repurposing Engine 🟢
Most long-form writing dies the moment it's published. One post, one surface, one moment of attention, then silence. A repurposing engine's job is to keep the core argument alive across channels, each with its own compression law — a LinkedIn post is not a shortened blog, and a carousel is not a thread laid sideways.
---
name: content-repurposing-engine
description: Converts a long-form piece (blog post, essay, case study, memo, transcript) into channel-native variants — LinkedIn posts, Twitter/X threads, short-video scripts, newsletter intros, Instagram/LinkedIn carousels, and executive summaries. Use whenever the user says "turn this into a thread", "make a LinkedIn post from this", "adapt for carousel", "shorter version", "pull a newsletter intro", or shares a long piece and asks how to get more mileage from it — even if they don't say "repurpose" by name.
---
# Content Repurposing Engine
## When to use
- User shares a long article/essay and asks for a shorter format
- User says "turn this into a LinkedIn post / thread / carousel / video script"
- User wants EN → HU (or any cross-language) adaptation, not just translation
- User is planning a multi-channel launch and needs 3–5 derivative assets from one source
- User pastes a transcript or voice-note draft and wants a publishable version
## Why this works
Repurposing fails when the model treats every channel as "the same message, shorter". It works when each channel has a named compression law — what survives, what gets cut, and what new element the format demands. A blog tolerates buildup. A LinkedIn post needs a claim in the first line. A thread needs a payoff per tweet. A carousel needs one idea per slide, visually legible. Knowing the law lets the skill cut with confidence instead of shrinking uniformly.
## The source pass (always first)
Before any format conversion, extract and write down:
- **Thesis** — one sentence. What is the piece actually arguing?
- **Proof** — 2–4 concrete specifics (numbers, anecdotes, named examples) that earn the thesis
- **Arc** — the move the piece makes (problem → reframe → answer, or observation → pattern → prediction, etc.)
Every output format pulls from these three. If any is missing, flag it — you can't repurpose what isn't there.
## The formats
**LinkedIn post (150–300 words)** — open with the thesis reframed as a provocation, not a preamble. First line earns the click-to-expand. One specific from Proof. End on a lift, not a CTA.
**Twitter/X thread (5–9 tweets)** — each tweet is a self-contained payoff. Tweet 1 = the claim. Tweets 2–N = one Proof each, in the order the Arc demands. Last tweet = the reframe, not a summary.
**Short-video script (30–60s)** — hook in the first 2 seconds (thesis as a question or a contrarian line). Body is 2 Proof points max. Close on the lift. Write for spoken cadence, not written grammar.
**Newsletter intro (100–200 words)** — more intimate than LinkedIn. Opens in the reader's world, not yours. Gets to the thesis by the third sentence. Links the full piece at the end.
**Carousel (6–9 slides)** — one idea per slide, readable at a glance. Slide 1 = hook (the thesis). Slides 2 to N-1 = one Proof or one step of the Arc each. Final slide = the reframe, plus a minimal CTA. Copy ≤ 15 words per slide.
**Executive summary (3–5 bullets)** — strip narrative. State the thesis, the evidence, the implication. No hook, no lift. For readers who just want the load-bearing claim.
## Example
Input: A 1,500-word blog post arguing that AI's real advantage is not prompt quality but persistent context — "a unified AI memory system".
LinkedIn output:
> Most AI workflows fail in the same place: continuity.
>
> A prompt goes in, an answer comes out, and the intelligence created in that moment disappears with the session.
>
> That's why I've become less interested in individual AI tools — and more interested in the design of a unified memory system. Files as durable truth. Agents as operators. Synthesis for direction. Execution for proof.
>
> The advantage won't belong to people with the cleverest prompts. It will belong to people who design environments where thought doesn't disappear.
Thread output (abridged):
> 1/ Most AI workflows fail at the same point. It's not model quality. It's memory.
> 2/ Prompt in, answer out, context gone. The useful work stays trapped in a chat window.
> 3/ The pattern I keep returning to: files for truth, agents for work, synthesis for direction, execution for proof.
> ...
## Failure modes
- **Generic shrink** → the output is just a shorter blog post with the same structure; no format-native choices made
- **Lost thesis** → the piece gets compressed below the load-bearing claim; the output reads like commentary without a spine
- **Wrong voice per channel** → LinkedIn copy written in thread-tweet rhythm, or vice versa
- **Proof dropped** → all abstraction, no specifics; the repurposed version sounds like a horoscope
- **Hook recycled from original headline** → the original intro only worked in its original context; new channels need a new first line
## Output shape
Always start with the Thesis/Proof/Arc extraction block (labeled), then produce each requested format in its own clearly labeled section. If the user only asks for one format, still include the extraction block at the top — it's the audit trail and the user can ask for more formats later without re-running the analysis. For cross-language repurposing (e.g., EN → HU), adapt the idiom and cadence, not the literal phrasing; preserve the thesis, re-localize the Proof where possible.
Tone & Style Enforcer 🟢
Voice is the part of a brand that's hardest to write down and easiest to lose. Logos drift slowly; voice drifts every week. A tone enforcer is not a grammar checker — it's a boundary keeper. Its job is to say "this doesn't sound like us" with reasons that hold up in review.
---
name: tone-style-enforcer
description: Checks and rewrites content to match a defined brand voice, tone, or personal writing style — catches voice drift, flags off-brand phrasing, and suggests on-brand alternatives. Use when the user shares a draft and says "make this sound more like us / me", "align with our voice", "check the tone", "does this fit the brand", pastes a style guide and asks for enforcement, or when a piece feels off but they can't say why. Also use proactively when editing content if a brand-voice config is loaded.
---
# Tone & Style Enforcer
## When to use
- User provides a voice/style guide and a draft, and asks for alignment
- User says "this doesn't sound like me / us" — vague tone complaint with no diagnosis yet
- User is editing AI-generated copy that reads as generic ("leverage", "unlock", "empower", "seamless")
- User is enforcing consistency across a batch — landing page + email + social post, same voice
- Localization review — the translation is grammatically correct but voice-wrong
## Why this works
A style guide lists attributes ("warm, direct, confident"). That's not enough to enforce — attributes are aspirations, not tests. This skill converts each attribute into a concrete *sounds-like / does-not-sound-like* pair, then scans the draft line by line against those pairs. Flags are specific, not stylistic vibes. That's the only way voice review survives disagreement: the rule exists before the draft does.
## The enforcement passes
Run in this order. Don't collapse passes — each one catches a different kind of drift.
**1. Voice attribute pass** — for each documented attribute (e.g., "confident, not arrogant"), scan for lines that violate it. Flag the line, name the attribute, offer one rewrite.
**2. Forbidden phrase pass** — match against the brand's explicit avoid-list (corporate filler, trademarked competitor terms, platitudes the brand has publicly rejected). Replace each hit.
**3. Terminology pass** — enforce preferred terms (e.g., "sign up" vs. "signup", product name capitalization, inclusive language). Mechanical, high-confidence fixes.
**4. Channel tone pass** — confirm the dial setting matches the channel. Same voice, different inflection: sales deck dials up confidence, incident response dials up warmth, docs dial up patience. Flag mismatches.
**5. Diagnostic summary** — end with a 2-line read: *"Overall voice fit: strong / mixed / off-brand"* + the one pattern driving most of the flags ("leaning too formal in body paragraphs", "hedging every claim"). This is the part the editor actually reads.
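Most of these passes need judgment, but pass 2 is mechanical enough to sketch in code. Here's a minimal, hypothetical version of the forbidden-phrase scan — the avoid-list and draft below are illustrative, not any real brand's:

```python
import re

def flag_forbidden_phrases(draft: str, avoid_list: list[str]) -> list[dict]:
    """Scan a draft line by line for hits from an explicit avoid-list.

    Each flag names the line number, the phrase, and the offending text,
    so the writer can disagree with a specific flag, not a vague vibe.
    """
    flags = []
    for line_no, line in enumerate(draft.splitlines(), start=1):
        for phrase in avoid_list:
            # Whole-word, case-insensitive match: "leverage" hits
            # "Leverage" but not "leveraged".
            if re.search(r"\b" + re.escape(phrase) + r"\b", line, re.IGNORECASE):
                flags.append({"line": line_no, "phrase": phrase, "text": line.strip()})
    return flags

avoid = ["leverage", "unlock", "synergy", "empower"]  # hypothetical avoid-list
draft = "We leverage cutting-edge AI.\nIt will empower every user."
print(flag_forbidden_phrases(draft, avoid))
```

This only covers pass 2 — the voice-attribute and channel-tone passes still need the model's judgment against the loaded guide.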
## Example
Input voice guide snippet: *"Warm but direct. We don't hedge. We don't use corporate filler like 'leverage', 'unlock', or 'synergy'. We write for one reader, not an audience."*
Input draft: *"We're excited to announce that our team has been working tirelessly to leverage cutting-edge AI to empower users across the platform."*
Output:
> **Line 1 — flagged**
> - Violates *direct* ("excited to announce", "working tirelessly" = corporate filler; delays the actual news)
> - Violates *no corporate filler list* ("leverage", "empower")
> - Suggested rewrite: *"There's a new AI layer across the platform. Here's what it does for you."*
>
> **Overall voice fit:** off-brand. Pattern: buried thesis behind ceremony. Start the sentence with what happened, not with the fact that you're announcing it.
## Failure modes
- **Enforcement without a rule** → skill invents "improvements" based on generic taste, not the loaded voice guide. If the guide doesn't say it, don't flag it.
- **Style-guide theatre** → flags every minor deviation; buries the 2–3 flags that actually matter. Prioritize structural voice hits over mechanical ones.
- **Rewrite drift** → the suggested rewrite fixes the flag but introduces a new voice violation. Re-check suggestions against the guide.
- **Over-sanitization** → kills the original writer's energy along with the off-brand phrasing. Voice review should sharpen, not flatten.
## Output shape
If the user provides a voice guide: run all 5 passes, output flags line-by-line with reason + rewrite, end with the diagnostic summary. If no guide is loaded: ask for one (or offer to extract a working guide from 2–3 existing pieces the user considers on-brand). Never enforce a phantom rule. Always make the reason for each flag legible — the user should be able to disagree with a flag specifically, not vaguely.
Long-Form to Summary Compressor 🟡
---
name: long-form-summary-compressor
description: Compresses long articles, papers, transcripts, or reports into tight paragraph or bulleted summaries — for quick reading, inbox triage, or meeting prep. Use when the user pastes a long piece and says "summarize this", "TL;DR", "give me the short version", "what's this about", or asks for the key points of a document or video transcript.
---
# Long-Form to Summary Compressor
## When to use
- User pastes a long article, paper, transcript, or report and asks for a summary
- User says "TL;DR", "short version", "key points", "what's this about"
- User is preparing for a meeting and wants pre-read material compressed
## Overview
Reduces complex content into digestible summaries for easy reading.
**Keywords**: summarization, long-form, clarity, conciseness, insights
## Features
- Key point extraction
- Bullet or paragraph output
- Simplifies dense material
## Output Format
- One-paragraph TL;DR (2–3 sentences)
- Optional bulleted key points (3–7 bullets)
- Optional "what this means" line if the source makes an argument
## Instructions
- Identify the thesis first; if the source has no thesis, say so
- Remove redundancy and example-stacking
- Preserve numbers, named entities, and direct claims verbatim
- Flag anything explicitly uncertain in the source
Structured Copywriting Skill 🟢
Most "write me a LinkedIn post" requests produce the same shape — a motivational opener, three forgettable middle lines, a hashtag cluster. That's not copywriting. That's filler dressed as structure. Real copywriting starts from a named structural pattern and fits the argument inside it, the way a song fits inside verse–chorus.
---
name: structured-copywriting-skill
description: Writes short-form persuasive copy — LinkedIn posts, landing page sections, ad copy, email subject lines and bodies, post intros, value props — using a named structural pattern (PAS, AIDA, 4U, Before/After/Bridge, hook-body-payoff) chosen for the goal. Use whenever the user asks for "a LinkedIn post", "ad copy", "a hook", "a landing page headline", "a cold email", or pastes a product description and says "make this punchier / sell this better", even without mentioning a framework by name.
---
# Structured Copywriting Skill
## When to use
- User asks for a LinkedIn post, tweet, cold email, or ad variant
- User provides a product/feature description and wants marketing copy
- User says "this headline isn't working — rewrite it"
- User wants A/B variants of the same message
- User is writing their own content but stuck on the opener or the CTA
## Why this works
Structure beats cleverness. Every proven copy pattern — PAS, AIDA, Before/After/Bridge, 4U, hook-body-payoff — is a load-bearing sequence that routes attention from indifference to action. When the skill picks the *right* pattern for the goal first, the copy practically writes itself. When it doesn't, the writer improvises — and improvised copy reads as improvised.
## Pick the pattern to the goal
**PAS — Problem, Agitate, Solution.** Best for pain-driven sales copy and cold outreach. Works when the reader already feels the problem and just needs to see it named.
**AIDA — Attention, Interest, Desire, Action.** Best for warm-audience posts, landing sections, announcements. Reader isn't in pain — they need to be intrigued into considering.
**Before / After / Bridge.** Best for product demos, case studies, and anything where the transformation is the whole point. "Here's the world without us → here's the world with us → here's how you cross over."
**4U — Useful, Urgent, Unique, Ultra-specific.** Best for headlines, subject lines, and CTAs. Every word must earn its place on all four axes.
**Hook → Body → Payoff.** Best for LinkedIn posts, Twitter hooks, talk openers. Hook = one bold line that creates a tension. Body = the earned context. Payoff = the lift that makes the reader screenshot.
## The craft rules (independent of pattern)
- **First line earns the second.** If the reader wouldn't click "see more" after line 1, rewrite line 1.
- **One idea per line in short-form.** Line breaks are punctuation. Use them.
- **Specifics outperform adjectives.** "Cut onboarding from 14 days to 3" beats "dramatically faster onboarding".
- **No stacked qualifiers.** Cut "really", "very", "just", "kind of", "a bit". They announce hedging.
- **CTA or lift, never both.** Short-form rarely earns a direct CTA. A closing *lift* (a reframe, a challenge, a forward-looking line) often converts better than "Book a demo."
## Example
Input: *"We help marketing teams automate reporting. It used to take them hours. Now it takes minutes. Some teams save 10 hours a week."*
Pattern: **Before / After / Bridge** (transformation is the whole point)
Output (LinkedIn, 5 lines):
> Marketing teams lose a full day every week to reporting.
>
> Copy-paste from GA, pivot in Sheets, paste into the deck, rename the file.
>
> One of our teams was doing this every Friday for 9 months straight.
>
> We built them an automation layer — same reports, under 4 minutes.
>
> Most teams get back between 6 and 10 hours a week. That's the difference between "marketing does reporting" and "marketing does marketing".
## Failure modes
- **Pattern-free prose** → 5 lines that sound nice but don't route to an action or a reframe. Symptom: you could reorder them and nothing breaks.
- **Jargon opener** → "In today's fast-paced world…". Delete. Start on the concrete, not the setting.
- **Fake specificity** → invented numbers that read as fake. Either use real numbers the user gave, or hedge honestly ("most teams save hours per week").
- **Stacked CTAs** → "Book a demo, follow me, comment below, share with a friend". One ask. Maybe zero.
- **Hook shoplifting** → generic viral openers ("I just got fired. Here's what I learned…"). They trigger cringe when the content doesn't justify them.
## Output shape
Lead with the chosen pattern name and a one-line reason. Then deliver the copy ready to paste — no meta-commentary inside the copy itself. If the user asks for variants, produce 2–3 distinct patterns, not 3 mild rephrasings of the same one. End with an optional one-line diagnostic: what would make this hit harder if the user gave more specifics (a number, a name, a stakeholder quote).
🎨 Visual & Infographic Skills
Flowchart Decision Builder 🟡
---
name: flowchart-decision-builder
description: Turns a described process, workflow, or decision into a Mermaid or plain-text flowchart with labeled nodes, conditional branches, and a clear start/end. Use when the user says "map this as a flowchart", "turn this into a decision tree", "diagram this workflow", "what are the branches", describes a BPM / approval / triage process, or shares a multi-step SOP that needs to be visualized.
---
# Flowchart Decision Builder
## When to use
- User describes a workflow or SOP and wants it visualized
- User asks for a decision tree from a set of rules
- User is mapping a BPM / approval / triage / escalation process
- User asks for Mermaid syntax specifically
## Overview
Converts processes into stepwise flowcharts for clear decision-making.
**Keywords**: flowchart, decision tree, process, visualization, clarity
## Features
- Node-based structure (action nodes + decision diamonds)
- Conditional branching with labeled edges (Yes/No, condition names)
- Clear start and end terminators
## Output Format
- Mermaid `flowchart TD` block by default (copy-pasteable into Obsidian, Notion, GitHub)
- Plain-text node/edge list as fallback if Mermaid is not supported
- One-line legend if the decision has non-obvious branch labels
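As an illustrative sketch of the default output (the process and node names here are hypothetical, just to show the shape):

```mermaid
flowchart TD
    req([Start: request received]) --> validate[Validate request]
    validate --> complete{Complete?}
    complete -- No --> fix[Return to requester]
    fix --> validate
    complete -- Yes --> review[Route to approver]
    review --> approved{Approved?}
    approved -- Yes --> done([End: approved])
    approved -- No --> rejected([End: rejected])
```

Actions are rectangles, decisions are diamonds, terminators are stadium shapes, and every branching edge carries its condition label.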
## Instructions
- Separate actions (rectangles) from decisions (diamonds) — don't merge
- Label every edge with its condition; unlabeled edges only for default flow
- Ask before flattening: if the process has parallel paths, render them parallel, not serialized
- For long flows (>12 nodes), split into sub-flows with a top-level map
UI/UX Layout Advisor 🟢
Most layout problems are hierarchy problems in disguise. The screen doesn't tell the eye where to go first, second, third — so the user's attention splits, and every element fights every other element for priority. A layout advisor's real job is not to "make it prettier". It's to rebuild the visual order of operations the interface was supposed to have.
---
name: ui-ux-layout-advisor
description: Reviews and restructures UI screens, landing pages, dashboards, forms, and design mocks for visual hierarchy, spacing, alignment, scannability, and accessibility. Use when the user shares a screenshot, Figma link, component, or code snippet and asks to "review the layout", "make this cleaner", "fix the hierarchy", "the screen feels busy", "rate this design", or describes a UX problem like "users can't find the primary action". Also use when reviewing design work in DS / DesignOps context — pattern compliance, token usage, responsive behavior.
---
# UI/UX Layout Advisor
## When to use
- User shares a UI screenshot or Figma frame and asks for feedback
- User describes symptoms: "this screen feels busy", "I don't know what to look at first", "the CTA isn't converting"
- User is building a dashboard, form, landing page, or onboarding flow
- Design systems / DesignOps context — reviewing a new pattern against existing tokens and components
- Accessibility-adjacent requests: "is this contrast OK", "can I use color alone", "is this keyboard-navigable"
## Why this works
Good layouts are legible without reading. The eye answers three questions in order: *where am I? → what is this? → what should I do?* Each question maps to a visual level of hierarchy — primary (one), secondary (few), tertiary (supporting). When a layout breaks, it's almost always because two elements are claiming the same hierarchy level, not because the spacing is wrong. This skill diagnoses the hierarchy mismatch first, then the rhythm, then the accessibility floor — in that order, because fixing rhythm on a broken hierarchy just produces prettier confusion.
## The review passes
Run in order. Don't skip passes — they build on each other.
**1. Hierarchy pass** — identify the primary element (the one thing a 3-second glance should land on). If more than one element competes for primary, name the conflict. Propose which wins.
**2. Scan path pass** — trace the intended eye path: primary → secondary → supporting. Mark where the path breaks (usually a demoted element that reads as primary because of size, color, or position).
**3. Rhythm pass** — spacing as a system, not ad hoc values. Use a visible spacing scale (4 / 8 / 12 / 16 / 24 / 32 / 48). Call out inconsistent gaps, especially between sibling groups.
**4. Alignment & density pass** — is the grid respected, or does each section invent its own? Is density appropriate for the task — compact for data-heavy, breathable for marketing?
**5. Accessibility floor pass** — contrast (WCAG AA minimum), focus order, keyboard operability, labels on icon-only buttons, target size ≥ 44px on touch. Always check — never optional.
**6. DS/token pass (when applicable)** — are existing tokens / components being used, or has someone reinvented one inline? Flag drift.
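The accessibility floor in pass 5 is the one fully mechanical check in the list. A minimal sketch of the WCAG 2.x contrast-ratio math (the hex values in the usage line are illustrative):

```python
def _channel(c8: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG 2.x formula."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a #RRGGBB color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio, >= 1. AA needs >= 4.5 for normal text, >= 3.0 for large."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Mid-grey text on a white background sits below the AA floor for normal text
print(round(contrast_ratio("#808080", "#FFFFFF"), 2))  # → 3.95
```

Use this when the screenshot gives you sampled colors; when it doesn't, name the floor and ask for the values rather than guessing.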
## Example
Input: screenshot of a SaaS dashboard home — title bar, 4 metric cards, a chart, a table, a "Create report" button floating in the corner.
Output:
> **Hierarchy** — Primary is ambiguous. The metric cards and the Create-report CTA are both competing. If the core job is "monitor status", metrics win and Create-report should demote to secondary. If the core job is "author new reports", flip it.
>
> **Scan path** — title → metrics → chart → table, but the floating CTA interrupts the path at the corner and pulls the eye out of the grid. Move it to the title bar, right-aligned.
>
> **Rhythm** — gaps between cards (24px) and between card rows (40px) are inconsistent with the 16px column gutter. Standardize on 24 / 24 / 32.
>
> **Alignment** — chart left edge doesn't align with the card grid. Snap to column 1.
>
> **A11y** — card value text at 14px grey on white is ~3.9:1. Fails AA for normal text. Either darken to #4A4A4A or increase to 16px semibold.
>
> **Top fix (if only one change):** move the CTA into the title bar. It removes a competing primary and restores the scan path in one move.
## Failure modes
- **Prettification without diagnosis** → suggestions that change look without fixing the underlying hierarchy break. "Try a softer shadow" is rarely the right first note.
- **Token tourism** → recommending specific pixel values outside any scale. Tell the user the scale; don't bless a number.
- **A11y as afterthought** → mentioning contrast only if asked. The floor isn't negotiable — name it every time.
- **Ignoring the job** → reviewing visual balance without asking what the screen is for. A dense ops dashboard and a marketing page both "look balanced" at different densities. The job determines the right density.
- **Responsive blindness** → treating the desktop mock as the only artifact. Always flag what breaks on narrow viewports.
## Output shape
Structure as: **Diagnosis** (1–2 sentences naming the core hierarchy problem) → **Numbered passes** (skip passes that don't apply, but note why) → **Top fix** (single highest-leverage change, if the user can only do one thing). When reviewing code snippets or Figma links, reference specific elements — not "the header" but "the h2 on line 14" or "the Card labelled 'Revenue'". Use a visible spacing scale; never recommend arbitrary pixel values. End with a 1-line overall verdict: *"Usable, needs hierarchy work" / "Strong, minor rhythm fixes" / "Core restructure needed"*.
🔍 Research & Analysis Skills
Deep Research Synthesizer 🟢
Most research output is the wrong shape. It summarizes sources instead of the question. It lists findings instead of taking a position. It hides the pattern that matters inside a neutral roundup because the author confuses thoroughness with usefulness. A synthesizer's job is not to restate — it's to stake a claim the reader can act on, with the evidence trail visible behind it.
---
name: deep-research-synthesizer
description: Reads large amounts of raw material (articles, transcripts, docs, meeting notes, tweet threads, reports) and produces a structured synthesis with a named thesis, pattern map, supporting evidence, contradictions, and open questions. Use when the user says "synthesize this", "what's the pattern in all this", "pull the signal out", shares 3+ sources and asks for a take, or dumps research material and asks "so what". Also use when the user is preparing a carousel, talk, essay, or memo from a stack of inputs.
---
# Deep Research Synthesizer
## When to use
- User shares multiple sources (articles, PDFs, transcripts, tweets) and asks for a take
- User says "pull out the patterns", "what's the common thread", "synthesize this", "so what"
- User is preparing a carousel / blog post / talk from research and is stuck in raw-note phase
- User wants contradictions surfaced, not smoothed over
- User asks "is there consensus on X" — needs both the consensus and the dissent
## Why this works
A summary answers *"what does this say?"* A synthesis answers *"what does this mean, and where does it disagree with itself?"* Those are different jobs. Summaries flatten; syntheses lift. The difference is one instruction: force the output to declare a thesis before it lists evidence. A thesis makes the evidence selectable — relevant / irrelevant — and makes contradiction visible, instead of hiding it in a neutral list.
## The synthesis stages
Run in order. Each stage narrows what survives.
**1. Corpus read** — list every source with a one-line fingerprint (who wrote it, what their claim is, what their stake is). This is the audit trail; the rest refers back to it.
**2. Thesis attempt** — state a one-sentence thesis the corpus collectively supports. If you can't, that's a finding — the sources disagree at the level of claim, not just detail. Say so.
**3. Pattern map** — 3–6 recurring patterns across sources. Each pattern = a one-line claim + the source IDs that support it. Patterns are named ("the capability/context gap", "the memory thesis", "the tool sprawl backlash"), not described in the abstract.
**4. Evidence table** — for each pattern, the strongest piece of evidence (quote, number, named example). One per pattern, not ten.
**5. Contradictions** — where do sources actively disagree? Name the disagreement plainly instead of smoothing it over politely. "Source A claims X, Source B claims ¬X. The disagreement is genuine, not a misreading."
**6. Open questions** — what would resolve the contradictions? What does the corpus not cover that it should? This is the handoff to the reader — the part that tells them where to dig next.
**7. Actionable read** — 2–3 sentences. If this synthesis is correct, what does the reader do differently? If the reader can close the tab and the answer is "nothing", the synthesis was a summary.
## Example
Input: 7 articles, 2 podcast transcripts, 4 tweets about "why AI agents fail in production".
Output (abridged):
> **Thesis** — Agent failures in production are overwhelmingly context failures, not model failures. The best model with bad memory underperforms a weaker model with persistent context.
>
> **Pattern map**
> 1. *Context amnesia* — sources 1, 3, 5, 7, 9
> 2. *Tool sprawl* — sources 2, 4, 6
> 3. *No rollback mechanism* — sources 3, 8
> 4. *Over-fitted prompts that break on edge cases* — sources 5, 7, 10
>
> **Evidence (pattern 1)** — Source 7 benchmarks: same task, same model; agents with persistent context win on 8/10 runs over stateless agents.
>
> **Contradictions** — Sources 2 and 5 disagree on whether tool sprawl causes or reflects context failure. Source 2 treats tools as the cause; source 5 treats tool sprawl as a symptom of teams compensating for missing context. Genuine disagreement, not nuance.
>
> **Open questions** — No source in this corpus measures the cost of context infrastructure against agent performance gains. That's the missing economic argument.
>
> **Actionable read** — If you're debugging an agent that works in demos and breaks in production, instrument context before touching the model. The corpus says your problem is almost certainly upstream of inference.
## Failure modes
- **Summary masquerading as synthesis** → a list of what each source said, without a thesis. Symptom: "Source A says… Source B says… Source C says…"
- **Fake consensus** → collapsing genuine disagreement into "perspectives vary". Readers catch this and lose trust.
- **Unsourced claim** → a thesis sentence with no source ID behind it. Every claim must point back to the corpus.
- **No actionable read** → ends on a "further research is needed" note without naming what the reader does tomorrow.
- **Hallucinated sources** → never invent a citation. If the corpus doesn't cover something, the open-questions block is where that goes.
## Output shape
Always in this order: Corpus list → Thesis → Pattern map → Evidence per pattern → Contradictions → Open questions → Actionable read. Every pattern and contradiction carries source IDs. If the corpus is too small or too one-sided to support a thesis, say so in the Thesis block — don't manufacture one. The deliverable must be usable as a standalone document: someone reading only the synthesis, without access to the corpus, should be able to argue its thesis and name its blind spots.
Onchain Transaction Analyzer ⚫
(Reference-only — not central to the intended workflows)
---
name: onchain-transaction-analyzer
description: Analyzes blockchain transactions by tracing wallets, contracts, and token movements and providing simple, understandable explanations.
---
# Onchain Transaction Analyzer
## Overview
Decodes onchain data into human-readable explanations.
**Keywords**: blockchain, crypto, analysis, transactions, wallets
## Features
- Wallet tracking
- Contract mapping
- Token flow visualization
- Simple language
## Output Format
- Step-by-step explanation
- Key actors and actions
- Summary insights
## Instructions
- Trace wallet and token flows
- Identify key interactions
- Summarize in plain language
## Constraints
- Avoid jargon
- Focus on clarity
Source Validation Skill 🟡
---
name: source-validation-skill
description: Evaluates sources (articles, studies, blog posts, tweets, reports) for credibility, recency, relevance, conflicts of interest, and likely bias — before using them in research or synthesis. Use when the user shares a source and asks "is this reliable", "can I trust this", "what's the bias here", or is assembling a research corpus and needs a quality filter. Also use proactively when a deep-research task is about to cite sources whose credibility hasn't been checked.
---
# Source Validation Skill
## When to use
- User shares a source and asks whether to trust it
- User is building a research corpus and wants a quality screen
- User is preparing content that will cite sources — run validation first
- User asks "what's the bias on this" or "who's behind this"
## Overview
Filters information for trustworthiness and relevance.
**Keywords**: credibility, validation, sources, research, bias
## Features
- Reliability rating (high / medium / low) with named reasons
- Bias detection (funding, stance, audience, publication context)
- Recency check (when was it written; has the field moved since?)
- Relevance check (does it actually address the question?)
## Output Format
- Per-source card: title, author, publication, date, rating, 1–2 line rationale, declared conflicts
- Summary table when validating a batch
- Explicit "do not use" flag for sources that fail the floor (anonymous, undated, clearly fabricated, or primary conflict)
## Instructions
- Check author and publication first; many credibility issues resolve at that level
- Note the date; flag anything older than the half-life of the field
- Name the bias direction specifically — "funded by X", "advocacy arm of Y", "anonymous Substack"
- Distinguish *biased but useful* from *biased and misleading* — the first needs framing, the second needs exclusion
- Never invent a credibility rating from thin air; if you don't have enough signal, say so
Competitive Intelligence Skill 🟡
---
name: competitive-intelligence-skill
description: Compares products, tools, platforms, frameworks, or companies and produces a structured read on strengths, weaknesses, differentiation, and where each fits best. Use when the user says "compare X and Y", "what's the difference between", "which should I pick", "competitive landscape for", or is preparing vendor selection, job interview prep, pitch decks, or procurement research.
---
# Competitive Intelligence Skill
## When to use
- User is choosing between tools, vendors, or platforms
- User is prepping for an interview and wants market positioning context
- User is writing a pitch and needs competitive framing
- User asks for a landscape / market map of a category
## Overview
Delivers comparative insights for business, tech, or market research.
**Keywords**: analysis, competitive, research, comparison, strategy
## Features
- Feature comparison on named axes (not generic "better/worse")
- SWOT-style positioning per option
- Explicit "best fit for" recommendation tied to the user's actual use case
- Differentiation callouts — what each option is genuinely unique at
## Output Format
- Comparison table on user-relevant axes
- Per-option short profile: strengths, weaknesses, best-fit scenarios
- Top recommendation with reasoning, plus a runner-up if the decision is close
- "Avoid if…" row for each option — boundary conditions matter
## Instructions
- Ask for the user's decision criteria before comparing; generic comparisons are weak
- Name the weaknesses honestly — sanitized comparisons are useless
- Flag when information is dated or unverified; do not pretend to current pricing / feature lists you're not sure of
- Distinguish factual differences (feature X exists / doesn't) from positioning differences (brand, ecosystem, maturity)
Knowledge Structuring Skill 🟢
Raw notes are liquid. They flow where attention goes — meeting fragment, half-formed insight, link, quote, a stray TODO. Useful knowledge is solid. It has shape, edges, and a place where it belongs. The work of structuring is not to make notes look tidy — it's to decide which fragments earn a second life and what shape that life takes. Done right, it's the difference between a notes folder and a second brain.
---
name: knowledge-structuring-skill
description: Takes raw notes, scratchpads, meeting captures, thought streams, or fragmented inputs and restructures them into a usable knowledge artifact — atomic notes, structured frameworks, concept maps, decision records, or linked reference pages. Use when the user shares messy notes and says "organize this", "structure this", "make sense of this", "turn this into a proper note", "extract the framework", or pastes a dump of ideas and asks what to do with it. Also use when building a personal knowledge base, Zettelkasten-style notes, or preparing inputs for later synthesis.
---
# Knowledge Structuring Skill
## When to use
- User pastes a scratchpad, meeting notes, voice-memo transcript, or a stream of thoughts
- User says "structure this", "turn this into something I can come back to", "what's the framework here"
- User is building / curating a personal knowledge base (Obsidian, Notion, plain markdown)
- User wants atomic notes (one-idea-per-note Zettelkasten style) extracted from a longer piece
- User is preparing inputs for future synthesis or writing
## Why this works
Messy input fails at retrieval, not at capture. Notes people wrote but can't find six months later weren't captured badly — they were shaped badly. Giving each fragment a clear type (atomic note, framework, decision, reference, open question) and a clear shape for that type fixes retrieval permanently. This skill's bet is that *typing the fragment is 80% of the value*; the formatting comes after.
## The structuring passes
Run in order. Every pass discards, sharpens, or promotes — never rewrites without a reason.
**1. Classify each fragment** into one of five types:
- **Atomic note** — one self-contained idea, restatable in a single sentence, worth a second life
- **Framework** — a repeatable structure (steps, stages, quadrants, decision tree)
- **Decision / conclusion** — a choice made + the reason, worth preserving as a record
- **Reference** — a fact, quote, link, or data point with no claim of its own; lives as evidence for something else
- **Open question** — unresolved, worth revisiting; often the most valuable fragments
**2. Discard** — fragments that don't earn any type (status noise, logistics, resolved micro-todos). Name them, then drop. Do not silently omit.
**3. Atomize** — break multi-idea notes into single-idea atomic notes. One idea per note, titled as a claim, not a topic. *"The context gap matters more than model size"* beats *"Notes on LLMs"*.
**4. Shape by type** — each type has its own small template. Atomic notes: title (as claim) → body (2–5 lines) → links to adjacent ideas. Frameworks: name → structure → when to use. Decisions: choice → reasoning → date. References: content → source. Open questions: question → why it matters → what would answer it.
**5. Link** — surface explicit connections between the new notes and each other, and to prior knowledge if the user's corpus is provided. Links are the retrieval layer; without them you have a pile, not a system.
**6. Promotion candidates** — 1–3 atomic notes that are strong enough to become a public artifact (blog post, LinkedIn post, carousel, talk section). Flag them. Don't write them here — just name the candidates.
## Example
Input (messy scratchpad):
> "Team thinks our current onboarding is fine but NPS says otherwise. Sarah pushed back in retro — maybe we're asking the wrong question? Also: that Figma Config talk about 'designing for doubt' — connects. TODO check the exec dashboard. Reports are slow. Maybe need a rollup service. Also everyone's mentioning AI agents again but nobody has a production one that actually works."
Output:
> **Discarded (noise)**: "TODO check the exec dashboard" (logistical, not a knowledge fragment).
>
> **Atomic notes (promoted)**
> - *"We ask the wrong onboarding question"* — Survey signals we're measuring output satisfaction, not input clarity. Sarah's retro point. Connects to: "designing for doubt" (Figma Config), "NPS as lagging indicator".
> - *"AI agents work in demos, fail in production"* — Observation across multiple conversations; no one has a production case. Candidate for public writing — this is the context-gap essay.
>
> **Framework (detected)** — None explicit in this note set.
>
> **Decision / conclusion** — None.
>
> **References**
> - *Figma Config talk — "Designing for Doubt"* — needs link from user; flagged.
>
> **Open questions**
> - *"What's the right onboarding metric if NPS lags?"* — Worth a dedicated research pass. Related to: onboarding atomic note above.
> - *"What separates a demo agent from a production agent?"* — Most valuable question in this set; candidate for the promoted post.
>
> **Promotion candidates**
> 1. *"AI agents work in demos, fail in production"* — a post or carousel could lead with the question and synthesize the patterns.
> 2. *"We ask the wrong onboarding question"* — LinkedIn-shaped; strong for the DesignOps audience.
## Failure modes
- **Re-summarization** → the output is a neater version of the input with the same granularity. Symptom: no atomic notes, just shorter paragraphs.
- **Topic titles instead of claim titles** → "Notes on X" instead of "X does Y because Z". Retrieval dies here. Always title as a claim.
- **Silent discarding** → dropping fragments without naming them. The user loses trust; they don't know what was cut.
- **Over-linking** → spurious connections between unrelated notes. Links must earn themselves. When in doubt, skip the link.
- **Missed promotion candidates** → treating every note equally. Some atomic notes are public gold. Flag them — the user's calendar thanks you.
## Output shape
Sectioned by type, in this order: Discarded (with reasons) → Atomic notes → Frameworks → Decisions → References → Open questions → Promotion candidates. Every atomic note has a claim-style title. Every link points to a specific note title, not a vague concept. When structuring into an existing PKM (Obsidian, etc.), note the intended filename convention and backlink format the user uses, and match it. Never fabricate a link target — if a referenced note doesn't exist yet, mark it *[to be created]*.
Video Script Generator 🟡
---
name: video-script-generator
description: Writes video scripts for short-form (TikTok, Reels, Shorts, LinkedIn video) and long-form (YouTube, course, talk) content with a structured hook, beats, pacing, and close. Use when the user says "write a video script", "script this for LinkedIn / YouTube / Reels", "30-second script", "hook + body + CTA", or shares a topic and wants it turned into a shootable script. Specify target length and platform up front.
---
# Video Script Generator
## When to use
- User asks for a script for a specific platform (TikTok, LinkedIn, YouTube, etc.)
- User has a topic / outline and wants it turned into a shootable script
- User is planning a multi-video series and needs consistent structure
- User wants a hook variant for an existing idea
## Overview
Produces structured scripts for short- and long-form video content.
**Keywords**: video, scripts, hooks, engagement, pacing, content
## Features
- Platform-aware pacing (short-form: payoff every 3–5 seconds; long-form: beats every 30–60s)
- Hook designed for the platform's first-frame / first-second behavior
- Beat-level structure, not just "intro / body / outro"
- Close tuned to goal (lift, CTA, or setup for next episode)
## Output Format
- Header line: platform, target length, 1-line premise
- Timecoded script (e.g., `0:00–0:03`, `0:03–0:12`) with spoken lines and optional on-screen text / B-roll cues
- Alternative hooks (2–3) at the top so the user can pick
- Optional: caption/title suggestion below the script
## Instructions
- Always ask target length and platform if not specified — pacing changes everything
- Write for spoken cadence, not written grammar (short sentences, active voice, contractions)
- Make the hook a standalone payoff; never delay the topic past 2 seconds on short-form
- Keep CTAs minimal; a good close earns action without asking
- Reuse the structured-copywriting patterns (PAS, Before/After/Bridge, Hook→Body→Payoff) when the goal warrants it
Video Editing Planner ⚫
(Reference-only — not central to the intended workflows)
---
name: video-editing-planner
description: Suggests editing structure, scene cuts, transitions, and pacing for improved video content quality and engagement.
---
# Video Editing Planner
## Overview
Assists in planning efficient, engaging edits.
**Keywords**: video, editing, pacing, transitions, scenes
## Features
- Scene breakdown
- Transition suggestions
- Pacing optimization
## Output Format
- Editing steps
- Scene notes
- Transition plan
## Instructions
- Identify key scenes
- Suggest cuts/transitions
- Optimize for engagement
## Constraints
- Avoid excessive edits
- Preserve story clarity
Hook Generator 🟡
---
name: hook-generator
description: Produces 3–5 opening-line variants for LinkedIn posts, video intros, essays, email subjects, and carousels. Fires on "write me a hook", "opening line", "first line for LinkedIn", "grab attention", "hook for this", or any ask for the first sentence of a piece. Use even when the user doesn't say "hook" — any request for an attention-grabbing opener qualifies.
---
# Hook Generator
## When to use
- "Give me a hook for this post"
- "What's a better first line?"
- User pastes a draft and asks for a stronger opening
- LinkedIn, video, newsletter, cold email — any format where the first line carries the click
- User asks for "3 options" or "a few variants" for an intro
## Why this works
The first line does one job — buy the second line. Most hooks fail because they describe the topic instead of creating a gap the reader wants closed. Offering variants across different mechanisms (curiosity, contrarian, stat, question, anecdote) lets the user match the hook to their voice and the piece's actual payoff, instead of settling for the first one that came to mind.
## Hook patterns — pick 3 to 5 per request
- **Curiosity gap** — name a tension, withhold the resolution. *"Most design-system teams optimize the wrong thing for three years before anyone notices."*
- **Contrarian claim** — invert a widely held belief. *"Token libraries don't save time. They save arguments."*
- **Stat lead** — open with a specific, credible number. *"73% of the components in a mature design system are used by one team."*
- **Question lead** — pose the exact question the reader is already half-asking. *"Why do design systems feel heavier in year three than in year one?"*
- **Specific anecdote** — a concrete moment that implies a larger point. *"A PM asked me last week if we could 'just delete the design system.'"*
## Output format
~~~
Hook 1 — [pattern name]
<one-sentence hook>
Hook 2 — [pattern name]
<one-sentence hook>
...
Recommended: Hook <N> — <one-line reason tied to the piece's actual payoff>
~~~
## Craft rules
- Hooks promise a payoff the body actually delivers — no bait-and-switch
- One idea per hook; no compound sentences
- Cut throat-clearing ("I've been thinking about…", "It's interesting how…")
- Specific beats clever — a real number, name, or moment beats a metaphor
- If the body doesn't back the hook, fix the body or pick a smaller hook
## Constraints
- Never use "In today's world," "game-changer," "let's dive in," "buckle up"
- No emoji-led hooks unless the user's existing voice uses them
Caption & Subtitle Formatter ⚫
(Reference-only — not central to the intended workflows)
---
name: caption-subtitle-formatter
description: Formats captions and subtitles for readability, timing, and accessibility across videos.
---
# Caption & Subtitle Formatter
## Overview
Ensures captions are readable, timed correctly, and maintain visual clarity.
**Keywords**: caption, subtitle, accessibility, readability, video
## Features
- Line breaks for clarity
- Timing alignment
- Readability optimization
## Output Format
- Caption text blocks
- Timing cues
## Instructions
- Format each line for clarity
- Match timing to speech
- Maintain readability standards
## Constraints
- Avoid long lines
- Keep clear and concise
🤖 Coding & Automation Skills
Code Review Skill 🟡
---
name: code-review-skill
description: Reviews code for correctness, security, maintainability, and readability — producing a severity-ordered issue list with concrete fixes. Fires on "review this code", "spot bugs", "what's wrong with this", "PR review", "is this safe", pastes of a diff, or any request for quality feedback on a code block. Use even when the user doesn't say "review" — any ask for feedback on code qualifies.
---
# Code Review Skill
## When to use
- User pastes a function, file, or diff and asks for feedback
- "Any bugs here?", "Is this idiomatic?", "Will this scale?"
- Pre-merge review of a pull request
- Refactor proposals — evaluate before and after
- Security-adjacent code (auth, input handling, data access)
## Why this works
Line-by-line commentary drowns signal in noise. A good review ranks findings by what would actually break in production, explains *why* each issue matters, and proposes the smallest fix that resolves it. Severity ordering lets the author triage — critical issues get attention before style nits do.
## Review passes — run in this order
- **Correctness** — does the code do what it claims? Off-by-one, null handling, async/await misuse, race conditions, wrong types
- **Security** — input validation, auth boundaries, secret handling, injection vectors, unsafe deserialization
- **Failure modes** — what happens on network error, empty input, concurrent write, deploy rollback?
- **Readability** — naming, function length, nesting depth, comment accuracy
- **Idiom & ecosystem fit** — does this match the framework's conventions and the existing codebase's patterns?
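The kind of issue the correctness pass catches — and the smallest fix that resolves it — might look like this. A hypothetical Python sketch (function names are invented for illustration):

~~~python
# Before: intended to return the last n items, but when n exceeds the
# list length the slice start goes negative and Python reads it as an
# offset from the end -- silently dropping elements.
def last_n(items, n):
    return items[len(items) - n:]

# Smallest fix: clamp the slice start at zero. No rewrite needed.
def last_n_fixed(items, n):
    return items[max(len(items) - n, 0):]
~~~

A review card for this would name the concrete failure ("`last_n([1, 2], 3)` returns `[2]`, not `[1, 2]`") rather than the general principle ("beware negative indices").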
## Output format
~~~
### Critical — fix before merge
1. <File:line> — <issue>
Why it matters: <concrete failure scenario>
Fix: <smallest change that resolves it>
### Important — address soon
<same shape>
### Nice-to-have — optional polish
<same shape>
### What's good
<1–3 bullets crediting strong choices — reviewers should reinforce, not just critique>
~~~
## Craft rules
- Every issue names a concrete failure scenario, not a general principle
- Suggest the smallest fix — resist full rewrites unless the author asked for one
- No false positives — if unsure the code is actually wrong, phrase as a question, not a verdict
- Credit what's good; reviews that only critique demoralize and miss patterns worth repeating
## Constraints
- Don't invent bugs to fill the list — a 2-issue review is fine
- Don't rewrite the whole file — propose targeted changes
Workflow Automation Agent 🟡
---
name: workflow-automation-agent
description: Decomposes a fuzzy goal into a stepwise workflow — each step mapped to an actor (human, AI, tool/API), an input, an output, and a clear handoff. Fires on "turn this into a workflow", "automate this process", "break this into steps", "what's the SOP", "how would an agent do this", or any ask to convert a one-off task into a repeatable sequence. Use even when the user doesn't say "workflow" — any multi-step orchestration request qualifies.
---
# Workflow Automation Agent
## When to use
- "How would I automate this?"
- "Make this repeatable"
- User describes a messy manual process and wants a cleaner version
- Designing agent pipelines or multi-tool orchestrations
- Documenting an SOP for someone else to run
## Why this works
Most workflow attempts fail because they mix *what* with *who* with *how*. Separating each step into **actor · input · action · output · handoff** forces the design to name the gaps — the places where a human is secretly doing translation work the workflow didn't plan for. That's where automation actually breaks, and it's invisible until you write it down.
## Workflow passes
- **Goal statement** — one sentence, outcome-shaped, not activity-shaped ("clean inbox to zero by 9am", not "process emails")
- **Decomposition** — list every distinct action between trigger and outcome; don't collapse two actions into one
- **Actor assignment** — mark each step: `[human]`, `[AI]`, `[tool: <name>]`, `[API: <endpoint>]`
- **Input/output shape** — name what enters and exits each step; if the handoff format changes (JSON → markdown → spreadsheet), that's where errors live
- **Failure + rollback** — for each step, one line on "what if this step fails or returns nothing"
## Output format
~~~
## Goal
<one-sentence outcome>
## Trigger
<what starts the workflow — time, event, inbox state, user action>
## Steps
1. [actor] <action>
In: <input shape>
Out: <output shape>
Fail: <what to do on failure>
2. [actor] <action>
...
## Handoffs to watch
- Step <N> → Step <M>: <named risk in the format change>
~~~
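The step schema above can be captured as a small data structure so a workflow is checkable before anyone runs it. A sketch in Python — the field names are assumptions for illustration, not a standard:

~~~python
from dataclasses import dataclass

@dataclass
class Step:
    actor: str         # "human", "AI", "tool:<name>", or "api:<endpoint>"
    action: str        # one verb phrase; if it contains "and", split the step
    input_shape: str   # what enters, e.g. "JSON list of emails"
    output_shape: str  # what exits, e.g. "markdown digest"
    on_fail: str       # what to do when this step fails or returns nothing

def risky_handoffs(steps):
    """Flag step pairs where the format changes between output and input —
    the handoffs where, per the pass above, errors live."""
    return [
        (i, i + 1)
        for i, (a, b) in enumerate(zip(steps, steps[1:]), start=1)
        if a.output_shape != b.input_shape
    ]
~~~

Running `risky_handoffs` over a drafted workflow surfaces the "Handoffs to watch" section mechanically instead of by eyeballing.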
## Craft rules
- Each step does one thing; if a step has "and" in it, split it
- Name the tool, not the category ("Linear", not "task manager")
- Call out human-in-the-loop checkpoints explicitly — silent manual steps kill automations
- If two steps always fire together, collapse them; if one step fires three ways, split it
## Constraints
- Don't design around hypothetical edge cases — map the real path first, then annotate known failures
- Don't recommend a tool the user doesn't have access to without flagging the swap
Skill Creator (Meta Skill) 🟢
This is the one skill that compounds. Every other skill in this article is a capability. A skill creator is the factory — it turns a recurring workflow into a reusable artifact, so you stop re-explaining the same task to the model and start shipping it as a tool. The test of a good skill creator is not whether the output compiles. It's whether, six weeks from now, the skill still triggers on the right phrasing and still produces the right shape of output — without the original author in the room.
---
name: skill-creator-meta-skill
description: Builds new Anthropic-grade skills — from a rough capability description or a recurring task the user keeps re-explaining, produces a complete SKILL.md with correct YAML frontmatter, trigger-shaped description, When-to-use block, Why-this-works reasoning, concrete examples, failure modes, and output shape. Use whenever the user says "turn this into a skill", "I keep doing X — can we automate", "make me a skill for Y", "fix my skill", "this skill isn't triggering", or describes a repeatable workflow and asks how to systematize it.
---
# Skill Creator (Meta Skill)
## When to use
- User describes a workflow they do repeatedly and wants it captured
- User says "turn this into a skill", "make me a skill for X", "skillify this"
- User has an existing skill that isn't triggering or is producing weak output — diagnose and repair
- User is scaffolding a library of personal skills and wants a consistent quality bar
- User asks to "improve" a skill — run the full meta-skill pass instead of spot-fixing
## Why this works
Most skill failures are the same failure: the description doesn't fire on real user phrasing. Models route to skills via the description field, not via cleverness in the body. If the description is abstract ("helps with writing"), it never triggers on specific phrases ("tighten this LinkedIn post"). Fix that and everything downstream improves — because now the skill actually runs when it should. The second most common failure is the body being a list of rules without reasons, so the model can't generalize to unseen cases. Giving *why* each rule exists lets the model handle edge cases without the author having to enumerate them.
## The creation stages
Run in order. Each stage produces input to the next.
**1. Capture intent** — what is the user actually trying to automate? Get three things: the task, the trigger (what they'll say or do), and the success shape (what "right" looks like). If the user is vague, ask for one real past example they've already solved manually — it encodes all three at once.
**2. Draft the frontmatter** — just `name` (kebab-case) and `description`. The description must include:
- What the skill does (in plain verbs)
- Concrete trigger phrases the user would actually say ("turn this into X", "make me a Y", etc.)
- A pushy close: *"Use even when the user doesn't mention the skill by name."*
No other frontmatter fields unless the user explicitly needs them.
**3. Write the body sections** in this order:
- **When to use** — 3–5 concrete trigger bullets with real user phrasing
- **Why this works** — 2–4 sentences explaining the mechanism; this is what lets the model generalize
- **The N moves / passes / stages** — named, sequenced, with bold leading terms
- **Example** — one realistic input → output, not a toy
- **Failure modes** — 3–5 named failures with how each manifests
- **Output shape** — what to label, what to strip, what order, what to refuse
**4. Progressive disclosure check** — is the SKILL.md under 500 lines? If it's ballooning, move reference material (long lookup tables, multiple domain variants, rarely-used examples) into separate files and link to them from the main body.
**5. Trigger test** — read the description aloud. Imagine three user messages from the real target user. Does the description fire on all three? If not, rewrite — add the missing trigger phrases.
**6. Repair mode (for existing skills)** — if fixing rather than creating, diagnose in this order: does the description trigger → is the body structured → are there examples → are there named failure modes → is output shape defined? Fix the first broken layer and re-test before fixing the next.
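Put together, a skeleton that follows the six stages might look like this — every name and phrase below is a placeholder, not a prescribed value:

~~~markdown
---
name: my-skill-name
description: What the skill does, in plain verbs. Fires on "trigger phrase one",
  "trigger phrase two", "trigger phrase three". Use even when the user doesn't
  mention the skill by name.
---
# My Skill Name
## When to use
- <3–5 concrete trigger bullets in real user phrasing>
## Why this works
<2–4 sentences explaining the mechanism>
## The passes
**1. <named pass>** — <what it does and what it hands to the next pass>
## Example
<one realistic input → output, not a toy>
## Failure modes
- **<named failure>** → <how it manifests>
## Output shape
<labels, order, what to strip, what to refuse>
~~~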
## Example
Input: *"I keep asking Claude to review PRs in my React codebase. I want it to flag TS type issues, missing accessibility attributes, and places where tokens from our design system should be used instead of hardcoded values. Can we turn this into a skill?"*
Output (abridged):
DevOps Assistant ⚫
(Reference-only — not central to the intended workflows)
---
name: devops-assistant
description: Assists in version control, deployment, and automation tasks, ensuring smooth DevOps operations and workflow efficiency.
---
# DevOps Assistant
## Overview
Supports development workflows by managing versioning, deployment, and automation tasks.
**Keywords**: devops, automation, deployment, git, workflow
## Features
- Commit and version guidance
- Deployment suggestions
- Workflow optimization
## Output Format
- Task instructions
- Stepwise guide
- Automation recommendations
## Instructions
- Analyze project requirements
- Suggest DevOps actions
- Optimize workflow efficiency
## Constraints
- Ensure accuracy
- Avoid redundant steps