Skills Structure and Best Practices

Skills are reusable workflow packages for agents.
Each skill centers on a SKILL.md file plus optional support files.

Every skill needs:

  • SKILL.md
  • YAML frontmatter with:
    • name
    • description

Everything else is optional and loaded on demand.

Note: some runtimes can infer missing fields (for example deriving name from folder), but production skills should set both explicitly.

name and description are mission-critical because they drive discovery and activation.

  • Keep name short, lowercase, and hyphenated
  • Action-oriented names are easier to reason about (for example analyzing-time-series)
  • Use description to explain both:
    • what the skill does
    • when to use it
  • Include keywords users naturally type
  • Follow provider-specific limits for metadata length/format
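As a sketch, the naming convention above can be enforced with a simple check. The regex and length cap here are assumptions for illustration, not a published spec:

```python
import re

# Assumed convention: lowercase words joined by single hyphens,
# e.g. "analyzing-time-series". Length cap is illustrative.
NAME_PATTERN = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_skill_name(name: str) -> bool:
    """Return True if the name is short, lowercase, and hyphenated."""
    return bool(NAME_PATTERN.fullmatch(name)) and len(name) <= 64

print(is_valid_skill_name("analyzing-time-series"))  # True
print(is_valid_skill_name("Analyze_TimeSeries"))     # False
```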

There is no single required body format, but predictable skills usually share these traits:

  • Step-by-step workflow
  • Explicit edge-case handling
  • Clear “skip only if” conditions
  • Deterministic output expectations

A practical guideline is to keep core instructions concise (often under ~500 lines) and move bulky examples to external files.

Common optional directories:

  • scripts/: executable logic
  • references/: long background docs and interpretation guides
  • assets/: templates, schemas, logos, sample files
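A small scaffold script can set up this layout. This is a hypothetical helper, not part of any skill runtime:

```python
import tempfile
from pathlib import Path

# Hypothetical scaffold for the optional layout described above.
def scaffold_skill(root: Path) -> None:
    """Create SKILL.md plus the common optional directories."""
    for sub in ("scripts", "references", "assets"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    (root / "SKILL.md").touch()

with tempfile.TemporaryDirectory() as tmp:
    skill = Path(tmp) / "my-skill"
    scaffold_skill(skill)
    print(sorted(p.name for p in skill.iterdir()))
    # ['SKILL.md', 'assets', 'references', 'scripts']
```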

Operational guidance:

  • Script dependencies must be documented and validated
  • Add error-handling expectations for critical scripts
  • Prefer forward slashes in paths for cross-platform behavior
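For the forward-slash point, `pathlib`'s `as_posix()` produces portable paths regardless of host OS. A minimal sketch (the function name is illustrative):

```python
from pathlib import Path

# as_posix() yields forward-slash paths even on Windows, so
# references written into SKILL.md stay cross-platform.
def reference_path(skill_root: str, relative: str) -> str:
    """Join a skill-relative path using forward slashes only."""
    return (Path(skill_root) / relative).as_posix()

print(reference_path("my-skill", "references/interpretation-guide.md"))
# my-skill/references/interpretation-guide.md
```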

Skill systems usually load in phases:

  1. Skill metadata (name + description)
  2. SKILL.md body when triggered
  3. Referenced files/scripts only if needed

This keeps context windows cleaner and improves token efficiency.
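The phased loading above can be sketched as two separate parsing steps, so that only the frontmatter is read until the skill is triggered. Function names here are illustrative, not a real runtime API:

```python
# Phase 1: parse only the YAML frontmatter block.
def load_metadata(skill_md: str) -> dict:
    lines = skill_md.splitlines()
    end = lines.index("---", 1)  # closing frontmatter delimiter
    meta = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

# Phase 2: load the instruction body only once triggered.
def load_body(skill_md: str) -> str:
    lines = skill_md.splitlines()
    end = lines.index("---", 1)
    return "\n".join(lines[end + 1:]).strip()

doc = """---
name: generating-practice-questions
description: Generate practice questions from lecture notes.
---
## Workflow
1. Parse input notes."""

print(load_metadata(doc)["name"])  # generating-practice-questions
```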

Prefer many focused skills over one monolithic skill:

  • One skill for domain analysis
  • One skill for output formatting
  • One skill for compliance/review

Composable skills are easier to test and evolve.

A minimal example SKILL.md:

---
name: generating-practice-questions
description: Generate practice questions from lecture notes. Use when a user asks for quizzes, exams, or comprehension checks.
---
## Workflow
1. Parse input notes and extract learning objectives.
2. Generate question types in required order.
3. Render output using requested template format.
4. Validate structure before returning final artifact.

Run a basic release gate:

  1. Metadata check: naming, description quality, trigger clarity
  2. Flow check: workflow order and edge-case handling
  3. Output check: file tree and output format correctness
  4. Human review: domain expert feedback
  5. Model matrix: test on each target model/runtime
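Step 1 of the gate can be partially automated. The rules encoded below (lowercase-hyphenated name, a "use when" trigger phrase in the description) are assumptions drawn from this guide, not a published spec:

```python
import re

# Illustrative metadata gate; returns a list of problems found.
def check_metadata(skill_md_text: str) -> list:
    problems = []
    if not re.search(r"^name:\s*[a-z0-9]+(-[a-z0-9]+)*\s*$",
                     skill_md_text, re.MULTILINE):
        problems.append("name missing or not lowercase-hyphenated")
    desc = re.search(r"^description:\s*(.+)$", skill_md_text, re.MULTILINE)
    if desc is None:
        problems.append("description missing")
    elif "use when" not in desc.group(1).lower():
        problems.append("description lacks 'use when' trigger guidance")
    return problems

good = ("---\nname: generating-practice-questions\n"
        "description: Generate questions. Use when a user asks for quizzes.\n---")
print(check_metadata(good))  # []
```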

For script-heavy skills, unit-test scripts independently, then test skill orchestration behavior.
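For example, if a skill's scripts/ directory exposed a pure function like the hypothetical one below, it could be unit-tested in isolation before any orchestration testing:

```python
# Hypothetical pure function from a skill's scripts/ directory.
def extract_objectives(notes: str) -> list:
    """Return lines that look like tagged learning objectives."""
    return [line.strip("- ").strip()
            for line in notes.splitlines()
            if line.lstrip().startswith("- Objective:")]

# Plain-assert unit test; also runnable under pytest.
def test_extract_objectives():
    notes = "Intro\n- Objective: define entropy\n- Note: skip this"
    assert extract_objectives(notes) == ["Objective: define entropy"]

test_extract_objectives()
```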

Common pitfalls:

  • Overloading SKILL.md with huge static content
  • Missing “when to use” trigger guidance
  • Ambiguous steps that reduce repeatability
  • Mixing runtime/session details into reusable instructions
  • Treating template-generation tools as autopilot without explicit constraints