Create Custom Skills
You can create dedicated skills for any repetitive workflow so the AI can follow your process, your standards, and your output format consistently.
Skill File Format
A skill is defined by a SKILL.md file. It uses YAML front matter for metadata, followed by the full execution instructions for the AI:
```markdown
---
name: my-skill # Skill name (defaults to directory name if omitted)
description: One sentence describing what the skill does and when it should be used
version: "1.0" # Optional
author: Your Name # Optional
tags: [tag1, tag2] # Optional, useful for classification
---

# Skill Title

## Mission

Clearly state the goal and boundaries of the skill.

## Steps

1. Step one: ...
2. Step two: ...
3. Step three: ...

## Output Format

Define the expected output structure...
```
The description field is the key trigger signal. When the AI decides whether to activate a skill, it relies heavily on that description. Be explicit about what the skill does and when it should be used so the AI can match user intent more accurately.
Create Your First Skill
Step 1: Create the Skill Directory
```bash
mkdir -p ~/.aiagent/skills/api-tester
```
Step 2: Write SKILL.md
Create `~/.aiagent/skills/api-tester/SKILL.md`:
````markdown
---
name: api-tester
description: Test REST API endpoints, automatically send requests, and validate response structure and status codes. Use this when the user needs API endpoint testing.
version: "1.0"
author: DevTeam
tags: [api, testing, rest]
---

# API Test Assistant

## Mission

Systematically test REST API endpoints, including request construction, response validation, and issue diagnosis.

## Steps

### 1. Gather API details

Ask the user for:

- Base URL and endpoint path
- HTTP method (GET/POST/PUT/DELETE)
- Headers (Authorization, Content-Type, etc.)
- Request body (for POST/PUT)
- Expected status code and response structure

### 2. Build and execute the request

```bash
curl -X POST "https://api.example.com/v1/users" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"name": "test", "email": "[email protected]"}' \
  -w "\nHTTP Status: %{http_code}\n"
```

### 3. Validate the response

Check:

- Whether the HTTP status code matches expectations
- Whether the response body structure is complete
- Whether required fields exist and have the correct types
- Whether error responses are clear and actionable

### 4. Report the result

| Item | Result |
|---|---|
| Status Code | ✅ 201 Created |
| Response Time | 245ms |
| Required Fields | ✅ All present |
| Data Format | ✅ Matches specification |
````
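The validation rules in step 3 can also be mechanized in a helper script instead of being checked by eye. A minimal sketch, assuming a hypothetical response schema (`REQUIRED_FIELDS` is an illustration, not part of the skill format):

```python
import json

# Assumed schema for illustration only; adapt to the API under test.
REQUIRED_FIELDS = {"id": int, "name": str, "email": str}

def validate_response(status, body, expected_status=201):
    """Check status code, JSON structure, and required field types (step 3)."""
    problems = []
    if status != expected_status:
        problems.append(f"expected HTTP {expected_status}, got {status}")
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        # No point checking fields if the body is not JSON at all.
        problems.append("response body is not valid JSON")
        return problems
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in data:
            problems.append(f"missing required field: {field}")
        elif not isinstance(data[field], ftype):
            problems.append(
                f"field '{field}' is {type(data[field]).__name__}, expected {ftype.__name__}"
            )
    return problems
```

An empty result list maps to the all-✅ report table; each problem string becomes a row flagged for the user.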
Step 3: Verify It Loads
Restart Agent (or wait for auto-reload), then ask the AI to trigger the skill:
```text
"Help me test this API endpoint"
"Use api-tester to check the login API"
```
Use Supporting Files
A skill directory can contain any number of supporting files. The AI reads them on demand through tools instead of loading everything at once, which means you can safely include large reference materials:
```text
~/.aiagent/skills/release-checklist/
  SKILL.md              ← Main instructions
  checklist.md          ← Detailed release checklist
  rollback-steps.md     ← Rollback playbook
  environments/
    staging.env.md      ← Configuration reference for staging
    production.env.md
```
In SKILL.md, tell the AI when to read those files:
```markdown
## Reference Materials

Before running the release checklist, call `list_skill_files` to see available files,
then call `read_skill_file` on `checklist.md` to get the full checklist.
If rollback is needed, read `rollback-steps.md`.
```
How to Write High-quality Skills
Make description Precise Enough to Trigger Correctly
```yaml
# ❌ Too vague, hard for the AI to match
description: Help with development work

# ✅ Clear about what it does and when to use it
description: Generate OpenAPI-compliant API documentation by analyzing code and extracting endpoint definitions. Use this when the user needs to write or update API docs.
```
Use Step-by-step Instructions to Reduce Ambiguity
The AI is much more reliable with explicit procedural steps than with loose descriptive prose. Break the workflow into concrete steps and explain what to do, how to do it, and how to validate it.
Define a Clear Output Format
If you need a specific output format (table, Markdown, JSON, etc.), define it clearly in the skill so the AI can follow it strictly:
```markdown
## Output Format

You must report review results using the following Markdown table:

| File | Issue Type | Severity | Recommendation |
|------|------------|----------|----------------|
| ... | ... | 🔴 Critical / 🟡 Warning / 🔵 Info | ... |
```
Plan for Edge Cases
It is more reliable to define how exceptions should be handled than to make the AI improvise at runtime:
```markdown
## Exception Handling

- If the target file cannot be found: report that the file does not exist, list the current directory contents, and ask the user how to proceed
- If the API returns 401: tell the user to verify the authentication token and do not retry automatically
- If sensitive information is found (keys, passwords): stop immediately, warn the user, and do not continue processing
```
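Rules like these are effectively a deterministic decision table, which is why spelling them out works better than improvisation. A minimal sketch of the 401 rule as code (the function name and return values are illustrative, not part of any agent API):

```python
def next_action(status, expected=200):
    """Map an HTTP status to the next action, per the exception rules above."""
    if status == 401:
        # Per the skill: ask the user to verify the token; never retry automatically.
        return "verify-token"
    if status != expected:
        return "report-mismatch"
    return "ok"
```

Encoding the policy this explicitly in the skill means the AI never has to decide on its own whether a retry is safe.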
Metadata Field Reference
| Field | Required | Description |
|---|---|---|
| `name` | No | Unique skill identifier; defaults to the directory name if omitted |
| `description` | Yes | Core trigger signal; must clearly explain purpose and usage scenarios |
| `version` | No | Semantic version for change tracking |
| `author` | No | Creator name |
| `tags` | No | Classification tags for organization |
Debugging Skills
If a skill does not trigger or behave as expected:
- **Check whether `description` is clear enough**
  Try saying "Use the `skill-name` skill" directly in the conversation to force activation.
- **Verify the file path**
  Confirm the file exists at `~/.aiagent/skills/<skill-name>/SKILL.md`.
- **Check YAML formatting**
  YAML front matter is sensitive to indentation and syntax; validate the format carefully.
- **Inspect backend logs**
  On macOS desktop, logs are written to `~/Library/Application Support/com.example.agentui/flutter_logs.txt`.
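The file-path and formatting checks above are easy to automate before involving the AI at all. A minimal sketch (the `check_skill` helper is hypothetical, not part of the agent; it only checks that the front matter is delimited and that the required `description` field appears):

```python
from pathlib import Path

def check_skill(skill_dir):
    """Run basic sanity checks on a skill directory's SKILL.md."""
    path = Path(skill_dir).expanduser() / "SKILL.md"
    if not path.is_file():
        return [f"file not found: {path}"]
    text = path.read_text(encoding="utf-8")
    if not text.startswith("---"):
        return ["YAML front matter must start with '---' on the first line"]
    # Split into: (before first ---), front matter, skill body.
    parts = text.split("---", 2)
    if len(parts) < 3:
        return ["YAML front matter is not closed with a second '---'"]
    if "description:" not in parts[1]:
        return ["required field 'description' is missing"]
    return []
```

Running this over `~/.aiagent/skills/*` catches missing files and malformed front matter without a restart cycle; a real YAML parser would catch indentation errors this string check cannot.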