Overview
A skill is a self-contained unit of functionality defined by a SKILL.md manifest file. The manifest declares metadata, input/output schemas, required permissions, and execution instructions. Once registered and approved, skills become available as tools that AI agents can invoke during workflow execution.
Skills use a markdown-first approach. You define the entire skill, including its contract and behavior, in a single SKILL.md file. No SDK installation or boilerplate code is required.

SKILL.md Manifest Format
Every skill starts with a SKILL.md file placed at the root of a directory or Git repository. The file consists of YAML front matter followed by structured markdown sections.
Full Example
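A representative SKILL.md is sketched below. The concrete names and values are illustrative; the section names and the Required column come from this page, while the other table columns are an assumed layout.

```markdown
---
name: web-summarizer
description: Fetches a web page and returns a short summary.
version: 1.0.0
author: Example Team
tags:
  - web
  - summarization
---

## Input Schema

| Parameter | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | The page to summarize |
| max_words | integer | No | Maximum summary length |

## Output Schema

| Field | Type | Description |
|---|---|---|
| summary | string | The generated summary |

## Permissions

- network
- llm:invoke

## Instructions

Fetch the page at `url`, extract the main content, and summarize it in at
most `max_words` words. Return only the summary.
```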
Front Matter Reference
The YAML front matter block defines the skill's identity and metadata.

| Field | Required | Description |
|---|---|---|
| name | Yes | Unique identifier (2-100 chars, alphanumeric + hyphens/underscores/dots) |
| description | Yes | Human-readable summary shown to agents and admins (max 1000 chars) |
| version | Yes | Semantic version (e.g., 1.0.0); multiple versions can coexist |
| author | No | Author or organization name |
| tags | No | List of categories for search and filtering |
Manifest Sections
Input Schema
Defines the parameters your skill accepts. Each row maps to a JSON Schema property. The Required column determines whether the parameter is mandatory.

Supported types: string, integer, number, boolean, array, object.

When an AI agent invokes the skill, these parameters are validated and passed as keyword arguments.

Output Schema

Describes the structure of the skill's return value. This helps agents understand and process the result. The output is serialized as JSON when returned to the calling workflow.
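The mapping from an Input Schema table to a JSON Schema object can be pictured with a small sketch. The conversion itself is internal to the platform; this only illustrates the idea, and the row values are the same illustrative ones used elsewhere on this page.

```python
def table_to_json_schema(rows):
    """rows: list of (name, type, required, description) tuples,
    one per row of the Input Schema table."""
    properties = {}
    required = []
    for name, typ, is_required, description in rows:
        properties[name] = {"type": typ, "description": description}
        if is_required:
            required.append(name)
    return {"type": "object", "properties": properties, "required": required}

# Two rows of an Input Schema table become one JSON Schema object.
schema = table_to_json_schema([
    ("url", "string", True, "The page to summarize"),
    ("max_words", "integer", False, "Maximum summary length"),
])
```

Parameters listed in `required` must be supplied by the agent; the rest are optional keyword arguments.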
Permissions
Declares the platform resources your skill requires. During execution, the worker validates these against the workspace's allowed permissions. If any permission is missing, execution is denied.

Common permissions:

- llm:invoke — Call LLM models
- network — Make outbound HTTP requests
- file_read — Read files from allowed paths
- file_write — Write files to allowed paths
- storage:read — Access workspace storage
- shell — Execute shell commands
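The worker-side check amounts to a set comparison. The following is a sketch under that assumption, not the platform's actual validation code:

```python
def check_permissions(declared: list[str], allowed: set[str]) -> None:
    """Raise if the skill declares any permission the workspace does not allow."""
    missing = sorted(set(declared) - allowed)
    if missing:
        raise PermissionError(f"Execution denied; missing permissions: {missing}")

# A skill declaring network and llm:invoke runs only if the workspace allows both.
check_permissions(["network", "llm:invoke"], {"network", "llm:invoke", "file_read"})
```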
Instructions
Free-form markdown that describes how the skill should behave. For prompt-based skills, this section acts as the system prompt. For code-based skills, it documents the execution logic found in the entry point file.
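For a prompt-based skill, this section effectively becomes the system prompt. An illustrative example (the wording and parameter names are hypothetical):

```markdown
## Instructions

You are a concise summarization assistant. Given the page content fetched
from `url`, produce a neutral summary of at most `max_words` words.
Return only the summary text, with no preamble.
```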
Registering Skills
Skills are loaded into a workspace through Skill Sources. A source points to a Git repository or a local directory and handles synchronization automatically.

- Git Repository
- Local Path
- Direct Upload
Register a Git repository containing one or more skills. The platform clones the repository, discovers all SKILL.md files, and indexes them. Skills from new sources enter PENDING status by default.

Authentication options:

| Auth Type | Description |
|---|---|
| none | Public repositories |
| token | Personal access token or deploy token |
| ssh_key | SSH key-based authentication |
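As a sketch, registering a Git source through the API might involve a request body along these lines. The endpoint and field names are hypothetical and not confirmed by this page; only the auth types and the trusted flag appear in the documentation.

```json
{
  "type": "git",
  "url": "https://github.com/example/skills.git",
  "auth_type": "token",
  "auth_token": "<personal-access-token>",
  "trusted": false
}
```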
Review and Approval
After registration, skills go through a review workflow before they become available to agents.

- PENDING: Skill is indexed but not yet available. Admins can inspect the manifest, permissions, and source code.
- APPROVED: Skill is saved to storage (skills/{workspace_id}/{name}/SKILL.md) and available for execution.
- REJECTED: Skill file is removed from storage and the skill cannot be used.

Trusted sources (where trusted: true) can be configured to auto-approve skills, skipping the manual review step. Use this only for repositories you fully control.

Approving via API
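A hypothetical approval request is sketched below; the endpoint shape is illustrative, so consult your deployment's API reference for the real route.

```http
POST /api/workspaces/{workspace_id}/skills/{name}/approve
Authorization: Bearer <admin-token>
```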
Tutorial: Create Your First Skill
Follow these steps to create, register, and use a skill from scratch.

Add an entry point (optional)
For code-based skills, add a skill.py alongside the manifest. The worker passes parameters as JSON via stdin and reads JSON from stdout:
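A minimal entry point matching that contract might look like the following. The summarization logic is a placeholder, and the real worker protocol may include additional fields.

```python
"""skill.py - illustrative code-based entry point.

The worker invokes it roughly as:
    echo '{"text": "...", "max_words": 50}' | python skill.py
"""
import json
import sys


def run(params: dict) -> dict:
    # Placeholder behavior: truncate the input text to max_words words.
    words = params.get("text", "").split()
    limit = params.get("max_words", 50)
    return {"summary": " ".join(words[:limit])}


def main() -> None:
    # Read JSON parameters from stdin, write the JSON result to stdout.
    params = json.load(sys.stdin)
    json.dump(run(params), sys.stdout)
```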
Register the skill source

Add the repository as a skill source in your workspace via the UI (Workspace Settings > Skills > Add Source) or the API. The platform clones the repo and discovers the SKILL.md.

How Skills Become Agent Tools
When a workflow runs an AI Agent node, the platform loads all approved skills for the workspace and converts them into LangChain StructuredTool instances. Each skill is exposed to the LLM as a callable tool named skill:{name}. When the agent invokes one of these tools, SkillWorkerService executes the skill in an isolated environment and returns the result to the agent.
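The conversion can be sketched in plain Python. The skill record shape below is assumed for illustration; internally the platform builds LangChain StructuredTool instances rather than plain dicts.

```python
def skills_to_tool_specs(skills):
    """skills: iterable of dicts with 'name', 'description', 'input_schema'."""
    specs = []
    for skill in skills:
        specs.append({
            "name": f"skill:{skill['name']}",     # naming convention from the docs
            "description": skill["description"],  # shown to the LLM
            "parameters": skill["input_schema"],  # JSON Schema for the arguments
        })
    return specs

specs = skills_to_tool_specs([
    {"name": "web-summarizer", "description": "Summarize a page",
     "input_schema": {"type": "object"}},
])
```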