Agent skills are reusable capabilities you ship as folders — a SKILL.md describing when and how to use them, plus optional scripts, references, and assets. The chat agent sees a short description of each skill in its system prompt, loads the full instructions on demand via a loadSkill tool, and invokes the bundled scripts via bash — all without you wiring anything up manually.
Built on the AI SDK cookbook pattern. Works with any provider (OpenAI, Anthropic, Gemini, etc.) — not tied to Anthropic’s server-side skills.
Why skills?
Compared to regular AI SDK tools:
- Tools are typed functions you pre-declare. Great when you know up-front exactly what capability the agent needs.
- Skills are folders the model discovers and reads on demand. Great when the capability is a bundle of instructions + helper scripts that would be awkward to encode as a single tool.
For example, a PDF-extraction skill can tell the model to run bash scripts/extract.py report.pdf using a bundled pdfplumber wrapper. A skill ships the script, the instructions, and any reference notes together.
Dashboard-editable SKILL.md is on the roadmap so a platform team can tighten a skill’s description or “when to use” text without a redeploy. Today, skills are SDK-only — defined in your task code and shipped with each deploy.
Trust model
Skills are developer-authored code, not end-user-supplied. The same developer who writes the chat.agent() writes the skill bundle. The trust boundary is identical to any tool.execute handler the developer writes — scripts run directly in the Trigger.dev worker container, no sandboxing required.
This makes skills different from the Claude Code / end-user model where arbitrary user-provided skills need isolation. Don’t accept skill paths from untrusted input.
Skill folder layout
A skill is a directory under your project (conventionally trigger/skills/{id}/):
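An example layout — only SKILL.md is required; the other file names here are illustrative:

```
trigger/skills/pdf-extract/
├── SKILL.md            # required: frontmatter + instructions
├── scripts/
│   └── extract.py      # helper invoked via the bash tool
└── references/
    └── notes.md        # read via the readFile tool
```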
SKILL.md
Frontmatter is a YAML subset — only name and description are required:
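A minimal sketch of a SKILL.md — the description is what lands in the system prompt, so make it answer "when should the model reach for this?" (the skill shown is illustrative):

```md
---
name: pdf-extract
description: Extract text and tables from PDF files. Use when the user references a PDF.
---

# PDF extraction

Run `bash scripts/extract.py <file.pdf>` and summarize the output for the user.
```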
The body below the frontmatter is returned by the loadSkill tool when the agent decides to use the skill. Write it like documentation for the agent.
Defining and using a skill
trigger/chat.ts
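A sketch of the wiring, using the identifiers described on this page (skills.define, skill.local(), chat.skills.set, chat.toStreamTextOptions); the import paths, model choice, and surrounding agent shape are assumptions, not the exact API:

```ts
import { skills } from "@trigger.dev/sdk"; // assumed import location
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

// Registers the folder so the CLI bundles it into the deploy image,
// and returns a SkillHandle for runtime use.
const timeUtils = skills.define({
  id: "time-utils",
  path: "./trigger/skills/time-utils",
});

// Inside your chat.agent() turn handler (shape assumed):
async function onTurn(chat: any) {
  // Read the bundled SKILL.md from disk → ResolvedSkill.
  const resolved = await timeUtils.local();
  chat.skills.set([resolved]);

  // Spreads the skill descriptions and the loadSkill/readFile/bash
  // tools into the streamText call automatically.
  return streamText({
    model: openai("gpt-4o"),
    ...chat.toStreamTextOptions(),
  });
}
```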
skills.define({ id, path }) does two things:
- Registers the skill with the Trigger.dev build system so the CLI automatically bundles the folder into your deploy image at /app/.trigger/skills/{id}/. No trigger.config.ts changes, no build extension — it just works.
- Returns a SkillHandle you use at runtime.
skill.local() reads the bundled SKILL.md from disk and returns a ResolvedSkill with the parsed frontmatter + body + on-disk path.
chat.skills.set([...]) stores the resolved skills for the current run. chat.toStreamTextOptions() spreads them into streamText automatically:
- The frontmatter description lands in the system prompt under “Available skills:”.
- Three tools are added: loadSkill, readFile, bash — scoped per skill.
What gets auto-injected
When you spread chat.toStreamTextOptions() with skills set, the AI SDK call receives three tools:
loadSkill({ name })
Returns the full SKILL.md body for the named skill. The model calls this first when it decides a skill is relevant, to load the full instructions.
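Conceptually, loadSkill just strips the frontmatter and returns the body. A self-contained sketch of that split (not the actual implementation, which also validates the name against registered skills and reads from the bundled folder):

```typescript
// Split a SKILL.md document into its YAML frontmatter and markdown body.
// Sketch only — illustrates what the loadSkill tool hands back to the model.
function splitSkillMd(content: string): { frontmatter: string; body: string } {
  const match = content.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!match) return { frontmatter: "", body: content };
  return {
    frontmatter: match[1],
    body: content.slice(match[0].length),
  };
}
```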
readFile({ skill, path })
Reads a file inside the skill’s bundled folder. Paths are relative to the skill’s root and are rejected if they attempt to escape via .. or absolute paths. Output is capped at 1 MB per call.
Use it for reference files and templates that the model should read literally.
bash({ skill, command })
Runs a bash command with cwd set to the skill’s root. Stdout and stderr are captured and returned (each capped at 64 KB per call, with tail truncation). The turn’s abort signal propagates — cancelling the run kills the child process.
Use it to invoke the skill’s bundled scripts.
If a bundled script needs a runtime — say extract.py, which needs Python — your deploy image needs it too: add it via your build config the same way you would for any other task dependency.
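The 64 KB cap with tail truncation described above can be sketched like this (character-based for simplicity; the marker text is illustrative):

```typescript
// Keep the tail of the output when it exceeds the cap — the end of a
// script's output is usually the most useful part. Sketch only.
function capOutput(output: string, limit: number = 64 * 1024): string {
  if (output.length <= limit) return output;
  return "[truncated]\n" + output.slice(output.length - limit);
}
```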
How discovery works in the model
The model sees a short preamble appended to your system prompt listing each skill’s name and one-line description under “Available skills:”. When a skill looks relevant, the model calls loadSkill({ name: "time-utils" }) to load the body, then follows the body’s instructions — typically by calling bash or readFile on the bundled scripts.
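The preamble looks roughly like this (the exact wording is an illustration, not the literal prompt text):

```
Available skills:
- time-utils: Timezone conversion and current-time helpers.

To use a skill, call loadSkill with its name, then follow the
instructions it returns.
```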
This is progressive disclosure: each skill costs ~100 tokens up front (its one-line description), and only the ones the model actually uses pay the full context cost.
Mixing skills with custom tools
If you also define your own AI SDK tools, pass them through chat.toStreamTextOptions() so the merge is explicit:
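A sketch of that explicit merge — the custom tool uses the AI SDK’s tool() helper, but the exact parameter shape toStreamTextOptions accepts here is an assumption:

```ts
import { streamText, tool } from "ai";
import { z } from "zod";

const myTools = {
  lookupOrder: tool({
    description: "Look up an order by ID",
    inputSchema: z.object({ orderId: z.string() }),
    execute: async ({ orderId }) => fetchOrder(orderId), // fetchOrder is yours
  }),
};

const result = streamText({
  model,
  // Passing your tools through the helper merges them with the skill
  // tools (loadSkill, readFile, bash) in one explicit place.
  ...chat.toStreamTextOptions({ tools: myTools }),
});
```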
(Avoid reusing the reserved tool names loadSkill / readFile / bash to keep things predictable.)
Bundling
Bundling is built into the CLI — there’s no extension to import. When you run trigger deploy or trigger dev:
- esbuild bundles your task code as usual.
- The CLI forks the indexer locally against the bundled output and collects every skills.define({ path }) registration.
- Each skill’s folder is copied to {outputPath}/.trigger/skills/{id}/ via a recursive copy.
- The existing Dockerfile COPY picks up .trigger/skills/ along with the rest of the bundle — no Dockerfile changes.
With trigger dev, the same layout appears in the local dev output directory, so skill.local() works the same way.
Path scoping rules
- skill.path always resolves to ${process.cwd()}/.trigger/skills/{id}/ at runtime. Don’t hardcode paths elsewhere.
- readFile rejects .. segments and absolute paths — the tool only exposes files inside the skill’s own directory.
- bash runs with cwd set to the skill’s root. Inside the script, relative paths resolve against the skill directory.
- Cross-skill access isn’t provided — each skill is isolated by design. If two skills need to share data, either duplicate the shared file or consolidate the skills.
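The readFile scoping rule amounts to a resolve-then-check. A self-contained sketch of that check (an assumed implementation, not Trigger.dev’s source):

```typescript
import * as path from "node:path";

// Reject absolute paths, then verify the normalized result still lives
// under the skill's root — this is what defeats `..` traversal.
function resolveSkillFile(skillRoot: string, relPath: string): string {
  if (path.isAbsolute(relPath)) {
    throw new Error("absolute paths are rejected");
  }
  const resolved = path.resolve(skillRoot, relPath);
  if (!resolved.startsWith(skillRoot + path.sep)) {
    throw new Error("path escapes the skill directory");
  }
  return resolved;
}
```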
Current limitations
- skill.resolve() (backend-managed overrides) is not available yet — use .local() for now. Dashboard-editable SKILL.md is on the roadmap.
- No per-skill metrics in the dashboard yet.
- No Anthropic /v1/skills integration — use the portable path today; we’re tracking the Anthropic optimization separately.
Full example
See references/ai-chat/src/trigger/skills/time-utils/ in the Trigger.dev monorepo for a working skill that bundles two bash scripts and a reference cheat-sheet, wired into a chat.agent that answers timezone questions.
Related
- AI SDK cookbook — Agent Skills — the userland pattern we build on
- Anthropic Agent Skills — Anthropic’s codified version (server-side, optional future integration)

