Overview
The `@trigger.dev/sdk` package provides a custom `ChatTransport` for the Vercel AI SDK’s `useChat` hook. This lets you run chat completions as durable Trigger.dev tasks instead of fragile API routes — with automatic retries, observability, and realtime streaming built in.
How it works:
- The frontend sends messages via `useChat` through `TriggerChatTransport`
- The first message triggers a Trigger.dev task; subsequent messages resume the same run via input streams
- The task streams `UIMessageChunk` events back via Trigger.dev’s realtime streams
- The AI SDK’s `useChat` processes the stream natively — text, tool calls, reasoning, etc.
- Between turns, the run stays warm briefly, then suspends (freeing compute) until the next message
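The trigger-then-resume behavior in the list above can be sketched in plain TypeScript. This is an illustration of the decision the transport makes internally, not the real `TriggerChatTransport` API; the names `TurnRouter`, `routeMessage`, and `runId` are hypothetical:

```typescript
// Illustrative sketch of the routing decision the transport makes.
// The real TriggerChatTransport handles this internally.
type ChatMessage = { role: "user" | "assistant"; content: string };

interface TurnRouter {
  runId?: string; // set once the first message has triggered a run
}

function routeMessage(
  router: TurnRouter,
  message: ChatMessage
): { action: "trigger" | "resume"; runId: string } {
  if (!router.runId) {
    // First message: trigger a brand-new Trigger.dev run.
    router.runId = "run_" + Math.random().toString(36).slice(2);
    return { action: "trigger", runId: router.runId };
  }
  // Subsequent messages: resume the existing run via its input stream.
  return { action: "resume", runId: router.runId };
}
```

Because every message after the first resumes the same run, all turns share one `runId` — which is why the whole conversation shows up as a single run in the dashboard.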
Requires `@trigger.dev/sdk` version 4.4.0 or later and the `ai` package v5.0.0 or later.

How multi-turn works
One run, many turns
The entire conversation lives in a single Trigger.dev run. After each AI response, the run waits for the next message via input streams. The frontend transport handles this automatically — it triggers a new run for the first message, and sends subsequent messages to the existing run. This means your conversation has full observability in the Trigger.dev dashboard: every turn is a span inside the same run.

Warm and suspended states
After each turn, the run goes through two phases of waiting:
- Warm phase (default 30s) — The run stays active and responds instantly to the next message. Uses compute.
- Suspended phase (default up to 1h) — The run suspends, freeing compute. It wakes when the next message arrives. There’s a brief delay as the run resumes.
You are not charged for compute during the suspended phase. Only the warm phase uses compute resources.
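As a rough mental model, the two phases can be expressed as a function of how long the run has been idle. The defaults (30s warm, up to 1h suspended) come from this doc, but the function itself is only an illustration, not an SDK API, and whether the 1h cap is measured from suspension or from the end of the turn is an assumption here:

```typescript
// Classify a run's waiting state by idle time between turns.
// 0-30s: warm (uses compute); then suspended (no compute billed);
// past the suspended window, the run is no longer waiting.
type WaitPhase = "warm" | "suspended" | "timed-out";

const WARM_SECONDS = 30;          // default warm phase
const SUSPEND_SECONDS = 60 * 60;  // default maximum suspended phase (1h)

function phaseAfterIdle(idleSeconds: number): WaitPhase {
  if (idleSeconds <= WARM_SECONDS) return "warm";
  if (idleSeconds <= WARM_SECONDS + SUSPEND_SECONDS) return "suspended";
  return "timed-out";
}
```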
What the backend accumulates
The backend automatically accumulates the full conversation history across turns. After the first turn, the frontend transport only sends the new user message — not the entire history. This is handled transparently by the transport and task. The accumulated messages are available in:
- `run()` as `messages` (`ModelMessage[]`) — for passing to `streamText`
- `onTurnStart()` as `uiMessages` (`UIMessage[]`) — for persisting before streaming
- `onTurnComplete()` as `uiMessages` (`UIMessage[]`) — for persisting after the response
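To make the accumulation concrete, here is a minimal sketch of the bookkeeping performed between turns. The types are simplified stand-ins (the real `ModelMessage` from the `ai` package carries richer content parts), and `accumulate` is an illustration, not an SDK function:

```typescript
// Simplified message shape for illustration only.
type ModelMessage = { role: "user" | "assistant"; content: string };

// Server-side history for one run. Each turn, the frontend sends only
// `incoming` (the new user message); the backend appends it, runs the
// model, and appends the assistant's reply.
function accumulate(
  history: ModelMessage[],
  incoming: ModelMessage,
  assistantReply: ModelMessage
): ModelMessage[] {
  return [...history, incoming, assistantReply];
}

// Turn 1: the first user message triggers the run.
let history = accumulate(
  [],
  { role: "user", content: "Hi" },
  { role: "assistant", content: "Hello!" }
);
// Turn 2: the transport sends only the new message, not the full history.
history = accumulate(
  history,
  { role: "user", content: "Tell me more" },
  { role: "assistant", content: "Sure..." }
);
```

By the second turn, `history` holds all four messages, which is what `run()` receives as `messages` for passing to `streamText`.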
Three approaches
There are three ways to build the backend, from most opinionated to most flexible:

| Approach | Use when | What you get |
|---|---|---|
| `chat.task()` | Most apps | Auto-piping, lifecycle hooks, message accumulation, stop handling |
| `chat.createSession()` | Need a loop but not hooks | Async iterator with per-turn helpers, message accumulation, stop handling |
| Raw task + primitives | Full control | Manual control of every step — use `chat.messages`, `chat.createStopSignal()`, etc. |
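For orientation, a `chat.task()` definition might look roughly like the sketch below. The hook names (`run`, `onTurnStart`, `onTurnComplete`) and the `messages`/`uiMessages` parameters come from this doc, but the option shape, types, and `streamText` wiring here are assumptions — check the API Reference and Backend pages for the real signatures:

```typescript
// Stand-in types so this sketch is self-contained; in a real project
// these come from the `ai` package and `@trigger.dev/sdk`.
type ModelMessage = { role: "user" | "assistant" | "system"; content: string };
type UIMessage = { id: string; role: string; parts: unknown[] };

// Hypothetical option shape for chat.task(), inferred from the hooks
// this doc names. Not the actual SDK type.
interface ChatTaskOptions {
  id: string;
  // Called each turn with the accumulated history.
  run: (payload: { messages: ModelMessage[] }) => Promise<void>;
  // Persist the conversation before the response streams.
  onTurnStart?: (payload: { uiMessages: UIMessage[] }) => Promise<void>;
  // Persist the conversation after the response completes.
  onTurnComplete?: (payload: { uiMessages: UIMessage[] }) => Promise<void>;
}

const options: ChatTaskOptions = {
  id: "support-chat",
  run: async ({ messages }) => {
    // In a real task you would call streamText({ model, messages }) and
    // let chat.task() auto-pipe the UIMessageChunk stream back.
    console.log(`turn with ${messages.length} accumulated messages`);
  },
  onTurnStart: async ({ uiMessages }) => {
    void uiMessages; // e.g. save to your database before streaming
  },
  onTurnComplete: async ({ uiMessages }) => {
    void uiMessages; // e.g. save the final messages after the response
  },
};
```

`chat.createSession()` covers the same ground without the hooks, exposing each turn through an async iterator instead; the raw-primitives approach drops both abstractions.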
Related
- Quick Start — Get a working chat in 3 steps
- Backend — Backend approaches in detail
- Frontend — Transport setup, sessions, client data
- Features — Per-run data, deferred work, streaming, subtasks
- API Reference — Complete reference tables

