chat.task()
The highest-level approach. Handles message accumulation, stop signals, turn lifecycle, and auto-piping automatically.

Simple: return a StreamTextResult

Return the streamText result from run() and it's automatically piped to the frontend:
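A minimal sketch of this pattern. The `chat` import path and the exact shape of the `run()` payload are assumptions here, so check the SDK reference for the precise signatures:

```typescript
// trigger/agent-chat.ts — illustrative sketch, not the canonical setup.
import { chat } from "@trigger.dev/sdk";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export const agentChat = chat.task({
  id: "agent-chat",
  run: async ({ messages }) => {
    // Returning the StreamTextResult pipes it to the frontend automatically.
    return streamText({
      model: openai("gpt-4o"),
      messages,
    });
  },
});
```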
Using chat.pipe() for complex flows
For complex agent flows where streamText is called deep inside your code, use chat.pipe(). It works from anywhere inside a task — even nested function calls.
trigger/agent-chat.ts
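A sketch of piping from a nested helper. The helper name is hypothetical, and the exact stream type chat.pipe() accepts may differ from what is shown:

```typescript
import { chat } from "@trigger.dev/sdk"; // import path illustrative
import { streamText, type ModelMessage } from "ai";
import { openai } from "@ai-sdk/openai";

// A hypothetical helper buried several calls deep inside the task's run().
async function summarizeStep(messages: ModelMessage[]) {
  const result = streamText({ model: openai("gpt-4o"), messages });
  // chat.pipe() resolves the current chat context from anywhere inside the task.
  await chat.pipe(result.toUIMessageStream());
}
```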
Lifecycle hooks
onPreload
Fires when a preloaded run starts — before any messages arrive. Use it to eagerly initialize state (DB records, user context) while the user is still typing. Preloaded runs are triggered by calling transport.preload(chatId) on the frontend. See Preload for details.
| Field | Type | Description |
|---|---|---|
| chatId | string | Chat session ID |
| runId | string | The Trigger.dev run ID |
| chatAccessToken | string | Scoped access token for this run |
| clientData | Typed by clientDataSchema | Custom data from the frontend |
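A sketch of eager initialization in onPreload. The `db` helper is a hypothetical data layer, not part of the SDK:

```typescript
export const agentChat = chat.task({
  id: "agent-chat",
  onPreload: async ({ chatId, clientData }) => {
    // Runs while the user is still typing: warm up per-chat state early.
    await db.chats.ensureExists(chatId); // `db` is a hypothetical data layer
  },
  run: async ({ messages }) => {
    return streamText({ model: openai("gpt-4o"), messages });
  },
});
```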
onChatStart
Fires once on the first turn (turn 0) before run() executes. Use it to create a chat record in your database.
The continuation field tells you whether this is a brand new chat or a continuation of an existing one (where the previous run timed out or was cancelled). The preloaded field tells you whether onPreload already ran.
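A sketch of using both flags in onChatStart. The `db` helper is hypothetical:

```typescript
export const agentChat = chat.task({
  id: "agent-chat",
  onChatStart: async ({ chatId, continuation, preloaded }) => {
    // Only create the record for a brand new chat: a continuation run reuses
    // the existing one, and onPreload may have already created it.
    if (!continuation && !preloaded) {
      await db.chats.create({ id: chatId }); // hypothetical data layer
    }
  },
  run: async ({ messages }) => streamText({ model: openai("gpt-4o"), messages }),
});
```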
onTurnStart
Fires at the start of every turn, after message accumulation and onChatStart (turn 0), but before run() executes. Use it to persist messages before streaming begins — so a mid-stream page refresh still shows the user's message.
| Field | Type | Description |
|---|---|---|
| chatId | string | Chat session ID |
| messages | ModelMessage[] | Full accumulated conversation (model format) |
| uiMessages | UIMessage[] | Full accumulated conversation (UI format) |
| turn | number | Turn number (0-indexed) |
| runId | string | The Trigger.dev run ID |
| chatAccessToken | string | Scoped access token for this run |
| continuation | boolean | Whether this run is continuing an existing chat |
| preloaded | boolean | Whether this run was preloaded |
| clientData | Typed by clientDataSchema | Custom data from the frontend |
onTurnComplete
Fires after each turn completes — after the response is captured, before waiting for the next message. This is the primary hook for persisting the assistant's response.

| Field | Type | Description |
|---|---|---|
| chatId | string | Chat session ID |
| messages | ModelMessage[] | Full accumulated conversation (model format) |
| uiMessages | UIMessage[] | Full accumulated conversation (UI format) |
| newMessages | ModelMessage[] | Only this turn's messages (model format) |
| newUIMessages | UIMessage[] | Only this turn's messages (UI format) |
| responseMessage | UIMessage \| undefined | The assistant's response for this turn |
| turn | number | Turn number (0-indexed) |
| runId | string | The Trigger.dev run ID |
| chatAccessToken | string | Scoped access token for this run |
| lastEventId | string \| undefined | Stream position for resumption. Persist this with the session. |
| stopped | boolean | Whether the user stopped generation during this turn |
| continuation | boolean | Whether this run is continuing an existing chat |
| rawResponseMessage | UIMessage \| undefined | The raw assistant response before abort cleanup (same as responseMessage when not stopped) |
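A sketch of the hook persisting both the conversation and the session state. The `db` helper is hypothetical; the hook fields come from the table above:

```typescript
onTurnComplete: async ({ chatId, uiMessages, runId, chatAccessToken, lastEventId }) => {
  // Persist the full conversation (UI format) for rendering after a refresh.
  await db.chats.update(chatId, { messages: uiMessages }); // hypothetical db
  // Persist session state so the transport can reconnect and resume the stream.
  await db.sessions.upsert(chatId, {
    runId,
    publicAccessToken: chatAccessToken,
    lastEventId,
  });
},
```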
Stop generation
How stop works
Calling stop() from useChat sends a stop signal to the running task via input streams. The task's streamText call aborts (if you passed signal or stopSignal), but the run stays alive and waits for the next message. The partial response is captured and accumulated normally.
Abort signals
The run function receives three abort signals:
| Signal | Fires when | Use for |
|---|---|---|
| signal | Stop or cancel | Pass to streamText — handles both cases. Use this in most cases. |
| stopSignal | Stop only (per-turn, reset each turn) | Custom logic that should only run on user stop, not cancellation |
| cancelSignal | Run cancel, expire, or maxDuration exceeded | Cleanup that should only happen on full cancellation |
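A sketch of wiring the signals inside run(). The way the signals arrive in the run arguments is an assumption; `abortSignal` is the AI SDK's streamText option:

```typescript
run: async ({ messages, signal, cancelSignal }) => {
  // `signal` fires on both user stop and run cancellation — the right
  // default to hand to streamText.
  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    abortSignal: signal,
  });

  cancelSignal.addEventListener("abort", () => {
    // Fires only on full run cancellation, never on a per-turn user stop.
  });

  return result;
},
```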
Detecting stop in callbacks
The onTurnComplete event includes a stopped boolean that indicates whether the user stopped generation during that turn:
You can also check chat.isStopped() at any point. This is useful inside streamText's onFinish callback, where the AI SDK's isAborted flag can be unreliable (e.g. when using createUIMessageStream + writer.merge()):
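A sketch of checking the chat-level flag from onFinish (surrounding run() context assumed):

```typescript
const result = streamText({
  model: openai("gpt-4o"),
  messages,
  abortSignal: signal,
  onFinish: () => {
    // isAborted can be unreliable here, so ask the chat runtime directly.
    if (chat.isStopped()) {
      console.log("user stopped generation this turn");
    }
  },
});
```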
Cleaning up aborted messages
When stop happens mid-stream, the captured response message can contain parts in an incomplete state — tool calls stuck in partial-call, reasoning blocks still marked as streaming, etc. These can cause UI issues like permanent spinners.
chat.task automatically cleans up the responseMessage when stop is detected before passing it to onTurnComplete. If you use chat.pipe() manually and capture response messages yourself, use chat.cleanupAbortedParts():
It removes tool calls stuck in the partial-call state and marks any streaming text or reasoning parts as done.
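A sketch of the manual capture path. The return shape of chat.pipeAndCapture() is assumed to be the captured UIMessage, and `db` is a hypothetical data layer:

```typescript
// Capture the response ourselves, then normalize it if the user stopped.
const responseMessage = await chat.pipeAndCapture(result);
const cleaned = chat.isStopped()
  ? chat.cleanupAbortedParts(responseMessage) // fix partial-call / streaming parts
  : responseMessage;
await db.chats.appendMessage(chatId, cleaned); // hypothetical db
```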
Stop signal delivery is best-effort. There is a small race window where the model may finish before the stop signal arrives, in which case the turn completes normally with stopped: false. This is expected and does not require special handling.

Persistence
What needs to be persisted
To build a chat app that survives page refreshes, you need to persist two things:

- Messages — The conversation history. Persisted server-side in the task via onTurnStart and onTurnComplete.
- Sessions — The transport's connection state (runId, publicAccessToken, lastEventId). Persisted server-side via onTurnStart and onTurnComplete.
Sessions let the transport reconnect to an existing run after a page refresh. Without them, every page load would start a new run — losing the conversation context that was accumulated in the previous run.
Full persistence example
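A sketch combining both persistence hooks. The `db` helper is a hypothetical data layer; the hook fields are from the tables above:

```typescript
export const agentChat = chat.task({
  id: "agent-chat",
  onTurnStart: async ({ chatId, uiMessages }) => {
    // Persist before streaming: a mid-stream refresh still shows the user's message.
    await db.chats.update(chatId, { messages: uiMessages });
  },
  onTurnComplete: async ({ chatId, uiMessages, runId, chatAccessToken, lastEventId }) => {
    // Persist the assistant's response plus the session state for reconnection.
    await db.chats.update(chatId, { messages: uiMessages });
    await db.sessions.upsert(chatId, {
      runId,
      publicAccessToken: chatAccessToken,
      lastEventId,
    });
  },
  run: async ({ messages }) => streamText({ model: openai("gpt-4o"), messages }),
});
```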
Runtime configuration
chat.setTurnTimeout()
Override how long the run stays suspended waiting for the next message. Call from inside run():
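A sketch of both runtime overrides from inside run() (argument formats are assumptions; the warm-timeout setter is covered in the next section):

```typescript
run: async ({ messages }) => {
  // Wait up to 2 hours for the next message before the run ends.
  chat.setTurnTimeout("2h");
  // Stay warm for 60s after each turn for snappier follow-ups.
  chat.setWarmTimeoutInSeconds(60);
  return streamText({ model: openai("gpt-4o"), messages });
},
```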
chat.setWarmTimeoutInSeconds()
Override how long the run stays warm (active, using compute) after each turn:

Longer warm timeout means faster responses but more compute usage. Set to 0 to suspend immediately after each turn (minimum latency cost, slight delay on next message).

Stream options
Control how streamText results are converted to the frontend stream via toUIMessageStream(). Set static defaults on the task, or override per-turn.
Error handling with onError
When streamText encounters an error mid-stream (rate limits, API failures, network errors), the onError callback converts it to a string that's sent to the frontend as an { type: "error", errorText } chunk. The AI SDK's useChat receives this via its onError callback.
By default, the raw error message is sent to the frontend. Use onError to sanitize errors and avoid leaking internal details:
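A sketch of a sanitizing handler. Exactly where onError is configured on the task is an assumption here (it is shown at the top level for brevity); check the SDK reference:

```typescript
export const agentChat = chat.task({
  id: "agent-chat",
  onError: (error) => {
    // Log the real error server-side…
    console.error("stream error", error);
    // …but send only a sanitized string to the client.
    return "Something went wrong. Please try again.";
  },
  run: async ({ messages }) => streamText({ model: openai("gpt-4o"), messages }),
});
```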
onError is also called for tool execution errors, so a single handler covers both LLM errors and tool failures.
On the frontend, handle the error in useChat:
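A minimal frontend sketch using the AI SDK's useChat hook (transport wiring omitted):

```typescript
import { useChat } from "@ai-sdk/react";

// Inside a React component:
const { messages, error } = useChat({
  // Receives the sanitized errorText sent by the task's onError handler.
  onError: (err) => {
    console.error("chat error:", err.message);
  },
});
```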
Reasoning and sources
Control which AI SDK features are forwarded to the frontend:

Per-turn overrides
Override per-turn with chat.setUIMessageStreamOptions() — per-turn values merge with the static config (per-turn wins on conflicts). The override is cleared automatically after each turn.
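A sketch of a per-turn override inside run(). The option names shown (sendReasoning, sendSources) are standard AI SDK toUIMessageStream options; the model choice is illustrative:

```typescript
run: async ({ messages }) => {
  // Per-turn override: merged over the task's static stream options,
  // and cleared automatically when this turn ends.
  chat.setUIMessageStreamOptions({ sendReasoning: true, sendSources: true });
  return streamText({ model: openai("o3-mini"), messages });
},
```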
chat.setUIMessageStreamOptions() works across all abstraction levels — chat.task(), chat.createSession() / turn.complete(), and chat.pipeAndCapture().
See ChatUIMessageStreamOptions for the full reference.
onFinish is managed internally for response capture and cannot be overridden here. Use streamText's onFinish callback for custom finish handling, or use raw task mode for full control over toUIMessageStream().

Manual mode with task()
If you need full control over task options, use the standard task() with ChatTaskPayload and chat.pipe():
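A single-turn sketch of manual mode. The import paths and the ChatTaskPayload shape are assumptions:

```typescript
import { task, chat, type ChatTaskPayload } from "@trigger.dev/sdk"; // paths illustrative
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export const manualChat = task({
  id: "manual-chat",
  run: async (payload: ChatTaskPayload) => {
    // You own everything here — this sketch handles a single turn only.
    const result = streamText({ model: openai("gpt-4o"), messages: payload.messages });
    await chat.pipe(result.toUIMessageStream());
  },
});
```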
chat.createSession()
A middle ground between chat.task() and raw primitives. You get an async iterator that yields ChatTurn objects — each turn handles stop signals, message accumulation, and turn-complete signaling automatically. You control initialization, model/tool selection, persistence, and any custom per-turn logic.
Use chat.createSession() inside a standard task():
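A sketch of the session loop. The run signature, the createSession option shapes, and the `db` helper are assumptions; the turn fields come from the ChatTurn table below:

```typescript
export const sessionChat = task({
  id: "session-chat",
  run: async (payload: ChatTaskPayload, { signal }) => {
    const session = chat.createSession({ signal, warmTimeoutInSeconds: 30 });

    for await (const turn of session) {
      const result = streamText({
        model: openai("gpt-4o"),
        messages: turn.messages,  // full accumulated history
        abortSignal: turn.signal, // fresh stop+cancel signal each turn
      });
      // Pipes, captures, accumulates, and signals turn-complete.
      await turn.complete(result);
      await db.chats.update(turn.chatId, { messages: turn.uiMessages }); // hypothetical db
    }
  },
});
```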
ChatSessionOptions
| Option | Type | Default | Description |
|---|---|---|---|
| signal | AbortSignal | required | Run-level cancel signal (from task context) |
| warmTimeoutInSeconds | number | 30 | Seconds to stay warm between turns |
| timeout | string | "1h" | Duration string for suspend timeout |
| maxTurns | number | 100 | Max turns before ending |
ChatTurn
Each turn yielded by the iterator provides:

| Field | Type | Description |
|---|---|---|
| number | number | Turn number (0-indexed) |
| chatId | string | Chat session ID |
| trigger | string | What triggered this turn |
| clientData | unknown | Client data from the transport |
| messages | ModelMessage[] | Full accumulated model messages — pass to streamText |
| uiMessages | UIMessage[] | Full accumulated UI messages — use for persistence |
| signal | AbortSignal | Combined stop+cancel signal (fresh each turn) |
| stopped | boolean | Whether the user stopped generation this turn |
| continuation | boolean | Whether this is a continuation run |
| Method | Description |
|---|---|
| turn.complete(source) | Pipe stream, capture response, accumulate, and signal turn-complete |
| turn.done() | Just signal turn-complete (when you've piped manually) |
| turn.addResponse(response) | Add a response to the accumulator manually |
turn.complete() vs manual control
turn.complete(result) is the easy path — it handles piping, capturing the response, accumulating messages, cleaning up aborted parts, and writing the turn-complete chunk.
For more control, you can do each step manually:
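A sketch of the same steps turn.complete() performs, done by hand. The return shape of chat.pipeAndCapture() is an assumption:

```typescript
const result = streamText({
  model: openai("gpt-4o"),
  messages: turn.messages,
  abortSignal: turn.signal,
});
const response = await chat.pipeAndCapture(result); // pipe to frontend + capture
const cleaned = turn.stopped
  ? chat.cleanupAbortedParts(response) // tidy partial tool calls etc.
  : response;
turn.addResponse(cleaned); // accumulate into the conversation
await turn.done();         // signal turn-complete to the frontend
```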
Raw task with primitives
For full control, use a standard task() with the composable primitives from the chat namespace. You manage everything: the turn loop, stop signals, message accumulation, and turn-complete signaling.
Raw task mode also lets you call .toUIMessageStream() yourself with any options — including onFinish and originalMessages. This is the right choice when you need complete control over the stream conversion beyond what chat.setUIMessageStreamOptions() provides.
Primitives
| Primitive | Description |
|---|---|
| chat.messages | Input stream for incoming messages — use .waitWithWarmup() to wait for the next turn |
| chat.createStopSignal() | Create a managed stop signal wired to the stop input stream |
| chat.pipeAndCapture(result) | Pipe a StreamTextResult to the chat stream and capture the response |
| chat.writeTurnComplete() | Signal the frontend that the current turn is complete |
| chat.MessageAccumulator | Accumulates conversation messages across turns |
| chat.pipe(stream) | Pipe a stream to the frontend (no response capture) |
| chat.cleanupAbortedParts(msg) | Clean up incomplete parts from a stopped response |
Example
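A sketch of a raw turn loop built from the primitives table above. Every call shape here (waitWithWarmup's return value, the accumulator methods, the run signature) is illustrative, so consult the SDK reference before relying on it:

```typescript
export const rawChat = task({
  id: "raw-chat",
  run: async (payload: ChatTaskPayload, { signal }) => {
    const accumulator = new chat.MessageAccumulator();
    const stop = chat.createStopSignal();

    while (true) {
      // Suspends after the warm window; resolves with the next turn's messages.
      const incoming = await chat.messages.waitWithWarmup();
      if (!incoming) break;
      accumulator.add(incoming);

      const result = streamText({
        model: openai("gpt-4o"),
        messages: accumulator.messages,
        abortSignal: signal,
      });
      const response = await chat.pipeAndCapture(result);
      accumulator.addResponse(response);
      await chat.writeTurnComplete();
    }
  },
});
```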
MessageAccumulator
The MessageAccumulator handles the transport protocol automatically:
- Turn 0: replaces messages (full history from frontend)
- Subsequent turns: appends new messages (frontend only sends the new user message)
- Regenerate: replaces messages (full history minus last assistant message)

