Documentation Index
Fetch the complete documentation index at: https://trigger-docs-tri-7532-ai-sdk-chat-transport-and-chat-task-s.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
`chat.agent` now runs on Sessions
Every chat is backed by a durable Session row that outlives any single run. `externalId` = your chat ID, `type` = `"chat.agent"`. Under the hood:

- Output chunks stream on `session.out` (was a run-scoped `streams.writer("chat")`).
- Client messages and stops land on `session.in` as a `ChatInputChunk` tagged union (was two run-scoped `streams.input` definitions).
- Wire endpoints moved from `/realtime/v1/streams/{runId}/...` to `/realtime/v1/sessions/{sessionId}/...`. See the rewritten Client Protocol.
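The tagged union on `session.in` can be sketched roughly as below. The real type ships with the SDK; the variant names and payload shapes here are assumptions for illustration only.

```typescript
// Hypothetical sketch of the ChatInputChunk tagged union; the actual
// definition lives in @trigger.dev/sdk and may differ.
type ChatInputChunk =
  | { type: "message"; text: string } // a client message for the agent
  | { type: "stop" };                 // a stop signal for the current turn

// One consumer of session.in replaces the two run-scoped streams.input
// definitions: discriminate on `type`.
function handleChunk(chunk: ChatInputChunk): string {
  switch (chunk.type) {
    case "message":
      return `message: ${chunk.text}`;
    case "stop":
      return "stop requested";
  }
}
```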
The public API (`chat.agent()`, `TriggerChatTransport`, `AgentChat`, `chat.stream` / `chat.messages` / `chat.stopSignal`) is unchanged — existing apps keep working. What's new:

- Cross-run resume is free. A chat you were in yesterday resumes against the same `sessionId` today, even if the original run long since exited. No more lost conversations when a run idle-times-out.
- Inbox views via `sessions.list({ type: "chat.agent" })`. Enumerate every chat in your environment, filter by tag or status.
- `TriggerChatTaskResult.sessionId` + `ChatTaskRunPayload.sessionId` — you can reach into the raw session via `sessions.open(payload.sessionId)` for advanced cases (writing from a sub-agent, custom transport).
- The dashboard Agent tab resolves via `sessionId` and stays in sync with the live stream across runs.
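A rough sketch of the inbox idea, under the surface described above. The `sessions` object below is a local stand-in so the snippet runs offline; in a real app it comes from the SDK, and the `SessionRow` fields beyond `externalId` and `type` are illustrative.

```typescript
// Stand-in session rows; the real sessions.list() call hits the Trigger.dev API.
type SessionRow = { sessionId: string; externalId: string; type: string };

const sessions = {
  async list(filter: { type: string }): Promise<SessionRow[]> {
    const rows: SessionRow[] = [
      { sessionId: "sess_1", externalId: "chat-42", type: "chat.agent" },
      { sessionId: "sess_2", externalId: "job-7", type: "batch" },
    ];
    return rows.filter((r) => r.type === filter.type);
  },
};

// Inbox view: every chat.agent session, keyed by your own chat ID
// (externalId), mapped to the durable sessionId you can resume against.
async function chatInbox(): Promise<Record<string, string>> {
  const rows = await sessions.list({ type: "chat.agent" });
  return Object.fromEntries(rows.map((r) => [r.externalId, r.sessionId]));
}
```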
`X-Session-Settled` — fast reconnect on idle chats
When a client reconnects to `session.out` and the tail record is a `trigger:turn-complete` marker (the agent finished a turn and is idle-waiting or has exited), the server sets `X-Session-Settled: true` and uses `wait=0` on the underlying S2 read. The SSE drains any remaining records, then closes in ~1s instead of long-polling for 60s.

Practical impact: `TriggerChatTransport.reconnectToStream` no longer needs a client-side `isStreaming` flag. You can drop the field from your persisted `ChatSession` state entirely — the server decides. Existing callers that still persist `isStreaming` are unaffected; `reconnectToStream` keeps the fast-path short-circuit when it's `false`.

Migration
See the Sessions Upgrade Guide for the full step-by-step — the auth callback split, the persisted `ChatSession` shape, server-side helpers (`chat.createStartSessionAction`, `chat.createAccessToken` for renewal), and the `clientData` validation pivot.

Docs
- Rewritten Client Protocol — full wire format for the new `/realtime/v1/sessions/{sessionId}/...` endpoints, JWT scopes, S2 direct-write credentials, and `Last-Event-ID` resume.
- Database persistence pattern — new `chatId`-keyed `ChatSession` shape (no more `runId`) and a warning on the `onTurnComplete` race that requires a single atomic write of `messages` + `lastEventId`.
- Reference — added `chat.createStartSessionAction`, `chat.createAccessToken`, `ChatInputChunk`, `TriggerChatTaskResult.sessionId`, `ChatTaskRunPayload.sessionId`. The old run-scoped stream-ID constants are gone.
- Refreshed Backend, Frontend, Server Chat, Quick start, Overview, Features, Types, Error handling, and Testing for the session-based wiring.
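The atomic-write warning in the persistence pattern can be illustrated with a minimal sketch. The `Map` stands in for your database (in SQL this is a single `UPDATE` that sets both columns together); the `ChatSession` field names follow the `chatId`-keyed shape described above, and everything else is illustrative.

```typescript
// chatId-keyed ChatSession row, per the persistence pattern docs.
type ChatSession = {
  chatId: string;
  messages: unknown[];
  lastEventId: string | null;
};

const db = new Map<string, ChatSession>();

// onTurnComplete handler: persist messages and lastEventId in ONE write.
// Two separate writes can race a reconnect that replays from lastEventId,
// leaving the cursor pointing past messages that were never saved.
function persistTurn(chatId: string, messages: unknown[], lastEventId: string): void {
  db.set(chatId, { chatId, messages, lastEventId }); // single atomic write
}
```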
Agent Skills
Ship reusable capabilities as folders — a `SKILL.md` plus optional scripts, references, and assets. The agent sees short descriptions in its system prompt, loads full instructions on demand via `loadSkill`, and invokes bundled scripts via bash — no manual wiring. `skills.define({ id, path })` registers the skill; the CLI bundles the folder into the deploy image. `chat.skills.set([...])` activates skills for the run; `chat.toStreamTextOptions()` auto-injects the preamble and tools. See the new Agent Skills guide.

`chat.endRun()` — exit on your own terms
A new imperative API to exit the loop after the current turn completes, without the upgrade-required signal that `chat.requestUpgrade()` sends. Use it for one-shot agents, budget-exhausted exits, or goal-reached completions. `onBeforeTurnComplete` / `onTurnComplete` fire, the turn-complete chunk is written, and the run exits instead of suspending. Callable from `run()`, `chat.defer()`, `onBeforeTurnComplete`, or `onTurnComplete`. See Ending a run on your terms.

`finishReason` on turn-complete events
`TurnCompleteEvent` and `BeforeTurnCompleteEvent` now include the AI SDK's `finishReason` (`"stop" | "tool-calls" | "length" | "content-filter" | "error" | "other"`) — a clean signal for distinguishing a normal turn end from one paused on a pending tool call (HITL flows like `ask_user`), as well as from `chat.pipe()` flows or aborted streams. See the new Human-in-the-loop pattern.

User-initiated compaction pattern
The Compaction guide now covers how to wire a "Summarize conversation" button or `/compact` slash command via `actionSchema` + `onAction`. The agent summarizes on demand, rewrites history with `chat.history.set()`, and short-circuits the LLM call for action turns. This needed a small type fix: `ChatTaskPayload.trigger` now correctly includes `"action"`, so `run()` handlers can short-circuit with `if (trigger === "action") return` when an action doesn't need a response.

Human-in-the-loop pattern page
The new Human-in-the-loop page walks through `ask_user`-style mid-turn user input end-to-end: defining a no-execute tool, rendering pending tool calls on the frontend with `addToolOutput` + `sendAutomaticallyWhen`, detecting paused turns via `finishReason`, and two persistence strategies (overwrite vs. checkpoint nodes).

Offline test harness for `chat.agent`
`@trigger.dev/sdk/ai/test` now ships `mockChatAgent`, a harness that drives a `chat.agent` definition through real turns without network or task runtime. Send messages, actions, and stop signals; inspect emitted chunks; assert on hook order.

Dependency injection via locals
`setupLocals` pre-seeds locals before `run()` starts — the pattern for injecting database clients, service stubs, and other server-side dependencies that shouldn't leak through untrusted `clientData`. The run reads them back via `locals.get(dbKey)`, which falls through to the production client in real runs. See Testing.

`runInMockTaskContext` — lower-level test harness
`@trigger.dev/core/v3/test` now exports `runInMockTaskContext` for unit-testing any task code offline (not just chat agents). It installs in-memory managers for locals, lifecycle hooks, runtime, input streams, and realtime streams, plus a mock `TaskContext`. Drivers let you push data into input streams and inspect chunks written to output streams.

Multi-tab coordination
Prevent duplicate messages when the same chat is open in multiple browser tabs. Enable with `multiTab: true` on the transport; tabs coordinate over a `BroadcastChannel`. When the active tab's turn completes, any tab can send next. Crashed tabs are detected via heartbeat timeout (10s). See Multi-tab coordination and `useMultiTabChat`.

Error stack truncation
Large error stacks no longer OOM the worker process. Stacks are capped at 50 frames (top 5 + bottom 45), individual lines at 1024 chars, and messages at 1000 chars. Applied in `parseError`, `sanitizeError`, and OTel span recording.

Fix: `resume: true` hangs on completed turns
When refreshing a page after a turn completed, `useChat` with `resume: true` would hang indefinitely — `reconnectToStream` opened an SSE connection that never received data. We added `isStreaming` to session state: the transport sets it to `true` when streaming starts and `false` on `trigger:turn-complete`. `reconnectToStream` returns `null` immediately when `isStreaming` is `false`, so `resume: initialMessages.length > 0` is now safe to pass unconditionally. The flag flows through `onSessionChange` and is restored from sessions — no extra persistence code needed.

`hydrateMessages` — backend-controlled message history
Load message history from your database on every turn instead of trusting the frontend accumulator. The hook replaces the built-in linear accumulation entirely — the backend is the source of truth.

`chat.history` — imperative message mutations
Modify the accumulated message history from any hook or `run()`.

Custom actions — `actionSchema` + `onAction`
Send typed actions (undo, rollback, edit) from the frontend via `transport.sendAction()`. Actions wake the agent, fire `onAction`, then trigger a normal `run()` turn.

Client: `transport.sendAction(chatId, { type: "undo" })`
Server: `agentChat.sendAction({ type: "undo" })`

See Actions and Sending actions.

`chat.response` — persistent data parts
Added `chat.response.write()` for writing data parts that both stream to the frontend and persist in `onTurnComplete`'s `responseMessage` and `uiMessages`. `data-*` chunks written via the lifecycle-hook `writer.write()` now automatically persist to the response message, matching the AI SDK's default semantics. Add `transient: true` for ephemeral chunks (progress indicators, status updates). See Custom data parts.

Tool approvals
Added support for AI SDK tool approvals (`needsApproval: true`). When the model calls a tool that needs approval, the turn completes and the frontend shows approve/deny buttons. After approval, the updated assistant message is sent back and matched by ID in the accumulator. Use `sendAutomaticallyWhen` and `addToolApprovalResponse` from `useChat`. See Tool approvals.

`transport.stopGeneration(chatId)`
Added a `stopGeneration` method to `TriggerChatTransport` for a reliable stop after page refresh / stream reconnect. Works regardless of whether the AI SDK passes `abortSignal` through `reconnectToStream`.

`generateMessageId` support
`generateMessageId` can now be passed via `uiMessageStreamOptions` to control response message ID generation (e.g. UUID v7). The backend automatically passes `originalMessages` to `toUIMessageStream` so message IDs are consistent between frontend and backend.

Bug fixes
- `onTurnComplete` not called: fixed a `turnCompleteResult?.lastEventId` TypeError that silently skipped `onTurnComplete` when `writeTurnCompleteChunk` returned undefined in dev.
- Stop during streaming: added a 2s timeout on `onFinishPromise` so `onBeforeTurnComplete` and `onTurnComplete` fire even when the AI SDK's `onFinish` doesn't fire after abort.
- `toStreamTextOptions` without `chat.prompt.set()`: `prepareStep` injection (compaction, steering, background context) now works even when the user passes `system` directly to `streamText` instead of using `chat.prompt.set()`.
- Background queue vs tool approvals: background context injection is now skipped when the last accumulated message is a `tool` message, preventing it from breaking `streamText`'s `collectToolApprovals`.

