Documentation Index
Fetch the complete documentation index at: https://trigger-docs-tri-7532-ai-sdk-chat-transport-and-chat-task-s.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Transport setup
Use the useTriggerChatTransport hook from @trigger.dev/sdk/chat/react to create a memoized transport instance, then pass it to useChat:
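A minimal sketch of the setup, assuming a Next.js client component; mintChatToken and startChatSession are hypothetical server-action names wrapping the server-side calls described below:

```typescript
"use client";

import { useChat } from "@ai-sdk/react";
import { useTriggerChatTransport } from "@trigger.dev/sdk/chat/react";
// Hypothetical server actions: mintChatToken wraps auth.createPublicToken,
// startChatSession wraps chat.createStartSessionAction(taskId).
import { mintChatToken, startChatSession } from "@/app/chat/actions";

export function Chat({ chatId }: { chatId: string }) {
  // The hook memoizes the transport across renders.
  const transport = useTriggerChatTransport({
    accessToken: mintChatToken, // invoked on 401/403 to refresh the session PAT
    startSession: startChatSession, // invoked when the session must be created
  });

  const { messages, sendMessage } = useChat({ transport });
  // ...render messages and an input that calls sendMessage
  return null;
}
```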
accessToken is a pure PAT mint — the transport invokes it on a 401/403 to refresh the session-scoped token. Your server wraps auth.createPublicToken({ scopes: { sessions: chatId } }). startSession wraps chat.createStartSessionAction(taskId) and is called when the transport needs to create the session (transport.preload(chatId), or lazily on the first sendMessage for a chatId without a cached PAT). Your server controls authorization here, alongside any DB writes paired with session creation.
Typed messages (chat.withUIMessage)
If your chat agent is defined with chat.withUIMessage<YourUIMessage>() (custom data-* parts, typed tools, etc.), pass the same message type through useChat so messages and message.parts are narrowed on the client:
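A sketch of the client side, assuming YourUIMessage is exported as a type from the module that defines the agent (the import path is illustrative):

```typescript
"use client";

import { useChat } from "@ai-sdk/react";
import { useTriggerChatTransport } from "@trigger.dev/sdk/chat/react";
// Hypothetical import path for the message type defined with
// chat.withUIMessage<YourUIMessage>() on the backend.
import type { YourUIMessage } from "@/trigger/chat-agent";

export function TypedChat() {
  const transport = useTriggerChatTransport({
    /* accessToken, startSession as in the setup section */
  });

  // The generic narrows messages and message.parts, including custom
  // data-* parts and typed tool parts.
  const { messages } = useChat<YourUIMessage>({ transport });

  return messages.map((message) =>
    message.parts.map((part) => {
      // part is narrowed to YourUIMessage's part union here
      return null;
    })
  );
}
```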
See the backend documentation for YourUIMessage, default stream options, and backend examples.
Calling a fetch endpoint instead of a server action
If you want to mint tokens via a REST endpoint instead of a Next.js server action, the same callbacks accept any async function. Import AccessTokenParams and StartSessionParams from @trigger.dev/sdk/chat to type your fetch handler.
On the server, the endpoints still wrap auth.createPublicToken({ scopes: { sessions: chatId } }) for refresh and chat.createStartSessionAction(taskId) for create.
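A sketch of fetch-based callbacks, assuming hypothetical /api/chat/... routes on your backend (the request and response shapes shown are illustrative):

```typescript
import type {
  AccessTokenParams,
  StartSessionParams,
} from "@trigger.dev/sdk/chat";

// Hypothetical endpoint that returns { token } after calling
// auth.createPublicToken({ scopes: { sessions: chatId } }) server-side.
async function accessToken(params: AccessTokenParams): Promise<string> {
  const res = await fetch("/api/chat/token", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(params),
  });
  if (!res.ok) throw new Error("Failed to mint chat token");
  const { token } = await res.json();
  return token;
}

// Hypothetical endpoint that runs chat.createStartSessionAction(taskId)
// server-side and returns its result.
async function startSession(params: StartSessionParams) {
  const res = await fetch("/api/chat/session", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(params),
  });
  if (!res.ok) throw new Error("Failed to start chat session");
  return res.json();
}
```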
Session management
Every chat is backed by a durable Session — the row that owns the chat’s runs, persists across run lifecycles, and orchestrates handoffs. The transport manages the session for you; what you persist on your side is a small piece of state per chat that lets a fresh tab resume without a round-trip to create a new session.

What the transport persists per chat
| Field | Type | Notes |
|---|---|---|
| publicAccessToken | string | Session-scoped JWT (read:sessions:{chatId} + write:sessions:{chatId}). Refreshed automatically on 401/403 via accessToken. |
| lastEventId | string \| undefined | Last SSE event received on .out. Used to resume mid-stream after a reload. |
| isStreaming | boolean \| undefined | Optional. The transport sets it internally, but you don’t have to persist it — the server decides “nothing is streaming” via the session’s X-Session-Settled signal on reconnect. If you do persist it, the transport keeps the fast-path short-circuit. If you drop it, reconnects open the SSE and close fast on settled sessions. |
Session cleanup (frontend)
Since session creation and updates are handled server-side, the frontend only needs to handle session deletion when a run ends:

Restoring on page load
On page load, fetch both the messages and the session state from your database, then pass them to useChat and the transport. Pass resume: true to useChat when there’s an existing conversation — this tells the AI SDK to reconnect to the stream via the transport.
Because the underlying Session row outlives individual runs, a chat you were in yesterday resumes against the same chat — even if the original run has long since exited. The transport hydrates from the persisted state and uses lastEventId to resubscribe; if the client tries to send a new message and no run is alive, the server triggers a fresh continuation run on the same session before the message is appended.
app/chat/[chatId]/ChatPage.tsx
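A sketch of the page component, assuming the messages and persisted session state are loaded in a server component and passed down as props (the prop names and the state-hydration option name are assumptions):

```typescript
"use client";

import { useChat } from "@ai-sdk/react";
import { useTriggerChatTransport } from "@trigger.dev/sdk/chat/react";
import type { UIMessage } from "ai";

type Props = {
  chatId: string;
  initialMessages: UIMessage[]; // loaded from your DB on the server
  sessionState?: unknown; // persisted publicAccessToken / lastEventId, if any
};

export function ChatPage({ chatId, initialMessages, sessionState }: Props) {
  const transport = useTriggerChatTransport({
    // accessToken / startSession as in the setup section; hydrate the
    // transport from persisted state (option name assumed here).
    initialState: sessionState,
  });

  const { messages, sendMessage } = useChat({
    transport,
    messages: initialMessages,
    // Only resume when there is an existing conversation to reconnect to.
    resume: initialMessages.length > 0,
  });
  // ...render messages and an input that calls sendMessage
  return null;
}
```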
resume: true causes useChat to call reconnectToStream on the transport when the component mounts. The transport uses the session’s lastEventId to skip past already-seen stream events, so the frontend only receives new data. Only enable resume when there are existing messages — for brand new chats, there’s nothing to reconnect to.

After resuming, useChat’s built-in stop() won’t send the stop signal to the backend because the AI SDK doesn’t pass its abort signal through reconnectToStream. Use transport.stopGeneration(chatId) for reliable stop behavior after resume — see Stop generation for the recommended pattern.

Client data and metadata
Transport-level client data
Set default client data on the transport that’s included in every request. When the task uses clientDataSchema, this is type-checked to match:
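A sketch, wrapped in a small custom hook; the clientData fields are illustrative:

```typescript
"use client";

import { useTriggerChatTransport } from "@trigger.dev/sdk/chat/react";

// Illustrative hook: builds a transport with default client data derived
// from the current user (a value your app already has in React state).
export function useChatTransport(user: { id: string; locale: string }) {
  return useTriggerChatTransport({
    // ...accessToken, startSession as in the setup section,
    // Included with every request; type-checked against the task's
    // clientDataSchema when one is defined.
    clientData: {
      userId: user.id,
      locale: user.locale,
    },
  });
}
```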
The transport threads clientData through three places automatically: into startSession’s params.clientData for the first run’s payload.metadata, into per-turn metadata on every .in/append chunk, and it live-updates if the option value changes between renders (so React-driven values like the current user work without reconstructing the transport).
Per-message metadata
Pass metadata with individual messages via sendMessage. Per-message values are merged with transport-level client data (per-message wins on conflicts):
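A short sketch inside a component that already has a transport from the setup section (the metadata fields are illustrative):

```typescript
import { useChat } from "@ai-sdk/react";

// `transport` comes from useTriggerChatTransport, as in the setup section.
const { sendMessage } = useChat({ transport });

// Per-message metadata is merged with transport-level client data;
// on conflicts, the per-message value wins.
await sendMessage(
  { text: "Summarize this document" },
  { metadata: { source: "quick-reply" } } // illustrative metadata
);
```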
Typed client data with clientDataSchema
Instead of manually parsing clientData with Zod in every hook, pass a clientDataSchema to chat.agent. The schema validates the data once per turn, and clientData is typed in all hooks and run:
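A sketch of the backend side, assuming a Zod schema (the schema fields and agent id are illustrative; other agent options are omitted):

```typescript
import { z } from "zod";
import { chat } from "@trigger.dev/sdk";

// Illustrative schema — validated once per turn by the agent.
const clientDataSchema = z.object({
  userId: z.string(),
  locale: z.string().optional(),
});

export const chatAgent = chat.agent({
  id: "my-chat-agent", // illustrative; other required options omitted
  clientDataSchema,
  // hooks and run now see clientData as { userId: string; locale?: string }
});
```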
The schema also types the clientData option on the frontend transport:
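A sketch of the matching frontend option (fields illustrative, mirroring the backend schema):

```typescript
import { useTriggerChatTransport } from "@trigger.dev/sdk/chat/react";

// With clientDataSchema on the agent, this object is type-checked:
// a missing userId or a wrongly-typed locale is a compile error.
const transport = useTriggerChatTransport({
  // ...accessToken, startSession,
  clientData: {
    userId: user.id,
    locale: user.locale,
  },
});
```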
Stop generation
Use transport.stopGeneration(chatId) to stop the current generation. This sends a stop signal to the running task via input streams, aborting the current streamText call while keeping the run alive for the next message.
stopGeneration works in all scenarios — including after a page refresh when the stream was reconnected via resume. Call it alongside useChat’s stop() to also update the frontend state:
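A sketch of a stop handler inside the chat component (the handler name is illustrative):

```typescript
import { useChat } from "@ai-sdk/react";

// `transport` and `chatId` come from the surrounding component.
const { stop: aiStop } = useChat({ transport });

async function handleStop() {
  // Backend: send the stop signal via input streams and close the SSE
  // connection. Works after a resume, too.
  await transport.stopGeneration(chatId);
  // Frontend: set status back to "ready" and fire onFinish.
  aiStop();
}
```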
transport.stopGeneration(chatId) handles the backend stop signal and closes the SSE connection, while aiStop() (from useChat) updates the frontend status to "ready" and fires the onFinish callback.

Tool approvals
The AI SDK supports tools that require human approval before execution. To use this with chat.agent, define a tool with needsApproval: true on the backend, then handle the approval UI and configure sendAutomaticallyWhen on the frontend.
Backend: define an approval-required tool
Call streamText in your run function as usual. When the model calls the tool, chat.agent streams a tool-approval-request chunk. The turn completes and the run waits for the next message.
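A sketch of an approval-required tool, using the AI SDK's tool helper (the tool name and schema are illustrative):

```typescript
import { tool } from "ai";
import { z } from "zod";

// Illustrative tool: execution is deferred until the user approves.
export const deleteRecord = tool({
  description: "Delete a record by id",
  inputSchema: z.object({ id: z.string() }),
  needsApproval: true,
  execute: async ({ id }) => {
    // Only runs after approval; until then the turn completes with the
    // tool part in the approval-requested state.
    return { deleted: id };
  },
});
```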
Frontend: approval UI
Import lastAssistantMessageIsCompleteWithApprovalResponses from the AI SDK and pass it to sendAutomaticallyWhen. This tells useChat to automatically re-send messages once all approvals have been responded to.
Destructure addToolApprovalResponse from useChat and wire it to your approval buttons:
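A sketch of the wiring (the exact call shape of addToolApprovalResponse is an assumption):

```typescript
import { useChat } from "@ai-sdk/react";
import { lastAssistantMessageIsCompleteWithApprovalResponses } from "ai";

// `transport` comes from useTriggerChatTransport, as in the setup section.
const { messages, addToolApprovalResponse } = useChat({
  transport,
  // Re-send automatically once every approval request has a response.
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithApprovalResponses,
});

// In your render, for a tool part in the "approval-requested" state
// (call shape assumed):
// <button onClick={() =>
//   addToolApprovalResponse({ id: part.approvalId, approved: true })
// }>
//   Approve
// </button>
```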
How it works
- The model calls a tool with needsApproval: true — the turn completes with the tool in the approval-requested state
- The frontend shows Approve/Deny buttons
- The user clicks Approve — addToolApprovalResponse updates the tool part to approval-responded
- sendAutomaticallyWhen returns true — useChat re-sends the updated assistant message
- The transport sends the message via input streams — the backend matches it by ID and replaces the existing assistant message in the accumulator
- streamText sees the approved tool, executes it, and streams the result
Message IDs are kept in sync between frontend and backend automatically. The backend always includes a generateMessageId function when streaming responses, ensuring the start chunk carries a messageId that the frontend uses. This makes the ID-based matching reliable for tool approval updates.

Sending actions
Send custom actions (undo, rollback, edit) to the agent via transport.sendAction(). Actions wake the agent, fire the onAction hook, and trigger a normal response — the LLM responds to the modified state.
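A sketch of a call (the argument shape and action payload fields are assumptions):

```typescript
// `transport`, `chatId`, and `lastMessageId` come from the surrounding
// component. The payload is validated against actionSchema on the backend.
const stream = await transport.sendAction(chatId, {
  type: "undo", // illustrative action
  messageId: lastMessageId,
});
// stream is a ReadableStream<UIMessageChunk> — the agent's response to the
// modified state. With useChat, the transport consumes it for you.
```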
Actions are validated against actionSchema on the backend — invalid actions are rejected. See Actions for the backend setup.
sendAction returns a ReadableStream&lt;UIMessageChunk&gt; — the agent’s response to the modified state. If you’re using useChat, the response is handled automatically through the transport.

AgentChat has the same method:
Multi-tab coordination
When the same chat is open in multiple browser tabs, multiTab: true prevents duplicate messages and syncs conversation state across tabs. Only one tab can send at a time; other tabs enter read-only mode with real-time message updates.
How it works
- When a tab sends a message, the transport “claims” the chatId via BroadcastChannel
- Other tabs detect the claim and enter read-only mode (isReadOnly: true)
- The active tab broadcasts its messages so read-only tabs see updates in real time
- When the turn completes, the claim is released; any tab can send next
- Heartbeats detect crashed tabs (a 10s timeout clears stale claims)
What useMultiTabChat does
- Returns { isReadOnly } for disabling the input UI
- Broadcasts messages from the active tab to other tabs
- Calls setMessages on read-only tabs when messages arrive from the active tab
- Tracks read-only state via the transport’s BroadcastChannel coordinator
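A sketch of the wiring (the exact options useMultiTabChat accepts are an assumption, based on the behaviors listed above):

```typescript
"use client";

import { useChat } from "@ai-sdk/react";
import {
  useTriggerChatTransport,
  useMultiTabChat,
} from "@trigger.dev/sdk/chat/react";

export function MultiTabChat({ chatId }: { chatId: string }) {
  const transport = useTriggerChatTransport({
    multiTab: true,
    // ...accessToken, startSession as in the setup section
  });

  const { messages, setMessages, sendMessage } = useChat({ transport });

  // Coordinates tabs over BroadcastChannel; options assumed here.
  const { isReadOnly } = useMultiTabChat({
    chatId,
    transport,
    messages,
    setMessages,
  });

  // Disable the input while another tab holds the claim.
  return <input disabled={isReadOnly} />;
}
```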
Multi-tab coordination is same-browser only (BroadcastChannel is a browser API). It gracefully degrades to a no-op in Node.js, SSR, or browsers without BroadcastChannel support. Cross-device coordination requires server-side involvement.

Self-hosting
If you’re self-hosting Trigger.dev, pass the baseURL option:
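A sketch (the URL is illustrative):

```typescript
import { useTriggerChatTransport } from "@trigger.dev/sdk/chat/react";

const transport = useTriggerChatTransport({
  // Point the transport at your self-hosted Trigger.dev instance.
  baseURL: "https://trigger.example.com",
  // ...accessToken, startSession as in the setup section
});
```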

