
## ChatAgentOptions

Options for `chat.agent()`.

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `id` | `string` | required | Task identifier |
| `run` | `(payload: ChatTaskRunPayload) => Promise<unknown>` | required | Handler for each turn |
| `clientDataSchema` | `TaskSchema` | | Schema for validating and typing `clientData` |
| `onPreload` | `(event: PreloadEvent) => Promise<void> \| void` | | Fires on preloaded runs before the first message |
| `onChatStart` | `(event: ChatStartEvent) => Promise<void> \| void` | | Fires on turn 0 before `run()` |
| `onValidateMessages` | `(event: ValidateMessagesEvent) => UIMessage[] \| Promise<UIMessage[]>` | | Validate/transform UIMessages before model conversion. See onValidateMessages |
| `hydrateMessages` | `(event: HydrateMessagesEvent) => UIMessage[] \| Promise<UIMessage[]>` | | Load message history from your backend, replacing the linear accumulator. See hydrateMessages |
| `actionSchema` | `TaskSchema` | | Schema for validating custom actions sent via `transport.sendAction()`. See Actions |
| `onAction` | `(event: ActionEvent) => Promise<void> \| void` | | Handle custom actions. Fires after hydration, before `onTurnStart`. See Actions |
| `onTurnStart` | `(event: TurnStartEvent) => Promise<void> \| void` | | Fires every turn before `run()` |
| `onBeforeTurnComplete` | `(event: BeforeTurnCompleteEvent) => Promise<void> \| void` | | Fires after the response but before the stream closes. Includes `writer`. |
| `onTurnComplete` | `(event: TurnCompleteEvent) => Promise<void> \| void` | | Fires after each turn completes (stream closed) |
| `onCompacted` | `(event: CompactedEvent) => Promise<void> \| void` | | Fires when compaction occurs. Includes `writer`. See Compaction |
| `compaction` | `ChatAgentCompactionOptions` | | Automatic context compaction. See Compaction |
| `pendingMessages` | `PendingMessagesOptions` | | Mid-execution message injection. See Pending Messages |
| `prepareMessages` | `(event: PrepareMessagesEvent) => ModelMessage[]` | | Transform model messages before use (cache breaks, context injection, etc.) |
| `maxTurns` | `number` | `100` | Max conversational turns per run |
| `turnTimeout` | `string` | `"1h"` | How long to wait for the next message |
| `idleTimeoutInSeconds` | `number` | `30` | Seconds to stay idle before suspending |
| `chatAccessTokenTTL` | `string` | `"1h"` | How long the scoped access token remains valid |
| `preloadIdleTimeoutInSeconds` | `number` | Same as `idleTimeoutInSeconds` | Idle timeout after `onPreload` fires |
| `preloadTimeout` | `string` | Same as `turnTimeout` | Suspend timeout for preloaded runs |
| `uiMessageStreamOptions` | `ChatUIMessageStreamOptions` | | Default options for `toUIMessageStream()`. Per-turn override via `chat.setUIMessageStreamOptions()` |
| `onChatSuspend` | `(event: ChatSuspendEvent) => Promise<void> \| void` | | Fires right before the run suspends. See onChatSuspend |
| `onChatResume` | `(event: ChatResumeEvent) => Promise<void> \| void` | | Fires right after the run resumes from suspension |
| `exitAfterPreloadIdle` | `boolean` | `false` | Exit the run after the preload idle timeout instead of suspending. See exitAfterPreloadIdle |

Plus all standard TaskOptions: `retry`, `queue`, `machine`, `maxDuration`, `onWait`, `onResume`, `onComplete`, and other lifecycle hooks. Those hooks use the same parameter shapes as on a normal `task()` (including `ctx`).
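Tying these options together, a minimal agent might look like the sketch below. The task id `"support-chat"`, the model, the schema, and the exact streaming wiring via `chat.pipe` are illustrative assumptions, not normative:

```ts
import { chat } from "@trigger.dev/sdk/ai";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// Sketch only: id, model, and schema are assumptions for illustration.
export const supportChat = chat.agent({
  id: "support-chat",
  clientDataSchema: z.object({ userId: z.string() }),
  maxTurns: 50,
  turnTimeout: "30m",
  onTurnStart: async ({ turn, chatId }) => {
    console.log(`turn ${turn} of chat ${chatId}`);
  },
  run: async ({ messages, signal }) => {
    // messages are model-ready (see ChatTaskRunPayload below)
    const result = streamText({
      model: openai("gpt-4o-mini"),
      messages,
      abortSignal: signal, // combined stop + cancel signal
    });
    // Assumption: stream back to the frontend via chat.pipe
    // (see the chat namespace table for the available piping helpers)
    await chat.pipe(result.toUIMessageStream());
  },
});
```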

## Task context (ctx)

All `chat.agent` lifecycle events (`onPreload`, `onChatStart`, `onTurnStart`, `onBeforeTurnComplete`, `onTurnComplete`, `onCompacted`) and the object passed to `run` include `ctx`: the same `TaskRunContext` shape as the `ctx` in `task({ run: (payload, { ctx }) => ... })`.

`onValidateMessages` does not include `ctx` — it fires before message accumulation and is designed for pure validation/transformation of incoming messages.

Use `ctx` for run metadata, tags, parent links, or any API that needs the full run record. The chat-specific string `runId` on events is always `ctx.run.id`; both are provided for convenience.

```ts
import type { TaskRunContext } from "@trigger.dev/sdk";
// Equivalent alias (same type):
import type { Context } from "@trigger.dev/sdk";
```

Prefer `import type { TaskRunContext } from "@trigger.dev/sdk"` in application code. Do not depend on `@trigger.dev/core` directly.

## ChatTaskRunPayload

The payload passed to the `run` function.

| Field | Type | Description |
| --- | --- | --- |
| `ctx` | `TaskRunContext` | Full task run context — same as a task run’s `{ ctx }` |
| `messages` | `ModelMessage[]` | Model-ready messages — pass directly to `streamText` |
| `chatId` | `string` | Your conversation ID (the session’s `externalId`) |
| `sessionId` | `string` | Friendly ID of the backing Session (`session_*`). Use with `sessions.open()` for advanced cases. Always set — every `chat.agent` run is bound to a Session. |
| `trigger` | `"submit-message" \| "regenerate-message"` | What triggered the request |
| `messageId` | `string \| undefined` | Message ID (for regenerate) |
| `clientData` | Typed by `clientDataSchema` | Custom data from the frontend (typed when a schema is provided) |
| `continuation` | `boolean` | Whether this run is continuing an existing chat (the previous run ended) |
| `signal` | `AbortSignal` | Combined stop + cancel signal |
| `cancelSignal` | `AbortSignal` | Cancel-only signal |
| `stopSignal` | `AbortSignal` | Stop-only signal (per-turn) |
| `previousTurnUsage` | `LanguageModelUsage \| undefined` | Token usage from the previous turn (`undefined` on turn 0) |
| `totalUsage` | `LanguageModelUsage` | Cumulative token usage across completed turns so far |

## PreloadEvent

Passed to the `onPreload` callback.

| Field | Type | Description |
| --- | --- | --- |
| `ctx` | `TaskRunContext` | Full task run context — see Task context |
| `chatId` | `string` | Chat session ID |
| `runId` | `string` | The Trigger.dev run ID |
| `chatAccessToken` | `string` | Scoped access token for this run |
| `clientData` | Typed by `clientDataSchema` | Custom data from the frontend |
| `writer` | `ChatWriter` | Stream writer for custom chunks. Lazy — no overhead if unused. |

## ChatStartEvent

Passed to the `onChatStart` callback.

| Field | Type | Description |
| --- | --- | --- |
| `ctx` | `TaskRunContext` | Full task run context — see Task context |
| `chatId` | `string` | Chat session ID |
| `messages` | `ModelMessage[]` | Initial model-ready messages |
| `clientData` | Typed by `clientDataSchema` | Custom data from the frontend |
| `runId` | `string` | The Trigger.dev run ID |
| `chatAccessToken` | `string` | Scoped access token for this run |
| `continuation` | `boolean` | Whether this run is continuing an existing chat |
| `previousRunId` | `string \| undefined` | Previous run ID (only when `continuation` is true) |
| `preloaded` | `boolean` | Whether this run was preloaded before the first message |
| `writer` | `ChatWriter` | Stream writer for custom chunks. Lazy — no overhead if unused. |

## ValidateMessagesEvent

Passed to the `onValidateMessages` callback.

| Field | Type | Description |
| --- | --- | --- |
| `messages` | `UIMessage[]` | Incoming UI messages for this turn |
| `chatId` | `string` | Chat session ID |
| `turn` | `number` | Turn number (0-indexed) |
| `trigger` | `"submit-message" \| "regenerate-message" \| "preload" \| "close"` | The trigger type for this turn |
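Since this hook is designed for pure transformation, its logic is easy to keep in a standalone helper. The sketch below enforces a per-part length cap and drops empty messages; the 1,000-character limit and the minimal message shape are assumptions for illustration:

```ts
// Minimal stand-in for the UIMessage shape used here (assumption).
type TextPart = { type: "text"; text: string };
type MinimalUIMessage = { id: string; role: string; parts: TextPart[] };

// Drop empty text-only messages and truncate oversized text parts.
function sanitizeMessages(
  messages: MinimalUIMessage[],
  maxChars = 1000
): MinimalUIMessage[] {
  return messages
    .map((m) => ({
      ...m,
      parts: m.parts.map((p) =>
        p.type === "text" && p.text.length > maxChars
          ? { ...p, text: p.text.slice(0, maxChars) }
          : p
      ),
    }))
    .filter((m) => m.parts.some((p) => p.type !== "text" || p.text.length > 0));
}
```

Wired up, this would be `onValidateMessages: ({ messages }) => sanitizeMessages(messages)`.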

## HydrateMessagesEvent

Passed to the `hydrateMessages` callback. See hydrateMessages.

| Field | Type | Description |
| --- | --- | --- |
| `chatId` | `string` | Chat session ID |
| `turn` | `number` | Turn number (0-indexed) |
| `trigger` | `"submit-message" \| "regenerate-message" \| "action"` | The trigger type for this turn |
| `incomingMessages` | `UIMessage[]` | Validated wire messages from the frontend (empty for actions) |
| `previousMessages` | `UIMessage[]` | Accumulated UI messages before this turn (`[]` on turn 0) |
| `clientData` | Typed by `clientDataSchema` | Custom data from the frontend |
| `continuation` | `boolean` | Whether this run is continuing an existing chat |
| `previousRunId` | `string \| undefined` | Previous run ID (only when `continuation` is true) |
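A typical use is rebuilding history from your own store on the first turn of a run. In this sketch, `loadChatHistory` is a hypothetical data-access helper (not part of the SDK), and the turn-0 check is an assumption about when a reload is worthwhile:

```ts
import { chat } from "@trigger.dev/sdk/ai";
import type { UIMessage } from "ai";

// Hypothetical helper: fetch persisted history for a conversation.
declare function loadChatHistory(chatId: string): Promise<UIMessage[]>;

const agent = chat.agent({
  id: "my-chat",
  hydrateMessages: async ({ chatId, turn, previousMessages, incomingMessages }) => {
    // On the run's first turn, rebuild history from the backend store;
    // afterwards the in-memory accumulator is already authoritative.
    const history = turn === 0 ? await loadChatHistory(chatId) : previousMessages;
    return [...history, ...incomingMessages];
  },
  run: async () => {
    /* ... */
  },
});
```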

## ActionEvent

Passed to the `onAction` callback. See Actions.

| Field | Type | Description |
| --- | --- | --- |
| `action` | Typed by `actionSchema` | The parsed and validated action payload |
| `chatId` | `string` | Chat session ID |
| `turn` | `number` | Turn number (0-indexed) |
| `clientData` | Typed by `clientDataSchema` | Custom data from the frontend |
| `uiMessages` | `UIMessage[]` | Accumulated UI messages (after hydration, if set) |
| `messages` | `ModelMessage[]` | Accumulated model messages (after hydration, if set) |
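As a sketch of the schema-plus-handler pairing, the agent below accepts an `"undo"` action and trims history before the turn runs. The action shape and undo semantics are assumptions, and `chat.history.remove` is used per the chat namespace table:

```ts
import { chat } from "@trigger.dev/sdk/ai";
import { z } from "zod";

// Sketch: the "undo" action and its handling are illustrative assumptions.
const agent = chat.agent({
  id: "my-chat",
  actionSchema: z.discriminatedUnion("type", [
    z.object({ type: z.literal("undo") }),
  ]),
  onAction: async ({ action, uiMessages }) => {
    if (action.type === "undo" && uiMessages.length > 0) {
      // Drop the most recent message before run() sees the conversation
      const last = uiMessages[uiMessages.length - 1];
      chat.history.remove(last.id);
    }
  },
  run: async () => {
    /* ... */
  },
});
```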

## TurnStartEvent

Passed to the `onTurnStart` callback.

| Field | Type | Description |
| --- | --- | --- |
| `ctx` | `TaskRunContext` | Full task run context — see Task context |
| `chatId` | `string` | Chat session ID |
| `messages` | `ModelMessage[]` | Full accumulated conversation (model format) |
| `uiMessages` | `UIMessage[]` | Full accumulated conversation (UI format) |
| `turn` | `number` | Turn number (0-indexed) |
| `runId` | `string` | The Trigger.dev run ID |
| `chatAccessToken` | `string` | Scoped access token for this run |
| `clientData` | Typed by `clientDataSchema` | Custom data from the frontend |
| `continuation` | `boolean` | Whether this run is continuing an existing chat |
| `previousRunId` | `string \| undefined` | Previous run ID (only when `continuation` is true) |
| `preloaded` | `boolean` | Whether this run was preloaded |
| `writer` | `ChatWriter` | Stream writer for custom chunks. Lazy — no overhead if unused. |

## TurnCompleteEvent

Passed to the `onTurnComplete` callback.

| Field | Type | Description |
| --- | --- | --- |
| `ctx` | `TaskRunContext` | Full task run context — see Task context |
| `chatId` | `string` | Chat session ID |
| `messages` | `ModelMessage[]` | Full accumulated conversation (model format) |
| `uiMessages` | `UIMessage[]` | Full accumulated conversation (UI format) |
| `newMessages` | `ModelMessage[]` | Only this turn’s messages (model format) |
| `newUIMessages` | `UIMessage[]` | Only this turn’s messages (UI format) |
| `responseMessage` | `UIMessage \| undefined` | The assistant’s response for this turn |
| `rawResponseMessage` | `UIMessage \| undefined` | Raw response before abort cleanup |
| `turn` | `number` | Turn number (0-indexed) |
| `runId` | `string` | The Trigger.dev run ID |
| `chatAccessToken` | `string` | Scoped access token for this run |
| `lastEventId` | `string \| undefined` | Stream position for resumption |
| `stopped` | `boolean` | Whether the user stopped generation during this turn |
| `continuation` | `boolean` | Whether this run is continuing an existing chat |
| `usage` | `LanguageModelUsage \| undefined` | Token usage for this turn |
| `totalUsage` | `LanguageModelUsage` | Cumulative token usage across all turns |

## BeforeTurnCompleteEvent

Passed to the `onBeforeTurnComplete` callback. Same fields as `TurnCompleteEvent` (including `ctx`) plus a `writer`.

| Field | Type | Description |
| --- | --- | --- |
| (all `TurnCompleteEvent` fields) | | See TurnCompleteEvent (includes `ctx`) |
| `writer` | `ChatWriter` | Stream writer — the stream is still open, so chunks appear in the current turn |

## ChatSuspendEvent

Passed to the `onChatSuspend` callback. A discriminated union on `phase`.

| Field | Type | Description |
| --- | --- | --- |
| `phase` | `"preload" \| "turn"` | Whether this is a preload or post-turn suspension |
| `ctx` | `TaskRunContext` | Full task run context |
| `chatId` | `string` | Chat session ID |
| `runId` | `string` | The Trigger.dev run ID |
| `clientData` | Typed by `clientDataSchema` | Custom data from the frontend |
| `turn` | `number` | Turn number (`"turn"` phase only) |
| `messages` | `ModelMessage[]` | Accumulated model messages (`"turn"` phase only) |
| `uiMessages` | `UIMessage[]` | Accumulated UI messages (`"turn"` phase only) |

## ChatResumeEvent

Passed to the `onChatResume` callback. Same discriminated union shape as `ChatSuspendEvent`.

| Field | Type | Description |
| --- | --- | --- |
| `phase` | `"preload" \| "turn"` | Whether this is a preload or post-turn resumption |
| `ctx` | `TaskRunContext` | Full task run context |
| `chatId` | `string` | Chat session ID |
| `runId` | `string` | The Trigger.dev run ID |
| `clientData` | Typed by `clientDataSchema` | Custom data from the frontend |
| `turn` | `number` | Turn number (`"turn"` phase only) |
| `messages` | `ModelMessage[]` | Accumulated model messages (`"turn"` phase only) |
| `uiMessages` | `UIMessage[]` | Accumulated UI messages (`"turn"` phase only) |

## ChatWriter

A stream writer passed to lifecycle callbacks. Write custom `UIMessageChunk` parts (e.g. `data-*` parts) to the chat stream. The writer is lazy — no stream is opened unless you call `write()` or `merge()`, so there’s zero overhead for callbacks that don’t use it.

| Method | Type | Description |
| --- | --- | --- |
| `write(part)` | `(part: UIMessageChunk) => void` | Write a single chunk to the chat stream |
| `merge(stream)` | `(stream: ReadableStream<UIMessageChunk>) => void` | Merge another stream’s chunks into the chat stream |

```ts
// Inside chat.agent({ ... }) options:
onTurnStart: async ({ writer }) => {
  // Write a custom data part — render it on the frontend
  writer.write({ type: "data-status", data: { loading: true } });
},
onBeforeTurnComplete: async ({ writer, usage }) => {
  // Stream is still open — these chunks arrive before the turn ends
  writer.write({ type: "data-usage", data: { tokens: usage?.totalTokens } });
},
```

## ChatAgentCompactionOptions

Options for the `compaction` field on `chat.agent()`. See Compaction for the usage guide.

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| `shouldCompact` | `(event: ShouldCompactEvent) => boolean \| Promise<boolean>` | Yes | Decide whether to compact. Return `true` to trigger |
| `summarize` | `(event: SummarizeEvent) => Promise<string>` | Yes | Generate a summary from the current messages |
| `compactUIMessages` | `(event: CompactMessagesEvent) => UIMessage[] \| Promise<UIMessage[]>` | No | Transform UI messages after compaction. Default: preserve all |
| `compactModelMessages` | `(event: CompactMessagesEvent) => ModelMessage[] \| Promise<ModelMessage[]>` | No | Transform model messages after compaction. Default: replace all with the summary |
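A sketch of the two required callbacks is below. The 100k-token threshold, the summarization prompt, and the assumption that `ShouldCompactEvent`/`SummarizeEvent` expose `usage` and `messages` fields are all illustrative — check the real event shapes before relying on them:

```ts
import { chat } from "@trigger.dev/sdk/ai";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = chat.agent({
  id: "my-chat",
  compaction: {
    // Assumption: the event exposes token usage for the triggering step/turn.
    shouldCompact: async (event) => (event.usage?.totalTokens ?? 0) > 100_000,
    // Assumption: the event carries the current model messages.
    summarize: async ({ messages }) => {
      const { text } = await generateText({
        model: openai("gpt-4o-mini"),
        system: "Summarize this conversation, preserving key facts and decisions.",
        messages,
      });
      return text;
    },
    // Defaults apply: UI messages preserved, model messages replaced by the summary.
  },
  run: async () => {
    /* ... */
  },
});
```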

## CompactMessagesEvent

Passed to the `compactUIMessages` and `compactModelMessages` callbacks.

| Field | Type | Description |
| --- | --- | --- |
| `summary` | `string` | The generated summary text |
| `uiMessages` | `UIMessage[]` | Current UI messages (full conversation) |
| `modelMessages` | `ModelMessage[]` | Current model messages (full conversation) |
| `chatId` | `string` | Chat session ID |
| `turn` | `number` | Current turn (0-indexed) |
| `clientData` | `unknown` | Custom data from the frontend |
| `source` | `"inner" \| "outer"` | Whether compaction happened between steps or between turns |

## CompactedEvent

Passed to the `onCompacted` callback.

| Field | Type | Description |
| --- | --- | --- |
| `ctx` | `TaskRunContext` | Full task run context — see Task context |
| `summary` | `string` | The generated summary text |
| `messages` | `ModelMessage[]` | Messages that were compacted (pre-compaction) |
| `messageCount` | `number` | Number of messages before compaction |
| `usage` | `LanguageModelUsage` | Token usage from the triggering step/turn |
| `totalTokens` | `number \| undefined` | Total token count that triggered compaction |
| `inputTokens` | `number \| undefined` | Input token count |
| `outputTokens` | `number \| undefined` | Output token count |
| `stepNumber` | `number` | Step number (`-1` for the outer loop) |
| `chatId` | `string \| undefined` | Chat session ID |
| `turn` | `number \| undefined` | Current turn |
| `writer` | `ChatWriter` | Stream writer for custom chunks during compaction |

## PendingMessagesOptions

Options for the `pendingMessages` field. See Pending Messages for the usage guide.

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| `shouldInject` | `(event: PendingMessagesBatchEvent) => boolean \| Promise<boolean>` | No | Decide whether to inject the batch between tool-call steps. If absent, no injection. |
| `prepare` | `(event: PendingMessagesBatchEvent) => ModelMessage[] \| Promise<ModelMessage[]>` | No | Transform the batch before injection. Default: convert each via `convertToModelMessages`. |
| `onReceived` | `(event: PendingMessageReceivedEvent) => void \| Promise<void>` | No | Called when a message arrives during streaming (per-message). |
| `onInjected` | `(event: PendingMessagesInjectedEvent) => void \| Promise<void>` | No | Called after a batch is injected via `prepareStep`. |
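A minimal configuration might inject whenever any steering messages have arrived, using only the event fields documented below; the always-inject policy is an assumption:

```ts
import { chat } from "@trigger.dev/sdk/ai";

const agent = chat.agent({
  id: "my-chat",
  pendingMessages: {
    // Inject between tool-call steps whenever anything is pending
    shouldInject: ({ messages }) => messages.length > 0,
    onInjected: ({ messages, stepNumber }) => {
      console.log(`injected ${messages.length} message(s) at step ${stepNumber}`);
    },
  },
  run: async () => {
    /* ... */
  },
});
```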

## PendingMessagesBatchEvent

Passed to the `shouldInject` and `prepare` callbacks.

| Field | Type | Description |
| --- | --- | --- |
| `messages` | `UIMessage[]` | All pending messages (batch) |
| `modelMessages` | `ModelMessage[]` | Current conversation |
| `steps` | `CompactionStep[]` | Completed steps so far |
| `stepNumber` | `number` | Current step (0-indexed) |
| `chatId` | `string` | Chat session ID |
| `turn` | `number` | Current turn (0-indexed) |
| `clientData` | `unknown` | Custom data from the frontend |

## PendingMessagesInjectedEvent

Passed to the `onInjected` callback.

| Field | Type | Description |
| --- | --- | --- |
| `messages` | `UIMessage[]` | All injected UI messages |
| `injectedModelMessages` | `ModelMessage[]` | The model messages that were injected |
| `chatId` | `string` | Chat session ID |
| `turn` | `number` | Current turn |
| `stepNumber` | `number` | Step where injection occurred |

## UsePendingMessagesReturn

Return value of the `usePendingMessages` hook. See Pending Messages — Frontend.

| Property/Method | Type | Description |
| --- | --- | --- |
| `pending` | `PendingMessage[]` | Current pending messages with mode and injection status |
| `steer` | `(text: string) => void` | Send a steering message (or a normal message when not streaming) |
| `queue` | `(text: string) => void` | Queue for the next turn (or send normally when not streaming) |
| `promoteToSteering` | `(id: string) => void` | Convert a queued message to steering |
| `isInjectionPoint` | `(part: unknown) => boolean` | Check if an assistant message part is an injection confirmation |
| `getInjectedMessageIds` | `(part: unknown) => string[]` | Get message IDs from an injection point |
| `getInjectedMessages` | `(part: unknown) => InjectedMessage[]` | Get messages (id + text) from an injection point |

## ChatSessionOptions

Options for `chat.createSession()`.

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `signal` | `AbortSignal` | required | Run-level cancel signal |
| `idleTimeoutInSeconds` | `number` | `30` | Seconds to stay idle between turns |
| `timeout` | `string` | `"1h"` | Duration string for the suspend timeout |
| `maxTurns` | `number` | `100` | Max turns before ending |

## ChatTurn

Each turn yielded by `chat.createSession()`.

| Field | Type | Description |
| --- | --- | --- |
| `number` | `number` | Turn number (0-indexed) |
| `chatId` | `string` | Chat session ID |
| `trigger` | `string` | What triggered this turn |
| `clientData` | `unknown` | Client data from the transport |
| `messages` | `ModelMessage[]` | Full accumulated model messages |
| `uiMessages` | `UIMessage[]` | Full accumulated UI messages |
| `signal` | `AbortSignal` | Combined stop+cancel signal (fresh each turn) |
| `stopped` | `boolean` | Whether the user stopped generation this turn |
| `continuation` | `boolean` | Whether this is a continuation run |

| Method | Returns | Description |
| --- | --- | --- |
| `complete(source)` | `Promise<UIMessage \| undefined>` | Pipe, capture, accumulate, clean up, and signal turn-complete |
| `done()` | `Promise<void>` | Signal turn-complete (when you’ve piped manually) |
| `addResponse(response)` | `Promise<void>` | Add a response to the accumulator manually |
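The session iterator and `complete()` can be sketched together as a lower-level chat loop. The task id, model, and the `{ signal }` destructuring from the task run options are assumptions for illustration:

```ts
import { task } from "@trigger.dev/sdk";
import { chat } from "@trigger.dev/sdk/ai";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

// Sketch of the manual session loop; ids and model are assumptions.
export const manualChat = task({
  id: "manual-chat",
  run: async (payload, { signal }) => {
    const session = chat.createSession(payload, { signal, maxTurns: 50 });
    for await (const turn of session) {
      const result = streamText({
        model: openai("gpt-4o-mini"),
        messages: turn.messages, // full accumulated model messages
        abortSignal: turn.signal, // fresh stop+cancel signal each turn
      });
      // Pipe, capture, accumulate, and signal turn-complete in one call
      await turn.complete(result.toUIMessageStream());
    }
  },
});
```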

## chat namespace

All methods available on the `chat` object from `@trigger.dev/sdk/ai`.

| Method | Description |
| --- | --- |
| `chat.agent(options)` | Create a chat agent |
| `chat.createSession(payload, options)` | Create an async iterator of chat turns |
| `chat.pipe(source, options?)` | Pipe a stream to the frontend (from anywhere inside a task) |
| `chat.pipeAndCapture(source, options?)` | Pipe and capture the response UIMessage |
| `chat.writeTurnComplete(options?)` | Signal the frontend that the current turn is complete |
| `chat.createStopSignal()` | Create a managed stop signal wired to the stop input stream |
| `chat.messages` | Input stream for incoming messages — use `.waitWithIdleTimeout()` |
| `chat.local<T>({ id })` | Create a per-run typed local (see Per-run data) |
| `chat.createStartSessionAction(taskId, options?)` | Returns a server action that creates a chat Session, triggers the first run, and returns a session-scoped PAT. Idempotent on (env, externalId). |
| `chat.requestUpgrade()` | End the current run after this turn so the next message starts on the latest agent version. Server-orchestrated handoff. |
| `chat.setTurnTimeout(duration)` | Override the turn timeout at runtime (e.g. `"2h"`) |
| `chat.setTurnTimeoutInSeconds(seconds)` | Override the turn timeout at runtime (in seconds) |
| `chat.setIdleTimeoutInSeconds(seconds)` | Override the idle timeout at runtime |
| `chat.setUIMessageStreamOptions(options)` | Override `toUIMessageStream()` options for the current turn |
| `chat.defer(promise)` | Run background work in parallel with streaming, awaited before `onTurnComplete` |
| `chat.isStopped()` | Check whether the current turn was stopped by the user |
| `chat.cleanupAbortedParts(message)` | Remove incomplete parts from a stopped response message |
| `chat.response.write(chunk)` | Write a data part that streams to the frontend AND persists in `onTurnComplete`’s `responseMessage` |
| `chat.stream` | Raw chat output stream — use `.writer()`, `.pipe()`, `.append()`, `.read()`. Chunks are NOT accumulated into the response. |
| `chat.history.all()` | Read the current accumulated UI messages (returns a copy). See chat.history |
| `chat.history.set(messages)` | Replace all accumulated messages (same as `chat.setMessages()`) |
| `chat.history.remove(messageId)` | Remove a specific message by ID |
| `chat.history.rollbackTo(messageId)` | Keep messages up to and including the given ID (undo/rollback) |
| `chat.history.replace(messageId, message)` | Replace a specific message by ID (edit) |
| `chat.history.slice(start, end?)` | Keep only messages in the given range |
| `chat.MessageAccumulator` | Class that accumulates conversation messages across turns |
| `chat.withUIMessage(config?)` | Returns a ChatBuilder with a fixed UIMessage subtype. See Types |
| `chat.withClientData({ schema })` | Returns a ChatBuilder with a fixed client data schema. See Types |
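The `rollbackTo` semantics ("keep messages up to and including the given ID") can be illustrated with a small self-contained helper. This is a model of the documented behavior, not the SDK's implementation; treating an unknown ID as a no-op is an assumption:

```ts
type Msg = { id: string };

// Model of chat.history.rollbackTo: keep messages up to and
// including `messageId`; leave the list untouched if the ID is absent.
function rollbackTo<T extends Msg>(messages: T[], messageId: string): T[] {
  const index = messages.findIndex((m) => m.id === messageId);
  return index === -1 ? messages : messages.slice(0, index + 1);
}
```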

## chat.withUIMessage

Returns a ChatBuilder with a fixed UIMessage subtype. Chain `.withClientData()`, hook methods, and `.agent()`.

```ts
chat.withUIMessage<TUIM>(config?: ChatWithUIMessageConfig<TUIM>): ChatBuilder<TUIM>;
```

| Parameter | Type | Description |
| --- | --- | --- |
| `config.streamOptions` | `ChatUIMessageStreamOptions<TUIM>` | Optional defaults for `toUIMessageStream()`. Shallow-merged with `uiMessageStreamOptions` on the inner `.agent({ ... })` (the agent wins on key conflicts). |

Use this when you need `InferChatUIMessage`, typed `data-*` parts, or `InferUITools` to line up across backend hooks and `useChat`. Full guide: Types.

## chat.withClientData

Returns a ChatBuilder with a fixed client data schema. All hooks and `run` get typed `clientData` without passing `clientDataSchema` in `.agent()` options.

```ts
chat.withClientData<TSchema>({ schema: TSchema }): ChatBuilder<UIMessage, TSchema>;
```

| Parameter | Type | Description |
| --- | --- | --- |
| `schema` | `TaskSchema` | Zod, ArkType, Valibot, or any supported schema library |

Full guide: Typed client data.

## ChatWithUIMessageConfig

| Field | Type | Description |
| --- | --- | --- |
| `streamOptions` | `ChatUIMessageStreamOptions<TUIM>` | Default `toUIMessageStream()` options for agents created via `.agent()` |

## InferChatUIMessage

Type helper: extracts the UIMessage subtype from a chat agent’s wire payload.

```ts
import type { InferChatUIMessage } from "@trigger.dev/sdk/ai";
// or from "@trigger.dev/sdk/chat/react"

type Msg = InferChatUIMessage<typeof myChat>;
```

Use with `useChat<Msg>({ transport })` when using `chat.withUIMessage`. For agents defined with plain `chat.agent()` (no custom generic), this resolves to the base `UIMessage`.

## AI helpers (`ai` from `@trigger.dev/sdk/ai`)

| Export | Status | Description |
| --- | --- | --- |
| `ai.toolExecute(task)` | Preferred | Returns the execute function for AI SDK `tool()`. Runs the task via `triggerAndSubscribe` and attaches tool/chat metadata (the same behavior the deprecated wrapper used internally). |
| `ai.tool(task, options?)` | Deprecated | Wraps `tool()` / `dynamicTool()` around the same execute path. Migrate to `tool({ ..., execute: ai.toolExecute(task) })`. See Task-backed AI tools. |
| `ai.toolCallId`, `ai.chatContext`, `ai.chatContextOrThrow`, `ai.currentToolOptions` | Supported | Work for any task-backed tool execute path, including `ai.toolExecute`. |

## ChatUIMessageStreamOptions

Options for customizing `toUIMessageStream()`. Set static defaults via `uiMessageStreamOptions` on `chat.agent()`, or override per-turn via `chat.setUIMessageStreamOptions()`. See Stream options for usage examples. Derived from the AI SDK’s `UIMessageStreamOptions` with `onFinish` and `originalMessages` omitted (managed internally — `onFinish` for response capture, `originalMessages` for cross-turn message ID reuse).

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `onError` | `(error: unknown) => string` | Raw error message | Called on LLM errors and tool execution errors. Return a sanitized string — it is sent as `{ type: "error", errorText }` to the frontend. |
| `sendReasoning` | `boolean` | `true` | Send reasoning parts to the client |
| `sendSources` | `boolean` | `false` | Send source parts to the client |
| `sendFinish` | `boolean` | `true` | Send the finish event. Set to `false` when chaining multiple `streamText` calls. |
| `sendStart` | `boolean` | `true` | Send the message start event. Set to `false` when chaining. |
| `messageMetadata` | `(options: { part }) => metadata` | | Extract message metadata to send to the client. Called on start and finish events. |
| `generateMessageId` | `() => string` | AI SDK’s `generateId` | Custom message ID generator for response messages (e.g. UUID-v7). IDs are shared between frontend and backend via the stream’s start chunk. |
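As a sketch of the per-turn override, the call below (an assumption: placed inside `run()` before piping the response) sanitizes errors and suppresses reasoning for the current turn only:

```ts
import { chat } from "@trigger.dev/sdk/ai";

// Per-turn override: applies only to the current turn's toUIMessageStream()
chat.setUIMessageStreamOptions({
  onError: () => "Something went wrong. Please try again.",
  sendReasoning: false,
});
```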

## TriggerChatTransport options

Options for the frontend transport constructor and the `useTriggerChatTransport` hook.

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `task` | `string` | required | Task ID the transport’s session is bound to. Threaded into `startSession`’s params. |
| `accessToken` | `(params: AccessTokenParams) => string \| Promise<string>` | required | Pure refresh — mints a fresh session-scoped PAT. Called on 401/403. See the callback shape. |
| `startSession` | `(params: StartSessionParams<TClientData>) => Promise<StartSessionResult>` | optional | Creates the chat Session and returns the session-scoped PAT. Called on `transport.preload(chatId)` and lazily on the first `sendMessage` for any `chatId` without a cached PAT. See the callback shape. |
| `baseURL` | `string` | `"https://api.trigger.dev"` | API base URL (for self-hosted) |
| `headers` | `Record<string, string>` | | Extra headers for API requests |
| `streamTimeoutSeconds` | `number` | `120` | How long to wait for stream data |
| `clientData` | Typed by `clientDataSchema` | | Default client data merged into per-turn metadata and threaded through `startSession`’s params (so the first run’s `payload.metadata` matches per-turn metadata). Live-updated when the option value changes. |
| `sessions` | `Record<string, ChatSession>` | | Restore sessions from storage. See ChatSession. |
| `onSessionChange` | `(chatId, session \| null) => void` | | Fires when session state changes. `session` is the full ChatSession, or `null` when the run ends. |
| `multiTab` | `boolean` | `false` | Enable multi-tab claim coordination via BroadcastChannel. See Frontend → multi-tab. |
| `watch` | `boolean` | `false` | Read-only watcher mode — keep the SSE subscription open across `trigger:turn-complete` so a viewer sees turns 2, 3, … through one long-lived stream. |

## accessToken callback

The transport invokes `accessToken` whenever it needs a fresh session-scoped PAT — on first use when no PAT is cached, or after a 401/403 from any session-PAT-authed request. The callback’s job is to return a token, not to start a run. `AccessTokenParams`:

| Field | Type | Description |
| --- | --- | --- |
| `chatId` | `string` | The conversation id. |

A typical implementation wraps `auth.createPublicToken` in a server action:

```ts
"use server";
import { auth } from "@trigger.dev/sdk";

export async function mintChatAccessToken(chatId: string) {
  return auth.createPublicToken({
    scopes: { read: { sessions: chatId }, write: { sessions: chatId } },
    expirationTime: "1h",
  });
}
```

```ts
const transport = useTriggerChatTransport({
  task: "my-chat",
  accessToken: ({ chatId }) => mintChatAccessToken(chatId),
});
```

## startSession callback

The transport invokes `startSession` when it needs to create the session — on `transport.preload(chatId)`, and lazily on the first `sendMessage` for any `chatId` without a cached PAT. Concurrent and repeat calls dedupe via an in-flight promise, and the wrapped helper is idempotent on (env, externalId), so two tabs or two preload calls converge on the same session. `StartSessionParams<TClientData>`:

| Field | Type | Description |
| --- | --- | --- |
| `taskId` | `string` | The transport’s `task` value. |
| `chatId` | `string` | The conversation id (the session’s `externalId`). |
| `clientData` | `TClientData` | The transport’s current `clientData` option. Pass it through to `triggerConfig.basePayload.metadata` so the first run’s `payload.metadata` matches per-turn metadata. |

A typical implementation wraps `chat.createStartSessionAction(taskId)`:

```ts
"use server";
import { chat } from "@trigger.dev/sdk/ai";

export const startChatSession = chat.createStartSessionAction("my-chat");
```

```ts
const transport = useTriggerChatTransport({
  task: "my-chat",
  startSession: ({ chatId, taskId, clientData }) =>
    startChatSession({ chatId, taskId, clientData }),
});
```

`startSession` is optional only when you fully manage the session lifecycle externally (e.g. by hydrating `sessions: { [chatId]: ... }` and never calling preload). Most apps should provide it.

## multiTab

Enable multi-tab coordination. When `true`, only one browser tab can send messages to a given `chatId` at a time. Other tabs enter read-only mode with real-time message updates via BroadcastChannel.

```ts
const transport = useTriggerChatTransport({
  task: "my-chat",
  accessToken,
  multiTab: true,
});
```

No-op when BroadcastChannel is unavailable (SSR, Node.js). See Multi-tab coordination.

## Trigger configuration

Trigger config (`machine`, `queue`, `tags`, `maxAttempts`, `idleTimeoutInSeconds`) lives server-side in `chat.createStartSessionAction(taskId, options?)`. The transport doesn’t accept these options directly — pass them when wrapping the action:

```ts
"use server";
import { chat } from "@trigger.dev/sdk/ai";

export const startChatSession = chat.createStartSessionAction("my-chat", {
  triggerConfig: {
    machine: "small-1x",
    queue: "chat-queue",
    tags: ["user:123"],
    maxAttempts: 3,
    idleTimeoutInSeconds: 60,
  },
});
```

A `chat:{chatId}` tag is automatically added to every run. For per-call values that vary by `chatId` (e.g. a plan-tier-driven machine), accept extra params in your server action and pass them into `chat.createStartSessionAction(...)`’s options at call time.

## transport.stopGeneration()

Stop the current generation for a chat session. Sends a stop signal to the backend task and closes the active SSE connection.

```ts
transport.stopGeneration(chatId: string): Promise<boolean>
```

Returns `true` if the stop signal was sent, `false` if there’s no active session. Works for both initial connections and reconnected streams (after a page refresh with `resume: true`). Use alongside `useChat`’s `stop()` for a complete stop experience:

```ts
const { stop: aiStop } = useChat({ transport });

const stop = useCallback(() => {
  transport.stopGeneration(chatId);
  aiStop();
}, [transport, chatId, aiStop]);
```

See Stop generation for full details.

## transport.sendAction()

Send a custom action to the agent. Actions wake the agent from suspension, fire `onAction`, then trigger a normal `run()` turn.

```ts
transport.sendAction(chatId: string, action: unknown): Promise<ReadableStream<UIMessageChunk>>
```

The action payload is validated against the agent’s `actionSchema` on the backend.

```tsx
// Undo button
<button onClick={() => transport.sendAction(chatId, { type: "undo" })}>
  Undo
</button>
```

See Actions for backend setup and Sending actions for frontend usage.

## transport.preload()

Eagerly trigger a run before the first message.

```ts
transport.preload(chatId, { idleTimeoutInSeconds?: number }): Promise<void>
```

No-op if a session already exists for this `chatId`. See Preload for full details.
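A common pattern is warming the run when the chat UI mounts, so the first message hits an already-running agent. This is a sketch: the effect placement and the 60-second idle timeout are assumptions.

```ts
import { useEffect } from "react";

// Assumes `transport` and `chatId` are in scope from the surrounding component.
useEffect(() => {
  transport.preload(chatId, { idleTimeoutInSeconds: 60 }).catch(console.error);
}, [transport, chatId]);
```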

## useTriggerChatTransport

React hook that creates and memoizes a `TriggerChatTransport` instance. Import from `@trigger.dev/sdk/chat/react`.

```ts
import { useTriggerChatTransport } from "@trigger.dev/sdk/chat/react";
import type { myChat } from "@/trigger/chat";

const transport = useTriggerChatTransport<typeof myChat>({
  task: "my-chat",
  accessToken: ({ chatId }) => mintChatAccessToken(chatId),
  startSession: ({ chatId, taskId, clientData }) =>
    startChatSession({ chatId, taskId, clientData }),
  sessions: savedSessions,
  onSessionChange: handleSessionChange,
});
```

The transport is created once on first render and reused across re-renders. Pass a type parameter for compile-time validation of the task ID.

## useMultiTabChat

React hook for multi-tab message coordination. Import from `@trigger.dev/sdk/chat/react`.

```ts
import { useMultiTabChat } from "@trigger.dev/sdk/chat/react";

const { isReadOnly } = useMultiTabChat(transport, chatId, messages, setMessages);
```

| Parameter | Type | Description |
| --- | --- | --- |
| `transport` | `TriggerChatTransport` | Transport instance with `multiTab: true` |
| `chatId` | `string` | The chat session ID |
| `messages` | `UIMessage[]` | Current messages from `useChat` |
| `setMessages` | `(messages) => void` | Message setter from `useChat` |

Returns `{ isReadOnly: boolean }`, where `isReadOnly` is true when another tab is actively sending to this `chatId`. The hook handles:

- Tracking read-only state from the transport’s BroadcastChannel coordinator
- Broadcasting messages when this tab is the active sender
- Receiving messages from other tabs and updating via `setMessages`

See Multi-tab coordination.

## ChatSession

Persistable session state for the frontend `TriggerChatTransport` and the server-side `AgentChat`. The underlying Session row is keyed on `chatId` (durable across runs); the persistable shape is just the SSE resume cursor and a refresh token.

| Field | Type | Description |
| --- | --- | --- |
| `publicAccessToken` | `string` | Session-scoped JWT (`read:sessions:{chatId}` + `write:sessions:{chatId}`). Refreshed automatically on 401/403 via the transport’s `accessToken` callback. |
| `lastEventId` | `string \| undefined` | Last SSE event received on `.out`. Used to resume mid-stream after a disconnect. |
| `isStreaming` | `boolean \| undefined` | Optional. If persisted, `reconnectToStream` uses it as a fast-path short-circuit. If omitted, the server decides via the session’s `X-Session-Settled` response header. |
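Because the shape is small and serializable, it can be round-tripped through browser storage via the transport's `sessions` and `onSessionChange` options. This is a sketch: the `sessionStorage` key and the `mintChatAccessToken` helper are assumptions.

```ts
import { useTriggerChatTransport } from "@trigger.dev/sdk/chat/react";

// Hypothetical server action that mints a session-scoped token.
declare function mintChatAccessToken(chatId: string): Promise<string>;

// Persist ChatSession state so a page refresh can resume from lastEventId.
const transport = useTriggerChatTransport({
  task: "my-chat",
  accessToken: ({ chatId }) => mintChatAccessToken(chatId),
  sessions: JSON.parse(sessionStorage.getItem("chat-sessions") ?? "{}"),
  onSessionChange: (chatId, session) => {
    const all = JSON.parse(sessionStorage.getItem("chat-sessions") ?? "{}");
    if (session) all[chatId] = session;
    else delete all[chatId]; // run ended: drop the stored cursor/token
    sessionStorage.setItem("chat-sessions", JSON.stringify(all));
  },
});
```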

## ChatInputChunk

The wire shape for records sent on `.in`. Consumed by `chat.agent` internally — you typically don’t write these yourself; `transport.sendMessage`, `transport.stopGeneration`, and `transport.sendAction` all serialize into this shape.

```ts
type ChatInputChunk<TMessage = UIMessage, TMetadata = unknown> =
  | { kind: "message"; payload: ChatTaskWirePayload<TMessage, TMetadata> }
  | { kind: "stop"; message?: string };
```

| Variant | When | Payload |
| --- | --- | --- |
| `kind: "message"` | New message, action, approval response, or close | `payload` is a full `ChatTaskWirePayload`; its `trigger` field (`"submit-message"` / `"action"` / `"close"`) determines the agent’s dispatch |
| `kind: "stop"` | Client aborted the active turn | Optional `message` surfaces in the stop handler |

For the raw wire format, see Client Protocol — ChatInputChunk.

## Session token scopes

Tokens minted for `TriggerChatTransport` and `AgentChat` are session-scoped — keyed on the chat’s `externalId` (the `chatId` you assign).

| Scope | Grants |
| --- | --- |
| `read:sessions:<chatId>` | Subscribe to `.out`, HEAD-probe the stream, retrieve the session row |
| `write:sessions:<chatId>` | Append to `.in`, close the session, end-and-continue, update metadata |

Tokens are produced by `auth.createPublicToken({ scopes: { read: { sessions: chatId }, write: { sessions: chatId } } })` (used by your `accessToken` server action) or returned automatically from `chat.createStartSessionAction` / `POST /api/v1/sessions`. Either form authorizes both URL forms (`/sessions/{chatId}/...` and `/sessions/session_*/...`) on every read and write route.