

April 24, 2026
SDK · Platform
0.0.0-chat-prerelease-20260501122331

chat.agent now runs on Sessions

Every chat is backed by a durable Session row that outlives any single run. externalId = your chat ID, type = "chat.agent". Under the hood:
  • Output chunks stream on session.out (was a run-scoped streams.writer("chat")).
  • Client messages and stops land on session.in as a ChatInputChunk tagged union (was two run-scoped streams.input definitions).
  • Wire endpoints moved from /realtime/v1/streams/{runId}/... to /realtime/v1/sessions/{sessionId}/.... See the rewritten Client Protocol.
Public surface (chat.agent(), TriggerChatTransport, AgentChat, chat.stream / chat.messages / chat.stopSignal) is unchanged — existing apps keep working. What’s new is:
  • Cross-run resume is free. A chat you were in yesterday resumes against the same sessionId today, even if the original run has long since exited. No more lost conversations when a run idle-times-out.
  • Inbox views via sessions.list({type: "chat.agent"}). Enumerate every chat in your environment, filter by tag or status.
  • TriggerChatTaskResult.sessionId + ChatTaskRunPayload.sessionId — you can reach into the raw session via sessions.open(payload.sessionId) for advanced cases (writing from a sub-agent, custom transport).
  • Dashboard Agent tab resolves via sessionId and stays in sync with the live stream across runs.
The full wire-level protocol (session create, channel routes, JWT scopes) is documented in Client Protocol.
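Since messages and stops now share one channel, a consumer of session.in switches on a discriminator. The real ChatInputChunk type ships with the SDK; this sketch assumes a plausible shape, and the field names here are illustrative rather than the published type:

```typescript
// Illustrative only: the real ChatInputChunk type ships with the SDK;
// the discriminator and payload fields below are assumptions.
type ChatInputChunk =
  | { type: "message"; message: { id: string; role: "user"; parts: unknown[] } }
  | { type: "stop" };

// A consumer of session.in can exhaustively switch on the tag:
function routeInputChunk(chunk: ChatInputChunk): string {
  switch (chunk.type) {
    case "message":
      return `message:${chunk.message.id}`;
    case "stop":
      return "stop";
  }
}

console.log(routeInputChunk({ type: "stop" })); // "stop"
```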

X-Session-Settled — fast reconnect on idle chats

When a client reconnects to session.out and the tail record is a trigger:turn-complete marker (the agent finished a turn and is idle-waiting or has exited), the server sets X-Session-Settled: true and uses wait=0 on the underlying S2 read. The SSE drains any remaining records, then closes in ~1s instead of long-polling for 60s.

Practical impact: TriggerChatTransport.reconnectToStream no longer needs a client-side isStreaming flag. You can drop the field from your persisted ChatSession state entirely — the server decides. Existing callers that still persist isStreaming are unaffected; reconnectToStream keeps the fast-path short-circuit when it’s false.
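Client code that inspects response headers could branch on the new header like this (a hypothetical helper, not part of TriggerChatTransport):

```typescript
// Hypothetical client-side helper (not part of TriggerChatTransport):
// when the server saw a trigger:turn-complete tail record it sets
// X-Session-Settled: true, and the SSE will drain and close in ~1s,
// so the client should not plan on a 60s long-poll.
function sessionIsSettled(headers: Map<string, string>): boolean {
  return headers.get("x-session-settled") === "true";
}

const h = new Map([["x-session-settled", "true"]]);
console.log(sessionIsSettled(h)); // true
```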

Migration

See the Sessions Upgrade Guide for the full step-by-step — auth callback split, persisted ChatSession shape, server-side helpers (chat.createStartSessionAction, chat.createAccessToken for renewal), and the clientData validation pivot.

Docs

  • Rewritten Client Protocol — full wire format for the new /realtime/v1/sessions/{sessionId}/... endpoints, JWT scopes, S2 direct-write credentials, and Last-Event-ID resume.
  • Database persistence pattern — new chatId-keyed ChatSession shape (no more runId) and a warning on the onTurnComplete race that requires a single atomic write of messages + lastEventId.
  • Reference — added chat.createStartSessionAction, chat.createAccessToken, ChatInputChunk, TriggerChatTaskResult.sessionId, ChatTaskRunPayload.sessionId. The old run-scoped stream-ID constants are gone.
  • Refreshed Backend, Frontend, Server Chat, Quick start, Overview, Features, Types, Error handling, and Testing for the session-based wiring.
April 19, 2026
SDK · CLI
0.0.0-chat-prerelease-20260419173457

Agent Skills

Ship reusable capabilities as folders — a SKILL.md plus optional scripts, references, and assets. The agent sees short descriptions in its system prompt, loads full instructions on demand via loadSkill, and invokes bundled scripts via bash — no manual wiring.

skills.define({ id, path }) registers the skill; the CLI bundles the folder into the deploy image. chat.skills.set([...]) activates skills for the run; chat.toStreamTextOptions() auto-injects the preamble and tools.

See the new Agent Skills guide.
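The preamble injection can be pictured as a pure function from skill summaries to a system-prompt fragment. A hypothetical sketch of the mechanics, not the SDK's actual implementation:

```typescript
// Hypothetical sketch of the system-prompt preamble: short descriptions
// only, so the model calls loadSkill(id) when it needs the full SKILL.md.
interface SkillSummary {
  id: string;          // same id as skills.define({ id, path })
  description: string; // one-line description shown to the model
}

function buildSkillPreamble(skills: SkillSummary[]): string {
  if (skills.length === 0) return "";
  const lines = skills.map((s) => `- ${s.id}: ${s.description}`);
  return ["Available skills (call loadSkill for full instructions):", ...lines].join("\n");
}

console.log(buildSkillPreamble([{ id: "pdf-report", description: "Render PDF reports" }]));
```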
April 18, 2026
SDK
0.0.0-chat-prerelease-20260418174118

chat.endRun() — exit on your own terms

New imperative API to exit the loop after the current turn completes, without the upgrade-required signal that chat.requestUpgrade() sends. Use for one-shot agents, budget-exhausted exits, or goal-reached completions.
chat.agent({
  id: "one-shot",
  run: async ({ messages, signal }) => {
    chat.endRun();
    return streamText({ model: openai("gpt-4o"), messages, abortSignal: signal });
  },
});
The current turn streams normally, onBeforeTurnComplete / onTurnComplete fire, the turn-complete chunk is written, and the run exits instead of suspending. Callable from run(), chat.defer(), onBeforeTurnComplete, or onTurnComplete. See Ending a run on your terms.

finishReason on turn-complete events

TurnCompleteEvent and BeforeTurnCompleteEvent now include the AI SDK’s finishReason ("stop" | "tool-calls" | "length" | "content-filter" | "error" | "other"). Clean signal for distinguishing a normal turn end from one paused on a pending tool call (HITL flows like ask_user):
onTurnComplete: async ({ finishReason, responseMessage }) => {
  if (finishReason === "tool-calls") {
    // Paused — assistant message has a pending tool call waiting for user input
    await persistCheckpoint(responseMessage);
  } else {
    await persistCompleted(responseMessage);
  }
};
Undefined for manual chat.pipe() flows or aborted streams. See the new Human-in-the-loop pattern.

User-initiated compaction pattern

The Compaction guide now covers how to wire a “Summarize conversation” button or /compact slash command via actionSchema + onAction. The agent summarizes on demand, rewrites history with chat.history.set(), and short-circuits the LLM call for action turns.

Needed a small type fix for this: ChatTaskPayload.trigger now correctly includes "action", so run() handlers can short-circuit with if (trigger === "action") return when an action doesn’t need a response.
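An onAction handler for /compact could compute a summary and hand the rewritten history to chat.history.set(). The rewrite step itself is pure; a minimal sketch (the Msg shape and compactHistory helper are hypothetical, not SDK exports):

```typescript
// Hypothetical message shape and helper; not SDK exports.
interface Msg {
  id: string;
  role: "user" | "assistant" | "system";
  text: string;
}

// Replace everything before the last `keepLast` messages with a single
// summary message; the result would be passed to chat.history.set().
function compactHistory(history: Msg[], summary: string, keepLast = 2): Msg[] {
  if (history.length <= keepLast) return history;
  const summaryMsg: Msg = { id: "summary", role: "system", text: summary };
  return [summaryMsg, ...history.slice(-keepLast)];
}
```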

Human-in-the-loop pattern page

New Human-in-the-loop page walks through ask_user-style mid-turn user input end-to-end: defining a no-execute tool, rendering pending tool calls on the frontend with addToolOutput + sendAutomaticallyWhen, detecting paused turns via finishReason, and two persistence strategies (overwrite vs. checkpoint nodes).
April 18, 2026
SDK
0.0.0-chat-prerelease-20260418083610

Offline test harness for chat.agent

@trigger.dev/sdk/ai/test now ships mockChatAgent, a harness that drives a chat.agent definition through real turns without network or task runtime. Send messages, actions, and stop signals; inspect emitted chunks; assert on hook order.
import { mockChatAgent } from "@trigger.dev/sdk/ai/test";
import { MockLanguageModelV3 } from "ai/test";
import { myAgent } from "./my-agent";

const harness = mockChatAgent(myAgent, {
  chatId: "test-1",
  clientData: { model: new MockLanguageModelV3({ /* ... */ }) },
});

const turn = await harness.sendMessage({
  id: "u1",
  role: "user",
  parts: [{ type: "text", text: "hi" }],
});
expect(turn.chunks).toContainEqual(
  expect.objectContaining({ type: "text-delta", delta: "hello" }),
);
await harness.close();

Dependency injection via locals

setupLocals pre-seeds locals before run() starts — the pattern for injecting database clients, service stubs, and other server-side dependencies that shouldn’t leak through untrusted clientData:
import { dbKey } from "./db";

const harness = mockChatAgent(agent, {
  chatId: "test-1",
  setupLocals: ({ set }) => {
    set(dbKey, testDb);
  },
});
Hooks then read the seeded value with locals.get(dbKey). Falls through to the production client in real runs.

See Testing.

runInMockTaskContext — lower-level test harness

@trigger.dev/core/v3/test now exports runInMockTaskContext for unit-testing any task code offline (not just chat agents). Installs in-memory managers for locals, lifecycleHooks, runtime, inputStreams, and realtimeStreams, plus a mock TaskContext. Drivers let you push data into input streams and inspect chunks written to output streams.
April 17, 2026
SDK
0.0.0-chat-prerelease-20260417152143

Multi-tab coordination

Prevent duplicate messages when the same chat is open in multiple browser tabs. Enable with multiTab: true on the transport.
const transport = useTriggerChatTransport({ task: "my-chat", multiTab: true, accessToken });
const { messages, setMessages } = useChat({ id: chatId, transport });
const { isReadOnly } = useMultiTabChat(transport, chatId, messages, setMessages);
Only one tab can send at a time. Other tabs enter read-only mode with real-time message updates via BroadcastChannel. When the active tab’s turn completes, any tab can send next. Crashed tabs are detected via heartbeat timeout (10s).

See Multi-tab coordination and useMultiTabChat.
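The crashed-tab check reduces to a timestamp comparison. A sketch using the 10s default from above (the helper name and signature are assumptions):

```typescript
// Sketch of the crashed-tab check: a tab counts as alive while its last
// BroadcastChannel heartbeat is newer than the timeout. The 10s value
// comes from the changelog; the helper itself is an assumption.
const HEARTBEAT_TIMEOUT_MS = 10_000;

function isTabAlive(lastHeartbeatMs: number, nowMs: number): boolean {
  return nowMs - lastHeartbeatMs < HEARTBEAT_TIMEOUT_MS;
}

console.log(isTabAlive(0, 5_000));  // true: heartbeat 5s ago
console.log(isTabAlive(0, 15_000)); // false: stale, another tab may take over
```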

Error stack truncation

Large error stacks no longer OOM the worker process. Stacks are capped at 50 frames (top 5 + bottom 45), individual lines at 1024 chars, messages at 1000 chars. Applied in parseError, sanitizeError, and OTel span recording.
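To illustrate those caps, a re-implementation of the frame and line truncation might look like this (not the SDK's actual parseError code; the marker line is an assumption of this sketch):

```typescript
// Illustrative re-implementation of the caps (not the SDK's parseError):
// at most 50 real frames kept as top 5 + bottom 45, each line capped at
// 1024 chars; the "..." marker line is an assumption of this sketch.
function truncateStack(stack: string): string {
  const lines = stack.split("\n").map((l) => l.slice(0, 1024));
  if (lines.length <= 50) return lines.join("\n");
  return [...lines.slice(0, 5), "... truncated ...", ...lines.slice(-45)].join("\n");
}
```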
April 15, 2026
SDK
0.0.0-chat-prerelease-20260415164455

Fix: resume: true hangs on completed turns

When refreshing a page after a turn completed, useChat with resume: true would hang indefinitely — reconnectToStream opened an SSE connection that never received data.

Added isStreaming to session state. The transport sets it to true when streaming starts and false on trigger:turn-complete. reconnectToStream returns null immediately when isStreaming is false, so resume: initialMessages.length > 0 is now safe to pass unconditionally.

The flag flows through onSessionChange and is restored from sessions — no extra persistence code needed.
April 15, 2026
SDK
0.0.0-chat-prerelease-20260415152704

hydrateMessages — backend-controlled message history

Load message history from your database on every turn instead of trusting the frontend accumulator. The hook replaces the built-in linear accumulation entirely — the backend is the source of truth.
chat.agent({
  id: "my-chat",
  hydrateMessages: async ({ chatId, trigger, incomingMessages }) => {
    const stored = await db.getMessages(chatId);
    if (trigger === "submit-message" && incomingMessages.length > 0) {
      stored.push(incomingMessages[incomingMessages.length - 1]!);
      await db.persistMessages(chatId, stored);
    }
    return stored;
  },
});
Tool approval updates are auto-merged after hydration — no extra handling needed.

See hydrateMessages.

chat.history — imperative message mutations

Modify the accumulated message history from any hook or run():
chat.history.rollbackTo(messageId);  // Undo — keep up to this message
chat.history.remove(messageId);      // Remove one message
chat.history.replace(id, newMsg);    // Edit a message
chat.history.slice(0, -2);           // Remove last 2 messages
chat.history.all();                  // Read current state
See chat.history.

Custom actions — actionSchema + onAction

Send typed actions (undo, rollback, edit) from the frontend via transport.sendAction(). Actions wake the agent, fire onAction, then trigger a normal run() turn.
chat.agent({
  id: "my-chat",
  actionSchema: z.discriminatedUnion("type", [
    z.object({ type: z.literal("undo") }),
    z.object({ type: z.literal("rollback"), targetMessageId: z.string() }),
  ]),
  onAction: async ({ action }) => {
    if (action.type === "undo") chat.history.slice(0, -2);
    if (action.type === "rollback") chat.history.rollbackTo(action.targetMessageId);
  },
});
Frontend: transport.sendAction(chatId, { type: "undo" })
Server: agentChat.sendAction({ type: "undo" })

See Actions and Sending actions.
April 14, 2026
SDK
0.0.0-chat-prerelease-20260414181032

chat.response — persistent data parts

Added chat.response.write() for writing data parts that both stream to the frontend AND persist in onTurnComplete’s responseMessage and uiMessages.
// Persists to responseMessage.parts — available in onTurnComplete
chat.response.write({ type: "data-handover", data: { context: summary } });

// Transient — streams to frontend only, not in responseMessage
writer.write({ type: "data-progress", data: { percent: 50 }, transient: true });
Non-transient data-* chunks written via lifecycle hook writer.write() now automatically persist to the response message, matching the AI SDK’s default semantics. Add transient: true for ephemeral chunks (progress indicators, status updates).

See Custom data parts.

Tool approvals

Added support for AI SDK tool approvals (needsApproval: true). When the model calls a tool that needs approval, the turn completes and the frontend shows approve/deny buttons. After approval, the updated assistant message is sent back and matched by ID in the accumulator.
const sendEmail = tool({
  description: "Send an email. Requires human approval.",
  inputSchema: z.object({ to: z.string(), subject: z.string(), body: z.string() }),
  needsApproval: true,
  execute: async ({ to, subject, body }) => { /* ... */ },
});
Frontend setup requires sendAutomaticallyWhen and addToolApprovalResponse from useChat. See Tool approvals.

transport.stopGeneration(chatId)

Added stopGeneration method to TriggerChatTransport for reliable stop after page refresh / stream reconnect. Works regardless of whether the AI SDK passes abortSignal through reconnectToStream.
const stop = useCallback(() => {
  transport.stopGeneration(chatId);
  aiStop(); // also update useChat state
}, [transport, chatId, aiStop]);
See Stop generation.

generateMessageId support

generateMessageId can now be passed via uiMessageStreamOptions to control response message ID generation (e.g. UUID-v7). The backend automatically passes originalMessages to toUIMessageStream so message IDs are consistent between frontend and backend.
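For example, a minimal UUID-v7 generator you could pass as generateMessageId (illustrative only; prefer a maintained library in production):

```typescript
// Minimal illustrative UUID-v7: 48-bit big-endian ms timestamp, version
// nibble 7, variant bits 10, random tail. Not production-hardened.
function uuidv7(nowMs = Date.now()): string {
  const bytes = new Uint8Array(16);
  for (let i = 5; i >= 0; i--) {
    bytes[i] = Number(BigInt(nowMs) >> BigInt((5 - i) * 8)) & 0xff; // timestamp
  }
  for (let i = 6; i < 16; i++) bytes[i] = Math.floor(Math.random() * 256); // random
  bytes[6] = (bytes[6] & 0x0f) | 0x70; // version 7
  bytes[8] = (bytes[8] & 0x3f) | 0x80; // RFC variant
  const hex = [...bytes].map((b) => b.toString(16).padStart(2, "0")).join("");
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}

console.log(uuidv7()); // e.g. "01963e5a-....-7...-...."
```

Because the timestamp leads, IDs sort roughly by creation time, which keeps message ordering stable across frontend and backend.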

Bug fixes

  • onTurnComplete not called: Fixed turnCompleteResult?.lastEventId TypeError that silently skipped onTurnComplete when writeTurnCompleteChunk returned undefined in dev.
  • Stop during streaming: Added 2s timeout on onFinishPromise so onBeforeTurnComplete and onTurnComplete fire even when the AI SDK’s onFinish doesn’t fire after abort.
  • toStreamTextOptions without chat.prompt.set(): prepareStep injection (compaction, steering, background context) now works even when the user passes system directly to streamText instead of using chat.prompt.set().
  • Background queue vs tool approvals: Background context injection is now skipped when the last accumulated message is a tool message, preventing it from breaking streamText’s collectToolApprovals.