chat.agent errors fall into four layers, each with different recovery semantics. The default behavior is conversation-preserving: a thrown error in a hook or run() does not kill the chat. The current turn ends with an error chunk, and the agent waits for the user’s next message.
Error layers at a glance
| Layer | Source | Default behavior | Recovery |
|---|---|---|---|
| Stream | streamText errors mid-response (rate limits, model API failures) | onError callback converts to error chunk | Sanitize message via uiMessageStreamOptions.onError |
| Hook / turn | Throws in onValidateMessages, onTurnStart, run, etc. | Error chunk + turn-complete written to stream; conversation continues | Catch in your hook, or rely on default |
| Run | Unhandled exception escapes the run | Run fails. No retry by default. Standard task onFailure fires. | onFailure task hook |
| Frontend | Stream delivers { type: "error", errorText } | useChat exposes via error field and onError callback | Show toast, retry button, etc. |
Stream errors mid-turn
When the model API errors mid-response (rate limits, network failures, malformed output), the AI SDK's `streamText` calls the `onError` callback. Use `uiMessageStreamOptions.onError` to convert the error to a user-friendly string, which is sent to the frontend as an error chunk.
The string returned from `onError` is exactly what gets shown to the user. Do not return raw error messages — they may leak internal details (API keys, stack traces, etc.). On the frontend, `useChat` exposes the error via its `error` field.
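A minimal sketch of a sanitizing handler (the error patterns matched here are assumptions — adapt them to the failures your provider actually throws):

```typescript
// A sanitizing onError handler for uiMessageStreamOptions. The specific
// patterns matched below are assumptions -- adapt to your provider.
export function sanitizeStreamError(error: unknown): string {
  const message = error instanceof Error ? error.message : String(error);
  // Map known transient failures to friendly copy. Never echo the raw
  // message: it can leak API keys, hostnames, or stack traces.
  if (/rate.?limit|429/i.test(message)) {
    return "The model is busy right now. Please try again in a moment.";
  }
  return "Something went wrong generating a response. Please try again.";
}

// Hedged wiring inside the agent definition:
//   uiMessageStreamOptions: { onError: sanitizeStreamError }
```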
Hook and turn errors
If any lifecycle hook (onValidateMessages, onChatStart, onTurnStart, hydrateMessages, onAction, prepareMessages, onBeforeTurnComplete, onTurnComplete) or run() throws an unhandled exception, the turn loop catches it:
- Writes `{ type: "error", errorText: error.message }` to the stream
- Writes a turn-complete chunk to close the turn
- Waits for the next user message
Catching errors in your own hooks
For granular control, wrap your hook code in try/catch and decide what to do. A common pattern: catch, log the real error, then rethrow a sanitized user-facing message.
Catching errors inside run()
run() is your code — wrap it in try/catch for full control. This is the right place to save partial state to your DB before the error chunk goes out:
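A sketch of this pattern, using an in-memory map as a stand-in for your database (the `runTurn` helper and its signature are illustrative, not the real `run()` signature):

```typescript
// Sketch: wrapping the body of run() so partial state is persisted before
// the error propagates. The db map is an in-memory stand-in for your DB;
// generate() stands in for the model call.
type TurnState = { turnId: string; partialText: string; status: "ok" | "errored" };
export const db = new Map<string, TurnState>(); // stand-in for your database

export async function runTurn(turnId: string, generate: () => Promise<string>) {
  let partialText = "";
  try {
    partialText = await generate();
    db.set(turnId, { turnId, partialText, status: "ok" });
    return partialText;
  } catch (error) {
    // Save whatever was produced, then rethrow so the turn loop emits the
    // error chunk and closes the turn.
    db.set(turnId, { turnId, partialText, status: "errored" });
    throw new Error("Response failed part-way through. You can retry.");
  }
}
```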
Saving error state to your DB
To persist errors for debugging or undo, use `onTurnComplete` (which fires even after errors) or the standard task `onComplete` hook.
Using onTurnComplete
onTurnComplete fires after every turn — successful or errored. The responseMessage will be undefined or partial on errors. Use this to mark the turn as failed:
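A sketch of deriving a turn status from `responseMessage` (the message shape and the hook's argument shape are assumptions):

```typescript
// Deriving a turn status for your DB inside onTurnComplete. Per the note
// above, responseMessage is undefined (or partial) after an error; the
// exact message shape is an assumption.
type ResponseMessage = { id: string; parts: unknown[] };

export function turnStatus(responseMessage: ResponseMessage | undefined): "errored" | "ok" {
  return responseMessage === undefined ? "errored" : "ok";
}

// Hedged wiring:
//   onTurnComplete: async ({ responseMessage }) => {
//     await db.turns.update(turnId, { status: turnStatus(responseMessage) });
//   }
```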
Using the standard onFailure task hook
For run-level failures (the entire run dies), use the standard task onFailure hook. This fires when the run terminates with an unhandled exception:
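A sketch of what the hook body might do — `onFailure` is part of the standard Trigger.dev task lifecycle, but the report format and console destination here are assumptions (swap in Sentry or your monitoring tool):

```typescript
// Building a failure report for run-level monitoring. The format is an
// assumption; route it to whatever monitoring you use.
export function buildFailureReport(agentId: string, error: unknown): string {
  const message = error instanceof Error ? error.message : String(error);
  return `chat agent "${agentId}" run failed: ${message}`;
}

// Hedged wiring on the agent task:
//   onFailure: async ({ error }) => {
//     console.error(buildFailureReport("support-agent", error));
//   }
```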
chat.agent uses `retry: { maxAttempts: 1 }` internally, so the run never retries on failure. To add run-level retries, wrap the agent in a parent task or implement your own retry logic in the frontend (re-send the message).
Recovery patterns
Pattern 1: Undo to last successful response
A common pattern is to let the user “undo” the failed turn and try again. Combine `chat.history.rollbackTo` with a custom action:
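A sketch of the server-side half: computing which message to roll back to. The message shape is a minimal assumption; the returned id would be passed to `chat.history.rollbackTo` inside your custom action handler:

```typescript
// Computing the rollback target for an "undo" action. The failed turn is
// treated as everything from the last user message onward; the Msg shape
// is a minimal assumption.
export type Msg = { id: string; role: "user" | "assistant" };

export function undoTarget(messages: Msg[]): string | undefined {
  const roles = messages.map((m) => m.role);
  const lastUser = roles.lastIndexOf("user");
  // Roll back to the message just before the failed turn's user message.
  return lastUser > 0 ? messages[lastUser - 1].id : undefined;
}
```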
Pattern 2: Retry the last message
For transient errors (network blips, rate limits), the simplest recovery is to re-send the last user message. The AI SDK's `useChat` provides `regenerate()`:
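A sketch of gating the retry on error type — the matched substrings are assumptions and must line up with whatever your `uiMessageStreamOptions.onError` handler emits:

```typescript
// Deciding whether an automatic regenerate() retry is reasonable.
// The matched substrings are assumptions.
export function isTransient(errorText: string): boolean {
  return /rate.?limit|network|timeout|temporar/i.test(errorText);
}

// Hedged frontend wiring with the AI SDK's useChat:
//   const { error, regenerate } = useChat({ transport });
//   if (error && isTransient(error.message)) regenerate();
```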
regenerate() removes the last assistant response and re-sends. Combined with onValidateMessages or hydrateMessages, you can reload the canonical state from your DB before retrying.
Pattern 3: Save partial responses
When a stream errors mid-response, the `responseMessage` in `onBeforeTurnComplete` and `onTurnComplete` contains the partial output. Save it as a “draft” so the user can see what was generated before the error:
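A sketch of collapsing a partial message into a draft record — the part shape is a minimal assumption based on UIMessage text parts:

```typescript
// Collapsing a partial responseMessage into a "draft" record for your DB.
// The Part shape is a minimal assumption.
type Part = { type: string; text?: string };

export function toDraft(parts: Part[]): { text: string; draft: true } {
  // Keep only the text that streamed out before the error.
  const text = parts
    .filter((p) => p.type === "text")
    .map((p) => p.text ?? "")
    .join("");
  return { text, draft: true };
}
```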
Pattern 4: Fall back to a different model
If the primary model errors, try a fallback model in the same turn. Note that a try/catch around the call only catches errors thrown synchronously by `streamText` setup; errors that happen mid-stream go through `uiMessageStreamOptions.onError`, not your try/catch.
What gets written to the stream on error
When an error occurs at any layer, the frontend receives an error chunk (`{ type: "error", errorText }`) in the SSE stream. `useChat` processes this and:
- Sets `useChat`'s `error` field to an `Error` with `message = errorText`
- Calls the user's `onError` callback (if set)
- Marks the turn as complete (`status` returns to `"ready"`)
Frontend error handling
Showing the error to the user
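A framework-agnostic sketch (the rendered markup is shown only in comments; the helper name is an assumption):

```typescript
// Surfacing useChat's error state in the UI. In React you might render:
//   <div role="alert">{errorBanner(error)}</div>
// alongside a retry button that calls regenerate().
export function errorBanner(error: Error | undefined): string | null {
  if (!error) return null;
  return error.message || "Something went wrong. Please try again.";
}
```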
Distinguishing error types
The `errorText` is just a string, so distinguish error types via prefixes or codes:
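A sketch of one such scheme — the `CODE: message` convention here is an assumption; any stable scheme your `onError` handler emits will do:

```typescript
// Splitting a "CODE: message" errorText back into a code and a display
// message on the client. The prefix convention is an assumption.
export type ChatError = { code: string; message: string };

export function parseErrorText(errorText: string): ChatError {
  const match = /^([A-Z_]+):\s*(.*)$/.exec(errorText);
  if (match) return { code: match[1], message: match[2] };
  return { code: "UNKNOWN", message: errorText };
}
```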
Errors from accessToken / startSession
If your `accessToken` or `startSession` callback throws (auth failure, DB write failure, network error), the rejection surfaces through `useChat`'s error state — the same as a stream error. The transport doesn't retry the callback automatically; you are responsible for handling it.
`startSession` failures most commonly mean your authorization layer rejected the request (no plan, quota exceeded, user not allowed to chat with this agent). Your server should produce a meaningful error message; the transport propagates it verbatim to `useChat`'s error state.
Run-level retries
chat.agent uses retry: { maxAttempts: 1 } — the run never retries on unhandled failure. This is intentional: each turn is conversation-preserving, so a true run failure is severe and shouldn’t silently retry (which could send duplicate API calls or mutate state twice).
To add retry-like behavior:
- Per-turn retries: handle inside `run()` with try/catch and a fallback model
- Per-message retries: re-send from the frontend (call `sendMessage` or `regenerate` again)
- Whole-run retries: wrap `chat.agent` with a parent task that has `retry` configured, and call the agent's task internally
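As a sketch of the retry-like behavior these options share, here is a generic helper — the helper itself is an assumption (in Trigger.dev you would normally set `retry` on the parent task rather than hand-roll this):

```typescript
// A generic retry loop mirroring what a parent task's retry config does.
// Add backoff/delay between attempts in real use.
export async function withRetries<T>(fn: () => Promise<T>, maxAttempts: number): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
    }
  }
  throw lastError;
}
```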
Best practices
- Always set `uiMessageStreamOptions.onError` to sanitize stream errors before they reach the user.
- Persist messages in `onTurnStart` so a mid-stream failure still leaves the user's message visible.
- Use `onTurnComplete` to mark turn status in your DB (ok/errored/stopped).
- Don't throw raw errors with internal details in hooks — catch, log, then throw a sanitized user-facing message.
- Provide an undo or retry affordance in the UI when errors occur.
- Use `onFailure` for run-level monitoring (Sentry, monitoring dashboards).
- For known transient errors (rate limits, network), consider a fallback model inside `run()` instead of failing the turn.
ChatChunkTooLargeError
A specific run-failing error worth flagging on its own. Anything written through the chat output is one record on the underlying realtime stream, capped at ~1 MiB per record. A single chunk over the cap throws `ChatChunkTooLargeError` (a named export from `@trigger.dev/sdk`). The most common trigger is a tool whose result object is large enough to overflow as one tool-output-available chunk.
The error carries `chunkType`, `chunkSize`, and `maxSize`. Catch it with the `isChatChunkTooLargeError` guard and route oversized values out-of-band.
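A sketch of the out-of-band route: pre-checking a tool result against the cap and returning an ID reference instead (the cap constant and the in-memory store are assumptions — use your real storage, and prefer the error's `maxSize` over a hard-coded constant):

```typescript
// Pre-checking a tool result against the ~1 MiB per-record cap and routing
// oversized values out-of-band via an ID reference. If a chunk still
// overflows at write time, catch with the isChatChunkTooLargeError guard
// and fall back to this same path.
const MAX_CHUNK_BYTES = 1024 * 1024; // ~1 MiB realtime-stream record cap
export const blobStore = new Map<string, string>(); // stand-in for S3/your DB

export function toolResultOrReference(id: string, result: unknown): unknown {
  const serialized = JSON.stringify(result);
  if (new TextEncoder().encode(serialized).length < MAX_CHUNK_BYTES) {
    return result; // small enough to ship as one tool-output-available chunk
  }
  // Too big for one chunk: store it and return an ID the frontend can use
  // to fetch the full value out-of-band.
  blobStore.set(id, serialized);
  return { ref: id };
}
```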
See Large payloads in chat.agent for the two patterns that work around the cap (ID-reference + run-scoped streams.writer()).
See also
- `uiMessageStreamOptions.onError` — stream error handler details
- Custom actions — implement undo/retry actions
- `chat.history` — rollback to a previous message
- Large payloads — handling the ~1 MiB per-chunk cap
- Database persistence — saving conversation state
- Standard task hooks — `onFailure`, `onComplete`, `onWait`, etc.

