Some turns need to stop and ask the user something before they can finish — picking between options, confirming a destructive action, or clarifying an ambiguous request. The AI SDK calls this human-in-the-loop (HITL), and the building block is a tool with no `execute` function.
When the LLM calls a tool that has no `execute`, `streamText` ends with the tool call still pending. The turn completes cleanly, the frontend renders UI to collect the answer, and when the user responds, a new turn resumes with the answer merged into the same assistant message.
How it works
`toUIMessageStream` automatically reuses the assistant message ID across the pause (we pass `originalMessages` internally), so `responseMessage` in the post-resume `onTurnComplete` is the full merged message — the original text, the completed tool call, and any follow-up content — not just the new parts.
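As a concrete picture of the merge, here is a sketch of the two `responseMessage` snapshots on either side of a pause (the part shapes are a trimmed stand-in for the AI SDK's `UIMessage`, and `tool-askUser` is a hypothetical tool part):

```typescript
// Simplified stand-ins for the AI SDK's UIMessage and its parts.
type Part = {
  type: string;
  text?: string;
  toolCallId?: string;
  state?: string;
  input?: unknown;
  output?: unknown;
};
type UIMsg = { id: string; role: string; parts: Part[] };

// First onTurnComplete firing: the turn paused with the tool call pending.
const paused: UIMsg = {
  id: "msg_1",
  role: "assistant",
  parts: [
    { type: "text", text: "One question before I proceed." },
    { type: "tool-askUser", toolCallId: "call_1", state: "input-available", input: { question: "Proceed?" } },
  ],
};

// Second firing, after the user answers: same id, tool call completed,
// follow-up text appended. Earlier parts are preserved, not replaced.
const resumed: UIMsg = {
  id: "msg_1",
  role: "assistant",
  parts: [
    { type: "text", text: "One question before I proceed." },
    { type: "tool-askUser", toolCallId: "call_1", state: "output-available", input: { question: "Proceed?" }, output: "yes" },
    { type: "text", text: "Great, proceeding." },
  ],
};

console.log(paused.id === resumed.id); // true
```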
Backend: define the tool
A HITL tool has an `inputSchema` describing what the model can ask, but no `execute` function. When the LLM calls it, `streamText` returns control to your agent.
trigger/my-chat.ts
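A sketch of what the tool definition in that file might look like, assuming AI SDK v5's `tool` helper and a `zod` schema (the `askUser` name and its fields are illustrative):

```typescript
import { tool } from "ai";
import { z } from "zod";

// HITL tool: an inputSchema but deliberately no `execute`. When the model
// calls it, streamText finishes the step with the call still pending
// instead of running anything server-side.
export const askUser = tool({
  description: "Ask the user a question and wait for their answer",
  inputSchema: z.object({
    question: z.string().describe("The question to show the user"),
    options: z.array(z.string()).optional().describe("Optional preset choices"),
  }),
  // no `execute` here: that is what makes it human-in-the-loop
});
```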
Frontend: render the question and collect the answer
Two pieces on the client:
- UI for the pending tool call — render when the tool part is in `input-available` state, i.e. the LLM has called the tool but there’s no output yet.
- Auto-send on resolution — use `sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls` so answering kicks off the next turn without the user having to hit “send.”
`addToolOutput` patches the assistant message locally with `state: "output-available"` and fills in `output`. `lastAssistantMessageIsCompleteWithToolCalls` detects that every pending tool call now has a result, and `useChat` fires a new `sendMessage` — the backend picks it up as the next turn.
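The predicate's behavior can be approximated in plain TypeScript as a mental model (a simplified sketch, not the SDK's actual implementation):

```typescript
// Simplified sketch of how the predicate decides (not the SDK's real code).
type Part = { type: string; state?: string };
type Msg = { role: string; parts: Part[] };

// True when the last message is an assistant message whose tool parts all
// have an output, i.e. every pending question has been answered.
function lastAssistantCompleteWithToolCalls(messages: Msg[]): boolean {
  const last = messages[messages.length - 1];
  if (!last || last.role !== "assistant") return false;
  const toolParts = last.parts.filter((p) => p.type.startsWith("tool-"));
  return toolParts.length > 0 && toolParts.every((p) => p.state === "output-available");
}

const pending: Msg[] = [
  { role: "assistant", parts: [{ type: "tool-askUser", state: "input-available" }] },
];
const answered: Msg[] = [
  { role: "assistant", parts: [{ type: "tool-askUser", state: "output-available" }] },
];

console.log(lastAssistantCompleteWithToolCalls(pending));  // false
console.log(lastAssistantCompleteWithToolCalls(answered)); // true
```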
Detecting a paused turn in onTurnComplete
Two ways to detect “this turn paused for user input” vs “this turn finished normally”:
Via finishReason (recommended)
The AI SDK’s finish reason is surfaced on every `onTurnComplete` event. If the model stopped on tool calls, it’s `"tool-calls"`:
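A sketch of that check (the event shape is trimmed to the `finishReason` field described above):

```typescript
// Trimmed event shape: onTurnComplete surfaces the AI SDK finish reason.
type TurnEvent = { finishReason?: string };

// "tool-calls" means the model stopped with tool calls outstanding; for a
// HITL tool, that means the turn is waiting on the user.
function turnPausedForUser(event: TurnEvent): boolean {
  return event.finishReason === "tool-calls";
}

console.log(turnPausedForUser({ finishReason: "tool-calls" })); // true
console.log(turnPausedForUser({ finishReason: "stop" }));       // false
```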
`finishReason` is only `undefined` for manual `chat.pipe()` flows or aborted streams. For the common `run()` → `return streamText(...)` pattern it’s always populated.

Via response parts
If you need more nuance (e.g. which specific tool is pending), inspect the parts directly. `finishReason === "tool-calls"` and `pendingToolCalls(responseMessage).length > 0` are equivalent in practice. Use `finishReason` for dispatch, parts for detail.
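A `pendingToolCalls` helper along those lines could be sketched as (the part shape is a simplified subset of the real message parts):

```typescript
// Simplified part shape; real AI SDK parts carry more fields.
type Part = { type: string; state?: string; toolCallId?: string };
type ResponseMessage = { id: string; parts: Part[] };

// Tool parts that have input but no output yet: still waiting on the user.
function pendingToolCalls(message: ResponseMessage): Part[] {
  return message.parts.filter(
    (p) => p.type.startsWith("tool-") && p.state === "input-available"
  );
}

const msg: ResponseMessage = {
  id: "msg_1",
  parts: [
    { type: "text" },
    { type: "tool-askUser", toolCallId: "call_1", state: "input-available" },
  ],
};

console.log(pendingToolCalls(msg).length); // 1
```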
Persistence: one message vs one record per pause
Because the AI SDK reuses the assistant message ID across the pause, the “same turn” from the user’s perspective maps to two `onTurnComplete` firings on the server — but both receive a `responseMessage` with the same `id`, and the second firing’s `responseMessage` contains the fully merged content.
Two common persistence patterns:
Overwrite on every turn (simplest)
Just store the latest `uiMessages` array on every `onTurnComplete`. The paused-turn write is overwritten by the resume-turn write; the final DB state has the full merged message.
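A minimal sketch of the pattern, with an in-memory `Map` standing in for the database (the `persistTurn` helper and `thread_1` id are assumptions for illustration):

```typescript
// Overwrite-on-every-turn: keep only the latest snapshot per thread.
type UIMessage = { id: string; parts: string[] };
const store = new Map<string, UIMessage[]>(); // stand-in for the DB

function persistTurn(threadId: string, uiMessages: UIMessage[]) {
  store.set(threadId, uiMessages); // last write wins
}

// Paused-turn firing writes the partial message...
persistTurn("thread_1", [{ id: "msg_1", parts: ["text", "pending tool call"] }]);
// ...and the resume-turn firing overwrites it with the merged one.
persistTurn("thread_1", [{ id: "msg_1", parts: ["text", "tool result", "follow-up"] }]);

console.log(store.get("thread_1")![0].parts.length); // 3
```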
Checkpoint nodes (immutable history)
For apps that want every pause point recorded as its own immutable snapshot (branching, replay, diff review), save a checkpoint when paused and a sibling when complete. Both checkpoint rows store `responseMessage.id` as the same value — they’re checkpoints of the same logical message. Grouping by `messageId` and ordering by `createdAt` gives you the progression.
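Sketched with an in-memory array standing in for the table (the column names and `saveCheckpoint` helper are illustrative):

```typescript
// One row per onTurnComplete firing; rows for the same logical message
// share messageId (taken from responseMessage.id).
type Checkpoint = {
  messageId: string;
  status: "paused" | "complete";
  createdAt: number;
  snapshot: unknown;
};

const checkpoints: Checkpoint[] = []; // stand-in for a table

function saveCheckpoint(messageId: string, status: Checkpoint["status"], snapshot: unknown) {
  // checkpoints.length is a deterministic, monotonic stand-in for a timestamp
  checkpoints.push({ messageId, status, createdAt: checkpoints.length, snapshot });
}

// Paused firing, then the resume firing: two rows, same messageId.
saveCheckpoint("msg_1", "paused", { parts: 2 });
saveCheckpoint("msg_1", "complete", { parts: 4 });

// Replay the progression of one logical message.
const history = checkpoints
  .filter((c) => c.messageId === "msg_1")
  .sort((a, b) => a.createdAt - b.createdAt)
  .map((c) => c.status);

console.log(history.join(" -> ")); // paused -> complete
```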
Multi-pause turns
A single logical turn can pause more than once — the LLM asks question A, gets the answer, thinks, then asks question B before finishing. Each pause fires its own `onTurnComplete` with `finishReason === "tool-calls"`; only the last firing has `finishReason === "stop"`. The checkpoint pattern above handles this naturally — each pause adds a new checkpoint sharing the same `responseMessage.id`.
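Simulated as plain data (events trimmed to `finishReason`):

```typescript
// Simulated onTurnComplete firings for one logical turn with two pauses.
const firings = [
  { finishReason: "tool-calls" }, // pause: question A
  { finishReason: "tool-calls" }, // pause: question B
  { finishReason: "stop" },       // final firing: turn complete
];

const pauseCount = firings.filter((f) => f.finishReason === "tool-calls").length;
const finished = firings[firings.length - 1].finishReason === "stop";

console.log(pauseCount, finished); // 2 true
```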
Gotchas
- Don’t set an `execute` function on the HITL tool. If it has one, `streamText` will call it immediately instead of handing control back.
- The frontend must use `sendAutomaticallyWhen`. Without it, the user has to press Enter after answering — `addToolOutput` updates local state but doesn’t fire a new turn by itself.
- Don’t mutate `responseMessage` in `onTurnComplete`. It’s the captured snapshot. To add custom parts, use `chat.response.append()` in `onBeforeTurnComplete` (while the stream is open).
- Stop handling. If the user stops the run while a pause is active (`chat.stop()` on the transport), `onTurnComplete` fires with `stopped: true` and `finishReason` reflecting the last successful step. Treat stopped paused turns the same as stopped normal turns.

