Documentation Index
Fetch the complete documentation index at: https://trigger-docs-tri-7532-ai-sdk-chat-transport-and-chat-task-s.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Overview
chat.inject() queues model messages for injection into the conversation. Messages are picked up at the start of the next turn or at the next prepareStep boundary (between tool-call steps).
This is the backend counterpart to pending messages — pending messages come from the user via the frontend, while chat.inject() comes from your task code.
Basic usage
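The original code sample is not reproduced here, so the following is a minimal runnable sketch of the call shape. The `ModelMessage` type is an illustrative stand-in for the type from the `ai` package, and the `chat` object is a mock that models the documented behavior (an in-memory queue), not the real SDK surface:

```typescript
// Stand-in for the ModelMessage type from the "ai" package (illustrative only).
type ModelMessage = { role: "system" | "user" | "assistant"; content: string };

// Mock of chat.inject(): queue model messages for the next turn.
const queued: ModelMessage[] = [];
const chat = {
  inject(messages: ModelMessage[]): void {
    queued.push(...messages);
  },
};

// Queue a system message; per the docs, it is picked up at the start of
// the next turn or at the next prepareStep boundary.
chat.inject([
  { role: "system", content: "The user's account was flagged for review." },
]);
```

Note that `chat.inject()` accepts any message role, so you can inject system context, synthetic user messages, or assistant messages as needed.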
Common pattern: defer + inject
The most powerful pattern combines chat.defer() (background work) with chat.inject() (inject results). Background work runs in parallel with the idle wait between turns, and results are injected before the next response.
Timing
1. Turn completes, onTurnComplete fires
2. chat.defer() registers the background work
3. The run immediately starts waiting for the next message (no blocking)
4. Background work completes, chat.inject() queues the messages
5. User sends the next message, a new turn starts
6. Injected messages are appended before run() executes
7. The LLM sees the injected context alongside the new user message

If the next turn is already streaming when the background work completes, the messages are picked up at the next prepareStep boundary instead.
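The timeline above can be sketched end to end. Everything here (`injectQueue`, `deferred`, the `chat` object, `nextTurn`) is a runnable mock of the documented semantics, not the real SDK:

```typescript
type ModelMessage = { role: "system" | "user" | "assistant"; content: string };

// In-memory inject queue and deferred-work list, mirroring the timeline above.
const injectQueue: ModelMessage[] = [];
const deferred: Array<() => Promise<void>> = [];

const chat = {
  inject: (msgs: ModelMessage[]) => void injectQueue.push(...msgs),
  defer: (work: () => Promise<void>) => void deferred.push(work),
};

// Steps 1-2: the turn completes and onTurnComplete registers background work.
function onTurnComplete(): void {
  chat.defer(async () => {
    // Step 4: background work finishes and queues its result.
    chat.inject([
      { role: "system", content: "Coaching: cite sources next time." },
    ]);
  });
}

// Steps 5-7: on the next user message, injected messages are appended
// before run() executes, so the LLM sees them alongside the new message.
async function nextTurn(userMessage: ModelMessage): Promise<ModelMessage[]> {
  const injected = injectQueue.splice(0); // drained at the start of the turn
  return [...injected, userMessage];
}
```

Step 3 is implicit: registering the deferred work returns immediately, so the run goes straight to waiting for the next message while the work runs during the idle gap.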
Example: self-review
A cheap model reviews the agent's response after each turn and injects coaching for the next one. Uses Prompts for the review prompt and generateObject for structured output.

The review runs on gpt-4o-mini (fast, cheap) in the background. If the user sends another message before the review completes, the coaching is still injected: chat.inject() persists across the idle wait.
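A sketch of the self-review pattern follows. In the real implementation the review would be a generateObject call against gpt-4o-mini; here `reviewResponse` is a deterministic stand-in so the sketch is self-contained, and the `chat` object mocks the queue semantics described above:

```typescript
type ModelMessage = { role: "system" | "user" | "assistant"; content: string };

// Structured output shape the review model returns (via generateObject in
// the real implementation; a deterministic stand-in here).
interface Review {
  score: number; // 1-10 quality rating
  coaching: string; // advice for the next turn
}

// Stand-in for a generateObject call against gpt-4o-mini.
async function reviewResponse(responseText: string): Promise<Review> {
  return responseText.length > 280
    ? { score: 5, coaching: "Previous answer was long. Be more concise." }
    : { score: 9, coaching: "Keep this level of detail." };
}

const injectQueue: ModelMessage[] = [];
const chat = {
  inject: (msgs: ModelMessage[]) => void injectQueue.push(...msgs),
  defer: (work: () => Promise<void>) => void work(), // fire-and-forget mock
};

// onTurnComplete hook: review the turn in the background, inject coaching
// so the next response benefits from it.
function onTurnComplete({ responseText }: { responseText: string }): void {
  chat.defer(async () => {
    const review = await reviewResponse(responseText);
    chat.inject([
      { role: "system", content: `Self-review: ${review.coaching}` },
    ]);
  });
}
```

Because the coaching is injected as a system message, the frontend never sees it unless you also emit custom data-* chunks.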
Other use cases
- RAG augmentation: After each turn, fetch relevant documents and inject them as context for the next response
- Safety checks: Run a moderation model on the response, inject warnings if issues are detected
- Fact-checking: Verify claims in the response using search tools, inject corrections
- Context enrichment: Look up user/account data based on what was discussed, inject it as system context
How it differs from pending messages
| | chat.inject() | Pending messages |
|---|---|---|
| Source | Backend task code | Frontend user input |
| Triggered by | Your code (e.g. onTurnComplete + chat.defer()) | User sending a message during streaming |
| Injection point | Start of next turn, or next prepareStep boundary | Next prepareStep boundary only |
| Message role | Any (system, user, assistant) | Typically user |
| Frontend visibility | Not visible unless you write custom data-* chunks | Visible via usePendingMessages hook |
API reference
chat.inject()
| Parameter | Type | Description |
|---|---|---|
| messages | ModelMessage[] | Model messages to inject (from the ai package) |
Queued messages are consumed at whichever of these happens first:

- A new turn starts, before run() executes
- A prepareStep boundary is reached, between tool-call steps during streaming
chat.inject() writes to an in-memory queue in the current process. It works from any code running in the same task — lifecycle hooks, deferred work, tool execute functions, etc. It does not work from subtasks or other runs.
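For example, a tool's execute function can call chat.inject() because it runs in the same process as the task. The tool shape below loosely follows the AI SDK's tool convention, and the lookup logic and `chat` mock are illustrative assumptions:

```typescript
type ModelMessage = { role: "system" | "user" | "assistant"; content: string };

const injectQueue: ModelMessage[] = [];
const chat = {
  inject: (msgs: ModelMessage[]) => void injectQueue.push(...msgs),
};

// A tool whose execute function enriches future turns with account context.
// The tier lookup is a stand-in for a real database or API call.
const lookupAccountTool = {
  description: "Look up the current user's account tier",
  execute: async ({ userId }: { userId: string }) => {
    const tier = userId.startsWith("ent-") ? "enterprise" : "free";
    // Same process as the task, so this queues for the next prepareStep
    // boundary or the next turn.
    chat.inject([{ role: "system", content: `Account tier: ${tier}` }]);
    return { tier };
  },
};
```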
