Flow Execution
Execute flows programmatically and retrieve results.
Run a flow
`POST /api/v1/flows/{flowId}/runs`

The `flowId` can be either the flow's slug name (e.g. `product_description`) or its ObjectId.
Request body
| Field | Type | Required | Description |
|---|---|---|---|
| `parameters` | object | no | Key-value map of template parameters. Keys must match `[[param]]` tokens. |
| `environment` | string | no | Environment name (e.g. `"production"`). Determines which version runs. |
| `messages` | array | no | Additional chat messages appended after the template's system/user messages. |
| `entrypoint` | string | no | Override which template to use as entrypoint (by name). |
| `version` | string | no | Override which version to use (by ID). Takes precedence over `environment`. |
| `overrideModel` | string | no | Force a specific model, bypassing template and routing config. |
| `temperature` | float | no | Override temperature. |
| `maxTokens` | integer | no | Override max output tokens. |
| `maxThinkingTokens` | integer | no | Budget for extended thinking (reasoning models). |
| `maxToolCalls` | integer | no | Max tool-calling loop iterations. |
| `responseSchema` | object | no | JSON Schema for structured output. Overrides the template's schema. |
| `vendorTools` | string array | no | Provider-native tools to enable (e.g. `["web_search"]`). |
| `tags` | string array | no | Tags for filtering in the invocation log and statistics. |
| `nonce` | string | no | Cache-bust string. Affects the response cache hash; never sent to the LLM. |
| `logLevel` | string | no | Override logging level: `Trace`, `Debug`, `Information`, `Warning`, `Error`. |
| `customerId` | string | no | End-customer ID for per-customer usage tracking. |
Headers
| Header | Description |
|---|---|
| `Authorization` | `Bearer YOUR_API_KEY` (required) |
| `X-Shuttle-Customer-Id` | End-customer ID (takes precedence over `customerId` in the body) |
| `X-Shuttle-Debug-Url` | Attach a debug URL tag for tracing |
Example
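The original example is not preserved here, so the following is a minimal sketch using only Python's standard library. The base URL `https://api.example.com` is a placeholder for your actual host; the endpoint path, headers, and body fields come from the documentation above. The request is built but not sent.

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder; substitute your Shuttle host
API_KEY = "YOUR_API_KEY"

def build_run_request(flow_id: str, parameters: dict, environment: str) -> urllib.request.Request:
    """Build (but do not send) a POST /api/v1/flows/{flowId}/runs request."""
    body = {
        "parameters": parameters,    # keys must match [[param]] tokens in the template
        "environment": environment,  # e.g. "production"
        "tags": ["docs-example"],    # optional: for filtering in the invocation log
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/api/v1/flows/{flow_id}/runs",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_run_request("product_description", {"product": "espresso machine"}, "production")
```

Dispatch with `urllib.request.urlopen(req)` or the equivalent call in your HTTP client of choice.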
Response
Response fields
| Field | Type | Description |
|---|---|---|
| `id` | string | Unique run ID. Use this for feedback, agent trees, etc. |
| `timestamp` | datetime | When the run was created. |
| `creditsUsed` | long | Total credits consumed (1M credits = $1). |
| `milliseconds` | integer | Total execution time. |
| `usage` | object | Detailed usage breakdown. |
| `usage.creditsLeft` | long | Remaining tenant credit balance. |
| `responses` | array | Inference responses (one per LLM call in the chain). |
| `responses[].textResponse` | string | The LLM's text output. |
| `responses[].toolCalls` | array | Any tool calls returned (if the tool loop hasn't completed). |
| `responses[].citations` | array | Citation URLs (Perplexity only). |
| `warnings` | string array | Warnings about unused/unresolved parameters. |
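The fields above can be consumed as in this sketch. The embedded response body is illustrative only (made-up values matching the documented shape), not actual API output.

```python
import json

# Illustrative response body matching the documented fields (values are made up).
sample = json.loads("""
{
  "id": "run_123",
  "timestamp": "2024-01-01T00:00:00Z",
  "creditsUsed": 1500,
  "milliseconds": 842,
  "usage": {"creditsLeft": 998500},
  "responses": [
    {"textResponse": "A rich, balanced espresso.", "toolCalls": [], "citations": []}
  ],
  "warnings": []
}
""")

# The final text output is the last entry in the responses chain.
text = sample["responses"][-1]["textResponse"]

# 1M credits = $1, so convert creditsUsed to dollars.
cost_usd = sample["creditsUsed"] / 1_000_000
```

Keep `sample["id"]` around if you intend to submit feedback or fetch the agent tree later.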
Discover parameters
Before running a flow, you can check what parameters it expects:
Query parameters:
| Parameter | Description |
|---|---|
| `environment` | Environment to resolve the active version for |
| `version` | Specific version ID |
| `entrypoint` | Specific template name |
Response:
Submit feedback
Collect feedback on run quality for monitoring and fine-tuning:
| Field | Type | Required | Description |
|---|---|---|---|
| `shuttleRequestId` | string | yes | The run ID to attach feedback to |
| `score` | integer | yes | Must be +1 (positive) or -1 (negative) |
| `text` | string | no | Free-text feedback |
| `endUserId` | string | no | Your end-user's identifier |
Feedback is upserted: submitting again for the same request replaces the previous feedback.
Get agent execution tree
For flows that use agent tools (multi-agent orchestration), retrieve the full execution hierarchy:
Returns a tree structure showing:
- Each agent invocation with its role, duration, and cost
- Tool calls made by each agent
- Child agents spawned
- Cumulative vs. direct credit usage
- Status and error messages
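To illustrate the cumulative-vs-direct distinction, here is a sketch with an assumed node shape (`creditsUsed` for direct credits, `children` for spawned agents) — not the actual API schema:

```python
# Hypothetical node shape: {"creditsUsed": <direct credits>, "children": [...]}.
# Cumulative usage = a node's direct credits plus everything its subtree spent.
def cumulative_credits(node: dict) -> int:
    return node.get("creditsUsed", 0) + sum(
        cumulative_credits(child) for child in node.get("children", [])
    )

tree = {
    "creditsUsed": 100,  # root agent's direct spend
    "children": [
        {"creditsUsed": 40, "children": []},
        {"creditsUsed": 25, "children": [{"creditsUsed": 10, "children": []}]},
    ],
}
total = cumulative_credits(tree)  # 100 + 40 + 25 + 10
```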