Templates & Parameters
Template anatomy
Each template in a flow version has the following fields:

| Field | Type | Description |
|---|---|---|
| `name` | string | Technical name (e.g. `main`, `billing_handler`). Used in API calls. |
| `description` | string | Human-readable description. Used by the classifier in Route mode. |
| `template` | string | The system prompt, with `[[parameter]]` placeholders. |
| `userTemplate` | string | Optional user message template. Same parameter syntax. |
| `llm` | string | Model to use (e.g. `openai/gpt-4o`). |
| `temperature` | float | Sampling temperature (0–2). |
| `maxToolCalls` | integer | Maximum tool-calling iterations before stopping. |
| `toolIds` | string array | IDs of tools available to this template. |
| `fallbacks` | string array | Ordered fallback models, tried if the primary model fails. |
| `responseSchema` | object | JSON Schema for structured outputs. |
Parameter syntax
Parameters use double square brackets:
```
You are a [[role]]. Write a [[document_type]] about [[topic]].
```

Rules
Parameter names must be lowercase with letters, digits, and underscores only
- `[[my_param]]`: valid
- `[[MyParam]]`: invalid (no uppercase)
- `[[my-param]]`: invalid (no hyphens)
- `[[my param]]`: invalid (no spaces)
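These rules can be sketched as a one-line validator (the function name and regex are ours; PromptShuttle's actual validation may differ):

```python
import re

# Lowercase letters, digits, and underscores only, per the rules above.
PARAM_NAME = re.compile(r"[a-z0-9_]+")

def is_valid_param_name(name: str) -> bool:
    """Return True if `name` is a legal parameter name."""
    return PARAM_NAME.fullmatch(name) is not None
```
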
How parameters are resolved
When a flow is executed, parameters are resolved in this order:
1. **Caller-supplied values**: parameters passed in the API request body
2. **Sub-template references**: if a parameter name matches another template in the same version, that template's content is substituted
3. **Unresolved**: if neither applies, a warning is returned in the response
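The resolution order can be sketched as follows (a simplified model, not PromptShuttle's implementation; function and argument names are illustrative):

```python
import re

def resolve(template_text, caller_params, sub_templates):
    """Substitute [[param]] placeholders in the documented order:
    caller-supplied values, then sub-template content, else warn."""
    warnings = []

    def replace(match):
        name = match.group(1)
        if name in caller_params:        # 1. caller-supplied value
            return caller_params[name]
        if name in sub_templates:        # 2. sub-template reference
            return sub_templates[name]
        warnings.append(name)            # 3. unresolved -> warning
        return match.group(0)            # leave placeholder untouched

    rendered = re.sub(r"\[\[([a-z0-9_]+)\]\]", replace, template_text)
    return rendered, warnings
```
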
Comments in templates
Templates support two comment styles (comments are stripped before sending to the LLM):
System prompt vs. user message
Each template has two prompt fields:
- `template` (system prompt): rendered as a `system` message. This is where you define the LLM's behavior, persona, and instructions.
- `userTemplate` (user message): rendered as a `user` message after the system prompt. Use this when you want to separate instructions from the user's input.
Both fields support the same [[parameter]] syntax.
Example
System prompt (template):
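(An illustrative sketch; this content is an assumption, not from the source.)

```
You are a [[role]] with deep knowledge of [[topic]].
Write in a clear, neutral tone.
```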
User message (userTemplate):
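(Again illustrative; the `[[question]]` parameter is an assumption.)

```
Please answer the following question about [[topic]]:

[[question]]
```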
Discovering parameters
Before executing a flow, you can discover what parameters it expects:
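For instance, assuming a parameters endpoint on the flow (the exact path below is an assumption, not from the source):

```
GET /api/v1/flows/{flowId}/parameters
```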
Response:
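An illustrative response shape, built from the documented `source` and `promptTemplate` fields; the exact structure is an assumption:

```json
{
  "parameters": [
    { "name": "topic", "source": "template" },
    { "name": "audience", "source": "userTemplate" },
    {
      "name": "billing_handler",
      "source": "template",
      "promptTemplate": { "name": "billing_handler" }
    }
  ]
}
```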
The `source` field indicates where the parameter appears:

| Source | Meaning |
|---|---|
| `template` | In the system prompt |
| `userTemplate` | In the user message template |
| `toolUrl` | In an external tool's URL pattern |
Parameters that match a sub-template name also include a `promptTemplate` object.
Structured outputs
You can constrain the LLM to return JSON matching a schema. Set `responseSchema` on the template:
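For example, to force a small sentiment object (the schema content is illustrative; the field takes standard JSON Schema):

```json
{
  "responseSchema": {
    "type": "object",
    "properties": {
      "sentiment": { "type": "string", "enum": ["positive", "neutral", "negative"] },
      "confidence": { "type": "number", "minimum": 0, "maximum": 1 }
    },
    "required": ["sentiment", "confidence"]
  }
}
```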
The caller can also override this at execution time by passing `responseSchema` in the flow run request.
Not all models support structured outputs. Check the model's `supportsStructuredOutput` field via `GET /api/v1/models/descriptors`.
Tool assignments
Templates can reference tools by their IDs in the `toolIds` array. When tools are assigned:

1. PromptShuttle includes the tool definitions in the LLM request.
2. If the LLM returns tool calls, PromptShuttle executes them automatically.
3. Tool results are appended to the conversation and the LLM is re-invoked.

This loop continues until the LLM responds without tool calls (or `maxToolCalls` is reached).
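The loop can be sketched in Python (an illustrative model, not PromptShuttle's implementation; all names here are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class LLMResponse:
    text: str
    tool_calls: list = field(default_factory=list)

def run_with_tools(call_llm, execute_tool, messages, max_tool_calls):
    """Call the LLM, execute any requested tools, append the results,
    and repeat until a plain answer arrives or the budget runs out."""
    for _ in range(max_tool_calls):
        response = call_llm(messages)
        if not response.tool_calls:          # plain answer: stop the loop
            return response
        for call in response.tool_calls:     # execute each requested tool
            messages.append({"role": "tool", "content": execute_tool(call)})
    return call_llm(messages)                # maxToolCalls reached: final call

# Toy driver: the first call requests a tool, the second answers.
def fake_llm(messages):
    if any(m["role"] == "tool" for m in messages):
        return LLMResponse("The weather is sunny.")
    return LLMResponse("", tool_calls=["get_weather"])

answer = run_with_tools(
    fake_llm,
    lambda call: "sunny",
    [{"role": "user", "content": "Weather?"}],
    max_tool_calls=3,
)
```
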
See Key Concepts: Tools for the five tool types.
Fallback models
Set `fallbacks` on a template to define backup models:
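For example (the fallback model names are illustrative; `openai/gpt-4o` follows the model format shown earlier):

```json
{
  "llm": "openai/gpt-4o",
  "fallbacks": ["openai/gpt-4o-mini", "anthropic/claude-3-5-sonnet"]
}
```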
If the primary model fails (rate limit, timeout, outage), PromptShuttle automatically tries the next model in the list. The response indicates if a fallback was used.