# Templates & Parameters

## Template anatomy

Each template in a flow version has:

| Field            | Type         | Description                                                         |
| ---------------- | ------------ | ------------------------------------------------------------------- |
| `name`           | string       | Technical name (e.g. `main`, `billing_handler`). Used in API calls. |
| `description`    | string       | Human-readable description. Used by the classifier in Route mode.   |
| `template`       | string       | The system prompt with `[[parameter]]` placeholders.                |
| `userTemplate`   | string       | Optional user message template. Same parameter syntax.              |
| `llm`            | string       | Model to use (e.g. `openai/gpt-4o`).                                |
| `temperature`    | float        | Sampling temperature (0-2).                                         |
| `maxToolCalls`   | integer      | Maximum tool-calling iterations before stopping.                    |
| `toolIds`        | string array | IDs of tools available to this template.                            |
| `fallbacks`      | string array | Ordered fallback models if the primary model fails.                 |
| `responseSchema` | object       | JSON Schema for structured outputs.                                 |
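
Taken together, a template definition might look like the following (illustrative values only; the exact request envelope depends on the endpoint you use to create or update the template):

```json
{
  "name": "summarizer",
  "description": "Summarizes a document and reports sentiment.",
  "template": "You are a [[role]]. Summarize the following [[document_type]].",
  "userTemplate": "[[input_text]]",
  "llm": "openai/gpt-4o",
  "temperature": 0.3,
  "maxToolCalls": 5,
  "toolIds": [],
  "fallbacks": ["google/gemini-2.5-flash"],
  "responseSchema": null
}
```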

## Parameter syntax

Parameters use double square brackets:

```
You are a [[role]]. Write a [[document_type]] about [[topic]].
```

### Rules

* Parameter names must be **lowercase** with **letters, digits, and underscores** only:
  * `[[my_param]]` — valid
  * `[[MyParam]]` — invalid (no uppercase)
  * `[[my-param]]` — invalid (no hyphens)
  * `[[my param]]` — invalid (no spaces)
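
The naming rule can be expressed as a regular expression. A minimal sketch (not the server's validator, just the documented rule):

```python
import re

# Mirrors the documented rule: lowercase letters, digits, and underscores only.
PARAM_NAME = re.compile(r"^[a-z0-9_]+$")

def is_valid_param(name: str) -> bool:
    return bool(PARAM_NAME.match(name))

print(is_valid_param("my_param"))   # True
print(is_valid_param("MyParam"))    # False (uppercase)
print(is_valid_param("my-param"))   # False (hyphen)
print(is_valid_param("my param"))   # False (space)
```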

### How parameters are resolved

When a flow is executed, parameters are resolved in this order:

1. **Caller-supplied values** — Parameters passed in the API request body
2. **Sub-template references** — If a parameter name matches another template in the same version, that template's content is substituted
3. **Unresolved** — If neither applies, a warning is returned in the response
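
The resolution order above can be sketched as follows. This is an illustrative re-implementation, not the actual engine: caller values win, then sub-template content, and anything left over produces a warning:

```python
import re

def resolve_parameters(template: str, caller_values: dict, sub_templates: dict):
    """Resolve [[param]] placeholders per the documented precedence."""
    warnings = []

    def substitute(match):
        name = match.group(1)
        if name in caller_values:          # 1. caller-supplied value
            return caller_values[name]
        if name in sub_templates:          # 2. sub-template reference
            return sub_templates[name]
        warnings.append(f"unresolved parameter: {name}")  # 3. unresolved
        return match.group(0)              # leave the placeholder in place

    rendered = re.sub(r"\[\[([a-z0-9_]+)\]\]", substitute, template)
    return rendered, warnings

text, warns = resolve_parameters(
    "You are a [[role]]. [[shared_rules]] Topic: [[topic]].",
    {"role": "translator"},
    {"shared_rules": "Be concise."},
)
```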

### Comments in templates

Templates support two comment styles; comments are stripped before the prompt is sent to the LLM:

```
/* This is a block comment */
// This is a line comment
```
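
A rough idea of the stripping step (this is a sketch, not the actual preprocessor, which may treat edge cases such as `//` inside URLs differently):

```python
import re

def strip_comments(template: str) -> str:
    # Remove /* ... */ block comments (across lines) and // line comments.
    # Naive: a // inside a URL would also be stripped in this sketch.
    template = re.sub(r"/\*.*?\*/", "", template, flags=re.DOTALL)
    template = re.sub(r"//[^\n]*", "", template)
    return template

print(strip_comments("Hello /* note */world"))
```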

## System prompt vs. user message

Each template has two prompt fields:

* **`template`** (system prompt) — Rendered as a `system` message. This is where you define the LLM's behavior, persona, and instructions.
* **`userTemplate`** (user message) — Rendered as a `user` message after the system prompt. Use this when you want to separate instructions from the user's input.

Both fields support the same `[[parameter]]` syntax.

### Example

**System prompt (`template`):**

```
You are a professional translator specializing in [[source_language]] to [[target_language]] translation.
Maintain the original tone and style.
```

**User message (`userTemplate`):**

```
Translate the following text:

[[input_text]]
```
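
Conceptually, the two fields render into a two-message conversation. A minimal sketch of that mapping (`render_messages` is a hypothetical helper, not part of the API):

```python
def render_messages(template: str, user_template: str, params: dict):
    """Fill [[param]] placeholders and build the system + user message pair."""
    def fill(text: str) -> str:
        for name, value in params.items():
            text = text.replace(f"[[{name}]]", value)
        return text

    return [
        {"role": "system", "content": fill(template)},
        {"role": "user", "content": fill(user_template)},
    ]

messages = render_messages(
    "Translate [[source_language]] to [[target_language]].",
    "Translate the following text:\n\n[[input_text]]",
    {"source_language": "German", "target_language": "English", "input_text": "Hallo"},
)
```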

## Discovering parameters

Before executing a flow, you can discover what parameters it expects:

```bash
GET /api/v1/flows/{flowId}/parameters?environment=production
```

**Response:**

```json
[
  { "token": "source_language", "source": "template" },
  { "token": "target_language", "source": "template" },
  { "token": "input_text", "source": "userTemplate" }
]
```

The `source` field indicates where the parameter appears:

| Source         | Meaning                           |
| -------------- | --------------------------------- |
| `template`     | In the system prompt              |
| `userTemplate` | In the user message template      |
| `toolUrl`      | In an external tool's URL pattern |

Parameters that match a sub-template name also include a `promptTemplate` object.
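
The shape of the endpoint's response can be reproduced locally by scanning the two prompt fields for tokens. A rough sketch (not the server implementation; it ignores `toolUrl` sources and sub-template matching):

```python
import re

def discover_parameters(template: str, user_template: str):
    """List each [[token]] with the field it was found in, first occurrence wins."""
    found = []
    for source, text in (("template", template), ("userTemplate", user_template)):
        for token in re.findall(r"\[\[([a-z0-9_]+)\]\]", text):
            entry = {"token": token, "source": source}
            if entry not in found:
                found.append(entry)
    return found

params = discover_parameters(
    "Translator for [[source_language]] to [[target_language]].",
    "Translate the following text:\n\n[[input_text]]",
)
```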

## Structured outputs

You can constrain the LLM to return JSON matching a schema. Set `responseSchema` on the template:

```json
{
  "type": "object",
  "properties": {
    "sentiment": { "type": "string", "enum": ["positive", "negative", "neutral"] },
    "confidence": { "type": "number" },
    "summary": { "type": "string" }
  },
  "required": ["sentiment", "confidence", "summary"]
}
```

The caller can also override this at execution time by passing `responseSchema` in the flow run request.
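
On the caller side it can still be worth sanity-checking the model's output against the schema's `required` and `enum` constraints. A minimal stdlib-only sketch (a full validator such as the `jsonschema` package would cover the whole JSON Schema spec):

```python
import json

SCHEMA = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number"},
        "summary": {"type": "string"},
    },
    "required": ["sentiment", "confidence", "summary"],
}

def check_required_and_enums(payload: str, schema: dict) -> list:
    """Return a list of problems; an empty list means the basic checks passed."""
    data = json.loads(payload)
    problems = [f"missing: {key}" for key in schema["required"] if key not in data]
    for key, spec in schema["properties"].items():
        if key in data and "enum" in spec and data[key] not in spec["enum"]:
            problems.append(f"{key} not in {spec['enum']}")
    return problems
```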

{% hint style="warning" %}
Not all models support structured outputs. Check the model's `supportsStructuredOutput` field via `GET /api/v1/models/descriptors`.
{% endhint %}

## Tool assignments

Templates can reference tools by their IDs in the `toolIds` array. When tools are assigned:

1. PromptShuttle includes the tool definitions in the LLM request
2. If the LLM returns tool calls, PromptShuttle executes them automatically
3. Tool results are appended to the conversation and the LLM is re-invoked
4. This loop continues until the LLM responds without tool calls (or `maxToolCalls` is reached)
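
The loop above can be sketched in a few lines. Here `call_llm` and `execute_tool` are stand-ins for the LLM request and tool execution steps; this is a simplified illustration of the documented behavior, not PromptShuttle's internals:

```python
def run_with_tools(call_llm, execute_tool, messages, max_tool_calls=5):
    """Re-invoke the LLM until it answers without tool calls or the budget runs out."""
    for _ in range(max_tool_calls):
        reply = call_llm(messages)
        tool_calls = reply.get("tool_calls", [])
        if not tool_calls:                      # step 4: plain answer, we're done
            return reply
        for call in tool_calls:                 # steps 2-3: execute, append results
            result = execute_tool(call)
            messages.append({"role": "tool", "name": call["name"], "content": result})
    # Budget exhausted: one last call for a final answer (sketch; real behavior may differ).
    return call_llm(messages)
```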

See [Key Concepts: Tools](https://docs.promptshuttle.com/getting-started/key-concepts#tools) for the five tool types.

## Fallback models

Set `fallbacks` on a template to define backup models:

```json
{
  "llm": "anthropic/claude-sonnet-4-20250514",
  "fallbacks": ["openai/gpt-4o", "google/gemini-2.5-flash"]
}
```

If the primary model fails (rate limit, timeout, outage), PromptShuttle automatically tries the next model in the list. The response indicates whether a fallback was used.
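
The fallback behavior amounts to trying each model in order and reporting which one answered. A sketch under that assumption, where `invoke` stands in for the actual LLM call:

```python
def call_with_fallbacks(invoke, primary: str, fallbacks: list):
    """Try the primary model, then each fallback in order; raise if all fail."""
    last_error = None
    for model in [primary] + fallbacks:
        try:
            return {"model_used": model, "response": invoke(model)}
        except Exception as error:          # rate limit, timeout, outage, ...
            last_error = error
    raise last_error
```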
