Tools & Function Calling

Tools extend what an LLM can do during execution. When a template has tools assigned, PromptShuttle automatically manages the tool-calling loop: the LLM decides which tools to call, PromptShuttle executes them, feeds results back, and re-invokes the LLM until it produces a final answer.

How the tool loop works

1. Send prompt + tool definitions to LLM
2. LLM responds with tool_calls (or final text)
3. If tool_calls:
   a. Execute all tool calls in parallel
   b. Append results to conversation
   c. Go to step 1 (re-invoke LLM with results)
4. If final text: return response

The loop continues until the LLM stops calling tools or the maxToolCalls limit is reached.
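The steps above can be sketched roughly as follows. This is an illustrative sketch, not PromptShuttle's implementation: `call_llm` and `execute_tool` are hypothetical stand-ins for the provider call and tool execution (which PromptShuttle runs in parallel).

```python
# Rough sketch of the tool-calling loop; call_llm and execute_tool are
# hypothetical stand-ins for PromptShuttle internals. Real execution
# runs the tool calls of one round in parallel.
def run_tool_loop(messages, tools, call_llm, execute_tool, max_tool_calls=10):
    calls_made = 0
    while True:
        response = call_llm(messages, tools)      # step 1: prompt + tool definitions
        if not response.get("tool_calls"):        # step 4: final text, loop ends
            return response["text"]
        for call in response["tool_calls"]:       # step 3a/3b: execute and append
            if calls_made >= max_tool_calls:      # maxToolCalls limit reached
                return response.get("text", "")
            result = execute_tool(call)
            messages.append({"role": "tool", "name": call["name"], "content": result})
            calls_made += 1
        # step 3c: loop back and re-invoke the LLM with the tool results
```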

Tool types

PromptShuttle supports five tool types, each suited to different use cases:

| Type | What it does |
| --- | --- |
| External | Calls an HTTP endpoint with parameters from the LLM's tool call |
| Virtual | Enables provider-native capabilities (web search, code interpreter) |
| Agent | Invokes another flow as a sub-agent, enabling multi-agent orchestration |
| CritiqueLoop | Runs a producer/critic refinement loop to iteratively improve output |
| MCP | Calls a tool on an external Model Context Protocol server |

External tools

External tools call HTTP endpoints. The LLM decides the arguments; PromptShuttle builds the request and returns the response.

Configuration

| Field | Description |
| --- | --- |
| name | Tool name (shown to the LLM) |
| description | What the tool does (helps the LLM decide when to use it) |
| parameters | Parameter schema (name, type, description, required, enum values) |
| webUrl | Base HTTP endpoint URL |
| webUrls | Environment-specific URL overrides (`{ "production": "https://...", "staging": "https://..." }`) |
| headers | Custom HTTP headers (e.g. API keys) |

How it works

  1. The LLM returns a tool call with arguments (e.g. { "query": "weather in Berlin" })

  2. PromptShuttle sends an HTTP GET to the configured URL with arguments as query parameters

  3. The response body (up to 1MB) is returned to the LLM as the tool result
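As a concrete illustration of step 2, the tool-call arguments become URL query parameters on the GET request (the endpoint URL here is an invented example):

```python
from urllib.parse import urlencode

# Tool-call arguments from the LLM become query parameters on the GET request.
# The base URL is a hypothetical example endpoint.
arguments = {"query": "weather in Berlin"}
request_url = "https://api.example.com/search?" + urlencode(arguments)
# → "https://api.example.com/search?query=weather+in+Berlin"
```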

Parameter placeholders in URLs

External tool URLs support [[param]] placeholders that are substituted from the flow's request parameters:
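A minimal sketch of the substitution, assuming a tool whose webUrl is `https://api.example.com/v1/[[region]]/search`:

```python
import re

def substitute_placeholders(url: str, params: dict) -> str:
    # Replace each [[name]] token with the matching flow request parameter
    return re.sub(r"\[\[(\w+)\]\]", lambda m: str(params[m.group(1)]), url)

url = substitute_placeholders(
    "https://api.example.com/v1/[[region]]/search",
    {"region": "eu"},
)
# → "https://api.example.com/v1/eu/search"
```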

If the flow was called with "parameters": { "region": "eu" }, the URL becomes https://api.example.com/v1/eu/search.

Environment-specific URLs

Use webUrls to point to different endpoints per environment:
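A hypothetical configuration, with the fallback-to-webUrl behavior sketched as an assumption (field values are invented for illustration):

```python
# Hypothetical external tool configuration; webUrl is the base endpoint,
# webUrls overrides it per environment
tool = {
    "name": "search",
    "webUrl": "https://dev.example.com/search",
    "webUrls": {
        "production": "https://api.example.com/search",
        "staging": "https://staging.example.com/search",
    },
}

def resolve_url(tool: dict, environment: str) -> str:
    # Assumed behavior: use the environment-specific URL if one exists,
    # otherwise fall back to webUrl
    return tool.get("webUrls", {}).get(environment, tool["webUrl"])
```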

Virtual tools (provider-native)

Virtual tools enable capabilities built into the LLM provider, such as web search and code execution. They are handled natively by the provider — PromptShuttle passes them through.

Available virtual tools

| Tool ID | Provider | Description |
| --- | --- | --- |
| web_search | OpenAI, xAI | Search the web for information |
| code_interpreter | OpenAI | Execute code in a sandbox |
| web_search_20250305 | Anthropic | Claude web search |

Using virtual tools

Virtual tools can be enabled in two ways:

  1. On the template — Create a tool with type: Virtual and virtualToolId: "web_search", then assign it to the template

  2. At request time — Pass vendorTools: ["web_search"] in the flow run request
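A sketch of option 2 as an HTTP request. The run endpoint path and payload shape here are assumptions for illustration, not confirmed by this page:

```python
import json
import urllib.request

# Hypothetical flow-run request enabling a provider-native tool at request time;
# the base URL, endpoint path, and payload shape are assumptions.
payload = {
    "messages": [{"role": "user", "content": "What happened in the news today?"}],
    "vendorTools": ["web_search"],
}
request = urllib.request.Request(
    "https://promptshuttle.example.com/api/v1/flows/my-flow/run",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would execute the call
```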

Agent tools

Agent tools invoke another flow as a sub-agent. This is the foundation of multi-agent orchestration in PromptShuttle.

Configuration

| Field | Description |
| --- | --- |
| agentTemplateId | The ID of the flow to invoke as a sub-agent |
| maxAgentDepth | Maximum nesting depth for this specific agent (overrides tenant default) |
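A hypothetical agent tool definition (name, description, and IDs are invented for illustration):

```python
# Hypothetical agent tool; all field values are invented examples
agent_tool = {
    "type": "Agent",
    "name": "research_assistant",
    "description": "Delegates research questions to a dedicated research flow",
    "agentTemplateId": "flow_research_v2",
    "maxAgentDepth": 3,  # overrides the tenant default for this tool
}
```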

How it works

  1. The LLM calls the agent tool with arguments

  2. PromptShuttle creates a child request linked to the parent

  3. The child flow executes with its own tool-calling loop

  4. The child's text response is returned to the parent LLM as the tool result

  5. Cost is tracked both per-agent (direct) and across the tree (cumulative)

Safety mechanisms

  • Depth limits — Default max depth is 10. Configurable per tool, per tenant, or per request.

  • Cycle detection — PromptShuttle tracks the ancestor chain and prevents an agent from calling itself (directly or indirectly).

  • Cost limits — Per-request cost ceilings apply across the entire agent tree.

Context tools

When a template is running as a sub-agent, PromptShuttle automatically injects context tools:

| Tool | Description |
| --- | --- |
| get_context | Returns execution metadata: depth, agent path, cost used, budget remaining |
| get_original_input | Returns the root request's original user messages |
| get_state | Read from shared state (lexically scoped — child values shadow parent) |
| set_state | Write to shared state (visible to child agents) |

These let agents be context-aware without hardcoding parent/child relationships.
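The lexical scoping of get_state/set_state can be illustrated with a chained lookup. This is a sketch of the scoping rule, not PromptShuttle's implementation:

```python
from collections import ChainMap

# Parent agent's shared state
parent_state = {"audience": "executives", "tone": "formal"}

# A child agent's reads check its own writes first; unshadowed keys
# fall through to the parent's state
child_state = ChainMap({"tone": "casual"}, parent_state)

child_state["tone"]      # child value shadows the parent's
child_state["audience"]  # inherited from the parent
```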

CritiqueLoop tools

CritiqueLoop tools implement an iterative refinement pattern: a "producer" flow generates output, then a "critic" flow evaluates it. If the critic rejects the output, the producer tries again with the critic's feedback.

Configuration

| Field | Description |
| --- | --- |
| producerFlowId | Flow ID of the producer |
| criticFlowId | Flow ID of the critic |
| maxLoopIterations | Maximum refinement iterations (default 3) |

How it works

  1. The producer flow generates an initial output

  2. The critic flow evaluates that output and returns an approved verdict, with feedback when it rejects

  3. If rejected, the producer runs again with the critic's feedback

  4. The loop ends when the critic approves or maxLoopIterations is reached

The critic always runs with temperature: 0 for consistent evaluation.

Critic response format

The critic flow must return JSON matching this schema:
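For illustration, a rejection might look like this (the approved and feedback fields are named on this page; the exact schema is not reproduced here, and the feedback text is invented):

```python
# Example critic response; "approved" and "feedback" are the fields
# PromptShuttle reads from the critic's JSON output
critique = {
    "approved": False,
    "feedback": "The summary omits the report's second key finding; add it.",
}
```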

If approved is false, the feedback string is passed back to the producer for the next iteration.
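Putting the pieces together, the refinement loop is roughly the following sketch, where `produce` and `critique` are hypothetical stand-ins for invoking the configured producer and critic flows:

```python
def critique_loop(produce, critique, max_loop_iterations=3):
    # Sketch of the producer/critic refinement loop; produce and critique
    # stand in for running the configured flows.
    feedback = None
    output = None
    for _ in range(max_loop_iterations):
        output = produce(feedback)   # producer sees the critic's last feedback
        verdict = critique(output)   # critic runs at temperature 0
        if verdict["approved"]:
            break
        feedback = verdict["feedback"]
    return output
```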

MCP tools

MCP tools call tools on external Model Context Protocol servers. This lets you integrate with any MCP-compatible tool server.

See MCP Server Integration for setup details.

Managing tools via API

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/v1/tools | List all tools |
| GET | /api/v1/tools/{id} | Get a tool by ID |
| POST | /api/v1/tools | Create a new tool |
| PUT | /api/v1/tools/{id} | Update a tool |
| DELETE | /api/v1/tools/{id} | Delete a tool |
| GET | /api/v1/tools/{id}/usage | List flows that reference this tool |
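For example, creating an external tool via POST /api/v1/tools. The base URL, auth header, and payload field values are placeholders, not confirmed by this page:

```python
import json
import urllib.request

# Hypothetical create-tool request; base URL, auth header, and payload
# values are placeholders for illustration
payload = {
    "type": "External",
    "name": "get_weather",
    "description": "Look up current weather for a city",
    "webUrl": "https://api.example.com/weather",
    "parameters": [
        {"name": "city", "type": "String", "description": "City name", "required": True},
    ],
}
request = urllib.request.Request(
    "https://promptshuttle.example.com/api/v1/tools",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": "Bearer <token>"},
    method="POST",
)
# urllib.request.urlopen(request) would execute the call
```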

Tool parameter schema

Each tool defines its parameters with a list of properties:
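A hypothetical parameter list for a search tool, showing the available properties (names and values are invented):

```python
# Hypothetical parameter list; property names follow the schema described here
parameters = [
    {"name": "query", "type": "String", "description": "Search terms", "required": True},
    {"name": "limit", "type": "Number", "description": "Maximum results", "required": False},
    {"name": "tags", "type": "String", "isList": True, "description": "Tags to filter by"},
    {"name": "sort", "type": "String", "enum": ["relevance", "date"], "description": "Sort order"},
]
```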

Property types: String, Number. Use isList: true for array parameters. Use enum to constrain values.
