Runs
Allows you to start a new run, i.e., resolve templates and send the fully formatted prompt to an LLM of your choice.
Starting a run performs the following steps:

1. Select the active template set version based on version and environment, as described in Flows.
2. Take the prompt entrypoint and recursively replace its tokens with the values given in parameters. A parameter always takes precedence over descending into a template of the same name.
3. Send the prompt to the LLM chosen as the default for the selected entrypoint in that version, unless overrideModel is specified. The model must be one of Models.
4. Wait until the request is finished, then return both the formatted request and the LLM response, along with some metadata.
post
Path parameters
flowId (string, required)
Body
environment (string | nullable, optional)
overrideModel (string | nullable, optional)
tags (string[] | nullable, optional)
entrypoint (string | nullable, optional)
version (string | nullable, optional)
Responses
200
OK
post
POST /api/v1/flows/{flowId}/runs HTTP/1.1
Host:
Content-Type: application/json
Accept: */*
Content-Length: 146
{
"parameters": {
"ANY_ADDITIONAL_PROPERTY": "text"
},
"environment": "text",
"overrideModel": "text",
"tags": [
"text"
],
"entrypoint": "text",
"version": "text"
}
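The request above can also be assembled programmatically. The sketch below is illustrative only: the base URL, the bearer-token header, and the helper names are assumptions, not part of the documented API; check your actual host and authentication scheme.

```python
import json
import urllib.request

BASE_URL = "https://example.com"  # placeholder host
API_KEY = "YOUR_API_KEY"          # placeholder credential

def build_run_body(parameters, **optional):
    """Build the JSON body, omitting optional fields that are not set."""
    body = {"parameters": parameters}
    body.update({k: v for k, v in optional.items() if v is not None})
    return body

def start_run(flow_id, parameters, **optional):
    """POST /api/v1/flows/{flowId}/runs and return the parsed response."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/flows/{flow_id}/runs",
        data=json.dumps(build_run_body(parameters, **optional)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, start_run("my-flow", {"name": "Ada"}, entrypoint="main") sends only parameters and entrypoint, leaving the other nullable fields out of the body entirely.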
200
OK
{
"creditsUsed": 1,
"milliseconds": 1,
"id": "text",
"timestamp": "2025-07-01T18:34:57.202Z",
"flowId": "text",
"tenantId": "text",
"userId": "text",
"flowRequest": {
"parameters": {
"ANY_ADDITIONAL_PROPERTY": "text"
},
"environment": "text",
"overrideModel": "text",
"tags": [
"text"
],
"entrypoint": "text",
"version": "text"
},
"requests": [
{
"id": "text",
"idCreated": "2025-07-01T18:34:57.202Z",
"shuttleRequestId": "text",
"messages": [
{
"role": "client",
"text": "text"
}
],
"model": "text",
"maxTokens": 1,
"seed": 1,
"temperature": 1,
"topP": 1,
"topK": 1
}
],
"responses": [
{
"model": "text",
"provider": "text",
"usage": {
"tokensIn": 1,
"tokensOut": 1,
"tokensTotal": 1,
"costUsd": 1,
"costCredits": 1
},
"textResponse": "text",
"verbatimResponse": "text"
}
],
"creditsLeft": 1
}
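Given the response shape above, callers typically want the generated text and the credit accounting. A minimal extraction sketch over that schema (summarize_run is a hypothetical helper name, not part of the API):

```python
def summarize_run(run):
    """Collect each LLM response text plus the run's credit usage."""
    return {
        "texts": [r["textResponse"] for r in run.get("responses", [])],
        "creditsUsed": run["creditsUsed"],
        "creditsLeft": run["creditsLeft"],
    }
```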