PromptShuttle API

Runs

Allows you to start a new run, i.e., resolve the flow's templates and send the fully formatted prompt to an LLM of your choice. Starting a run performs the following steps:

  • Selects the active template set version based on version and environment, as described in Flows.

  • Takes the prompt entrypoint and recursively replaces the tokens given in parameters. A parameter always takes precedence over descending into a template of the same name (see the sketch after this list).

  • Sends the prompt to the LLM configured as the default for the selected entrypoint in that version, unless overrideModel is specified. The model must be one of the available Models.

  • Waits until the request has finished, then returns both the formatted request and the LLM response, along with some metadata.
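
To make the resolution rule concrete, here is a minimal TypeScript sketch of the recursive replacement described above. The {{TOKEN}} syntax and the in-memory template map are assumptions for illustration; this page does not specify PromptShuttle's actual token syntax.

// Minimal sketch of recursive template resolution.
// The {{NAME}} token syntax is hypothetical; the real syntax may differ.
type Templates = Record<string, string>;
type Params = Record<string, string>;

function resolve(text: string, templates: Templates, parameters: Params): string {
  return text.replace(/\{\{(\w+)\}\}/g, (match, name: string) => {
    // Parameters always take precedence over a template of the same name.
    if (name in parameters) return parameters[name];
    // Otherwise descend recursively into the named template.
    if (name in templates) return resolve(templates[name], templates, parameters);
    // Unknown tokens are left untouched in this sketch.
    return match;
  });
}

// Example: the entrypoint pulls in the "signature" template, but NAME
// is supplied as a parameter and therefore wins over any template.
const templates: Templates = {
  greeting: "Hello {{NAME}}!\n{{signature}}",
  signature: "-- sent via {{CHANNEL}}",
};
console.log(resolve(templates["greeting"], templates, { NAME: "Ada", CHANNEL: "email" }));
// Hello Ada!
// -- sent via email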

POST /api/v1/flows/{flowId}/runs

Path parameters

  • flowId (string, required)

Body

  • parameters (object; maps token names to their replacement values, see the request example below)
  • environment (string | nullable, optional)
  • overrideModel (string | nullable, optional)
  • tags (string[] | nullable, optional)
  • entrypoint (string | nullable, optional)
  • version (string | nullable, optional)

Responses

  • 200 OK

Request example
POST /api/v1/flows/{flowId}/runs HTTP/1.1
Host: 
Content-Type: application/json
Accept: */*
Content-Length: 146

{
  "parameters": {
    "ANY_ADDITIONAL_PROPERTY": "text"
  },
  "environment": "text",
  "overrideModel": "text",
  "tags": [
    "text"
  ],
  "entrypoint": "text",
  "version": "text"
}
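
A request like the example above can be sent with any HTTP client. Below is a minimal TypeScript sketch using fetch; the base URL and the Authorization header are assumptions, so consult the Specification page for the actual host and authentication scheme.

// Sketch of starting a run; BASE_URL and the auth scheme are assumptions.
const BASE_URL = "https://example.com";
const API_KEY = process.env.PROMPTSHUTTLE_API_KEY ?? "";

async function startRun(flowId: string): Promise<unknown> {
  const res = await fetch(`${BASE_URL}/api/v1/flows/${flowId}/runs`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`, // assumed scheme
    },
    body: JSON.stringify({
      parameters: { NAME: "Ada" }, // token values used during resolution
      environment: "production",   // optional
      tags: ["demo"],              // optional
    }),
  });
  if (!res.ok) throw new Error(`Run failed with status ${res.status}`);
  return res.json();
}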
Response example: 200 OK

{
  "creditsUsed": 1,
  "milliseconds": 1,
  "id": "text",
  "timestamp": "2025-05-31T18:32:38.576Z",
  "flowId": "text",
  "tenantId": "text",
  "userId": "text",
  "flowRequest": {
    "parameters": {
      "ANY_ADDITIONAL_PROPERTY": "text"
    },
    "environment": "text",
    "overrideModel": "text",
    "tags": [
      "text"
    ],
    "entrypoint": "text",
    "version": "text"
  },
  "requests": [
    {
      "id": "text",
      "idCreated": "2025-05-31T18:32:38.576Z",
      "shuttleRequestId": "text",
      "messages": [
        {
          "role": "client",
          "text": "text"
        }
      ],
      "model": "text",
      "maxTokens": 1,
      "seed": 1,
      "temperature": 1,
      "topP": 1,
      "topK": 1
    }
  ],
  "responses": [
    {
      "model": "text",
      "provider": "text",
      "usage": {
        "tokensIn": 1,
        "tokensOut": 1,
        "tokensTotal": 1,
        "costUsd": 1,
        "costCredits": 1
      },
      "textResponse": "text",
      "verbatimResponse": "text"
    }
  ],
  "creditsLeft": 1
}
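
Once the 200 response is parsed, the usage metadata can be read directly. A brief sketch, assuming the JSON above has been parsed into run by the fetch sketch earlier, typing only the fields used here:

// Summarize cost and output from a parsed run response.
interface RunResponse {
  creditsUsed: number;
  creditsLeft: number;
  responses: {
    textResponse: string;
    usage: { tokensTotal: number; costUsd: number };
  }[];
}

function summarize(run: RunResponse): string {
  const first = run.responses[0];
  return `${first.usage.tokensTotal} tokens, $${first.usage.costUsd} ` +
    `(${run.creditsUsed} credits used, ${run.creditsLeft} left): ` +
    first.textResponse;
}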