Quickstart

Get a response from an LLM in under two minutes.

Prerequisites

You'll need a PromptShuttle account and an API key (used as YOUR_API_KEY in the examples below).

Option 1: OpenAI-compatible endpoint

If you already use the OpenAI SDK, just swap the base URL and API key; the same OpenAI-compatible endpoint also accepts plain HTTP requests:

curl https://app.promptshuttle.com/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [
      {"role": "user", "content": [{"type": "text", "text": "Say hello!"}]}
    ]
  }'
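The same request from Python: if you use the OpenAI SDK, pass `base_url="https://app.promptshuttle.com/api/v1"` and your PromptShuttle key when constructing the client. As a dependency-free sketch, here is the identical request built with only the standard library (shown without sending, since that requires a real key):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; substitute a real key before sending

payload = {
    "model": "openai/gpt-4o",
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "Say hello!"}]}
    ],
}

# Mirrors the curl command above: same URL, headers, and JSON body.
req = urllib.request.Request(
    "https://app.promptshuttle.com/api/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(req) would send it; the response body is JSON.
print(req.full_url)
```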

The response follows the standard OpenAI chat completion format. See the full endpoint reference for all options.
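In that format, the assistant's text lives at `choices[0].message.content`. A short sketch of unpacking it (the response body below is abbreviated and illustrative, not a recorded API response):

```python
import json

# Abbreviated example in the standard OpenAI chat-completion shape.
raw = """
{
  "object": "chat.completion",
  "model": "openai/gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello!"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 2, "total_tokens": 11}
}
"""

data = json.loads(raw)
# The assistant text is nested under the first choice's message.
reply = data["choices"][0]["message"]["content"]
print(reply)  # → Hello!
```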

Option 2: Create and run a flow

Flows let you version your prompts, add parameters, and route across environments.

1. Create a flow

In the PromptShuttle UI, click Flows > Create Flow. Give it a title (e.g. "Product Description Generator") — a slug name is auto-generated.

2. Edit the template

In the flow editor, write your prompt template.

Parameters use double square brackets: [[parameter_name]]. PromptShuttle auto-detects them.
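For instance, a (hypothetical) template like `"Write a product description for [[product_name]] aimed at [[audience]]."` declares two parameters, which PromptShuttle fills in server-side at execution time. Conceptually, the substitution behaves like this sketch (an illustration of the syntax, not the actual implementation):

```python
import re

template = "Write a product description for [[product_name]] aimed at [[audience]]."
params = {"product_name": "TrailRunner 2", "audience": "casual hikers"}

def render(template: str, params: dict) -> str:
    # Replace each [[name]] with its value; leave unknown names untouched
    # so unresolved parameters are easy to spot downstream.
    return re.sub(r"\[\[(\w+)\]\]", lambda m: params.get(m.group(1), m.group(0)), template)

print(render(template, params))
# → Write a product description for TrailRunner 2 aimed at casual hikers.
```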

Select a model (e.g. openai/gpt-4o) and save.

3. Activate for an environment

Go to the flow's environment settings and activate your version for an environment (e.g. production).

4. Execute via API

The response includes the LLM output, token usage, cost, and any warnings about unresolved parameters. See Flow Execution API for the full reference.
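Those response fields might be consumed like this. The field names below are assumptions for illustration only; the Flow Execution API reference is authoritative:

```python
# Hypothetical execution response, for illustration only.
response = {
    "output": "A rugged trail shoe built for long days on mixed terrain.",
    "usage": {"prompt_tokens": 42, "completion_tokens": 87},
    "cost": 0.0013,
    "warnings": ["unresolved parameter: [[audience]]"],
}

# Surface unresolved-parameter warnings before trusting the output.
for warning in response["warnings"]:
    print(f"warning: {warning}")

print(response["output"])
```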
