# Providers

Prompd supports multiple LLM providers through a unified interface, including built-in providers and custom OpenAI-compatible APIs.

## OpenAI

- **Provider ID:** `openai`
- **Models:** `gpt-4o`, `gpt-4o-mini`, `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`
- **Authentication:** `OPENAI_API_KEY` environment variable

```bash
export OPENAI_API_KEY="sk-your-openai-key"
prompd run example.prmd --provider openai --model gpt-4o
```
## Anthropic

- **Provider ID:** `anthropic`
- **Models:** `claude-3-5-sonnet-20241022`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307`
- **Authentication:** `ANTHROPIC_API_KEY` environment variable

```bash
export ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
prompd run example.prmd --provider anthropic --model claude-3-5-sonnet-20241022
```
## Ollama

- **Provider ID:** `ollama`
- **Models:** any model in your local Ollama instance (`llama3.2`, `qwen2.5`, `mixtral`, `phi3`, `codellama`, etc.)
- **API Endpoint:** `http://localhost:11434/v1`

```bash
ollama pull llama3.2
prompd run example.prmd --provider ollama --model llama3.2
```

## Custom Providers

Add any OpenAI-compatible API as a custom provider:

```bash
prompd config provider add <name> <base_url> <models...> [--api-key KEY]
```

**Groq (Fast Inference):**

```bash
prompd config provider add groq https://api.groq.com/openai/v1 \
  llama-3.1-8b-instant llama-3.1-70b-versatile mixtral-8x7b-32768 \
  --api-key gsk_your_groq_api_key
```

**Together AI:**

```bash
prompd config provider add together https://api.together.xyz/v1 \
  "mistralai/Mixtral-8x7B-Instruct-v0.1" \
  "meta-llama/Llama-2-70b-chat-hf" \
  --api-key your_together_api_key
```

**LM Studio (Local GUI):**

```bash
prompd config provider add lmstudio http://localhost:1234/v1 local-model
```

**Fireworks AI:**

```bash
prompd config provider add fireworks https://api.fireworks.ai/inference/v1 \
  "accounts/fireworks/models/llama-v3-70b-instruct" \
  --api-key fw_your_fireworks_key
```

**OpenRouter (Multiple Models via One API):**

```bash
prompd config provider add openrouter https://openrouter.ai/api/v1 \
  "anthropic/claude-3-sonnet" "openai/gpt-4" \
  --api-key sk_or_your_openrouter_key
```

## OpenAI-Compatible Endpoints

Many LLM providers support the OpenAI Chat Completions API format:

| Provider | Base URL | Notes |
| --- | --- | --- |
| Groq | `https://api.groq.com/openai/v1` | Ultra-fast inference |
| Together AI | `https://api.together.xyz/v1` | Many open-source models |
| Fireworks AI | `https://api.fireworks.ai/inference/v1` | Fast inference |
| OpenRouter | `https://openrouter.ai/api/v1` | Access many models via one API |
| Perplexity | `https://api.perplexity.ai` | Search-augmented models |
| LM Studio | `http://localhost:1234/v1` | Local models with GUI |
| Ollama | `http://localhost:11434/v1` | Local models, CLI-based |
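Because these providers share the Chat Completions request shape, the same payload works against any of the base URLs above. A minimal sketch of constructing such a request with only the standard library (`build_chat_request` is a hypothetical helper for illustration, not part of prompd):

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, prompt):
    """Build (but don't send) an OpenAI-compatible /chat/completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Same request shape, different base URL per provider
req = build_chat_request("https://api.groq.com/openai/v1", "gsk_...",
                         "llama-3.1-8b-instant", "Hello")
print(req.full_url)  # https://api.groq.com/openai/v1/chat/completions
```

Swapping the base URL (and model name) is all that changes between providers; the request body and headers stay identical.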
## Managing Providers

```bash
# List all providers
prompd config provider list

# Show provider details
prompd config provider show <name>

# Update API key
prompd config provider setkey <name> <api_key>

# Remove a custom provider
prompd config provider remove <name>
```

## Configuration File

Provider settings are stored in `~/.prompd/config.yaml`:

```yaml
default_provider: openai
default_model: gpt-4o
api_keys:
  openai: "sk-your-openai-key"
  anthropic: "sk-ant-your-anthropic-key"
custom_providers:
  groq:
    base_url: "https://api.groq.com/openai/v1"
    api_key: "gsk-your-groq-key"
    type: "openai-compatible"
    models:
      - "llama-3.1-8b-instant"
      - "llama-3.1-70b-versatile"
    enabled: true
```
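Each `custom_providers` entry carries a base URL, an optional API key, a type, and a model list. A small validation sketch (hypothetical helper; field names taken from the config example above, not from prompd's source):

```python
REQUIRED_FIELDS = {"base_url", "type", "models"}

def validate_provider(name, entry):
    """Check a custom_providers entry has the fields shown in config.yaml."""
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        raise ValueError(f"provider {name!r} missing fields: {sorted(missing)}")
    if not entry["base_url"].startswith(("http://", "https://")):
        raise ValueError(f"provider {name!r}: base_url must be http(s)")
    return True

groq = {
    "base_url": "https://api.groq.com/openai/v1",
    "api_key": "gsk-your-groq-key",
    "type": "openai-compatible",
    "models": ["llama-3.1-8b-instant", "llama-3.1-70b-versatile"],
    "enabled": True,
}
print(validate_provider("groq", groq))  # True
```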
API keys can also be supplied through environment variables:

```bash
export OPENAI_API_KEY="sk-your-openai-key"
export ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
export GROQ_API_KEY="gsk-your-groq-key"
```

## API Key Resolution

When resolving API keys, prompd checks sources in this order:

1. The `--api-key` command-line parameter
2. The provider-specific key in the config file
3. The `{PROVIDER_NAME}_API_KEY` environment variable
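That precedence can be modeled with a short lookup function (an illustrative sketch; prompd's actual implementation may differ):

```python
import os

def resolve_api_key(provider, cli_key=None, config_keys=None):
    """Resolve an API key using the documented precedence:
    1. --api-key on the command line
    2. provider-specific key in the config file
    3. {PROVIDER_NAME}_API_KEY environment variable
    """
    if cli_key:                        # 1. command-line flag wins
        return cli_key
    key = (config_keys or {}).get(provider)
    if key:                            # 2. config file entry
        return key
    return os.environ.get(f"{provider.upper()}_API_KEY")  # 3. environment

# No CLI flag given, so the config-file key is used
print(resolve_api_key("groq", config_keys={"groq": "gsk-from-config"}))
```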

## Per-Provider Settings

Configure provider-specific parameters in `.prmd` files:

```yaml
---
name: analysis-prompt
parameters:
  - name: code
    type: string
    required: true
providers:
  openai:
    model: gpt-4o
    temperature: 0.1
    max_tokens: 2000
  anthropic:
    model: claude-3-5-sonnet-20241022
    temperature: 0.2
    max_tokens: 4000
  groq:
    model: llama-3.1-8b-instant
    temperature: 0.1
---
Analyze this code: {code}
```
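Selecting a provider then amounts to picking the matching block from the frontmatter, overlaying it on any defaults. A sketch of that merge (`settings_for` is a hypothetical helper; prompd's real merge logic may differ):

```python
def settings_for(frontmatter, provider, defaults=None):
    """Pick the provider-specific block from parsed .prmd frontmatter,
    letting its keys override any default settings."""
    merged = dict(defaults or {})
    merged.update(frontmatter.get("providers", {}).get(provider, {}))
    return merged

# Frontmatter from the example above, as a parsed dict
frontmatter = {
    "name": "analysis-prompt",
    "providers": {
        "openai": {"model": "gpt-4o", "temperature": 0.1, "max_tokens": 2000},
        "groq": {"model": "llama-3.1-8b-instant", "temperature": 0.1},
    },
}
print(settings_for(frontmatter, "groq", defaults={"max_tokens": 1024}))
```

The `groq` block has no `max_tokens`, so the default survives; every key the block does define overrides the default.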

## Troubleshooting

**Connection Issues:**

```bash
prompd run prompt.prmd --provider <name> --verbose
```

**Authentication Problems:**

```bash
prompd config provider show <name>  # Verify the API key is set
```

**Model Not Available:**

```bash
prompd config provider show <name>  # Check available models
```
## Security Best Practices

- **Environment Variables:** Store keys in environment variables for production
- **Config Files:** Use config files only for development and testing
- **Version Control:** Never commit API keys to repositories
- **HTTPS Only:** All remote provider endpoints must use HTTPS (local instances are the exception)
- **Rotation:** Rotate API keys regularly