# Providers
Prompd supports multiple LLM providers through a unified interface, including built-in providers and custom OpenAI-compatible APIs.
## Built-in Providers

### OpenAI
- Provider ID: `openai`
- Models: `gpt-4o`, `gpt-4o-mini`, `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`
- Authentication: `OPENAI_API_KEY` environment variable

```bash
export OPENAI_API_KEY="sk-your-openai-key"
prompd run example.prmd --provider openai --model gpt-4o
```

### Anthropic
- Provider ID: `anthropic`
- Models: `claude-3-5-sonnet-20241022`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307`
- Authentication: `ANTHROPIC_API_KEY` environment variable

```bash
export ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
prompd run example.prmd --provider anthropic --model claude-3-5-sonnet-20241022
```

### Ollama (Local)
- Provider ID: `ollama`
- Models: any model in your local Ollama instance (`llama3.2`, `qwen2.5`, `mixtral`, `phi3`, `codellama`, etc.)
- API Endpoint: `http://localhost:11434/v1`

```bash
ollama pull llama3.2
prompd run example.prmd --provider ollama --model llama3.2
```

## Custom Provider Management
Add any OpenAI-compatible API as a custom provider.

### Adding Custom Providers

```bash
prompd config provider add <name> <base_url> <models...> [--api-key KEY]
```

### Provider Examples
Groq (Fast Inference):

```bash
prompd config provider add groq https://api.groq.com/openai/v1 \
  llama-3.1-8b-instant llama-3.1-70b-versatile mixtral-8x7b-32768 \
  --api-key gsk_your_groq_api_key
```

Together AI:

```bash
prompd config provider add together https://api.together.xyz/v1 \
  "mistralai/Mixtral-8x7B-Instruct-v0.1" \
  "meta-llama/Llama-2-70b-chat-hf" \
  --api-key your_together_api_key
```

LM Studio (Local GUI):

```bash
prompd config provider add lmstudio http://localhost:1234/v1 local-model
```

Fireworks AI:

```bash
prompd config provider add fireworks https://api.fireworks.ai/inference/v1 \
  "accounts/fireworks/models/llama-v3-70b-instruct" \
  --api-key fw_your_fireworks_key
```

OpenRouter (Multiple Models via One API):

```bash
prompd config provider add openrouter https://openrouter.ai/api/v1 \
  "anthropic/claude-3-sonnet" "openai/gpt-4" \
  --api-key sk_or_your_openrouter_key
```

## OpenAI-Compatible APIs
Many LLM providers support the OpenAI Chat Completions API format:
| Provider | Base URL | Notes |
|---|---|---|
| Groq | https://api.groq.com/openai/v1 | Ultra-fast inference |
| Together AI | https://api.together.xyz/v1 | Many open source models |
| Fireworks AI | https://api.fireworks.ai/inference/v1 | Fast inference |
| OpenRouter | https://openrouter.ai/api/v1 | Access many models via one API |
| Perplexity | https://api.perplexity.ai | Search-augmented models |
| LM Studio | http://localhost:1234/v1 | Local models with GUI |
| Ollama | http://localhost:11434/v1 | Local models, CLI-based |
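Because all of these providers accept the same Chat Completions request shape, only the base URL and API key change between them. A minimal sketch of the shared request body (the `build_chat_request` helper is illustrative, not part of prompd):

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a Chat Completions request body.

    The same JSON shape is accepted by every OpenAI-compatible
    endpoint in the table above; only the URL it is POSTed to
    and the Authorization header differ per provider.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("llama-3.1-8b-instant", "Hello")
print(json.dumps(payload))
```

This is why a single `--provider`/`--model` switch is enough to retarget a prompt: the request itself does not change.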
## Provider Management Commands

```bash
# List all providers
prompd config provider list

# Show provider details
prompd config provider show <name>

# Update API key
prompd config provider setkey <name> <api_key>

# Remove a custom provider
prompd config provider remove <name>
```

## Configuration
### Configuration File

Provider settings are stored in `~/.prompd/config.yaml`:

```yaml
default_provider: openai
default_model: gpt-4o

api_keys:
  openai: "sk-your-openai-key"
  anthropic: "sk-ant-your-anthropic-key"

custom_providers:
  groq:
    base_url: "https://api.groq.com/openai/v1"
    api_key: "gsk-your-groq-key"
    type: "openai-compatible"
    models:
      - "llama-3.1-8b-instant"
      - "llama-3.1-70b-versatile"
    enabled: true
```
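To illustrate that structure, a short sketch that walks a parsed config and lists the enabled custom providers (a plain dict stands in for the loaded YAML file; this is not prompd's internal code):

```python
# Mirrors the ~/.prompd/config.yaml layout shown above; in practice
# the file would be read with a YAML parser.
config = {
    "default_provider": "openai",
    "custom_providers": {
        "groq": {
            "base_url": "https://api.groq.com/openai/v1",
            "models": ["llama-3.1-8b-instant", "llama-3.1-70b-versatile"],
            "enabled": True,
        },
    },
}

# Only providers flagged enabled: true are usable.
enabled = [name for name, p in config["custom_providers"].items() if p.get("enabled")]
print(enabled)  # ['groq']
```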
### Environment Variables

```bash
export OPENAI_API_KEY="sk-your-openai-key"
export ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
export GROQ_API_KEY="gsk-your-groq-key"
```

### API Key Priority
When resolving API keys, prompd checks in this order:

1. Command-line `--api-key` parameter
2. Provider-specific key in the config file
3. `{PROVIDER_NAME}_API_KEY` environment variable
## Provider-Specific Configurations

Configure provider-specific parameters in `.prmd` files:

```yaml
---
name: analysis-prompt
parameters:
  - name: code
    type: string
    required: true
providers:
  openai:
    model: gpt-4o
    temperature: 0.1
    max_tokens: 2000
  anthropic:
    model: claude-3-5-sonnet-20241022
    temperature: 0.2
    max_tokens: 4000
  groq:
    model: llama-3.1-8b-instant
    temperature: 0.1
---

Analyze this code: {code}
```
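As a sketch of how such a `providers:` block could be consumed (assumed behavior, not prompd internals), the entry for the selected provider overrides run-level defaults:

```python
# Mirrors the frontmatter above; the merge logic is an assumption
# for illustration, not taken from prompd's source.
frontmatter = {
    "providers": {
        "openai": {"model": "gpt-4o", "temperature": 0.1, "max_tokens": 2000},
        "groq": {"model": "llama-3.1-8b-instant", "temperature": 0.1},
    }
}

def settings_for(provider: str, defaults: dict) -> dict:
    """Merge the provider's frontmatter block over run defaults."""
    overrides = frontmatter["providers"].get(provider, {})
    return {**defaults, **overrides}

print(settings_for("groq", {"temperature": 0.7, "max_tokens": 1024}))
# {'temperature': 0.1, 'max_tokens': 1024, 'model': 'llama-3.1-8b-instant'}
```

Keys the provider block does not set (here `max_tokens` for `groq`) fall back to the defaults.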
## Troubleshooting

Connection Issues:

```bash
prompd run prompt.prmd --provider <name> --verbose
```

Authentication Problems:

```bash
prompd config provider show <name>   # Verify API key is set
```

Model Not Available:

```bash
prompd config provider show <name>   # Check available models
```

## Security Best Practices
Section titled “Security Best Practices”- Environment Variables: Store keys in environment variables for production
- Config Files: Use config files only for development/testing
- Version Control: Never commit API keys to repositories
- HTTPS Only: All provider endpoints must use HTTPS (except local instances)
- Rotation: Rotate API keys regularly