Configuration Guide
Customize Sweet! CLI to match your workflow. Configure AI providers, model settings, and preferences. Authenticate via sweet login and use the billing relay for seamless AI access without API key management.
Configuration File
Sweet! CLI stores configuration in ~/.sweet/config.json. You can edit this file directly or use the CLI configuration commands.
Default configuration structure:
{
  "ai": {
    "provider": "openai",
    "model": "gpt-4o",
    "temperature": 0.7,
    "max_tokens": 4000
  },
  "editor": {
    "preferred": "code",
    "fallback": "vim"
  },
  "git": {
    "auto_commit": true,
    "commit_message_format": "feat: {summary}"
  },
  "workflow": {
    "auto_todos": true,
    "confirm_destructive": true,
    "timeout_minutes": 30
  }
}
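Because the file is plain JSON, a stray comma during hand-editing will break it. As a quick sanity check (a general suggestion, not a built-in Sweet! CLI command), you can validate the file with Python's standard `json.tool` module:

```shell
# Validate the config file's JSON syntax: prints the parsed document on
# success, or an error with the offending line number on failure.
python3 -m json.tool ~/.sweet/config.json
```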
AI Providers
Sweet! CLI supports multiple AI providers via the billing relay. Configure your preferred provider via environment variable or config file. Authentication is automatic; no API keys required.
OpenAI
sweet --provider openai --model gpt-4o start "Your prompt here"
Anthropic (Claude)
sweet --provider anthropic --model claude-3-5-sonnet-20241022 start "Your prompt here"
Google (Gemini)
sweet --provider google --model gemini-1.5-pro start "Your prompt here"
DeepSeek
sweet --provider deepseek --model deepseek-chat start "Your prompt here"
Local Models (Ollama)
Local models (Ollama) are not currently supported. Use one of the cloud providers above.
Environment Variables
You can override configuration via environment variables:
- SWEET_AI_PROVIDER - AI provider (deepseek, nebius, fireworks)
- SWEET_AI_MODEL - Model name
- SWEET_MAX_TOKENS - Maximum tokens per response
- SWEET_TEMPERATURE - Creativity/randomness (0.0-1.0)
- SWEET_AUTO_COMMIT - Auto-commit changes (true/false)
- SWEET_EDITOR - Preferred text editor
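For example, to switch the provider, model, and temperature for the current shell session only (values here are illustrative):

```shell
# These override the configuration file until the shell session ends
export SWEET_AI_PROVIDER="deepseek"
export SWEET_AI_MODEL="deepseek-chat"
export SWEET_TEMPERATURE="0.3"
```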
Per-Project Configuration
Create a .sweetrc file in your project root to override settings for specific projects:
{
  "ai": {
    "provider": "openai",
    "model": "gpt-4-turbo"
  },
  "workflow": {
    "auto_todos": false,
    "confirm_destructive": false
  },
  "project_specific": {
    "lint_on_save": true,
    "test_command": "npm test"
  }
}
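This guide does not spell out the exact merge rules, but assuming per-project keys deep-merge over the global file (the usual convention for layered configuration), you can preview the effective settings with jq, whose `*` operator performs a recursive object merge:

```shell
# Sketch: overlay .sweetrc onto the global config. Keys present in .sweetrc
# win; global keys it does not touch (e.g. "editor") pass through unchanged.
jq -s '.[0] * .[1]' ~/.sweet/config.json .sweetrc
```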
Configuration Methods
Sweet! CLI can be configured via environment variables, command-line flags, or configuration files.
Environment Variables
Sweet! CLI can be configured via environment variables for preferences like AI provider and model selection. See the Environment Variables section for a complete list.
Command-Line Flags
Override configuration per invocation:
sweet --provider deepseek --model deepseek-chat start "Your prompt here"
Configuration File
Create a ~/.sweet/config.yaml file:
ai:
  provider: deepseek
  model: deepseek-chat
ui:
  live_refresh_per_second: 20
Advanced Settings
Proxy Configuration
If you need to use a proxy for API requests:
export HTTP_PROXY="http://proxy.example.com:8080"
export HTTPS_PROXY="http://proxy.example.com:8080"
Custom API Endpoints
For self-hosted models or custom endpoints:
export OPENAI_BASE_URL="http://localhost:8080/v1"
export SWEET_AI_PROVIDER="openai"
export SWEET_AI_MODEL="custom-model"
Logging Configuration
Control log verbosity:
sweet --log --log-file ~/.sweet/debug.log "Your prompt"
Troubleshooting Configuration
If configuration isn't working as expected:
- Check command-line options: sweet --help
- Verify environment variables: printenv | grep SWEET
- Check config file permissions: ls -la ~/.sweet/
- Remove the configuration file to reset to defaults: rm ~/.sweet/config.yaml
- See the Troubleshooting guide for more help
For further assistance, join our Discord community.