# Configuration
This guide covers how to configure Chat Linux Client, including API keys, local models, and application settings.
- API Keys
- Local Models (Ollama)
- Application Settings
- Privacy Settings
- Configuration File Location
- Environment Variables
## API Keys

Chat Linux Client supports multiple AI providers. Configure API keys through the application settings or environment variables.
- Groq - Ultra-low latency inference
- HuggingFace - Open-source models
- OpenRouter - Multi-model routing
- OpenAI - GPT models
To add a key through the application:

1. Open the application
2. Click Settings in the menu bar
3. Navigate to the Providers tab
4. Select a provider from the dropdown
5. Enter your API key in the key field
6. Click Save
Alternatively, create a `.env` file in the project root (copy from `.env.example`):

```bash
cp .env.example .env
nano .env
```

Add your API keys:
```bash
# Groq API Key
GROQ_API_KEY=gsk_your_actual_api_key_here

# HuggingFace API Key
HUGGINGFACE_API_KEY=hf_your_actual_api_key_here

# OpenRouter API Key
OPENROUTER_API_KEY=sk-or-your_actual_api_key_here

# OpenAI API Key
OPENAI_API_KEY=sk-your_actual_api_key_here
```

Get your API keys from:

- Groq: https://console.groq.com/ (free tier available)
- HuggingFace: https://huggingface.co/settings/tokens (free for many models)
- OpenRouter: https://openrouter.ai/keys (pay-per-use)
- OpenAI: https://platform.openai.com/account/api-keys (pay-per-use)
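Once the keys above are in the environment (for example via the `.env` file), an application can check which providers are usable. A minimal sketch — the variable names come from this guide, but the helper itself is hypothetical, not the client's actual code:

```python
import os

# Environment variables documented above; a provider counts as configured
# only when its key variable is set and non-empty.
PROVIDER_KEY_VARS = {
    "groq": "GROQ_API_KEY",
    "huggingface": "HUGGINGFACE_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
    "openai": "OPENAI_API_KEY",
}

def configured_providers(env=os.environ):
    """Return the names of providers that have an API key set."""
    return [name for name, var in PROVIDER_KEY_VARS.items() if env.get(var)]
```

For example, `configured_providers({"GROQ_API_KEY": "gsk_x"})` returns `["groq"]`.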
## Local Models (Ollama)

Ollama provides local AI models that work offline without API keys.
Install Ollama:

```bash
curl -fsSL https://ollama.ai/install.sh | sh
```

Pull one or more models:

```bash
# Lightweight model (1.3GB)
ollama pull llama3.2:1b

# Balanced model (1.9GB)
ollama pull qwen2.5:3b

# Capable model (2.2GB)
ollama pull phi3.5:3.8b

# Large model (4.4GB)
ollama pull mistral:7b
```

Ollama is automatically detected by Chat Linux Client if:

- Ollama is running (`ollama serve`)
- Models are installed
- The default URL is `http://localhost:11434`
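The detection conditions above can be verified programmatically. This sketch queries Ollama's `/api/tags` endpoint, which lists pulled models; the helper is illustrative and not the client's actual detection code:

```python
import json
import urllib.error
import urllib.request

def ollama_models(base_url="http://localhost:11434"):
    """Return the names of installed Ollama models, or None if the
    server at base_url is not reachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None
```

A `None` result means Ollama is not running (or is running at a non-default URL); an empty list means it is running but no models have been pulled yet.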
To use a custom Ollama URL, set the environment variable:

```bash
OLLAMA_BASE_URL=http://your-custom-url:11434
```

## Application Settings

Configure application behavior through the Settings dialog.
- Temperature: Controls response randomness (0.0 - 2.0)
  - Lower: more focused, deterministic responses
  - Higher: more creative, varied responses
- Max Tokens: Maximum response length (0 = unlimited)
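The two parameters above have simple validation rules: temperature must stay within 0.0-2.0, and a max-tokens value of 0 means "no limit". A hypothetical normalization helper, not taken from the client's source:

```python
def normalize_chat_settings(temperature, max_tokens):
    """Clamp temperature to the documented 0.0-2.0 range and translate
    max_tokens=0 into None (unlimited)."""
    temperature = min(max(float(temperature), 0.0), 2.0)
    max_tokens = None if max_tokens == 0 else int(max_tokens)
    return {"temperature": temperature, "max_tokens": max_tokens}
```

For example, `normalize_chat_settings(3.5, 0)` yields `{"temperature": 2.0, "max_tokens": None}`.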
Choose your preferred model from the dropdown:

- Models are listed as `provider/model-name`
- Local models start with `ollama/`
- Cloud models show their provider prefix
Select how models are chosen:

- `OFFLINE_FIRST`: Prefer local Ollama models
- `SPEED_OPTIMAL`: Prefer Groq for speed
- `COST_OPTIMAL`: Prefer free/local options
- `QUALITY_OPTIMAL`: Prefer larger models
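The strategies above can be sketched as a selection function over the `provider/model-name` identifiers described earlier. The preference logic here is a plausible reading of the strategy names, not the client's actual router:

```python
def pick_model(strategy, available):
    """Pick a model id from `available` according to the routing strategy.

    Assumed logic: OFFLINE_FIRST and COST_OPTIMAL prefer local ollama/
    models, SPEED_OPTIMAL prefers groq/ models, QUALITY_OPTIMAL takes
    the last entry (assumed largest). Falls back to the first entry.
    """
    def first(pred):
        return next((m for m in available if pred(m)), available[0])

    if strategy in ("OFFLINE_FIRST", "COST_OPTIMAL"):
        return first(lambda m: m.startswith("ollama/"))
    if strategy == "SPEED_OPTIMAL":
        return first(lambda m: m.startswith("groq/"))
    if strategy == "QUALITY_OPTIMAL":
        return available[-1]
    raise ValueError(f"unknown strategy: {strategy}")
```

With `available = ["groq/llama-3.1-8b", "ollama/llama3.2:1b"]`, `OFFLINE_FIRST` picks the Ollama model and `SPEED_OPTIMAL` picks the Groq one.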
## Privacy Settings

Enable encryption for chat history:

1. Open Settings
2. Navigate to the Privacy tab
3. Enable Encrypt Chats
4. Set a password when prompted
5. Click Save
Important: Remember your encryption password. Lost passwords cannot be recovered.
API keys are encrypted and stored locally. To enhance security:

- Set the `CHAT_CLIENT_PASSWORD` environment variable
- Or enable password-based encryption in Settings
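Password-based encryption schemes like this typically derive the actual encryption key from the password with a slow key-derivation function. A minimal sketch using PBKDF2-HMAC-SHA256 from the standard library — an illustration of the general technique, not the client's implementation:

```python
import hashlib
import os

def derive_storage_key(password, salt=None):
    """Derive a 32-byte encryption key from a password (e.g. the value of
    CHAT_CLIENT_PASSWORD). The salt must be stored alongside the encrypted
    data so the same key can be re-derived on the next launch."""
    salt = salt if salt is not None else os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return key, salt
```

Because the key exists only as a derivation of the password, a lost password really is unrecoverable, which is why the warning above matters.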
Automatically delete API keys when the application closes:

1. Open Settings
2. Navigate to the Privacy tab
3. Enable Delete API Keys on Exit
4. Click Save
Note: You'll need to re-enter keys on next launch.
## Configuration File Location

Configuration is stored at `~/.config/chat-linux-client/config.json`.
You can edit the configuration file directly:
```json
{
  "providers": {
    "groq": {
      "enabled": true,
      "api_key": "your_api_key_here",
      "base_url": "https://api.groq.com/openai/v1"
    },
    "ollama": {
      "enabled": true,
      "base_url": "http://localhost:11434"
    }
  },
  "chat": {
    "temperature": 0.7,
    "max_tokens": null,
    "routing_strategy": "offline_first"
  },
  "privacy": {
    "encrypt_chats": false,
    "delete_api_keys_on_exit": false
  }
}
```

Warning: Manual editing may cause issues. Use the UI settings when possible.
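If you do work with the file directly, reading it defensively avoids crashes on a missing or partial config. A sketch of loading `config.json` with an environment override applied, assuming the schema shown above (this is not the client's loader):

```python
import json
import os
from pathlib import Path

CONFIG_PATH = Path("~/.config/chat-linux-client/config.json").expanduser()

def load_config(path=CONFIG_PATH, env=os.environ):
    """Read config.json (empty config if absent), then let
    OLLAMA_BASE_URL from the environment override the stored value."""
    config = json.loads(path.read_text()) if path.exists() else {}
    ollama = config.setdefault("providers", {}).setdefault("ollama", {})
    if env.get("OLLAMA_BASE_URL"):
        ollama["base_url"] = env["OLLAMA_BASE_URL"]
    return config
```

This mirrors the precedence described in the next section: environment variables win over the file on disk.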
## Environment Variables

Override configuration with environment variables:
```bash
# API Keys
GROQ_API_KEY=your_key
HUGGINGFACE_API_KEY=your_key
OPENROUTER_API_KEY=your_key
OPENAI_API_KEY=your_key

# Ollama
OLLAMA_BASE_URL=http://localhost:11434

# Application
LOG_LEVEL=INFO
THEME=dark
FONT_SIZE=12

# Privacy
ENCRYPT_CHATS=false
DISABLE_TELEMETRY=true

# Development
DEBUG=false
LOG_TO_FILE=true
```

## Troubleshooting

If an API key is rejected:

- Verify the key is correct
- Check that the key has the proper permissions
- Ensure the provider account is active
If a model is not responding:

- Check network connectivity
- Verify the provider is enabled in settings
- Check that an API key is configured (for cloud providers)
- Ensure Ollama is running (for local models)
- Run system checks: `python main.py --check-system`
If settings fail to save:

- Check write permissions for `~/.config/chat-linux-client/`
- Ensure the directory exists
- Check disk space
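The three checks above can be automated with the standard library. A hypothetical diagnostic helper, independent of the client's own `--check-system` command:

```python
import os
import shutil
from pathlib import Path

def check_config_dir(path="~/.config/chat-linux-client"):
    """Report whether the config directory exists, is writable, and how
    many megabytes of disk space remain on its filesystem."""
    p = Path(path).expanduser()
    # If the directory is missing, measure free space on its filesystem root.
    free_mb = shutil.disk_usage(p if p.exists() else p.anchor).free // 2**20
    return {
        "exists": p.exists(),
        "writable": p.exists() and os.access(p, os.W_OK),
        "free_mb": free_mb,
    }
```

If `exists` is false, create the directory with `mkdir -p ~/.config/chat-linux-client` and try saving again.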