FAQ
Frequently asked questions about Chat Linux Client.
Chat Linux Client is a privacy-first, multi-provider AI desktop client for Linux that unifies multiple AI providers and local models into a single conversational interface.
Yes! Chat Linux Client is open source and free to use. Local models (via Ollama) are completely free. Some cloud providers have free tiers, while others require payment.
Currently, Chat Linux Client supports Linux distributions including Ubuntu 20.04+, Fedora 35+, and Arch Linux. Windows and macOS support may be added in the future.
Yes! With Ollama installed, you can use local models entirely offline without any internet connection.
- Multi-provider: Supports multiple AI providers in one interface
- Privacy-first: No telemetry, local storage, optional encryption
- Local models: Full offline capability with Ollama
- Intelligent routing: Automatic model selection based on needs
- Open source: Fully transparent and customizable
The easiest way is using the installation script:
```
git clone https://github.com/yourusername/chat-linux-client.git
cd chat-linux-client
./scripts/install.sh
./scripts/run.sh
```

See the Installation guide for detailed instructions.
Yes! The application can be installed in your home directory without sudo privileges.
Ollama is optional but recommended for offline capability. Without Ollama, you'll need to use cloud providers with API keys.
- Python 3.8 or higher
- 4GB RAM minimum (8GB recommended)
- 500MB free storage
- Linux (Ubuntu 20.04+, Fedora 35+, Arch Linux)
You can add API keys through:
- Settings dialog: Open Settings → Providers tab
- Environment variables: Add to the `.env` file
- Configuration file: Edit `~/.config/chat-linux-client/config.json`

See the Configuration guide for details.
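As a rough sketch, looking up a key from the two file-based sources might work like this. The `GROQ_API_KEY`-style environment-variable naming and the `providers` layout inside `config.json` are illustrative assumptions, not the documented schema:

```python
import json
import os
from pathlib import Path
from typing import Optional

CONFIG_PATH = Path.home() / ".config" / "chat-linux-client" / "config.json"

def get_api_key(provider: str) -> Optional[str]:
    """Look up an API key: environment variable first, then config.json.

    The PROVIDER_API_KEY naming convention and the config.json layout
    used here are assumptions for illustration only.
    """
    env_var = f"{provider.upper()}_API_KEY"
    if env_var in os.environ:
        return os.environ[env_var]
    if CONFIG_PATH.exists():
        config = json.loads(CONFIG_PATH.read_text())
        return config.get("providers", {}).get(provider, {}).get("api_key")
    return None
```

Environment variables take precedence here so a key set in `.env` (exported into the environment) overrides the one on disk.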
API keys are encrypted and stored locally at ~/.config/chat-linux-client/api_keys.enc. They are never sent anywhere except to the AI provider APIs.
Currently, only one API key per provider is supported. You can switch between providers in the settings.
Open Settings → Chat tab → Select your preferred routing strategy:
- OFFLINE_FIRST: Prefer local models
- SPEED_OPTIMAL: Prefer fast models
- COST_OPTIMAL: Prefer free options
- QUALITY_OPTIMAL: Prefer capable models
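These strategies can be thought of as ordered provider preferences. A minimal sketch, where both the preference orderings and the availability check are assumptions rather than the client's actual routing code:

```python
# Illustrative preference orders per strategy; the real routing logic
# in Chat Linux Client may weigh latency, cost, and quality differently.
PREFERENCES = {
    "OFFLINE_FIRST": ["ollama", "groq", "huggingface", "openrouter", "openai"],
    "SPEED_OPTIMAL": ["groq", "ollama", "openai", "openrouter", "huggingface"],
    "COST_OPTIMAL": ["ollama", "groq", "huggingface", "openrouter", "openai"],
    "QUALITY_OPTIMAL": ["openai", "openrouter", "groq", "ollama", "huggingface"],
}

def pick_provider(strategy: str, available: set) -> str:
    """Return the first provider in the strategy's preference order
    that is actually available (enabled, with a key or running locally)."""
    for provider in PREFERENCES[strategy]:
        if provider in available:
            return provider
    raise RuntimeError("no provider available for strategy " + strategy)
```

For example, with only Groq and OpenAI configured, `OFFLINE_FIRST` falls through to Groq because no local model is available.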
Click the New Chat button in the toolbar or press Ctrl+N.
Currently, Chat Linux Client supports one active chat session at a time. You can switch between past chats using the History panel.
Open the chat you want to export → Select File > Export Chat → Choose format and location.
Yes! Open the History panel and use the search box to find specific conversations.
- 0.0-0.3: Focused, deterministic responses
- 0.4-0.7: Balanced responses (default)
- 0.8-1.0: Creative, varied responses
- 1.0+: Very creative, less predictable
Max tokens limits the length of AI responses. Set to 0 for unlimited, or specify a number (e.g., 500 for short responses, 2000 for longer ones).
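As a sketch, the temperature bands above could be checked in code. The 0.0–2.0 upper bound is a common provider convention assumed here, not something this client documents:

```python
def describe_temperature(t: float) -> str:
    """Map a temperature value to the qualitative bands listed above."""
    if not 0.0 <= t <= 2.0:
        # 2.0 is a typical provider maximum, assumed for this sketch.
        raise ValueError("temperature must be between 0.0 and 2.0")
    if t <= 0.3:
        return "focused"
    if t <= 0.7:
        return "balanced"
    if t <= 1.0:
        return "creative"
    return "very creative"
```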
- Ollama: Local models (free, offline)
- Groq: Ultra-fast cloud AI (free tier)
- HuggingFace: Open-source models (free tier)
- OpenRouter: Multi-model access (pay-per-use)
- OpenAI: GPT models (pay-per-use)
- For privacy: Use Ollama (local models)
- For speed: Use Groq
- For cost: Use Ollama or free tiers
- For quality: Use OpenAI or OpenRouter
No! Ollama works without API keys. Cloud providers require API keys, but many have free tiers.
The intelligent routing can automatically select providers based on your strategy. You can also manually select specific models from different providers.
Visit https://console.groq.com/, sign up, and create an API key. Groq offers a generous free tier.
Visit https://platform.openai.com/account/api-keys, sign up, and create an API key. OpenAI requires payment.
Yes! Chat Linux Client is privacy-first:
- No telemetry or analytics
- Local data storage
- Optional encryption for chats and keys
- Direct connection to providers (no intermediaries)
No! Chat Linux Client collects no telemetry, analytics, or usage data.
Chat history is stored locally at ~/.local/share/chat-linux-client/chat_history.db (SQLite database).
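Because it is a plain SQLite file, you can inspect it with Python's standard `sqlite3` module. The sketch below only lists table names, since the database schema itself is not documented here:

```python
import sqlite3
from pathlib import Path

DB_PATH = Path.home() / ".local/share/chat-linux-client/chat_history.db"

def list_tables(db_path: Path = DB_PATH) -> list:
    """List table names in the chat-history database."""
    # mode=ro opens the file read-only, so inspection cannot corrupt history.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        conn.close()
```

Close the application before poking at the database so you are not reading mid-write.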
Yes! Enable chat encryption in Settings → Privacy tab. You'll need to set a password.
Encrypted data cannot be recovered without the password. You'll need to delete the encrypted files and start fresh.
API keys are encrypted using Fernet symmetric encryption and stored locally. They are only sent to provider APIs over HTTPS.
Yes, but you may need to configure proxy settings if your network requires them.
Run system checks:

```
python main.py --check-system
```

Check the logs at `~/.local/share/chat-linux-client/logs/`.
- Verify provider is enabled in settings
- Check API key is configured (for cloud providers)
- Ensure Ollama is running (for local models)
- Run system checks
- Verify the key is correct (no extra spaces)
- Check provider account is active
- Ensure key has proper permissions
- Try regenerating the key
- Use a faster model (lighter local model or Groq)
- Reduce max tokens in settings
- Check network connection
- Close other applications
- Use lighter models (e.g., llama3.2:1b)
- Clear chat history regularly
- Reduce max tokens setting
- Restart application periodically
Delete configuration and data:

```
rm -rf ~/.config/chat-linux-client
rm -rf ~/.local/share/chat-linux-client
```

Then reconfigure the application.
- Check the Troubleshooting guide
- Run system checks: `python main.py --check-system`
- Create an issue on GitHub
- Check existing issues and discussions
Yes! Contributions are welcome. See the Development guide for details.
Not yet, but it's on the roadmap. The current focus is on Linux desktop.
Yes! With Ollama, you can use any model available in the Ollama library. For cloud providers, you can add custom providers by extending the codebase.
Not yet, but voice interface is planned for a future release.
Currently, Chat Linux Client is designed for desktop use with a GUI. A headless/server version may be developed in the future.
Updates are released as needed for bug fixes, new features, and provider updates. Watch the repository for releases.
Chat Linux Client is open source and free. If you'd like to support development, consider contributing code, reporting issues, or spreading the word.