# Usage

Auto Bot Solutions edited this page Apr 26, 2026
This guide covers how to use Chat Linux Client for your daily AI interactions.
- Getting Started
- Basic Chat
- Model Selection
- Advanced Features
- Chat History
- Keyboard Shortcuts
- Tips and Best Practices
## Getting Started

Launch the application:

```bash
# Using the run script
./scripts/run.sh

# Or directly with Python
python main.py
```

On first launch:
- The application will check system requirements
- Available models will be loaded into the dropdown
- You'll see the main chat interface
## Basic Chat

To send a message:

- Type your message in the input field at the bottom
- Press Enter or click the Send button
- The AI response will appear in the chat area

Responses are rendered with Markdown formatting:

- Code blocks are highlighted
- Streaming responses appear token by token in real time
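The token-by-token behavior can be pictured as consuming a stream and appending each piece to a buffer. A minimal sketch, assuming a generator of string tokens — `fake_stream` is a stand-in for the client's real streaming API, which is not shown here:

```python
# Hypothetical sketch of streaming consumption; `fake_stream` stands in
# for the client's real streaming API (an assumption for illustration).
def fake_stream():
    for token in ["Hello", ",", " ", "world", "!"]:
        yield token

def render_stream(stream, on_token):
    """Accumulate tokens as they arrive; the UI updates once per token."""
    buffer = []
    for token in stream:
        buffer.append(token)
        on_token(token)  # in the real app: append to the chat area
    return "".join(buffer)  # final text, ready for Markdown rendering

message = render_stream(fake_stream(), on_token=lambda t: None)
```

Rendering per token is what makes long answers feel responsive: the user sees output immediately instead of waiting for the full response.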
To start a new chat, click the New Chat button in the toolbar, or:

- Press `Ctrl+N` (Linux)
- Select File > New Chat from the menu
## Model Selection

- Click the model dropdown in the toolbar
- Select your preferred model from the list
- Models are listed as `provider/model-name`

Model types:

- Local models (`ollama/*`): work offline, no API key needed
- Cloud models: require API keys, often more capable
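The `provider/model-name` convention makes it easy to tell local and cloud models apart programmatically. A small sketch (this helper is illustrative, not part of the app's API):

```python
# Illustrative helper, not part of the app's API: split a model id of
# the form provider/model-name and flag local Ollama models.
def parse_model_id(model_id: str) -> dict:
    provider, _, name = model_id.partition("/")
    return {
        "provider": provider,
        "name": name,
        "local": provider == "ollama",  # ollama/* models run offline
    }
```

For example, `parse_model_id("ollama/mistral:7b")` reports a local model, while `parse_model_id("openai/gpt-4o")` reports a cloud one.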
For Speed:

- `ollama/llama3.2:1b` - fastest, lightweight
- `groq/llama-3.1-8b-instant` - ultra-fast cloud

For Quality:

- `ollama/mistral:7b` - best local model
- `openai/gpt-4o` - best cloud model (requires API key)

For Cost:

- `ollama/*` - free, runs locally
- `huggingface/*` - many free models
## Advanced Features

### Smart Routing

Enable automatic model selection based on your needs:
- Open Settings
- Choose a Routing Strategy:
- OFFLINE_FIRST: Prefer local models
- SPEED_OPTIMAL: Prefer fast models
- COST_OPTIMAL: Prefer free options
- QUALITY_OPTIMAL: Prefer capable models
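Conceptually, each strategy ranks the available models by a different attribute. A minimal sketch, assuming a toy model table with invented scores (the app's actual routing logic and model metadata are not shown here):

```python
# Toy model table; the speed/quality/cost scores are invented for
# illustration and are NOT the app's real routing data.
MODELS = [
    {"id": "ollama/llama3.2:1b",        "local": True,  "speed": 3, "quality": 1, "cost": 0.0},
    {"id": "groq/llama-3.1-8b-instant", "local": False, "speed": 3, "quality": 2, "cost": 0.1},
    {"id": "openai/gpt-4o",             "local": False, "speed": 1, "quality": 3, "cost": 1.0},
]

def route(strategy: str) -> str:
    """Pick a model id according to the chosen routing strategy."""
    if strategy == "OFFLINE_FIRST":
        pool = [m for m in MODELS if m["local"]] or MODELS  # fall back if no local model
        return max(pool, key=lambda m: m["quality"])["id"]
    if strategy == "SPEED_OPTIMAL":
        return max(MODELS, key=lambda m: m["speed"])["id"]
    if strategy == "COST_OPTIMAL":
        return min(MODELS, key=lambda m: m["cost"])["id"]
    if strategy == "QUALITY_OPTIMAL":
        return max(MODELS, key=lambda m: m["quality"])["id"]
    raise ValueError(f"unknown strategy: {strategy}")
```

With this table, `QUALITY_OPTIMAL` picks `openai/gpt-4o` while `COST_OPTIMAL` picks the free local model.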
### Temperature

Adjust response creativity:
- Open Settings
- Set Temperature (0.0 - 2.0):
- 0.0-0.3: Focused, deterministic
- 0.4-0.7: Balanced (default)
- 0.8-1.0: Creative, varied
- 1.0+: Very creative, less predictable
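The setting works because language models divide logits by the temperature before sampling: low values sharpen the probability distribution (deterministic), high values flatten it (varied). A background sketch of that rescaling, independent of any particular model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax: lower T sharpens, higher T flattens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                            # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

low = softmax_with_temperature([2.0, 1.0, 0.0], 0.2)   # near-deterministic
high = softmax_with_temperature([2.0, 1.0, 0.0], 2.0)  # much more uniform
```

At temperature 0.2 the top token gets almost all the probability mass; at 2.0 the same logits yield a much flatter distribution, so sampled responses vary more.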
### Max Tokens

Limit response length:
- Open Settings
- Set Max Tokens:
- 0: Unlimited (default)
- 100-500: Short responses
- 1000-2000: Medium responses
- 4000+: Long responses
## Chat History

To load a previous chat:

- Click History in the menu
- Select a chat from the list
- The conversation will load in the main window

To export a chat:

- Open the chat you want to export
- Select File > Export Chat
- Choose a location and format (Markdown, JSON, or Plain Text)

To delete a chat:

- Open the History panel
- Right-click on a chat
- Select Delete
- Confirm the deletion

To search your history:

- Open the History panel
- Use the search box
- Type keywords to find specific conversations
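The three export formats might look like this. A sketch only — the message structure and function are assumptions for illustration, not the app's actual export code:

```python
import json

def export_chat(messages, fmt):
    """Serialize a chat (list of {'role', 'text'} dicts) to one of three formats.
    The message shape is an assumption for this sketch."""
    if fmt == "json":
        return json.dumps(messages, indent=2)
    if fmt == "markdown":
        return "\n\n".join(f"**{m['role']}**: {m['text']}" for m in messages)
    if fmt == "text":
        return "\n".join(f"{m['role']}: {m['text']}" for m in messages)
    raise ValueError(f"unsupported format: {fmt}")

chat = [{"role": "You", "text": "Hi"}, {"role": "AI", "text": "Hello!"}]
```

JSON keeps the structure machine-readable for re-import or scripting; Markdown and plain text are better for sharing or pasting into notes.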
## Keyboard Shortcuts

- `Enter` - Send message
- `Shift+Enter` - New line in message
- `Ctrl+N` - New chat
- `Ctrl+W` - Close current chat
- `Ctrl+H` - Open history
- `Ctrl+,` - Open settings
- `Ctrl+Q` - Quit application
- `Ctrl+C` - Copy selected text
- `Ctrl+V` - Paste text
- `Ctrl+A` - Select all
## Tips and Best Practices

Effective prompting:

- Be specific: clear, detailed prompts get better answers
- Provide context: include relevant background information
- Use examples: show what you want with examples
- Iterate: refine your question based on responses

Privacy:

- Use local models for sensitive information
- Enable chat encryption for private conversations
- Review chat history before exporting
- Clear history periodically if needed

Performance:

- Use lightweight models for simple queries
- Use capable models for complex tasks
- Close unused chats to free memory
- Restart the application if it becomes slow

Cost:

- Use local models when possible (free)
- Monitor token usage with cloud providers
- Set max tokens to limit response length
- Use cost-optimal routing for automatic savings
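Monitoring token usage can be approximated client-side. A rough sketch — the ~4 characters-per-token rule of thumb and the per-token price here are illustrative assumptions, not billing guarantees:

```python
# Rough client-side cost tracking. The 4-chars-per-token heuristic is a
# common approximation for English text, NOT an exact tokenizer.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, response: str, usd_per_1k_tokens: float) -> float:
    """Approximate USD cost of one exchange at an assumed per-1k-token price."""
    tokens = estimate_tokens(prompt) + estimate_tokens(response)
    return tokens / 1000 * usd_per_1k_tokens
```

For a real bill, always check the usage reported by the provider's API; this only gives a ballpark figure for spotting unexpectedly long responses.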
Example prompts:

You: Write a Python function to sort a list of dictionaries by a specific key
AI: [Provides code with explanation]
You: Help me write a professional email about project delays
AI: [Drafts email with appropriate tone]
You: Explain quantum computing in simple terms
AI: [Provides accessible explanation]
You: Give me 10 ideas for a mobile app that helps people learn languages
AI: [Lists creative ideas with brief descriptions]
If responses are slow:

- Try a faster model (a lighter local model, or Groq)
- Reduce max tokens in settings
- Check network connection for cloud models
- Close other applications to free resources
If responses are low quality:

- Increase temperature for more creativity
- Try a more capable model
- Provide more context in your prompt
- Use examples to guide the AI
- Check system logs: `~/.local/share/chat-linux-client/logs/`
- Run system checks: `python main.py --check-system`
- Ensure all dependencies are installed
- Try reinstalling the application