Choose your installation route based on your setup and use case.
→ Docker Compose - Multi-container setup, production-ready
- ✅ All features working
- ✅ Clear separation of services
- ✅ Easy to scale
- ✅ Works on Mac, Windows, Linux
- ⏱️ 5 minutes to running
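The project ships its own Compose file, so treat the following purely as orientation: a hypothetical sketch of what a Compose service exposing this guide's ports (8502 for the frontend, 5055 for the API) could look like. The service name, image, and paths are placeholders, not the project's real values.

```yaml
# Hypothetical sketch only -- service name, image, and paths are placeholders;
# use the compose file shipped with the project. Ports are the ones this guide uses.
services:
  app:
    image: example/app:latest      # placeholder image name
    ports:
      - "8502:8502"                # frontend
      - "5055:5055"                # API
    env_file: .env                 # e.g. AI provider API keys
    volumes:
      - ./data:/app/data           # persist documents between restarts
```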
→ Single Container - Deprecated, will be removed in v2
- ⚠️ Deprecated: use Docker Compose instead
- Still supported until the v2 release
→ From Source - Clone repo, set up locally
- ✅ Full control over code
- ✅ Easy to debug
- ✅ Can modify and test
- ⚠️ Requires Python 3.11+ and Node.js 18+
- ⏱️ 10 minutes to running
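Before choosing the From Source route, you can sanity-check the required tool versions with a small shell helper. This is just a sketch: it assumes `python3` and `node` are on your PATH and that your `sort` supports `-V` (GNU and recent BSD sorts do).

```shell
# Compare dotted version strings with version-aware sort.
version_ge() {
  # True if $1 >= $2 (e.g. version_ge 3.12.1 3.11).
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Probe installed versions; empty if the tool is missing.
py="$(python3 --version 2>/dev/null | awk '{print $2}')"
nd="$(node --version 2>/dev/null | sed 's/^v//')"

version_ge "${py:-0}" 3.11 && echo "Python OK ($py)" || echo "Need Python 3.11+ (found: ${py:-none})"
version_ge "${nd:-0}" 18   && echo "Node.js OK ($nd)" || echo "Need Node.js 18+ (found: ${nd:-none})"
```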
Minimum:
- RAM: 4GB
- Storage: 2GB for app + space for documents
- CPU: Any modern processor
- Network: Internet (optional for offline setup)
Recommended:
- RAM: 8GB+
- Storage: 10GB+ for documents and models
- CPU: Multi-core processor
- GPU: Optional (speeds up local AI models)
Cloud providers (API key required):
- OpenAI - GPT-4, GPT-4o, fast and capable
- Anthropic (Claude) - Claude 3.5 Sonnet, excellent reasoning
- Google Gemini - Multimodal, cost-effective
- Groq - Ultra-fast inference
- Others: Mistral, DeepSeek, xAI, OpenRouter
- Cost: Usually $0.01-$0.10 per 1K tokens
- Speed: Fast (sub-second)
- Privacy: Your data is sent to the cloud
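Cloud providers are typically configured through an API-key environment variable. As a sketch, using the conventional variable names these SDKs read (the key values below are placeholders):

```shell
# Export the key for whichever provider you use before starting the app.
# Variable names follow each SDK's common convention; values are placeholders.
export OPENAI_API_KEY="sk-your-key-here"        # OpenAI
# export ANTHROPIC_API_KEY="your-key-here"      # Anthropic (Claude)
# export GROQ_API_KEY="your-key-here"           # Groq

# Confirm the key is visible to child processes.
[ -n "$OPENAI_API_KEY" ] && echo "OPENAI_API_KEY is set"
```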
Local options (free, private):
- Ollama - Run open-source models locally
- LM Studio - Desktop app for local models
- Hugging Face models - Download and run
- Cost: $0 (just electricity)
- Speed: Depends on your hardware (slow to medium)
- Privacy: 100% offline
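With Ollama, models are pulled once and then run fully offline. A sketch that guards for the case where Ollama isn't installed yet (the model name is just an example; pick any model from the Ollama library):

```shell
# Pull a local model, but only if the ollama CLI is actually installed.
pull_if_ollama() {
  model="$1"
  if command -v ollama >/dev/null 2>&1; then
    ollama pull "$model"
  else
    echo "ollama not found; install it first (https://ollama.com)"
  fi
}

pull_if_ollama "llama3.1"
```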
Already know which way to go? Pick your installation path:
- Docker Compose - Most users
- Single Container - Deprecated
- From Source - Developers
Privacy-first? Any installation method works with Ollama for 100% local AI. See Local Quick Start.
Before installing, you'll need:
- Docker (for the Docker routes) or Python 3.11+ and Node.js 18+ (for source)
- AI Provider API key (OpenAI, Anthropic, etc.) OR willingness to use free local models
- At least 4GB RAM available
- Stable internet (or offline setup with Ollama)
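A quick way to see which route your machine is ready for is to probe for the tools named above. A minimal sketch:

```shell
# True if a command is available on PATH.
have() { command -v "$1" >/dev/null 2>&1; }

if have docker; then
  echo "Docker found: Docker Compose route available"
elif have python3 && have node && have git; then
  echo "Python, Node.js, and Git found: From Source route available"
else
  echo "Install Docker, or Python 3.11+ with Node.js 18+ and Git"
fi
```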
Docker Compose route:
- Install Docker Desktop
- Follow the step-by-step Docker Compose installation guide
- Access at http://localhost:8502
From Source route:
- Have Python 3.11+, Node.js 18+, and Git installed
- Follow From Source
- Run `make start-all`
- Access at http://localhost:8502 (frontend) or http://localhost:5055 (API)
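Once `make start-all` returns, you can smoke-test both endpoints from the shell. A sketch assuming only that the ports match the ones above; it reports status rather than failing if a service isn't up yet:

```shell
# Report whether a local endpoint responds; never exits nonzero.
check_endpoint() {
  url="$1"; name="$2"
  if command -v curl >/dev/null 2>&1 && curl -fsS --max-time 3 "$url" >/dev/null 2>&1; then
    echo "$name: up ($url)"
  else
    echo "$name: not reachable ($url)"
  fi
}

check_endpoint "http://localhost:8502" "frontend"
check_endpoint "http://localhost:5055" "API"
```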
Once you're up and running:
- Configure Models - Choose your AI provider in Settings
- Create First Notebook - Start organizing research
- Add Sources - PDFs, web links, documents
- Explore Features - Chat, search, transformations
- Read Full Guide - User Guide
Having issues? Check the troubleshooting section in your chosen installation guide, or see Quick Fixes.
- Discord: Join community
- GitHub Issues: Report problems
- Docs: See Full Documentation
Installing for production use? See the additional resources in the Full Documentation.
Ready to install? Pick a route above! ⬆️