
Installation Guide

Choose your installation route based on your setup and use case.

Quick Decision: Which Route?

🚀 I want the easiest setup (Recommended for most)

Docker Compose - Multi-container setup, production-ready

  • ✅ All features working
  • ✅ Clear separation of services
  • ✅ Easy to scale
  • ✅ Works on Mac, Windows, Linux
  • ⏱️ 5 minutes to running

🏠 I want everything in one container (Deprecated)

Single Container - Deprecated, will be removed in v2

  • ⚠️ Deprecated — please use Docker Compose instead
  • Still supported until v2 release

👨‍💻 I want to develop/contribute (Developers only)

From Source - Clone repo, set up locally

  • ✅ Full control over code
  • ✅ Easy to debug
  • ✅ Can modify and test
  • ⚠️ Requires Python 3.11+, Node.js
  • ⏱️ 10 minutes to running

System Requirements

Minimum

  • RAM: 4GB
  • Storage: 2GB for app + space for documents
  • CPU: Any modern processor
  • Network: Internet connection (not required if you do a fully offline setup with local models)

Recommended

  • RAM: 8GB+
  • Storage: 10GB+ for documents and models
  • CPU: Multi-core processor
  • GPU: Optional (speeds up local AI models)

AI Provider Options

Cloud-Based (Pay-as-you-go)

  • OpenAI - GPT-4, GPT-4o, fast and capable
  • Anthropic (Claude) - Claude 3.5 Sonnet, excellent reasoning
  • Google Gemini - Multimodal, cost-effective
  • Groq - Ultra-fast inference
  • Others: Mistral, DeepSeek, xAI, OpenRouter

  • Cost: usually $0.01-$0.10 per 1K tokens
  • Speed: fast (sub-second)
  • Privacy: your data is sent to the cloud
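To get a feel for the per-token pricing above, here is a rough budgeting sketch. The helper name and the 50K-token session size are illustrative, not from the project; check your provider's pricing page for real rates.

```python
def estimate_cost(tokens: int, price_per_1k_tokens: float) -> float:
    """Rough cloud-provider cost: tokens used times the per-1K-token rate."""
    return tokens / 1000 * price_per_1k_tokens

# A hypothetical 50K-token research session at the low and high ends
# of the quoted $0.01-$0.10 per 1K tokens range:
low = estimate_cost(50_000, 0.01)
high = estimate_cost(50_000, 0.10)
print(f"Estimated cost: ${low:.2f} - ${high:.2f}")
```

So a moderately heavy session typically lands in the tens of cents to a few dollars, which is why pay-as-you-go is workable for most users.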

Local (Free, Private)

  • Ollama - Run open-source models locally
  • LM Studio - Desktop app for local models
  • Hugging Face models - Download and run

  • Cost: $0 (just electricity)
  • Speed: depends on your hardware (slow to medium)
  • Privacy: 100% offline


Choose a Route

Already know which way to go? Pick your installation path:

Privacy-first? Any installation method works with Ollama for 100% local AI. See Local Quick Start.


Pre-Installation Checklist

Before installing, you'll need:

  • Docker (for Docker routes) or Python 3.11+ and Node.js 18+ (for source)
  • AI Provider API key (OpenAI, Anthropic, etc.) OR willingness to use free local models
  • At least 4GB RAM available
  • Stable internet (or offline setup with Ollama)
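You can sanity-check the tooling items on this list before starting. This is a generic sketch, not part of the project; it only looks for the named executables on your PATH.

```python
import shutil

def have(tool: str) -> bool:
    """Return True if an executable named `tool` is on PATH."""
    return shutil.which(tool) is not None

# Tools named in the checklist above; "missing" is fine for whichever
# route you are not taking (e.g. node is only needed for source installs).
for tool in ("docker", "python3", "node", "git"):
    print(f"{tool}: {'found' if have(tool) else 'missing'}")
```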

Detailed Installation Instructions

For Docker Users

  1. Install Docker Desktop
  2. Follow Docker Compose installation
  3. Follow the step-by-step guide
  4. Access at http://localhost:8502

For Source Installation (Developers)

  1. Have Python 3.11+, Node.js 18+, Git installed
  2. Follow From Source
  3. Run make start-all
  4. Access at http://localhost:8502 (frontend) or http://localhost:5055 (API)
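After either route, you can confirm the services are reachable on the ports listed above. This checker is a generic sketch (names are my own); it treats any HTTP response, including an error status, as "reachable".

```python
import urllib.error
import urllib.request

def is_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if the URL answers with any HTTP response."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # the server answered, just with an error status
    except (urllib.error.URLError, OSError):
        return False  # nothing listening, refused, or timed out

# Ports from the installation steps above:
for url in ("http://localhost:8502", "http://localhost:5055"):
    print(url, "reachable" if is_up(url) else "not reachable")
```

If a port shows as not reachable, give the containers or processes a minute to finish starting, then check the logs for that service.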

After Installation

Once you're up and running:

  1. Configure Models - Choose your AI provider in Settings
  2. Create First Notebook - Start organizing research
  3. Add Sources - PDFs, web links, documents
  4. Explore Features - Chat, search, transformations
  5. Read Full Guide - User Guide

Troubleshooting During Installation

Having issues? Check the troubleshooting section in your chosen installation guide, or see Quick Fixes.


Need Help?


Production Deployment

Installing for production use? See additional resources:


Ready to install? Pick a route above! ⬆️