An AI-powered log analysis system that automatically monitors, vectorizes, and analyzes Laravel application logs to detect incidents and assess severity levels.
- Real-time Log Monitoring: MCP server integration for continuous log file watching
- AI-Powered Analysis: Uses LLMs (Claude, GPT, etc.) via Prism to analyze log entries
- Semantic Search: Vector embeddings enable similarity-based log grouping
- Incident Detection: Automatic severity classification (low, medium, high, critical)
- Critical Alert System: Dismissible banners for critical incidents with viewed status tracking
- Test Error Generator: One-click generation of 5-10 random test logs with automatic AI analysis
- Interactive Dashboard: Livewire-powered UI with real-time updates and polling
- Asynchronous Processing: Queue-based architecture for scalable analysis
- Laravel 12: Latest Laravel framework
- Prism v0.96: Multi-provider LLM integration
- Overpass v0.7: Python ML bridge for embeddings
- Livewire 3 + Volt: Interactive, real-time dashboard
- Laravel MCP v0.3: Model Context Protocol server
- SQLite: Vector embeddings stored as JSON for future semantic search
- Tailwind CSS v4: Modern styling with dark mode
- Sentence Transformers: all-MiniLM-L6-v2 for 384-dim embeddings
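Because the embeddings live in a SQLite JSON column, similarity-based grouping can be done by decoding two stored vectors and comparing them. A minimal sketch of that comparison in Python (pure standard library; the function name and the short sample vectors are illustrative only, not part of this project):

```python
import json
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Vectors as they might be read back from a SQLite JSON column
# (shortened stand-ins for the real 384-dim embeddings).
row_a = json.loads('[0.1, 0.3, 0.5]')
row_b = json.loads('[0.1, 0.29, 0.52]')

similarity = cosine_similarity(row_a, row_b)
```

With all-MiniLM-L6-v2 embeddings, scores close to 1.0 indicate near-duplicate log messages, which is what makes similarity-based grouping possible.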
- PHP 8.2 or higher
- Composer
- Node.js & NPM
- Python 3.9 or higher
- SQLite 3.x
1. Clone the repository

   ```bash
   git clone <repository-url>
   cd LogAnalysisAi
   ```

2. Install PHP dependencies

   ```bash
   composer install
   ```

3. Install Node dependencies

   ```bash
   npm install
   ```

4. Set up the environment file

   ```bash
   cp .env.example .env
   php artisan key:generate
   ```

5. Configure the database

   The default SQLite connection is already configured in `.env`:

   ```env
   DB_CONNECTION=sqlite
   DB_DATABASE=database/database.sqlite
   ```

   Create the database file:

   ```bash
   touch database/database.sqlite
   ```

6. Run migrations

   ```bash
   php artisan migrate
   ```

7. Install Python dependencies for Overpass

   ```bash
   cd overpass-ai
   pip3 install -r requirements.txt
   cd ..
   ```

   Note: The first run will download the sentence-transformers model (~80MB).
Configure your preferred LLM provider in `.env`.

For Anthropic Claude:

```env
LOG_ANALYSIS_PROVIDER=anthropic
LOG_ANALYSIS_MODEL=claude-3-haiku-20240307
ANTHROPIC_API_KEY=your_api_key_here
```

For OpenAI:

```env
LOG_ANALYSIS_PROVIDER=openai
LOG_ANALYSIS_MODEL=gpt-4-turbo
OPENAI_API_KEY=your_api_key_here
```

Adjust the AI analysis settings:

```env
LOG_ANALYSIS_MAX_TOKENS=200
LOG_ANALYSIS_TEMPERATURE=0.3
```

Overpass is pre-configured to use the `overpass-ai` directory. Ensure Python 3.9+ is active:

```bash
python3 --version  # Should show Python 3.9 or higher
```

Start all services at once with a single command:

```bash
composer run dev
```

This starts the web server, queue worker, log viewer (Pail), and Vite dev server.
Alternatively, run services separately:

1. Start the development server:

   ```bash
   php artisan serve
   ```

2. Start the queue worker in a separate terminal:

   ```bash
   php artisan queue:work
   ```

3. Build frontend assets. For development:

   ```bash
   npm run dev
   ```

   For production:

   ```bash
   npm run build
   ```

4. MCP server (optional). The MCP server runs automatically with the web server and is accessible at:

   ```
   http://loganalysisai.test/mcp/log-watcher
   ```

   To test it with MCP Inspector:

   ```bash
   php artisan mcp:inspector
   ```

Visit http://loganalysisai.test (if using Laravel Herd) or http://localhost:8000 to access the interactive dashboard, which shows:
- Recent log entries (latest 10)
- AI-detected incidents (latest 5)
- Critical alert banners for unviewed incidents
- Real-time refresh capability (auto-polls every 10 seconds)
The dashboard includes a "Generate Test Errors" button that creates 5-10 random log entries with varying severity levels (critical, error, warning, info, debug). These logs are automatically:
- Written to `storage/logs/laravel.log`
- Processed through the analysis pipeline
- Vectorized using sentence-transformers
- Analyzed by your configured LLM
- Displayed in the dashboard with severity classification
This is the easiest way to test the complete AI analysis workflow.
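Conceptually, the generator just appends randomized entries in Laravel's default log-line format. A rough Python sketch of that behavior (the message texts and the `env` default are invented for illustration; the real button writes entries via Laravel's `Log` facade):

```python
import random
from datetime import datetime, timezone

LEVELS = ["critical", "error", "warning", "info", "debug"]
SAMPLE_MESSAGES = {
    "critical": "Payment gateway timeout - transaction failed",
    "error": "Database connection failed",
    "warning": "High memory usage detected: 95%",
    "info": "User login successful",
    "debug": "Cache warmed for dashboard widgets",
}

def generate_test_entries(env: str = "local") -> list[str]:
    """Build 5-10 random log lines in Laravel's default (Monolog) format."""
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    lines = []
    for _ in range(random.randint(5, 10)):
        level = random.choice(LEVELS)
        lines.append(f"[{timestamp}] {env}.{level.upper()}: {SAMPLE_MESSAGES[level]}")
    return lines

entries = generate_test_entries()
```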
When the system detects critical incidents (like disk space warnings or database connection pool exhaustion), they appear as red alert banners at the top of the dashboard. You can dismiss these alerts by clicking the X button, which marks them as "viewed" and prevents them from reappearing.
The application includes an MCP (Model Context Protocol) server for AI assistant integration. It's available at /mcp/log-watcher and provides:
- Tools: `monitor_logs` — process recent log entries with AI
- Resources: `log_entries` — access analyzed log data
You can manually trigger log analysis:
```php
use App\Actions\LogMonitor;

$monitor = new LogMonitor();
$monitor->handleNewLine('[2024-10-22 12:00:00] production.ERROR: Database connection failed');
```

The best way to test the full analysis pipeline, however, is to generate real log entries:
```bash
php artisan tinker
```

```php
// Generate log entries that will be analyzed by AI
Log::error('Database connection failed');
Log::warning('High memory usage detected: 95%');
Log::error('API rate limit exceeded for endpoint /users');
Log::critical('Payment gateway timeout - transaction failed');
exit
```

These logs will automatically:
- Be detected by the log monitor
- Trigger the analysis queue job
- Get vectorized via Overpass
- Be analyzed by Prism AI
- Create incidents visible in the dashboard
Alternatively, use factories to create sample data (bypasses analysis):

```php
// Create log entries without analysis
App\Models\LogEntry::factory()->count(20)->create();

// Create incidents directly
App\Models\Incident::factory()->count(5)->create();
```

Format code with Laravel Pint:

```bash
vendor/bin/pint
```

Format only files with uncommitted changes:

```bash
vendor/bin/pint --dirty
```

Run the Tinker REPL:

```bash
php artisan tinker
```

Clear all caches:

```bash
php artisan optimize:clear
```

```
app/
├── Actions/            # Domain logic classes
│   ├── LogMonitor.php
│   ├── LogVectorizer.php
│   ├── LogAnalyzer.php
│   └── IncidentManager.php
├── Jobs/               # Queue jobs
│   └── AnalyzeLogJob.php
├── Mcp/                # MCP server components
│   ├── Servers/
│   ├── Tools/
│   └── Resources/
└── Models/             # Eloquent models
    ├── LogEntry.php
    ├── LogVector.php
    └── Incident.php

overpass-ai/            # Python ML bridge
├── main.py
├── vectorize.py
└── requirements.txt

resources/views/livewire/   # Volt components
└── log-dashboard.blade.php
```
The application follows a clean architecture pattern:
1. Log Ingestion: `LogMonitor` creates `LogEntry` records
2. Async Processing: `AnalyzeLogJob` is dispatched to the queue
3. Vectorization: `LogVectorizer` generates embeddings via Overpass
4. AI Analysis: `LogAnalyzer` uses Prism to assess severity
5. Incident Creation: `IncidentManager` stores analysis results
6. Dashboard Display: Livewire shows real-time insights
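The vectorization step crosses a language boundary: PHP hands the log text to a Python process and gets an embedding back. Overpass's actual wire protocol is library-specific, so the following is only a shape sketch of such a bridge script, with a deterministic hash-based stand-in where the real `overpass-ai/vectorize.py` would call sentence-transformers (the function names and the JSON envelope here are all assumptions):

```python
import hashlib
import json

EMBEDDING_DIM = 384  # all-MiniLM-L6-v2 produces 384-dim embeddings

def fake_embed(text: str) -> list[float]:
    """Deterministic stand-in for a real SentenceTransformer encode call."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    # Stretch the 32-byte digest into EMBEDDING_DIM components in [-1, 1).
    return [(digest[i % len(digest)] - 128) / 128 for i in range(EMBEDDING_DIM)]

def handle_request(raw: str) -> str:
    """One JSON request in, one JSON response out; a real bridge would loop over stdin."""
    request = json.loads(raw)
    embedding = fake_embed(request["text"])
    return json.dumps({"embedding": embedding, "dim": len(embedding)})

response = json.loads(handle_request('{"text": "Database connection failed"}'))
```

The key property to preserve in any real implementation is determinism: the same log text must always map to the same vector, or similarity grouping breaks down.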
Jobs not being processed? Ensure the queue worker is running:

```bash
php artisan queue:work --verbose
```

Vectorization failing? Reinstall the Python dependencies:

```bash
cd overpass-ai
pip3 install --upgrade -r requirements.txt
```

If you see `ModuleNotFoundError: No module named 'sentence_transformers'`, ensure you've installed the requirements.

AI analysis failing? Check your API key configuration and ensure you have credits/quota:

```bash
php artisan tinker
>>> config('prism.providers.anthropic.api_key')
```

SQLite has limited write concurrency. For production, consider PostgreSQL or MySQL.
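If SQLite must stay in place for a while, one common mitigation is enabling write-ahead logging (WAL), which lets readers proceed concurrently with a single writer. A quick standalone demonstration using Python's built-in `sqlite3` (wiring the PRAGMA into Laravel's connection config is version-dependent and not shown):

```python
import os
import sqlite3
import tempfile

# WAL mode requires a file-backed database (it is not supported for :memory:).
db_path = os.path.join(tempfile.mkdtemp(), "database.sqlite")
conn = sqlite3.connect(db_path)

# Switch the journal mode; SQLite replies with the mode now in effect.
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
conn.close()
```

The setting is persistent: once a database file is switched to WAL, subsequent connections (including Laravel's) open it in WAL mode.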
This project is designed as an advanced tutorial for integrating AI capabilities into Laravel applications. It demonstrates:
- Multi-provider LLM integration via Prism
- Python ML bridge with Overpass
- MCP server implementation
- Real-time Livewire dashboards
- Queue-based async processing
- Active Pattern for domain logic organization
A comprehensive tutorial covering the architecture, patterns, and implementation details is available in .ai/tutorial.md. The tutorial is written for advanced Laravel developers and covers:
- The Active Pattern and why it's used over traditional service classes
- Python/PHP bridge architecture with Overpass
- Vector storage using SQLite JSON columns
- LLM integration with structured outputs via Prism
- MCP server implementation for AI accessibility
- Complete request flow from log ingestion to incident detection
Note: This is an educational project. Automated testing is intentionally omitted to focus on core functionality.
This project is open-sourced software licensed under the MIT license.
