The open-source framework for AI SRE agents, and the training and evaluation environment they need to improve. Connect the 60+ tools you already run, define your own workflows, and investigate incidents on your own infrastructure.
Quickstart · Docs · FAQ · Security
🚧 Public Alpha: Core workflows are usable for early exploration, though not yet fully stable. The project is in active development, and APIs and integrations may evolve.
- Why OpenSRE?
- Install
- Quick Start
- Deployment
- How OpenSRE Works
- Benchmark
- Capabilities & integrations
- Contributing & development
- Security
- Telemetry
- License
- Citations
When something breaks in production, the evidence is scattered across logs, metrics, traces, runbooks, and Slack threads. OpenSRE is an open-source framework for AI SRE agents that resolve production incidents, built to run on your own infrastructure.
Why build this? SWE-bench¹ gave coding agents scalable training data and clear feedback; production incident response still lacks an equivalent.
Distributed failures are slower, noisier, and harder to simulate and evaluate than local code tasks, which is why AI SRE, and AI for production debugging more broadly, remains unsolved.
OpenSRE is building that missing layer: an open reinforcement learning environment for agentic infrastructure incident response, with end-to-end tests and synthetic incident simulations that reproduce realistic production failures.
We do that by:
- building easy-to-deploy, customizable AI SRE agents for production incident investigation and response
- running scored synthetic RCA suites that check root-cause accuracy, required evidence, and adversarial red herrings (tests/synthetic)
- running real-world end-to-end tests across cloud-backed scenarios including Kubernetes, EC2, CloudWatch, Lambda, ECS Fargate, and Flink (tests/e2e)
- keeping semantic test-catalog naming so e2e vs synthetic and local vs cloud boundaries stay obvious (tests/README.md)
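As an illustration of how such scoring could work, here is a minimal sketch of a scored RCA check. This is not the actual harness in tests/synthetic; the class, function, and field names are all hypothetical. The idea is the one described above: reward the correct root cause and cited required evidence, and penalize citations of adversarial red herrings.

```python
from dataclasses import dataclass, field


@dataclass
class SyntheticIncident:
    """One scored RCA scenario: a true root cause, required evidence, decoys."""
    root_cause: str
    required_evidence: set[str]
    red_herrings: set[str] = field(default_factory=set)


def score_rca(incident: SyntheticIncident, answer: str, cited: set[str]) -> float:
    """Score a candidate answer on cause accuracy, evidence coverage, and decoys."""
    cause_ok = 1.0 if answer == incident.root_cause else 0.0
    coverage = len(cited & incident.required_evidence) / max(len(incident.required_evidence), 1)
    penalty = 0.25 * len(cited & incident.red_herrings)  # each cited decoy costs 0.25
    return max(0.0, cause_ok * coverage - penalty)


incident = SyntheticIncident(
    root_cause="oom-kill",
    required_evidence={"pod-restart-log", "memory-metric"},
    red_herrings={"unrelated-dns-warning"},
)
print(score_rca(incident, "oom-kill", {"pod-restart-log", "memory-metric"}))  # 1.0
```

A real suite would of course ground `cited` in tool calls the agent actually made, but the shape of the reward is the same.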
Our mission is to build AI SRE agents on top of this, scale it to thousands of realistic infrastructure failure scenarios, and establish OpenSRE as the benchmark and training ground for AI SRE.
¹ https://arxiv.org/abs/2310.06770
The root installer URL auto-detects Unix shell vs. PowerShell. Add `--main` when you want the latest rolling build from `main` instead of the latest stable release.
Latest stable release:

```shell
curl -fsSL https://install.opensre.com | bash
```

Latest build from `main`:

```shell
curl -fsSL https://install.opensre.com | bash -s -- --main
```

Homebrew:

```shell
brew tap tracer-cloud/tap
brew install tracer-cloud/tap/opensre
```

Windows (PowerShell):

```powershell
irm https://install.opensre.com | iex
```

Configure once, then pick how you want to run investigations:

```shell
opensre onboard
```

Interactive shell: with no subcommand, `opensre` starts a REPL (TTY required). Describe incidents in plain language, stream investigations, and use slash commands such as `/help`, `/status`, `/clear`, `/reset`, `/trust`, `/effort`, and `/exit`. `/effort` sets reasoning depth for the OpenAI and Codex providers (`low`, `medium`, `high`, `xhigh`, or `max`; other providers ignore it). Ctrl+C cancels an in-flight investigation without losing session state.

```shell
opensre
```

One-shot investigation: run the agent once against an alert file:

```shell
opensre investigate -i tests/e2e/kubernetes/fixtures/datadog_k8s_alert.json
```

Other useful commands:

```shell
opensre update
opensre uninstall   # remove opensre and all local data
```

The official hosted path is LangGraph Platform: connect this repo, keep `langgraph.json` at the root, and set `LLM_PROVIDER` plus the matching API key (see `.env.example`). Railway remains supported as a self-hosted alternative; hosted Postgres and Redis (`DATABASE_URI`, `REDIS_URI`) are required for that layout.
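For illustration, a minimal `.env` for the self-hosted layout might look like the sketch below. `LLM_PROVIDER`, `DATABASE_URI`, and `REDIS_URI` are named in the docs above; the provider choice, the `ANTHROPIC_API_KEY` variable name, and every value are placeholder assumptions — check `.env.example` for the authoritative names.

```shell
# .env — illustrative self-hosted (Railway-style) configuration.
# All values are placeholders, not real endpoints or credentials.
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=your-key-here            # the API key matching your provider
DATABASE_URI=postgresql://user:pass@db-host:5432/opensre   # hosted Postgres
REDIS_URI=redis://redis-host:6379/0                        # hosted Redis
```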
Full deployment steps, Railway notes, and opensre remote ops → docs/DEVELOPMENT.md
When an alert fires, OpenSRE automatically:
- Fetches the alert context and correlated logs, metrics, and traces
- Reasons across your connected systems to identify anomalies
- Generates a structured investigation report with probable root cause
- Suggests next steps and, optionally, executes remediation actions
- Posts a summary directly to Slack or PagerDuty — no context switching needed
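The steps above can be sketched as a toy pipeline. This is illustrative only: the stubbed signals, function names, and report shape are assumptions for the example, not OpenSRE's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Investigation:
    """Toy container mirroring the pipeline stages described above."""
    alert: dict
    signals: dict = field(default_factory=dict)
    anomalies: list = field(default_factory=list)
    report: dict = field(default_factory=dict)


def run_pipeline(alert: dict) -> Investigation:
    inv = Investigation(alert=alert)
    # 1. Fetch correlated context (stubbed; in practice this comes from connected tools)
    inv.signals = {"logs": ["OOMKilled pod checkout-7f"], "metrics": {"mem_pct": 98}}
    # 2. Reason across signals to flag anomalies
    inv.anomalies = [line for line in inv.signals["logs"] if "OOMKilled" in line]
    # 3. Produce a structured report with a probable root cause
    inv.report = {
        "root_cause": "container OOM-killed under memory pressure",
        "evidence": inv.anomalies,
        "next_steps": ["raise memory limit", "check for a leak in the checkout service"],
    }
    # 4. A notifier (Slack / PagerDuty) would post inv.report here
    return inv


inv = run_pipeline({"alert": "HighMemory", "service": "checkout"})
print(inv.report["root_cause"])  # container OOM-killed under memory pressure
```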
Regenerate numbers with `make benchmark`; refresh this table from cached results via `make benchmark-update-readme`. See docs/DEVELOPMENT.md for details.
No benchmark results yet.
| Capability | Description |
|---|---|
| 🔍 Structured incident investigation | Correlated root-cause analysis across all your signals |
| 📋 Runbook-aware reasoning | OpenSRE reads your runbooks and applies them automatically |
| 🔮 Predictive failure detection | Catch emerging issues before they page you |
| 🔗 Evidence-backed root cause | Every conclusion is linked to the data behind it |
| 🤖 Full LLM flexibility | Bring your own model — Anthropic, OpenAI, Ollama, Gemini, OpenRouter, NVIDIA NIM |
OpenSRE connects to 60+ tools and services across the modern cloud stack: LLM providers, observability platforms, cloud infrastructure, databases and data platforms, incident management, and MCP. The full matrix (with roadmap links) lives in the product docs; a detailed catalog is also maintained in-repo as the project grows.
| Category | Integrations | Roadmap |
|---|---|---|
| AI / LLM Providers | Anthropic · OpenAI · Ollama · Google Gemini · OpenRouter · NVIDIA NIM · Bedrock | |
| Observability | Splunk · New Relic · Victoria Logs | |
| Infrastructure | Helm · ArgoCD | |
| Database | MongoDB · ClickHouse · PostgreSQL · MySQL · MariaDB · MongoDB Atlas · Azure SQL · Snowflake | RDS |
| Data Platform | Apache Airflow · Apache Kafka · Apache Spark · Prefect · RabbitMQ | |
| Dev Tools | | |
| Incident Management | Trello · ServiceNow · incident.io · Linear | |
| Communication | Notion · Teams · WhatsApp · Confluence | |
| Agent Deployment | | |
| Protocols | | |
OpenSRE is community-built. Looking for a safe first contribution? Browse good first issue tickets or see the Good First Issues guide. See CONTRIBUTING.md for the full workflow.
Local environment: SETUP.md (all platforms, Windows, MCP/OpenClaw).
Developing in this repo: docs/DEVELOPMENT.md (install from source, CI parity checks, dev container, benchmark, deployment detail, telemetry reference).
Thanks go to all of our amazing contributors.
OpenSRE is designed with production environments in mind: structured and auditable LLM prompts, local transcript handling by default, and no silent bulk export of raw logs. See SECURITY.md for responsible disclosure.
PostHog (product analytics) and Sentry (errors) are opt-out. Quick disable:

```shell
export OPENSRE_NO_TELEMETRY=1
```

Full matrix, DSN override, and local event logging → docs/DEVELOPMENT.md
Apache 2.0 — see LICENSE.
