Structured boardroom debates, mandatory red team review, and governance guardrails that actually enforce decisions. Not advice. Enforcement.
The Problem
Every multi-agent framework helps AI agents do things. None make sure they should.
Agents execute actions with no record of who decided what, why, or what alternatives were considered. When things go wrong, there is no audit trail.
Groupthink is the default. Without a structured red team to challenge assumptions and stress-test proposals, agents rubber-stamp each other's outputs.
Nothing stops an agent from deploying to production at 3am, approving its own code, or spending the entire budget. Policies exist on paper, not in code.
How It Works
From unchecked agents to structured, auditable decision-making in minutes.
Configure your council: C-suite executives, domain specialists, and a mandatory red team. Add custom agents for your industry.
Submit any decision topic. 17 agents debate across 6 structured phases: opening, executive council, advisory, critical review, open debate, and synthesis.
DevilsAdvocate and Skeptic stress-test every conclusion. They cannot be disabled. This is non-negotiable adversarial review, built into the core.
5 verdicts: PASS, FLAG, BLOCK, ESCALATE_TO_HUMAN, and HALT. Rules are code, not suggestions. Add custom rules via Python or YAML.
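As an illustration of what a Python rule could look like, here is a minimal sketch; the `Decision` shape and the rule signature are assumptions for illustration, not the actual AEGIS API — only the five verdict strings come from the documentation above:

```python
# Hypothetical sketch of a custom governance rule. The Decision
# fields and function signature are illustrative assumptions;
# only the five verdict strings are taken from the docs above.
from dataclasses import dataclass

@dataclass
class Decision:
    topic: str
    category: str
    estimated_cost: float

def budget_guardrail(decision: Decision) -> str:
    """Map a proposed decision to one of the five verdict levels."""
    if decision.estimated_cost > 100_000:
        return "ESCALATE_TO_HUMAN"  # large spend needs human sign-off
    if decision.estimated_cost > 10_000:
        return "FLAG"               # note the risk in the audit trail
    return "PASS"
```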
Quick Start
Install the package, set your API key, and convene your first boardroom meeting. Every response includes a structured synthesis, vote tally, confidence score, and actionable items.
```python
from aegis_gov import Boardroom

boardroom = Boardroom()
result = boardroom.convene(
    topic="Should we deploy the new ML model to production?",
    category="STRATEGIC",
)
print(result.synthesis)     # CEO's final decision
print(result.vote_summary)  # {"approve": 7, "conditional": 2, ...}
print(result.confidence)    # 0.85
```
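The fields above can gate downstream actions directly. A minimal sketch, using only the result fields shown (`deploy_model` is a placeholder for your own deployment step):

```python
# Gate a real action on the boardroom's outcome, using only the
# result fields shown above.
def deploy_model():
    ...  # placeholder for your own deployment step

approvals = result.vote_summary.get("approve", 0)
if result.confidence >= 0.8 and approvals >= 7:
    deploy_model()
else:
    print("Held for human review:", result.synthesis)
```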
Features
Production-ready governance primitives. Not a toy. Not a demo. Real enforcement.
17 AI agents with distinct roles debate every decision through CEO opening, executive council, advisory input, critical review, open debate, and CEO synthesis.
DevilsAdvocate challenges assumptions and demands evidence. Skeptic explores alternatives and detects groupthink. Neither can be disabled.
5 built-in governance rules with 5 verdict levels: PASS, FLAG, BLOCK, ESCALATE_TO_HUMAN, HALT. Add custom rules via Python or YAML config.
Version-controlled governance document defining human sovereignty, decision categories, role separation, and confidence scoring requirements.
Add governance review to pull requests in your CI/CD pipeline. Fail builds on BLOCK verdicts. One YAML file, zero configuration drift.
Works with Anthropic Claude, OpenAI GPT, and local models via Ollama. Swap providers with a single parameter. No vendor lock-in.
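For example, switching providers might look like the sketch below; the `provider` keyword is an assumption based on the description above, not a confirmed parameter name:

```python
# Hypothetical sketch of provider switching; the exact keyword
# argument name is an assumption, not the confirmed AEGIS API.
from aegis_gov import Boardroom

claude_board = Boardroom(provider="anthropic")  # Anthropic Claude
gpt_board = Boardroom(provider="openai")        # OpenAI GPT
local_board = Boardroom(provider="ollama")      # local models via Ollama
```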
Why AEGIS
AEGIS is not a replacement for task frameworks. It is the governance layer you add on top.
| Capability | AEGIS | CrewAI | AutoGen | LangGraph | MetaGPT |
|---|---|---|---|---|---|
| Governance rule engine | Yes | No | No | No | No |
| Mandatory red team review | Yes | No | No | No | No |
| Constitutional manifesto | Yes | No | No | No | No |
| Decision audit trail | Yes | Partial | No | No | Partial |
| Verdict enforcement (BLOCK/HALT) | Yes | No | No | No | No |
| Human escalation gates | Yes | Manual | Manual | Manual | Manual |
| LLM-agnostic | Yes | Yes | Yes | Yes | No |
Compliance Ready
Audit trails, decision categorization, and human escalation gates map directly to major AI governance standards.
Article 14: Human Oversight (EU AI Act)
Article 14 mandates human oversight of high-risk AI systems. AEGIS provides structured human-in-the-loop escalation gates and full decision audit trails.
AI RMF 1.0: Govern + Manage (NIST)
The AI Risk Management Framework requires governance mechanisms, risk identification, and continuous monitoring. AEGIS maps to the Govern and Manage functions.
ISO/IEC 42001: AI Management Systems
AI Management Systems certification requires documented AI policies, risk assessment, and performance evaluation. AEGIS provides the technical implementation layer.
Get Started
Three ways to add governance to your AI agent system.
```bash
# Install with your preferred LLM provider
pip install aegis-gov[anthropic]
pip install aegis-gov[openai]
pip install aegis-gov[all]

# Generate starter config
aegis init

# Run your first review
aegis convene "Your decision topic"
```
```bash
git clone https://github.com/pyonkichi369/aegis-oss.git
cd aegis-oss
cp .env.example .env  # Add your ANTHROPIC_API_KEY to .env
docker compose up     # API at http://localhost:8000/docs
```
```yaml
# .github/workflows/aegis-review.yml
name: AEGIS Governance Review
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: pyonkichi369/aegis-oss@v1
        with:
          api-key: ${{ secrets.ANTHROPIC_API_KEY }}
          category: TACTICAL
          fail-on: BLOCK
```
Open Source
Apache 2.0 licensed. Production ready. Zero vendor lock-in.