Open Source · Apache 2.0

Your AI agents make decisions.
Who governs them?

Structured boardroom debates, mandatory red team review, and governance guardrails that actually enforce decisions. Not advice. Enforcement.

Star on GitHub Read Docs
17
AI Agents
6
Debate Phases
5
Verdict Levels
32
Tests Passing

AI agents are making critical decisions
without oversight

Every multi-agent framework helps AI agents do things. None make sure they should.

No Accountability

Agents execute actions with no record of who decided what, why, or what alternatives were considered. When things go wrong, there is no audit trail.

No Adversarial Review

Groupthink is the default. Without a structured red team to challenge assumptions and stress-test proposals, agents rubber-stamp each other's outputs.

No Guardrails

Nothing stops an agent from deploying to production at 3am, approving its own code, or spending the entire budget. Policies exist on paper, not in code.

Four steps to governed AI

From unchecked agents to structured, auditable decision-making in minutes.

1

Define Your Agents

Configure your council: C-suite executives, domain specialists, and a mandatory red team. Add custom agents for your industry.

2

Convene a Boardroom

Submit any decision topic. 17 agents debate across 6 structured phases: opening, executive council, advisory, critical review, open debate, and synthesis.

3

Red Team Challenges

DevilsAdvocate and Skeptic stress-test every conclusion. They cannot be disabled. This is non-negotiable adversarial review, built into the core.

4

Rule Engine Enforces

5 verdicts: PASS, FLAG, BLOCK, ESCALATE_TO_HUMAN, and HALT. Rules are code, not suggestions. Add custom rules via Python or YAML.
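Conceptually, a verdict-based rule engine evaluates every rule against a proposed decision and enforces the most severe outcome. The sketch below is illustrative only, not the AEGIS internals; the rule names and decision fields are invented for the example.

```python
# Illustrative sketch of a verdict-based rule engine (not the AEGIS internals).
# Each rule inspects a proposed decision and returns one of the five verdicts;
# the engine enforces the most severe verdict any rule produced.

# Verdicts ordered from least to most severe.
SEVERITY = ["PASS", "FLAG", "BLOCK", "ESCALATE_TO_HUMAN", "HALT"]

def no_self_approval(decision):
    # Hypothetical rule: an agent may not approve its own output.
    if decision.get("author") == decision.get("approver"):
        return "BLOCK"
    return "PASS"

def budget_guard(decision):
    # Hypothetical rule: large spend requires a human sign-off.
    if decision.get("spend_usd", 0) > 10_000:
        return "ESCALATE_TO_HUMAN"
    return "PASS"

def evaluate(decision, rules):
    """Run every rule and return the most severe verdict."""
    verdicts = [rule(decision) for rule in rules]
    return max(verdicts, key=SEVERITY.index)

verdict = evaluate(
    {"author": "agent-7", "approver": "agent-7", "spend_usd": 50_000},
    [no_self_approval, budget_guard],
)
print(verdict)  # ESCALATE_TO_HUMAN
```

The point of the severity ordering is that rules compose: adding a rule can only tighten the outcome, never loosen it.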

Five lines to
governed decisions

Install the package, set your API key, and convene your first boardroom meeting. Every response includes a structured synthesis, vote tally, confidence score, and action items.

Anthropic OpenAI Ollama
quick_start.py
from aegis_gov import Boardroom

boardroom = Boardroom()
result = boardroom.convene(
    topic="Should we deploy the new ML model to production?",
    category="STRATEGIC",
)

print(result.synthesis)       # CEO's final decision
print(result.vote_summary)    # {"approve": 7, "conditional": 2, ...}
print(result.confidence)      # 0.85

Everything you need for
AI governance

Production-ready governance primitives. Not a toy. Not a demo. Real enforcement.

6-Phase Boardroom

17 AI agents with distinct roles debate every decision through CEO opening, executive council, advisory input, critical review, open debate, and CEO synthesis.

Mandatory Red Team

DevilsAdvocate challenges assumptions and demands evidence. Skeptic explores alternatives and detects groupthink. Neither can be disabled.
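The "cannot be disabled" guarantee can be pictured as a council builder that always seats the red-team agents, whatever roster the caller supplies. This is a conceptual sketch, not the AEGIS implementation; `build_council` is a hypothetical name.

```python
# Illustrative sketch of a non-disableable red team (not the AEGIS internals).
# Whatever roster the caller configures, the red-team agents are always seated.

MANDATORY_RED_TEAM = ("DevilsAdvocate", "Skeptic")

def build_council(agents):
    """Return the caller's roster with the red team guaranteed present."""
    roster = list(agents)
    for member in MANDATORY_RED_TEAM:
        if member not in roster:
            roster.append(member)
    return roster

# Even a red-team-free configuration ends up with both challengers seated.
print(build_council(["CEO", "CTO"]))  # ['CEO', 'CTO', 'DevilsAdvocate', 'Skeptic']
```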

Rule Engine

5 built-in governance rules with 5 verdict levels: PASS, FLAG, BLOCK, ESCALATE_TO_HUMAN, HALT. Add custom rules via Python or YAML config.

Constitutional Manifesto

Version-controlled governance document defining human sovereignty, decision categories, role separation, and confidence scoring requirements.

GitHub Action

Add governance review to pull requests in your CI/CD pipeline. Fail builds on BLOCK verdicts. One YAML file, zero configuration drift.

LLM Agnostic

Works with Anthropic Claude, OpenAI GPT, and local models via Ollama. Swap providers with a single parameter. No vendor lock-in.
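"Swap providers with a single parameter" usually resolves to a small registry behind the scenes. The sketch below shows that pattern with stub clients; the class names, registry, and `make_client` helper are invented for illustration and are not the AEGIS wiring.

```python
# Illustrative provider registry (the real AEGIS wiring may differ).
# Selecting a backend is a dictionary lookup keyed by a single string.

class AnthropicClient:
    def complete(self, prompt):
        return f"[anthropic] {prompt}"

class OpenAIClient:
    def complete(self, prompt):
        return f"[openai] {prompt}"

class OllamaClient:
    def complete(self, prompt):
        return f"[ollama] {prompt}"

PROVIDERS = {
    "anthropic": AnthropicClient,
    "openai": OpenAIClient,
    "ollama": OllamaClient,
}

def make_client(provider="anthropic"):
    """Instantiate the backend named by a single string parameter."""
    return PROVIDERS[provider]()

print(make_client("ollama").complete("hello"))  # [ollama] hello
```

Because every client exposes the same `complete` interface, the rest of the system never needs to know which vendor is behind it.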

The governance layer
other frameworks are missing

AEGIS is not a replacement for task frameworks. It is the governance layer you add on top.

| Capability | AEGIS | CrewAI | AutoGen | LangGraph | MetaGPT |
| --- | --- | --- | --- | --- | --- |
| Governance rule engine | Yes | No | No | No | No |
| Mandatory red team review | Yes | No | No | No | No |
| Constitutional manifesto | Yes | No | No | No | No |
| Decision audit trail | Yes | Partial | No | No | Partial |
| Verdict enforcement (BLOCK/HALT) | Yes | No | No | No | No |
| Human escalation gates | Yes | Manual | Manual | Manual | Manual |
| LLM-agnostic | Yes | Yes | Yes | Yes | No |
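The difference between built-in and "Manual" escalation gates is that a built-in gate refuses to proceed until a human decision is recorded, rather than relying on the caller to remember to pause. A minimal sketch of that idea, with invented function names and a deliberately conservative policy:

```python
# Illustrative human escalation gate (not the AEGIS internals).
# A verdict of ESCALATE_TO_HUMAN parks the decision until a reviewer acts;
# anything not explicitly approved stays blocked.

def escalation_gate(verdict, human_decision=None):
    """Return True only when execution may proceed."""
    if verdict == "PASS":
        return True
    if verdict == "ESCALATE_TO_HUMAN":
        # Proceed only on an explicit, recorded human approval.
        return human_decision == "approved"
    # FLAG, BLOCK, and HALT never auto-proceed in this sketch.
    return False

print(escalation_gate("ESCALATE_TO_HUMAN"))              # False
print(escalation_gate("ESCALATE_TO_HUMAN", "approved"))  # True
```

The design choice worth noting is fail-closed defaults: an unrecognized verdict or a missing human decision blocks execution instead of allowing it.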

Built for regulated industries

Audit trails, decision categorization, and human escalation gates map directly to major AI governance standards.

EU AI Act

Article 14 mandates human oversight of high-risk AI systems. AEGIS provides structured human-in-the-loop escalation gates and full decision audit trails.

Article 14: Human Oversight

NIST AI RMF

The AI Risk Management Framework requires governance mechanisms, risk identification, and continuous monitoring. AEGIS maps to the Govern and Manage functions.

AI RMF 1.0: Govern + Manage

ISO/IEC 42001

AI Management Systems certification requires documented AI policies, risk assessment, and performance evaluation. AEGIS provides the technical implementation layer.

ISO/IEC 42001: AI Management

Install in seconds

Three ways to add governance to your AI agent system.

pip

Recommended
# Install with your preferred LLM provider
pip install aegis-gov[anthropic]
pip install aegis-gov[openai]
pip install aegis-gov[all]

# Generate starter config
aegis init

# Run your first review
aegis convene "Your decision topic"

Docker

git clone https://github.com/pyonkichi369/aegis-oss.git
cd aegis-oss
cp .env.example .env
# Add your ANTHROPIC_API_KEY to .env

docker compose up

# API at http://localhost:8000/docs

GitHub Action

# .github/workflows/aegis-review.yml
name: AEGIS Governance Review
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: pyonkichi369/aegis-oss@v1
        with:
          api-key: ${{ secrets.ANTHROPIC_API_KEY }}
          category: TACTICAL
          fail-on: BLOCK

Start governing your AI agents today

Apache 2.0 licensed. Production ready. Zero vendor lock-in.

Star on GitHub