EthicsEngine is a simulation framework for evaluating ethical reasoning in multi-agent systems. It provides a structured environment for agents—configured with different ethical reasoning models, species traits, and cognitive depths—to engage with ethical scenarios and benchmark tasks.
EthicsEngine simulates how different agents reason through moral problems using:
- Reasoning Type (e.g., Deontological, Utilitarian)
- Reasoning Level (Low, Medium, High)
- Species (Fictional societal structures with unique ethical values)
- LLM Backend (Currently tested with GPT-4o-mini)
The `EthicsAgent` receives these inputs and applies decision trees to resolve ethical benchmarks and complex scenario pipelines.
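As a minimal sketch of what configuring such an agent could look like in code: the import is grounded in `reasoning_agent.py` (which defines `EthicsAgent`), but the constructor parameters shown here are illustrative assumptions rather than the exact signature.

```python
# Illustrative sketch only: the parameter names below are assumptions, not the
# actual EthicsAgent signature. See reasoning_agent.py for the real definition.
from reasoning_agent import EthicsAgent

agent = EthicsAgent(
    model="Deontological",    # reasoning type, described in golden_patterns.json
    species="Jiminies",       # fictional species, defined in species.json
    reasoning_level="high",   # cognitive depth: low / medium / high
)
```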
- Inputs are configured from JSON files (species, golden patterns, scenarios)
- Agents simulate ethical reasoning using AutoGen
- Outputs from benchmarks and scenarios are judged for correctness or ethical alignment
- Results are saved and optionally visualized
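As a concrete example of the first step above, the sketch below loads the input JSON files directly. It assumes the data files sit at the repository root and that each file is a JSON object keyed by name; adjust the paths and keys to match the actual layout.

```python
import json

# Inspect the configured inputs before a run. Paths assume the JSON files are
# at the repository root; adjust if the project keeps them in a data directory.
with open("species.json") as f:
    species = json.load(f)
with open("golden_patterns.json") as f:
    golden_patterns = json.load(f)

# Assumes each file is a JSON object keyed by name (species name / model name).
print("Species:", ", ".join(species))
print("Reasoning models:", ", ".join(golden_patterns))
```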
- `reasoning_agent.py` – Defines the `EthicsAgent` and core reasoning logic
- `run_benchmarks.py` – Evaluates responses to static ethical questions
- `run_scenarios.py` – Simulates dynamic planning, execution, and judgment for scenarios
- `run_scenario_pipelines.py` – Similar to `run_scenarios.py` but organized as pipelines
- `species.json` – Defines traits for each fictional species
- `golden_patterns.json` – Describes ethical models and principles
- `scenarios.json` – Scenario prompts for simulation
- `simple_bench_public.json` – Benchmark questions and answers
Install dependencies:
pip install -r requirements.txt
Set your OpenAI API key as an environment variable.
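For example (assuming the standard `OPENAI_API_KEY` variable name used by the OpenAI client libraries):

On Linux/macOS:
export OPENAI_API_KEY="your-key-here"

On Windows (cmd):
set OPENAI_API_KEY=your-key-here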
To run basic examples:
python run_benchmarks.py --model Deontological --species Jiminies
python run_scenarios.py --model Utilitarian --species Megacricks
To launch the interactive UI:
python3 -m dashboard.interactive_dashboard
We welcome scenario contributions! Please refer to our Scenario Contribution Guide to get started.
MIT License
Created by Eric Moore
Exploring ethics in AI through simulation, not speculation.