# CursorRules Architect V2

Your Multi-Provider AI Code Analysis and `.cursorrules` Generator
Features • Requirements • Installation • Usage • Configuration • Architecture • Output • Contributing
CursorRules Architect V2 is an advanced multi-agent system that analyzes your codebase using a powerful combination of AI models from Anthropic, OpenAI, DeepSeek, and Google. It performs a comprehensive six-phase analysis to understand your project's structure, dependencies, patterns, and architectural decisions. The result is a detailed report plus automatically generated `.cursorrules` and `.cursorignore` files customized for your project.
## Features

- Multi-Provider Support - Leverage AI models from Anthropic, OpenAI, DeepSeek, and Google Gemini
- Enhanced Reasoning - Different reasoning modes (enabled/disabled, low/medium/high, temperature)
- Dynamic Agents - Creates specialized analysis agents based on your specific codebase
- Six-Phase Analysis - Structured pipeline that builds comprehensive understanding
- Async Processing - Parallel agent execution for faster analysis
- Detailed Metrics - Track analysis time and token usage
- Comprehensive Documentation - Generated reports for each phase and component
- Intelligent Rule Generation - Creates optimal `.cursorrules` files for your coding style
- Multi-Format Output - Separate markdown files for each analysis phase
- Smart Exclusions - Customizable patterns to focus analysis on relevant files
- Fully Configurable - Easy to customize which models are used for each phase
## Requirements

- Python 3.8+
- API keys for at least one of the supported providers:
  - Anthropic API key with access to `claude-3-7-sonnet-20250219`
  - OpenAI API key with access to `o1`, `o3-mini`, or `gpt-4.1`
  - DeepSeek API key with access to DeepSeek Reasoner
  - Google API key with access to `gemini-2.0-flash` or `gemini-2.5-pro-exp-03-25`
- Dependencies:
  - `anthropic` for Anthropic API access
  - `openai` for OpenAI API access
  - `google-generativeai` for Google Gemini API access
  - `rich` for beautiful terminal output
  - `click` for the CLI interface
  - `pathlib` for path manipulation
  - `asyncio` for async operations
## Installation

1. Clone the Repository

   ```bash
   git clone https://github.com/slyycooper/cursorrules-architect.git
   cd cursorrules-architect
   ```

2. Install Dependencies

   ```bash
   pip install -r requirements.txt
   ```

3. Set Up API Keys

   ```bash
   # Linux/macOS
   export ANTHROPIC_API_KEY='your-anthropic-api-key'
   export OPENAI_API_KEY='your-openai-api-key'
   export DEEPSEEK_API_KEY='your-deepseek-api-key'
   export GEMINI_API_KEY='your-gemini-api-key'

   # Windows
   set ANTHROPIC_API_KEY=your-anthropic-api-key
   set OPENAI_API_KEY=your-openai-api-key
   set DEEPSEEK_API_KEY=your-deepseek-api-key
   set GEMINI_API_KEY=your-gemini-api-key
   ```

   Alternatively, create a `.env` file in the project root:

   ```
   ANTHROPIC_API_KEY=your-anthropic-api-key
   OPENAI_API_KEY=your-openai-api-key
   DEEPSEEK_API_KEY=your-deepseek-api-key
   GEMINI_API_KEY=your-gemini-api-key
   ```
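If you go with the `.env` file, a minimal sketch of loading it at startup could look like the following. It assumes the optional `python-dotenv` package (not part of the dependency list above) and is only an illustration of the idea, not the project's actual startup code.

```python
# Minimal sketch: load API keys from a .env file at startup.
# Assumes the optional python-dotenv package is installed (pip install python-dotenv).
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

keys = {
    name: os.getenv(name)
    for name in ("ANTHROPIC_API_KEY", "OPENAI_API_KEY", "DEEPSEEK_API_KEY", "GEMINI_API_KEY")
}

# At least one provider key is required for an analysis run.
if not any(keys.values()):
    raise RuntimeError("Set at least one provider API key before running the analysis")
```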
## Usage

```bash
python main.py -p /path/to/your/project

# Specify output location (deprecated; output now uses the standardized layout)
python main.py -p /path/to/your/project -o output.txt
```
## Configuration

CursorRules Architect V2 allows you to customize which AI models are used for each analysis phase through the `config/agents.py` file. The system defines several predefined model configurations you can use:
```python
# Anthropic Configurations
CLAUDE_BASIC = ModelConfig(
    provider=ModelProvider.ANTHROPIC,
    model_name="claude-3-7-sonnet-20250219",
    reasoning=ReasoningMode.DISABLED
)

CLAUDE_WITH_REASONING = ModelConfig(
    provider=ModelProvider.ANTHROPIC,
    model_name="claude-3-7-sonnet-20250219",
    reasoning=ReasoningMode.ENABLED
)

# OpenAI Configurations
O1_HIGH = ModelConfig(
    provider=ModelProvider.OPENAI,
    model_name="o1",
    reasoning=ReasoningMode.HIGH
)

O3_MINI_MEDIUM = ModelConfig(
    provider=ModelProvider.OPENAI,
    model_name="o3-mini",
    reasoning=ReasoningMode.MEDIUM
)

GPT4_1_CREATIVE = ModelConfig(
    provider=ModelProvider.OPENAI,
    model_name="gpt-4.1",
    reasoning=ReasoningMode.TEMPERATURE,
    temperature=0.9
)

# DeepSeek Configurations
DEEPSEEK_REASONER = ModelConfig(
    provider=ModelProvider.DEEPSEEK,
    model_name="deepseek-reasoner",
    reasoning=ReasoningMode.ENABLED
)

# Gemini Configurations
GEMINI_BASIC = ModelConfig(
    provider=ModelProvider.GEMINI,
    model_name="gemini-2.0-flash",
    reasoning=ReasoningMode.DISABLED
)

GEMINI_WITH_REASONING = ModelConfig(
    provider=ModelProvider.GEMINI,
    model_name="gemini-2.5-pro-exp-03-25",
    reasoning=ReasoningMode.ENABLED
)
```
To change which model is used for each phase, simply update the `MODEL_CONFIG` dictionary:
```python
MODEL_CONFIG = {
    "phase1": GEMINI_BASIC,           # Use gemini-2.0-flash for Phase 1
    "phase2": GEMINI_WITH_REASONING,  # Use gemini-2.5-pro with reasoning for Phase 2
    "phase3": CLAUDE_WITH_REASONING,  # Use Claude with reasoning for Phase 3
    "phase4": O1_HIGH,                # Use OpenAI's o1 with high reasoning for Phase 4
    "phase5": DEEPSEEK_REASONER,      # Use DeepSeek Reasoner for Phase 5
    "final": CLAUDE_WITH_REASONING,   # Use Claude with reasoning for the final analysis
}
```
You can customize which files and directories are excluded from analysis by modifying `config/exclusions.py`:
```python
EXCLUDED_DIRS = {
    'node_modules', '.next', '.git', 'venv', '__pycache__',
    'dist', 'build', '.vscode', '.idea', 'coverage',
    # Add your custom directories here
}

EXCLUDED_FILES = {
    'package-lock.json', 'yarn.lock', '.DS_Store', '.env',
    # Add your custom files here
}

EXCLUDED_EXTENSIONS = {
    '.jpg', '.jpeg', '.png', '.gif', '.ico', '.svg',
    '.pyc', '.pyo', '.pyd', '.so', '.db', '.sqlite',
    # Add your custom extensions here
}
```
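For a rough picture of how exclusion sets like these are typically applied while scanning a project, here is a sketch of a filtered directory walk. The helper name and the inlined sets are illustrative only; the project's real traversal lives in its tree generation and file retrieval utilities.

```python
# Sketch: filtering a directory walk with exclusion sets like the ones above.
# The sets are repeated here only to keep the example self-contained.
import os

EXCLUDED_DIRS = {'node_modules', '.git', 'venv', '__pycache__'}
EXCLUDED_FILES = {'package-lock.json', '.DS_Store', '.env'}
EXCLUDED_EXTENSIONS = {'.png', '.pyc', '.sqlite'}


def iter_included_files(root):
    """Yield file paths under `root` that survive the exclusion rules."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune excluded directories in place so os.walk never descends into them.
        dirnames[:] = [d for d in dirnames if d not in EXCLUDED_DIRS]
        for name in filenames:
            if name in EXCLUDED_FILES:
                continue
            if os.path.splitext(name)[1].lower() in EXCLUDED_EXTENSIONS:
                continue
            yield os.path.join(dirpath, name)
```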
## Architecture

CursorRules Architect V2 follows a sophisticated multi-phase analysis approach.

The system is built on a `BaseArchitect` abstract class that standardizes how different AI model providers are integrated:
- `AnthropicArchitect` - Interface to Anthropic's Claude models
- `OpenAIArchitect` - Interface to OpenAI's models (o1, o3-mini, gpt-4.1)
- `DeepSeekArchitect` - Interface to DeepSeek's reasoning models
- `GeminiArchitect` - Interface to Google's Gemini models
Each architect implements standardized methods (a rough sketch of the interface follows this list):

- `analyze()` - Runs general analysis
- `create_analysis_plan()` - Creates a detailed analysis plan (Phase 2)
- `synthesize_findings()` - Synthesizes findings from deep analysis (Phase 4)
- `consolidate_results()` - Consolidates all analysis results (Phase 5)
- `final_analysis()` - Provides final architectural insights
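To make the interface concrete, here is a minimal sketch of what such a base class could look like. The method names mirror the list above; the signatures, parameter names, and return types are assumptions rather than the project's exact API.

```python
# Sketch of the BaseArchitect interface; parameters and return types are illustrative.
from abc import ABC, abstractmethod
from typing import Any, Dict


class BaseArchitect(ABC):
    """Common interface every provider-specific architect implements."""

    def __init__(self, model_name: str) -> None:
        self.model_name = model_name

    @abstractmethod
    async def analyze(self, context: Dict[str, Any]) -> Dict[str, Any]:
        """Run a general analysis pass over the provided context."""

    @abstractmethod
    async def create_analysis_plan(self, phase1_results: Dict[str, Any]) -> str:
        """Produce the Phase 2 analysis plan."""

    @abstractmethod
    async def synthesize_findings(self, phase3_results: Dict[str, Any]) -> str:
        """Combine Phase 3 agent findings into a single synthesis (Phase 4)."""

    @abstractmethod
    async def consolidate_results(self, all_results: Dict[str, Any]) -> str:
        """Merge every phase's output into a consolidated report (Phase 5)."""

    @abstractmethod
    async def final_analysis(self, consolidated_report: str) -> str:
        """Return final architectural insights and recommendations."""
```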
### Phase 1: Initial Discovery

Performs initial exploration of the project structure, dependencies, and technology stack using specialized agents:

- Structure Agent: Analyzes directory and file organization
- Dependency Agent: Investigates package dependencies
- Tech Stack Agent: Identifies frameworks and technologies
### Phase 2: Methodical Planning

Creates a detailed analysis plan using findings from Phase 1:

- Defines specialized agents with specific responsibilities
- Assigns files to relevant agents based on expertise
- Provides detailed instructions for deeper analysis
- Outputs an XML-structured plan that guides Phase 3
### Phase 3: Deep Analysis

The heart of the system: Phase 3 dynamically creates specialized agents based on Phase 2's output:

- Each agent focuses on its assigned files and responsibilities
- Agents run in parallel for efficiency (see the sketch after this list)
- Performs in-depth analysis of code patterns, architecture, and dependencies
- Falls back to predefined agents if Phase 2 doesn't provide valid definitions
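Because the Phase 3 agents run concurrently, the parallelism can be pictured with a small `asyncio` sketch. The agent objects and their async `analyze()` method are assumed here (roughly matching the interface sketched earlier); this is not the project's actual Phase 3 code.

```python
# Sketch: running Phase 3's dynamically created agents in parallel.
# `agents` is assumed to be a list of objects exposing an async analyze() method
# that already knows which files it was assigned (hypothetical interface).
import asyncio


async def run_phase_3(agents):
    """Run every agent concurrently and collect their findings in order."""
    # gather() schedules all coroutines at once, so slow agents overlap
    # instead of running back to back.
    findings = await asyncio.gather(*(agent.analyze() for agent in agents))
    return list(findings)
```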
### Phase 4: Synthesis

Synthesizes findings from Phase 3 into cohesive insights:

- Integrates agent findings into a holistic view
- Identifies relationships between components
- Highlights key architectural patterns
- Updates analysis directions
### Phase 5: Consolidation

Consolidates results from all previous phases into a comprehensive report:

- Organizes findings by component/module
- Creates comprehensive documentation
- Prepares data for final analysis
### Final Analysis

Provides high-level insights and recommendations:

- System structure mapping
- Architecture pattern identification
- Relationship documentation
- Improvement recommendations
### Reasoning Modes

The system supports different reasoning modes depending on the model (the corresponding configuration types are sketched after this list):

- Anthropic models:
  - `ENABLED` - Use extended thinking capability
  - `DISABLED` - Standard inference
- OpenAI models:
  - For o1 and o3-mini: `LOW` / `MEDIUM` / `HIGH` - Different reasoning effort levels
  - For gpt-4.1: `TEMPERATURE` - Temperature-based sampling
- DeepSeek models: always use the `ENABLED` reasoning mode
- Gemini models:
  - `ENABLED` - Uses the thinking-enabled experimental model variant
  - `DISABLED` - Standard inference
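Putting the modes together, the configuration types used in `config/agents.py` can be imagined roughly as follows. The field names mirror the examples shown earlier; the enum member values and defaults are assumptions for illustration only.

```python
# Sketch of the configuration types referenced in config/agents.py.
# Exact definitions may differ; this only mirrors the fields used above.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ModelProvider(Enum):
    ANTHROPIC = auto()
    OPENAI = auto()
    DEEPSEEK = auto()
    GEMINI = auto()


class ReasoningMode(Enum):
    DISABLED = auto()
    ENABLED = auto()
    LOW = auto()
    MEDIUM = auto()
    HIGH = auto()
    TEMPERATURE = auto()


@dataclass
class ModelConfig:
    provider: ModelProvider
    model_name: str
    reasoning: ReasoningMode = ReasoningMode.DISABLED
    temperature: Optional[float] = None  # only meaningful with ReasoningMode.TEMPERATURE
```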
### Project Structure

```
cursorrules-architect/
├── config/                          # Configuration settings
│   ├── agents.py                    # Model and agent configuration
│   ├── exclusions.py                # Exclusion patterns for analysis
│   └── prompts/                     # Centralized prompt templates
│       ├── phase_1_prompts.py       # Phase 1 agent prompts
│       ├── phase_2_prompts.py       # Phase 2 planning prompts
│       ├── phase_4_prompts.py       # Phase 4 synthesis prompts
│       ├── phase_5_prompts.py       # Phase 5 consolidation prompts
│       └── final_analysis_prompt.py # Final analysis prompts
├── core/                            # Core functionality
│   ├── agents/                      # Agent implementations
│   │   ├── anthropic.py             # Anthropic agent implementation
│   │   ├── base.py                  # Base architect abstract class
│   │   ├── deepseek.py              # DeepSeek agent implementation
│   │   ├── gemini.py                # Google Gemini agent implementation
│   │   └── openai.py                # OpenAI agent implementation
│   ├── analysis/                    # Analysis phase implementations
│   │   ├── final_analysis.py        # Final Analysis phase
│   │   ├── phase_1.py               # Initial Discovery phase
│   │   ├── phase_2.py               # Methodical Planning phase
│   │   ├── phase_3.py               # Deep Analysis phase
│   │   ├── phase_4.py               # Synthesis phase
│   │   └── phase_5.py               # Consolidation phase
│   ├── types/                       # Type definitions
│   │   └── agent_config.py          # Agent configuration types
│   └── utils/                       # Utility functions and tools
│       ├── file_creation/           # File creation utilities
│       │   ├── cursorignore.py      # .cursorignore management
│       │   ├── cursorrules.py       # .cursorrules management
│       │   └── phases_output.py     # Phase output saving
│       └── tools/                   # Tool utilities
│           ├── agent_parser.py      # Parser for Phase 2 output
│           ├── file_retriever.py    # File content retrieval
│           └── tree_generator.py    # Directory tree generation
├── main.py                          # Main entry point
└── requirements.txt                 # Project dependencies
```
## Output

CursorRules Architect V2 generates a rich set of output files:

```
your-project/
├── .cursorrules                # Generated rules file for Cursor IDE
├── .cursorignore               # Generated ignore patterns for Cursor IDE
└── phases_output/              # Detailed phase outputs
    ├── phase1_discovery.md     # Initial agent findings
    ├── phase2_planning.md      # Planning document with agent assignments
    ├── phase3_analysis.md      # Deep analysis results from dynamic agents
    ├── phase4_synthesis.md     # Synthesized findings
    ├── phase5_consolidation.md # Consolidated report
    ├── final_analysis.md       # Final recommendations
    ├── complete_report.md      # Overview of all phases
    └── metrics.md              # Analysis metrics
```
The system tracks performance metrics for the analysis (a small timing sketch follows this list):

- Total analysis time
- Token usage for phases using reasoning models
- Per-agent execution times
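As an illustration only (not the project's real metrics code), per-phase wall-clock timing of the kind reported in `metrics.md` can be captured with a tiny wrapper:

```python
# Sketch: timing each phase for a metrics report (illustrative only).
import time
from typing import Any, Awaitable, Dict

phase_durations: Dict[str, float] = {}


async def timed_phase(name: str, phase_coro: Awaitable[Any]) -> Any:
    """Await a phase coroutine and record how long it took."""
    start = time.perf_counter()
    result = await phase_coro
    phase_durations[name] = time.perf_counter() - start
    return result
```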
Check out cursorrules-tools for additional utilities that can help with Cursor IDE development. This collection includes tools for managing `.cursorrules` and `.cursorignore` files, generating codebase snapshots, analyzing dependencies, and more.
The system's key innovation is the dynamic agent creation process (a parsing sketch follows this list):

1. Phase 2 (Planning):
   - Creates an XML-structured output defining specialized agents
   - Each agent is assigned responsibilities and specific files
2. Agent Parser:
   - Parses the XML output from Phase 2
   - Creates a structured representation of agent definitions
   - Includes fallback mechanisms for handling parsing issues
3. Phase 3 (Dynamic Analysis):
   - Creates AI agents based on the extracted definitions
   - Each agent only analyzes its assigned files
   - Uses custom-formatted prompts for each agent's role
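To make the parsing step concrete, here is a rough sketch of extracting agent definitions from an XML plan. The tag and attribute names (`agent`, `responsibility`, `file`) are hypothetical; the real format is defined by the Phase 2 prompts and handled by `core/utils/tools/agent_parser.py`.

```python
# Sketch: parsing agent definitions out of a Phase 2 XML plan.
# Tag and attribute names here are hypothetical examples.
import xml.etree.ElementTree as ET
from typing import Dict, List


def parse_agent_plan(xml_text: str) -> List[Dict[str, object]]:
    """Return a list of agent definitions: name, responsibilities, assigned files."""
    root = ET.fromstring(xml_text)
    agents = []
    for agent_el in root.findall(".//agent"):
        agents.append({
            "name": agent_el.get("name", "unnamed-agent"),
            "responsibilities": [r.text for r in agent_el.findall("responsibility")],
            "files": [f.text for f in agent_el.findall("file")],
        })
    return agents
```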
You can run the system with one or more AI providers (an example configuration follows this list):
- Anthropic-only: Set all phases to use Claude models
- OpenAI-only: Set all phases to use o1, o3-mini, or gpt-4.1
- DeepSeek-only: Set all phases to use DeepSeek Reasoner
- Gemini-only: Set all phases to use Google Gemini models
- Mix and match: Use different providers for different phases
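For instance, an Anthropic-only setup would simply point every phase at one of the predefined Claude configurations shown earlier; the per-phase choices below are just an example.

```python
# Example: Anthropic-only setup reusing the predefined configurations above.
MODEL_CONFIG = {
    "phase1": CLAUDE_BASIC,
    "phase2": CLAUDE_WITH_REASONING,
    "phase3": CLAUDE_WITH_REASONING,
    "phase4": CLAUDE_WITH_REASONING,
    "phase5": CLAUDE_BASIC,
    "final": CLAUDE_WITH_REASONING,
}
```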
For advanced users, you can modify the prompt templates in the `config/prompts/` directory to customize how agents analyze your code.
## Contributing

We welcome contributions! Here's how you can help:
- Fork the Repository: Create your own fork to work on
- Make Your Changes: Implement your feature or bug fix
- Run Tests: Ensure your changes don't break existing functionality
- Submit a Pull Request: Send us your contributions for review
See CONTRIBUTING.md for detailed guidelines.
MIT License - see LICENSE for details.
Built using Claude-3.7-Sonnet, o1, DeepSeek Reasoner, and Google Gemini