The X2A API is a FastAPI application that exposes multiple specialized agents for Chef cookbook analysis, code generation, validation, and context retrieval, powered by LlamaStack.
- FastAPI Application: Multi-agent REST API platform
- LlamaStack Integration: AI analysis engine using meta-llama/Llama-3.1-8B-Instruct
- MCP Tools: Model Context Protocol integration for ansible-lint validation
- Gunicorn/Uvicorn: Gunicorn process manager with Uvicorn ASGI workers for production deployment
The service is packaged on the UBI 9 Python 3.11 base image and published as:
ghcr.io/x2ansible/x2a-api:latest
Analyzes Chef cookbooks for migration planning.
Features:
- Version requirement analysis (Chef/Ruby versions)
- Dependency mapping and wrapper detection
- Migration effort estimation
- Consolidation recommendations
Retrieves infrastructure context using RAG (Retrieval-Augmented Generation).
Features:
- Knowledge search with vector database (ChromaDB)
- Best practices retrieval
- Pattern matching for infrastructure components
Generates Ansible playbooks from input code.
Features:
- Code conversion (Chef/Puppet → Ansible)
- Context-aware playbook generation
- Clean YAML output without markdown code fences
Validates Ansible playbooks using MCP ansible-lint integration.
Features:
- Multiple validation profiles (basic, moderate, safety, shared, production)
- Real-time streaming validation
- Timeout protection (prevents worker crashes)
- Size limits (50KB max for standard validation)
- Exit code and issue detection
# Analyze cookbook files
POST /api/chef/analyze
{
"cookbook_name": "apache-cookbook",
"files": {
"metadata.rb": "name 'apache'\nversion '1.0.0'",
"recipes/default.rb": "package 'httpd'"
}
}
# Streaming analysis
POST /api/chef/analyze/stream
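The same request can be issued from Python; this is a minimal sketch using the requests library (any HTTP client works), assuming the API is running locally on port 8000:
import requests

payload = {
    "cookbook_name": "apache-cookbook",
    "files": {
        "metadata.rb": "name 'apache'\nversion '1.0.0'",
        "recipes/default.rb": "package 'httpd'",
    },
}

# Submit the cookbook files and print the analysis returned by the agent
resp = requests.post("http://localhost:8000/api/chef/analyze", json=payload, timeout=180)
resp.raise_for_status()
print(resp.json())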
# Search knowledge base
POST /api/context/query
{
"code": "nginx configuration",
"top_k": 5
}
# Streaming search
POST /api/context/query/stream
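A hedged Python equivalent of the query above (again assuming a local deployment on port 8000):
import requests

# Ask the RAG-backed context agent for relevant patterns and best practices
resp = requests.post(
    "http://localhost:8000/api/context/query",
    json={"code": "nginx configuration", "top_k": 5},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())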
# Generate Ansible playbook
POST /api/generate/playbook
{
"input_code": "package 'httpd' do\n action :install\nend",
"context": "Convert Chef resource to Ansible"
}
# Streaming generation
POST /api/generate/playbook/stream
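A minimal Python sketch for the generation endpoint, assuming a local instance on port 8000; the exact shape of the response body is not specified here, so the example just prints whatever JSON comes back:
import requests

# Convert a single Chef resource into an Ansible playbook
resp = requests.post(
    "http://localhost:8000/api/generate/playbook",
    json={
        "input_code": "package 'httpd' do\n  action :install\nend",
        "context": "Convert Chef resource to Ansible",
    },
    timeout=180,
)
resp.raise_for_status()
print(resp.json())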
# Validate playbook with profile
POST /api/validate/playbook
{
"playbook_content": "---\n- name: Test\n hosts: all\n tasks: []",
"profile": "basic"
}
# Streaming validation
POST /api/validate/playbook/stream
# Available profiles
GET /api/validate/profiles
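In Python, listing the profiles and running a basic validation might look like the following sketch (local deployment assumed; the client timeout is set a bit above the 2-minute server-side limit described below):
import requests

BASE = "http://localhost:8000"

# Discover which validation profiles the service exposes
print(requests.get(f"{BASE}/api/validate/profiles", timeout=30).json())

# Validate a small playbook against the "basic" profile
resp = requests.post(
    f"{BASE}/api/validate/playbook",
    json={
        "playbook_content": "---\n- name: Test\n  hosts: all\n  tasks: []",
        "profile": "basic",
    },
    timeout=150,
)
print(resp.json())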
- basic: Syntax and structure validation
- moderate: Standard best practices checking
- safety: Security-focused validation rules
- shared: Rules for shared/reusable playbooks
- production: Strict production-ready validation
- Playbook validation: 2 minutes
- Syntax check: 1 minute
- Production validation: 3 minutes
- Multiple files: 5 minutes
- Streaming: 2.5 minutes
- Standard validation: 50KB
- Syntax check: 25KB
- Production validation: 30KB
- Multiple files total: 100KB
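Clients can fail fast by checking payload size locally before submitting and by setting their own timeout slightly above the server-side limits listed above. A small illustrative sketch (the 50KB and 2-minute figures mirror the limits above; the helper itself is hypothetical):
MAX_STANDARD_BYTES = 50 * 1024      # standard validation size limit
CLIENT_TIMEOUT_SECONDS = 130        # a little above the 2-minute server limit

def check_playbook_size(playbook_content: str) -> None:
    # Reject oversized payloads locally instead of waiting for a 413 response
    size = len(playbook_content.encode("utf-8"))
    if size > MAX_STANDARD_BYTES:
        raise ValueError(f"Playbook is {size} bytes; standard validation accepts at most {MAX_STANDARD_BYTES} bytes")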
CONFIG_FILE=config.yaml
UPLOAD_DIR=/tmp/uploads
LLAMASTACK_URL=http://lss-chai.apps.cluster-7nc6z.7nc6z.sandbox2170.opentlc.com
llamastack:
  base_url: "http://lss-chai.apps.cluster-7nc6z.7nc6z.sandbox2170.opentlc.com"
  default_model: "meta-llama/Llama-3.1-8B-Instruct"

agents:
  - name: "chef_analysis_chaining"
    model: "meta-llama/Llama-3.1-8B-Instruct"
    instructions: "You are an expert Chef cookbook analyst..."

  - name: "context"
    model: "meta-llama/Llama-3.1-8B-Instruct"
    tools:
      - name: "builtin::rag"
        args:
          vector_db_ids: ["iac"]

  - name: "generate"
    model: "meta-llama/Llama-3.1-8B-Instruct"
    tools:
      - name: "builtin::rag"
        args:
          vector_db_ids: ["iac"]

  - name: "validate"
    model: "meta-llama/Llama-3.1-8B-Instruct"
    toolgroups: ["mcp::ansible_lint"]
    tool_config:
      tool_choice: "auto"
    max_infer_iters: 5
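The service reads this file through the CONFIG_FILE environment variable. As a rough illustration (not the project's actual loader), the agent definitions could be read with PyYAML like this:
import os
import yaml

config_path = os.getenv("CONFIG_FILE", "config.yaml")
with open(config_path) as f:
    config = yaml.safe_load(f)

print(config["llamastack"]["base_url"])
# Each agent entry carries its own model, instructions, and tool wiring
for agent in config.get("agents", []):
    print(agent["name"], agent.get("toolgroups") or agent.get("tools", []))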
FROM registry.access.redhat.com/ubi9/python-311:latest
WORKDIR /app
COPY requirements.txt .
RUN pip install --upgrade pip && pip install --no-cache-dir -r requirements.txt
COPY . .
RUN mkdir -p /tmp/uploads
ENV UPLOAD_DIR=/tmp/uploads
EXPOSE 8000
CMD ["gunicorn", "--timeout", "360", "-w", "1", "-k", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:8000", "main:app"]
# Install dependencies
pip install -r requirements.txt
# Start the server (Gunicorn with Uvicorn workers)
gunicorn --timeout 360 --workers 1 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000 main:app
# Or with uvicorn for development
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
# Overall application status
GET /
# Individual agent status
GET /api/chef/health
GET /api/context/health
GET /api/generate/health
GET /api/validate/health
# Agent information
GET /api/agents/status
# Check MCP tool availability
GET /api/validate/debug/tools
# Test tool functionality
POST /api/validate/debug/test-tool
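A simple Python smoke test can walk the health and status endpoints above and report which agents are up (local deployment assumed):
import requests

BASE = "http://localhost:8000"
endpoints = [
    "/",                      # overall application status
    "/api/chef/health",
    "/api/context/health",
    "/api/generate/health",
    "/api/validate/health",
    "/api/agents/status",
]

# Print the HTTP status for each health/status endpoint
for path in endpoints:
    resp = requests.get(f"{BASE}{path}", timeout=10)
    print(f"{path}: {resp.status_code}")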
curl -X POST "http://localhost:8000/api/chef/analyze" \
-H "Content-Type: application/json" \
-d '{
"cookbook_name": "apache",
"files": {
"metadata.rb": "name \"apache\"\nversion \"1.0.0\"\nchef_version \">= 15.0\"",
"recipes/default.rb": "package \"httpd\" do\n action :install\nend"
}
}'
# Basic validation
curl -X POST "http://localhost:8000/api/validate/playbook" \
-H "Content-Type: application/json" \
-d '{
"playbook_content": "---\n- name: Test\n hosts: all\n tasks:\n - debug: msg=\"Hello\"\n",
"profile": "basic"
}'
# Production validation (stricter)
curl -X POST "http://localhost:8000/api/validate/playbook" \
-H "Content-Type: application/json" \
-d '{
"playbook_content": "---\n- name: Production playbook\n hosts: all\n become: true\n tasks:\n - name: Install nginx\n package:\n name: nginx\n state: present\n",
"profile": "production"
}'
curl -N -X POST "http://localhost:8000/api/validate/playbook/stream" \
-H "Content-Type: application/json" \
-d '{
"playbook_content": "---\n- name: Test\n hosts: all\n tasks: []",
"profile": "safety"
}'
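From Python, the streaming endpoints can be consumed with a streaming HTTP request; this sketch assumes standard Server-Sent Events framing ("data: " prefixed lines) carrying JSON payloads, matching the SSE behavior noted below:
import json
import requests

payload = {
    "playbook_content": "---\n- name: Test\n  hosts: all\n  tasks: []",
    "profile": "safety",
}

with requests.post(
    "http://localhost:8000/api/validate/playbook/stream",
    json=payload,
    stream=True,          # keep the connection open and read events as they arrive
    timeout=160,
) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data: "):
            event = json.loads(line[len("data: "):])
            print(event)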
Interactive API documentation (Swagger UI) is available at:
http://localhost:8000/docs
- Prevents duplicate agent creation
- Reuses existing agents across application restarts
- Session management per validation request
- Uses Model Context Protocol for ansible-lint
- Structured JSON responses from linting tools
- Profile-based validation rules
- AsyncIO timeout wrappers prevent worker crashes
- Graceful error responses for timeouts
- Different timeout limits per endpoint type
- Server-Sent Events (SSE) for real-time updates
- Progress tracking for long-running operations
- Error handling within streams
- 200: Success
- 400: Bad Request (invalid profile, malformed input)
- 408: Request Timeout (validation took too long)
- 413: Payload Too Large (exceeds size limits)
- 500: Internal Server Error
- 503: Service Unavailable (agent not ready)
{
"success": false,
"error": "Validation timeout: Validation timed out after 120 seconds",
"timeout": true,
"elapsed_time": 120.5
}
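Client code can treat this case separately from other failures by checking the 408 status code and the timeout flag shown above; a hedged sketch:
import requests

playbook = "---\n- name: Production playbook\n  hosts: all\n  tasks: []"

resp = requests.post(
    "http://localhost:8000/api/validate/playbook",
    json={"playbook_content": playbook, "profile": "production"},
    timeout=200,
)

body = resp.json() if resp.headers.get("content-type", "").startswith("application/json") else {}
if resp.status_code == 408 or body.get("timeout"):
    # Server-side validation hit its time limit; retry with a smaller playbook
    # or a less strict profile rather than resubmitting the same request.
    print("Validation timed out:", body.get("error"))
else:
    resp.raise_for_status()
    print(body)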
- Increase the Gunicorn timeout: --timeout 360
- Use a single worker: -w 1
- Check playbook size limits
- Verify ansible-lint toolgroup availability
- Check LlamaStack connectivity
- Review agent configuration
- Monitor resource usage
- Consider size limits
- Use appropriate validation profiles
- Install dependencies: pip install -r requirements.txt
- Set environment variables
- Start the development server
- Test endpoints using the Swagger UI at /docs
Use the test endpoint for quick validation testing:
POST /api/validate/test