AIGNE Framework is a functional AI application development framework designed to simplify and accelerate building modern AI applications. It combines functional programming principles, powerful artificial intelligence capabilities, and modular design to help developers create scalable solutions with ease. AIGNE Framework is also deeply integrated with the Blocklet ecosystem, giving developers a rich set of tools and resources.
- Modular Design: With a clear modular structure, developers can easily organize code, improve development efficiency, and simplify maintenance.
- TypeScript Support: Comprehensive TypeScript type definitions are provided, ensuring type safety and enhancing the developer experience.
- Multiple AI Model Support: Built-in support for OpenAI, Gemini, Claude, Nova, and other mainstream AI models, easily extensible to additional models.
- Flexible Workflow Patterns: Support for sequential, concurrent, routing, handoff, and other workflow patterns to meet various complex application requirements.
- MCP Protocol Integration: Seamless integration with external systems and services through the Model Context Protocol.
- Code Execution Capabilities: Support for executing dynamically generated code in a secure sandbox, enabling more powerful automation capabilities.
- Blocklet Ecosystem Integration: Closely integrated with the Blocklet ecosystem, providing developers with a one-stop solution for development and deployment.
npm install @aigne/core
yarn add @aigne/core
pnpm add @aigne/core
import { AIAgent, AIGNE } from "@aigne/core";
import { OpenAIChatModel } from "@aigne/core/models/openai-chat-model.js";
const { OPENAI_API_KEY } = process.env;
const model = new OpenAIChatModel({
apiKey: OPENAI_API_KEY,
});
function transfer_to_b() {
return agentB;
}
const agentA = AIAgent.from({
name: "AgentA",
instructions: "You are a helpful agent.",
outputKey: "A",
skills: [transfer_to_b],
});
const agentB = AIAgent.from({
name: "AgentB",
instructions: "Only speak in Haikus.",
outputKey: "B",
});
const aigne = new AIGNE({ model });
const userAgent = aigne.invoke(agentA);
const result1 = await userAgent.invoke("transfer to agent b");
console.log(result1);
// Output:
// {
// B: "Transfer now complete, \nAgent B is here to help. \nWhat do you need, friend?",
// }
const result2 = await userAgent.invoke("It's a beautiful day");
console.log(result2);
// Output:
// {
// B: "Sunshine warms the earth, \nGentle breeze whispers softly, \nNature sings with joy. ",
// }
- examples - Example projects demonstrating how to use different agents to handle various tasks.
- packages/core - Core package providing the foundation for building AIGNE applications.
- packages/agent-library - AIGNE agent library, providing a variety of specialized agents for different tasks.
- packages/cli - Command-line interface for AIGNE Framework, providing tools for project management and deployment.
- Cookbook (Chinese): Practical recipes and patterns for AIGNE Framework API usage
- CLI Guide (Chinese): Comprehensive guide to the AIGNE CLI tool
- Agent Development Guide (Chinese): Guide to developing AIGNE agents using YAML/JS configuration files
- API References
AIGNE Framework supports various workflow patterns to address different AI application needs. Each workflow pattern is optimized for specific use cases:
Use Cases: Processing multi-step tasks that require a specific execution order, such as content generation pipelines, multi-stage data processing, etc.
flowchart LR
in(In)
out(Out)
conceptExtractor(Concept Extractor)
writer(Writer)
formatProof(Format Proof)
in --> conceptExtractor --> writer --> formatProof --> out
classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;
class in inputOutput
class out inputOutput
class conceptExtractor processing
class writer processing
class formatProof processing
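The pipeline above can be sketched in plain TypeScript (no AIGNE APIs; the stage names are illustrative placeholders mirroring the diagram): a sequential workflow is function composition, where each stage consumes the previous stage's output.

```typescript
type Step = (input: string) => string;

// Illustrative stages mirroring the diagram; in a real app each would be an AI agent.
const conceptExtractor: Step = (text) => `concepts(${text})`;
const writer: Step = (text) => `draft(${text})`;
const formatProof: Step = (text) => `formatted(${text})`;

// Run each step in order, feeding its output to the next.
function runSequential(steps: Step[], input: string): string {
  return steps.reduce((output, step) => step(output), input);
}

const result = runSequential([conceptExtractor, writer, formatProof], "product brief");
// result === "formatted(draft(concepts(product brief)))"
```

Because the output key of one agent becomes the input of the next, ordering is guaranteed by construction.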
Use Cases: Scenarios requiring simultaneous processing of multiple independent tasks to improve efficiency, such as parallel data analysis, multi-dimensional content evaluation, etc.
flowchart LR
in(In)
out(Out)
featureExtractor(Feature Extractor)
audienceAnalyzer(Audience Analyzer)
aggregator(Aggregator)
in --> featureExtractor --> aggregator
in --> audienceAnalyzer --> aggregator
aggregator --> out
classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;
class in inputOutput
class out inputOutput
class featureExtractor processing
class audienceAnalyzer processing
class aggregator processing
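The fan-out/fan-in shape above can be sketched in plain TypeScript (no AIGNE APIs; branch names mirror the diagram and are illustrative): independent branches run simultaneously via `Promise.all`, and an aggregator combines their results.

```typescript
type Analyzer = (input: string) => Promise<string>;

// Illustrative branches mirroring the diagram.
const featureExtractor: Analyzer = async (text) => `features(${text})`;
const audienceAnalyzer: Analyzer = async (text) => `audience(${text})`;

// Fan out to all branches at once, then aggregate the ordered results.
async function runConcurrent(branches: Analyzer[], input: string): Promise<string> {
  const results = await Promise.all(branches.map((branch) => branch(input)));
  return results.join(" + "); // stand-in for the aggregator stage
}
```

`Promise.all` preserves branch order in its result array, so the aggregator sees deterministic input even though the branches finish in any order.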
Use Cases: Scenarios where requests need to be routed to different specialized processors based on input content type, such as intelligent customer service systems, multi-functional assistants, etc.
flowchart LR
in(In)
out(Out)
triage(Triage)
productSupport(Product Support)
feedback(Feedback)
other(Other)
in ==> triage
triage ==> productSupport ==> out
triage -.-> feedback -.-> out
triage -.-> other -.-> out
classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;
class in inputOutput
class out inputOutput
class triage processing
class productSupport processing
class feedback processing
class other processing
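The routing shape above can be sketched in plain TypeScript (no AIGNE APIs; the keyword-based triage below is a hypothetical stand-in for what would normally be an AI classification step):

```typescript
type Handler = (input: string) => string;

// Illustrative handlers mirroring the diagram.
const handlers: Record<"product" | "feedback" | "other", Handler> = {
  product: (q) => `product support: ${q}`,
  feedback: (q) => `feedback recorded: ${q}`,
  other: (q) => `general reply: ${q}`,
};

// Hypothetical triage: a real system would use an AI agent to classify intent.
function triage(input: string): keyof typeof handlers {
  if (/refund|broken|install/i.test(input)) return "product";
  if (/suggest|love|hate/i.test(input)) return "feedback";
  return "other";
}

function route(input: string): string {
  return handlers[triage(input)](input);
}
```

Only the selected handler runs, which is what distinguishes routing from the concurrent pattern above.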
Use Cases: Scenarios requiring control transfer between different specialized agents to solve complex problems, such as expert collaboration systems, etc.
flowchart LR
in(In)
out(Out)
agentA(Agent A)
agentB(Agent B)
in --> agentA --transfer to b--> agentB --> out
classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;
class in inputOutput
class out inputOutput
class agentA processing
class agentB processing
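The quick-start example earlier shows handoff with real AIGNE agents; the control-transfer mechanic itself can be sketched in plain TypeScript (illustrative types only, not the AIGNE API): an agent either replies or hands the conversation to another agent.

```typescript
// Conceptual sketch: each agent answers directly or transfers control.
type Agent = {
  name: string;
  respond: (input: string) => { reply: string } | { transferTo: Agent };
};

const agentB: Agent = {
  name: "AgentB",
  respond: (input) => ({ reply: `B handles: ${input}` }),
};

const agentA: Agent = {
  name: "AgentA",
  respond: (input) =>
    input.includes("transfer") ? { transferTo: agentB } : { reply: `A handles: ${input}` },
};

// Follow handoffs until some agent produces a reply.
function invoke(agent: Agent, input: string): string {
  const result = agent.respond(input);
  return "reply" in result ? result.reply : invoke(result.transferTo, input);
}
```

This mirrors the quick start: once Agent A transfers, Agent B owns the conversation and produces the output.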
Use Cases: Scenarios requiring self-assessment and iterative improvement of output quality, such as code reviews, content quality control, etc.
flowchart LR
in(In)
out(Out)
coder(Coder)
reviewer(Reviewer)
in --Ideas--> coder ==Solution==> reviewer --Approved--> out
reviewer ==Rejected==> coder
classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;
class in inputOutput
class out inputOutput
class coder processing
class reviewer processing
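The coder/reviewer loop above can be sketched in plain TypeScript (no AIGNE APIs; the coder and reviewer below are trivial stand-ins for AI agents): the reviewer's feedback flows back to the coder until the output is approved or an iteration budget runs out.

```typescript
type Review = { approved: boolean; feedback: string };

// Illustrative coder: revises its solution when given reviewer feedback.
function coder(task: string, feedback?: string): string {
  return feedback ? `${task} [revised: ${feedback}]` : `${task} [first draft]`;
}

// Illustrative reviewer: approves only revised solutions.
function reviewer(solution: string): Review {
  return solution.includes("revised")
    ? { approved: true, feedback: "looks good" }
    : { approved: false, feedback: "add error handling" };
}

// Loop until approved or the round budget is exhausted.
function runReflection(task: string, maxRounds = 3): string {
  let solution = coder(task);
  for (let round = 0; round < maxRounds; round++) {
    const review = reviewer(solution);
    if (review.approved) return solution;
    solution = coder(task, review.feedback);
  }
  return solution;
}
```

The bounded loop matters in practice: without a round limit, a reviewer that never approves would iterate forever.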
Use Cases: Scenarios requiring dynamically generated code execution to solve problems, such as automated data analysis, algorithmic problem solving, etc.
flowchart LR
in(In)
out(Out)
coder(Coder)
sandbox(Sandbox)
coder -.-> sandbox
sandbox -.-> coder
in ==> coder ==> out
classDef inputOutput fill:#f9f0ed,stroke:#debbae,stroke-width:2px,color:#b35b39,font-weight:bolder;
classDef processing fill:#F0F4EB,stroke:#C2D7A7,stroke-width:2px,color:#6B8F3C,font-weight:bolder;
class in inputOutput
class out inputOutput
class coder processing
class sandbox processing
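The coder/sandbox interaction can be sketched in plain TypeScript. Note the simplification: a real sandbox (as in AIGNE's code-execution workflow) isolates arbitrary generated code, whereas the whitelist of operations below merely stands in for that isolation boundary.

```typescript
// The coder emits a small structured "program"; the sandbox executes it
// against a whitelist of operations rather than running arbitrary code.
type Program = { op: "sum" | "max"; args: number[] };

// Hypothetical coder: pretend a model translated the task into a program.
function coder(task: string): Program {
  return /largest|max/i.test(task)
    ? { op: "max", args: [3, 9, 4] }
    : { op: "sum", args: [1, 2, 3] };
}

// The "sandbox": only whitelisted operations can execute.
function sandbox(program: Program): number {
  switch (program.op) {
    case "sum":
      return program.args.reduce((a, b) => a + b, 0);
    case "max":
      return Math.max(...program.args);
  }
}
```

The dotted arrows in the diagram correspond to this round trip: the coder proposes a program, the sandbox returns its result, and the coder folds that result into the final answer.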
- Puppeteer MCP Server - Learn how to leverage Puppeteer for automated web scraping through the AIGNE Framework.
- SQLite MCP Server - Explore database operations by connecting to SQLite through the Model Context Protocol.
- GitHub - Interact with GitHub repositories using the GitHub MCP Server.
- Workflow Router - Implement intelligent routing logic to direct requests to appropriate handlers based on content.
- Workflow Sequential - Build step-by-step processing pipelines with guaranteed execution order.
- Workflow Concurrency - Optimize performance by processing multiple tasks simultaneously with parallel execution.
- Workflow Handoff - Create seamless transitions between specialized agents to solve complex problems.
- Workflow Reflection - Enable self-improvement through output evaluation and refinement capabilities.
- Workflow Orchestration - Coordinate multiple agents working together in sophisticated processing pipelines.
- Workflow Code Execution - Safely execute dynamically generated code within AI-driven workflows.
- Workflow Group Chat - Share messages and interact with multiple agents in a group chat environment.
AIGNE Framework is an open source project and welcomes community contributions. We use release-please for version management and release automation.
- Contributing Guidelines: See CONTRIBUTING.md
- Release Process: See RELEASING.md
This project is licensed under the Elastic License 2.0 (Elastic-2.0) - see the LICENSE file for details.
AIGNE Framework has a vibrant developer community offering various support channels:
- Documentation Center: Comprehensive official documentation to help developers get started quickly.
- Technical Forum: Exchange experiences with global developers and solve technical problems.