dydact is a multi-product offering based around a proprietary AI system. This repository serves as the code backbone for the dydact platform.
The web interface provides a modern, user-friendly experience for interacting with dydact's AI capabilities.
For more information, please refer to app/web/README.md.
Demo video: `seo_website.mp4`
We provide two installation methods. Method 2 (using uv) is recommended for faster installation and better dependency management.
Method 1 (conda):

- Create a new conda environment:

```shell
conda create -n dydact python=3.12
conda activate dydact
```

- Clone the repository:

```shell
git clone https://github.com/dydact/dydact.git
cd dydact
```

- Install dependencies:

```shell
pip install -r requirements.txt
```
Method 2 (uv, recommended):

- Install uv (a fast Python package installer and resolver):

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

- Clone the repository:

```shell
git clone https://github.com/dydact/dydact.git
cd dydact
```

- Create a new virtual environment and activate it:

```shell
uv venv
source .venv/bin/activate  # On Unix/macOS
# Or on Windows:
# .venv\Scripts\activate
```

- Install dependencies:

```shell
uv pip install -r requirements.txt
```
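Before moving on, you may want to confirm the active interpreter matches the Python version the instructions above target. A tiny illustrative check (not part of dydact itself):

```python
# Quick sanity check that the active interpreter is at least the
# Python version the conda/uv instructions above target (3.12).
# Purely illustrative; dydact does not ship this helper.
import sys

def python_ok(minimum=(3, 12)):
    """Return True if the running interpreter is at least `minimum`."""
    return sys.version_info[:2] >= minimum

print("Python", ".".join(map(str, sys.version_info[:2])),
      "OK" if python_ok() else "too old")
```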
dydact requires configuration for the LLM APIs it uses. Follow these steps to set up your configuration:

- Create a `config.toml` file in the `config` directory (you can copy from the example):

```shell
cp config/config.example.toml config/config.toml
```

- Edit `config/config.toml` to add your API keys and customize settings:
```toml
# Global LLM configuration
[llm]
model = "gpt-4o"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."  # Replace with your actual API key
max_tokens = 4096
temperature = 0.0
provider_type = "google"  # Options: google, ollama, lm_studio
use_fallback = true  # Whether to try local providers first and fall back to Google
enable_local_providers = true  # Whether to enable local providers

# Local provider settings (used when enable_local_providers = true)
ollama_model = "llama3"  # Model to use with Ollama
ollama_url = "http://localhost:11434"  # Ollama API URL
lm_studio_model = "default"  # Model to use with LM Studio
lm_studio_url = "http://localhost:1234/v1"  # LM Studio API URL

# Optional configuration for specific LLM models
[llm.vision]
model = "gpt-4o"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."  # Replace with your actual API key
```
dydact now supports local AI models through Ollama and LM Studio. This allows you to run AI models locally on your machine without relying on external APIs.
To use Ollama:

- Install Ollama from ollama.ai
- Pull a model (e.g., `ollama pull llama3`)
- Configure dydact to use Ollama by setting `provider_type = "ollama"` in your `config.toml`
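Before switching `provider_type` to `"ollama"`, you can sanity-check that the local server is reachable. A minimal sketch using only the standard library (`GET /api/tags` is Ollama's endpoint for listing locally pulled models; the helper name is illustrative):

```python
# Quick reachability check for a local Ollama server.
# GET /api/tags lists the models you have pulled locally.
import json
import urllib.request
import urllib.error

def ollama_available(base_url="http://localhost:11434", timeout=2.0):
    """Return True if an Ollama server responds at base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            data = json.load(resp)
            return "models" in data
    except (urllib.error.URLError, OSError, ValueError):
        return False  # server not running, unreachable, or bad response
```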
To use LM Studio:

- Install LM Studio from lmstudio.ai
- Load a model in LM Studio and start the local server
- Configure dydact to use LM Studio by setting `provider_type = "lm_studio"` in your `config.toml`
If you set `use_fallback = true`, dydact will try to use local providers first and fall back to the Google API if they fail. This provides a seamless experience even if your local models are not available.
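The fallback behavior can be sketched as a simple try-local-first loop. This is an illustration of the idea, not dydact's actual dispatch code; the provider names and call signature are assumptions:

```python
# Illustrative sketch of use_fallback: try each enabled local provider
# in order, and fall back to the remote (Google) provider if all fail.
def complete_with_fallback(prompt, local_providers, remote_provider,
                           use_fallback=True):
    """local_providers is a list of (name, callable) pairs."""
    if use_fallback:
        for name, provider in local_providers:
            try:
                return name, provider(prompt)
            except Exception:
                continue  # local provider unavailable; try the next one
    return "google", remote_provider(prompt)
```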
dydact includes several advanced AI capabilities that enhance its reasoning and problem-solving abilities:
The system implements a sophisticated reasoning framework based on research in:
- Chain-of-Thought (CoT) - Sequential reasoning that breaks problems into logical steps
- Tree of Thoughts (ToT) - Explores multiple reasoning paths and evaluates different approaches
- ReAct - Combines reasoning with actions to solve problems that require tool use
- Iterative Refinement - Progressively improves solutions through multiple iterations
- Hypothesis Testing - Generates and tests multiple hypotheses to find the best solution
These approaches are automatically selected based on the query type and complexity, providing more accurate and transparent reasoning for complex problems.
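As a rough illustration of how such automatic selection might work, here is a hypothetical heuristic; this is not dydact's actual dispatch logic, and the keyword triggers are invented for the example:

```python
# Hypothetical heuristic for picking a reasoning strategy from a query.
# dydact's real selection logic may differ; this only illustrates the
# idea of routing by query type and complexity.
def select_strategy(query, needs_tools=False):
    if needs_tools:
        return "ReAct"  # interleave reasoning with tool calls
    if any(k in query.lower() for k in ("compare", "alternatives", "options")):
        return "Tree of Thoughts"  # explore multiple candidate paths
    if len(query.split()) > 50:
        return "Iterative Refinement"  # long, open-ended task
    return "Chain-of-Thought"  # default step-by-step reasoning
```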
One line to run dydact:

```shell
python main.py
```

Then input your request via the terminal!
You can also use dydact through a user-friendly web interface:
```shell
uvicorn app.web.app:app --reload
```

or

```shell
python web_run.py
```

Then open your browser and navigate to http://localhost:8000 to access the web interface. The web UI allows you to:
- Interact with dydact using a chat-like interface
- Monitor AI thinking process in real-time
- View and access workspace files
- See execution progress visually
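If the page does not load immediately after starting the server, you can poll until it is up. A small standard-library helper, assuming the default URL above (the function itself is illustrative, not part of dydact):

```python
# Poll the web UI until it responds, or give up after `deadline` seconds.
import time
import urllib.request
import urllib.error

def wait_for_server(url="http://localhost:8000", deadline=30.0, interval=1.0):
    """Return True once the server answers, False on timeout."""
    stop = time.monotonic() + deadline
    while time.monotonic() < stop:
        try:
            with urllib.request.urlopen(url, timeout=2.0):
                return True
        except urllib.error.HTTPError:
            return True  # server answered, even if with an error page
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # not up yet; retry shortly
    return False
```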
To try the unstable version, you can also run:

```shell
python run_flow.py
```
We welcome any friendly suggestions and helpful contributions! Just create issues or submit pull requests.
Or contact @mannaandpoem via 📧 email: mannaandpoem@gmail.com
After comprehensively gathering feedback from community members, we have decided to adopt a 3-4 day iteration cycle to gradually implement the highly anticipated features.
- Enhance Planning capabilities, optimize task breakdown and execution logic
- Introduce standardized evaluation metrics (based on GAIA and TAU-Bench) for continuous performance assessment and optimization
- Expand model adaptation and optimize low-cost application scenarios
- Implement containerized deployment to simplify installation and usage workflows
- Enrich example libraries with more practical cases, including analysis of both successful and failed examples
- Frontend/backend development to improve user experience
Join our Discord group!
Join our networking group on Feishu and share your experience with other developers!
Thanks to anthropic-computer-use and browser-use for providing basic support for this project!
OpenManus is built by contributors from MetaGPT. Huge thanks to this agent community!