Cloi is a local, context-aware agent designed to streamline your debugging process. Operating entirely on your machine, it ensures that your code and data remain private and secure. With your permission, Cloi can analyze errors and apply fixes directly to your codebase.
Disclaimer: Cloi is an experimental project under beta development. It may contain bugs, so we recommend reviewing all changes before accepting agentic suggestions. That said, help us improve Cloi by filing issues or submitting PRs; see below for more info.
Install globally:
```bash
npm install -g @cloi-ai/cloi
```
No API key needed; it runs completely locally.
Navigate to your project directory and call Cloi when you run into an error.
```bash
cloi
```
- `/debug` - Auto-patch errors iteratively using the LLM
- `/model` - Pick an Ollama model
- `/history` - Pick from recent shell commands
- `/logging` - Enable/disable terminal output logging (zsh only)
- `/help` - Show this help
- `/exit` - Quit
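For example, a session might look like this (the failing command and its error are illustrative placeholders, not real Cloi output):

```bash
$ python app.py        # your command fails...
TypeError: unsupported operand type(s) for +: 'int' and 'str'
$ cloi                 # ...so launch Cloi in the same directory
/debug                 # and let it analyze the error and propose a patch
```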
Cloi provides automatic terminal output logging to help analyze runtime errors without re-running potentially harmful commands.
Features:
- Automatically logs ALL terminal commands and their complete output
- You don't need to type any special prefix - everything is captured automatically
- Logs are stored in `~/.cloi/terminal_output.log`
- Captures detailed error messages and stack traces
- Keeps the log file under 1MB with automatic rotation
- Only available for zsh users
- Requires explicit permission
- Can be enabled/disabled at any time
Usage:
- You'll be prompted to enable this feature on first run
- Use `/logging` in interactive mode to enable/disable
- Or run `cloi --setup-logging` to configure it directly (see the example below)
- Restart your terminal after enabling logging
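In practice, the one-time setup might look like this (assuming you accept the permission prompt):

```bash
cloi --setup-logging   # grant permission and install the zsh hooks
exec zsh               # restart your shell so the hooks take effect
```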
How it works:
- Adds ZSH hooks to your `.zshrc` file to automatically capture all commands (a simplified sketch appears after this list)
- Uses preexec/precmd hooks to log commands before and after execution
- Uses `tee` to show output in the terminal while logging it
- Captures both stdout and stderr for complete error information
- Adds timestamps and command information for context
- Manages log file size by rotating logs when they exceed 1MB
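To make the mechanism concrete, here is a minimal sketch of what such hooks could look like. It is illustrative only: the hooks Cloi actually writes to `.zshrc` may differ, and the `tee`-based output capture is omitted for brevity.

```zsh
# Illustrative sketch only -- not the exact hooks Cloi installs.
CLOI_LOG="$HOME/.cloi/terminal_output.log"

cloi_preexec() {
  # Runs before each command: record a timestamp and the command line
  print -r -- "[$(date '+%Y-%m-%d %H:%M:%S')] \$ $1" >> "$CLOI_LOG"
}

cloi_precmd() {
  # Runs after each command: rotate the log once it grows past ~1MB
  if [[ -f "$CLOI_LOG" && $(stat -f%z "$CLOI_LOG") -gt 1048576 ]]; then
    mv "$CLOI_LOG" "${CLOI_LOG}.1"
  fi
}

autoload -Uz add-zsh-hook
add-zsh-hook preexec cloi_preexec
add-zsh-hook precmd cloi_precmd
```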
Cloi is built for developers who live in the terminal and value privacy:
- 100% Local – Your code never leaves your machine. No API key needed.
- Automates Fixes (Beta) – Analyze errors and apply patches with a single command.
- Safe Changes – Review all diffs before applying. Full control to accept or reject.
- Customizable – Ships with Phi-4. Swap between your locally installed Ollama models.
- Free to Use – Extensible architecture. Fork, contribute, and customize to your needs.
| | Requirements |
| --- | --- |
| 🖥️ Hardware | • Memory: 8GB RAM minimum (16GB+ recommended)<br>• Storage: 10GB+ free space (Phi-4 model: ~9.1GB)<br>• Processor: Tested on M2 and M3 |
| 💻 Software | • OS: macOS (Big Sur 11.0+)<br>• Runtime: Node.js 14+ and Python 3.6+<br>• Shell: Zsh, Fish, Bash (limited testing)<br>• Dependencies: Ollama (automatically installed if needed) |
We welcome contributions from the community! By contributing to this project, you agree to the following guidelines:
- Scope: Contributions should align with the project's goals of providing a secure, local AI debugging assistant
- License: All contributions must be licensed under the GNU General Public License v3.0 (GPL-3.0)
- Copyleft: Any derivative works must also be distributed under the GPL-3.0 license
For more detailed information on contributing, please refer to the CONTRIBUTING.md file.
- Feature: Added automatic terminal output logging (zsh only)
  - Logs terminal output to `~/.terminal_output.log` for better error analysis
  - Requires explicit user permission before modifying `.zshrc`
  - Configure with the `/logging` command or `cloi --setup-logging`
- Feature: Included new `phi4-reasoning:plus`, `qwen3`, and ALL your locally installed Ollama models in the boxen `/model` menu for easy access.
- Bug Fix: Resolved a dependency issue in package.json
  - Updated the Ollama dependency from the deprecated 0.1.1 to 0.5.15, restoring the `ollama.chat` API endpoints
  - Thank you @kingkongfft and @undecomposed for alerting us by submitting this issue.
- Feature: Integrated structured outputs from Ollama's latest API
  - Creates more robust patches with JSON-based formatting (see the sketch below)
  - Falls back to traditional LLM calls if the structured API isn't available
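At the API level, a structured request looks roughly like the following hedged sketch (the schema fields are invented for illustration, and this is not Cloi's internal code; `format` accepting a JSON schema requires Ollama 0.5+):

```bash
# Hypothetical request: ask Ollama's /api/chat for JSON matching a schema.
curl http://localhost:11434/api/chat -d '{
  "model": "phi4",
  "messages": [{ "role": "user", "content": "Suggest a patch for this traceback: ..." }],
  "stream": false,
  "format": {
    "type": "object",
    "properties": {
      "file":  { "type": "string" },
      "patch": { "type": "string" }
    },
    "required": ["file", "patch"]
  }
}'
```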
- Feature: Implemented CLI model selection via the `--model` flag
  - Specify your preferred Ollama model right from the command line (example below)
  - Credit to @canadaduane for the insightful suggestion!
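For example, to start a session with one of the models mentioned above:

```bash
cloi --model qwen3
```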
- UI Enhancement: The `/model` command now displays ALL your locally installed Ollama models
- Refactor: Internal architecture adjustments to maintain conceptual integrity
  - Migrated `history.js` to the utils directory, where it semantically belongs
  - Repositioned `traceback.js` to core, since it's foundational to the debugging pipeline
- Improvements: Purged lingering references to our project's original name "FigAI" and cleaned the CLI `--help` interface