# Byte AI Assistant

An AI-powered coding assistant for VS Code that helps developers write, analyze, refactor, and optimize code.
## Features

- 🤖 AI Chat: Interact with AI models directly in VS Code
- 🔍 Code Analysis: Get detailed explanations and insights about your code
- 🛠️ Code Refactoring: Improve your code with AI-powered suggestions
- 📝 Documentation Generation: Create comprehensive documentation for your codebase
- 🧪 Test Generation: Automatically generate unit tests for your code
- ⚡ Performance Optimization: Receive suggestions to optimize and improve performance
- 🐛 Code Issue Detection: Find potential bugs, vulnerabilities, and code smells
- 🔧 Bug Finder: Automatically detect and fix terminal errors with AI
  - Monitors terminal output for error messages in real time
  - Provides AI-powered error analysis and solutions
  - Allows one-click application of suggested fixes
  - Includes a permanent status bar indicator showing monitoring status
## Supported AI Providers

Byte supports multiple AI providers to fit your preferences and needs:
- OpenAI (GPT-3.5-Turbo, GPT-4, GPT-4-Turbo)
- Google Gemini (Gemini 1.5 Flash, Gemini 1.5 Pro)
- Anthropic Claude (Claude 3 Opus, Claude 3 Sonnet, Claude 3 Haiku)
- Local models (via Ollama - supports Llama3, CodeLlama, Mistral, Mixtral, Neural-Chat, Phi)
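For a fully local setup, the relevant settings might look like this in `settings.json` (illustrative values; `http://localhost:11434` is Ollama's default endpoint, adjust if your instance runs elsewhere):

```jsonc
{
  // Hypothetical example values for a local Ollama setup
  "byte.provider": "local",
  "byte.local.endpoint": "http://localhost:11434",
  "byte.local.model": "llama3"
}
```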
## Installation

### From the VS Code Marketplace

- Open VS Code
- Go to Extensions (Ctrl+Shift+X or Cmd+Shift+X)
- Search for "Byte AI Assistant"
- Click Install

### From a .vsix File

Download the `.vsix` file from the releases page and install it with:

```bash
code --install-extension byte-0.1.1.vsix
```
### Building from Source

```bash
# Clone the repository
git clone https://github.com/tuncer-byte/byte.git

# Navigate to the directory
cd byte

# Install dependencies
npm install

# Build and package the extension
npm run vscode:prepublish

# Install the extension from the local .vsix file
code --install-extension byte-0.1.1.vsix
```
## Getting Started

- Install the extension from the VS Code Marketplace or using the manual method above
- Open the Byte panel by clicking the Byte icon in the activity bar
- Configure your AI provider using the `/configure` command in the chat panel
  - You'll need an API key for most providers (OpenAI, Anthropic, Google)
  - For local models, ensure Ollama is running with your preferred model
- Start using the features via:
  - The chat panel in the sidebar
  - The right-click context menu on selected code
  - The command palette (Ctrl+Shift+P or Cmd+Shift+P)
  - Keyboard shortcuts
## Usage

### AI Chat

Use the chat panel to have conversations with the AI. You can:
- Ask general coding questions
- Request code examples
- Discuss software architecture
- Use slash commands for specialized tasks
### Code Analysis

Select any code in your editor and use "Analyze Selected Code" to get:
- Line-by-line explanation of how the code works
- Potential issues and improvement suggestions
- Best practices recommendations
- Time and space complexity analysis
### Bug Finder

The Bug Finder feature automatically monitors your terminal for errors and provides AI-powered solutions.

1. **Start Error Monitoring**
   - Open the command palette (Ctrl+Shift+P or Cmd+Shift+P) and select "Byte: Start Terminal Error Monitoring"
   - A notification will appear, and a status bar indicator will show that monitoring is active
2. **Automatic Error Detection**
   - When an error occurs in the terminal, it's automatically detected
   - You'll receive a notification with the option to analyze the error with AI
   - Click "Analyze Error with AI" to get a solution
3. **Manual Error Analysis**
   - Use the command "Byte: Analyze Error Message" to analyze errors manually
   - Paste the error message when prompted
   - The AI will analyze the error and suggest solutions
4. **Solution Panel**
   - The AI solution is displayed in a panel showing:
     - The root cause of the error
     - A technical explanation
     - A step-by-step solution
     - Preventive measures for the future
   - You can apply suggested commands or code changes directly from the panel
5. **Stop Monitoring**
   - Use the command "Byte: Stop Terminal Error Monitoring"
   - Or click the status bar indicator to turn monitoring off
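Terminal error detection of the kind described above typically relies on pattern matching over terminal output. The sketch below is purely illustrative (the patterns and function names are hypothetical, not taken from the extension's source):

```typescript
// Illustrative sketch only: hypothetical patterns for recognizing
// error-looking lines in terminal output.
const ERROR_PATTERNS: RegExp[] = [
    /\b(?:Error|Exception|Traceback)\b/, // generic runtime errors
    /\b[A-Z][A-Za-z]*Error:/,            // "TypeError:", "ValueError:", etc.
    /^npm ERR!/,                         // npm failures
    /error TS\d+:/,                      // TypeScript compiler errors
];

// Returns true when a terminal line matches any known error pattern.
function looksLikeError(terminalLine: string): boolean {
    return ERROR_PATTERNS.some((pattern) => pattern.test(terminalLine));
}
```

A matched line would then be forwarded to the AI provider for analysis, which is where the "Analyze Error with AI" notification comes from.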
## Commands

### Main Commands

| Command | Description |
|---|---|
| `Byte: Open AI Chat` | Open the main chat panel |
| `Byte: Explain Selected Code` | Get an explanation of the selected code |
| `Byte: Refactor Selected Code` | Get suggestions to improve the selected code |
| `Byte: Generate Documentation` | Generate documentation for the selected code |
| `Byte: Optimize Code` | Get performance optimization suggestions |
| `Byte: Generate Unit Tests` | Create unit tests for the selected code |
| `Byte: Add Comments to Code` | Add detailed comments to the selected code |
| `Byte: Analyze Code Issues` | Find potential bugs and code smells |
| `Byte: Configure AI Service` | Set up your preferred AI provider and API key |
### Slash Commands

| Command | Description |
|---|---|
| `/explain` | Explain the selected code |
| `/refactor` | Get refactoring suggestions |
| `/docs` | Generate documentation |
| `/optimize` | Get performance optimization suggestions |
| `/comments` | Add detailed comments to code |
| `/issues` | Find potential bugs and code smells |
| `/tests` | Generate unit tests |
| `/help` | See a list of all available commands |
### Bug Finder Commands

| Command | Description |
|---|---|
| `Byte: Start Terminal Error Monitoring` | Begin monitoring the terminal for errors |
| `Byte: Stop Terminal Error Monitoring` | Stop the error monitoring process |
| `Byte: Analyze Error Message` | Manually analyze an error message |
## Keyboard Shortcuts

| Shortcut | Command | Description |
|---|---|---|
| `Ctrl+Alt+I` / `Cmd+Alt+I` | Analyze Selected Code | Analyze the currently selected code |
| `Ctrl+Alt+Q` / `Cmd+Alt+Q` | Ask Question About Selected Code | Ask a specific question about the selected code |
| `Ctrl+Alt+C` / `Cmd+Alt+C` | Open Code Analysis Chat | Open the inline code analysis chat |
## Configuration

Byte AI Assistant provides several configuration options:

| Setting | Description |
|---|---|
| `byte.provider` | AI service provider (`openai`, `gemini`, `local`, `anthropic`) |
| `byte.openai.apiKey` | OpenAI API key |
| `byte.openai.model` | OpenAI model (`gpt-3.5-turbo`, `gpt-4`, `gpt-4-turbo`) |
| `byte.gemini.apiKey` | Google Gemini API key |
| `byte.gemini.model` | Gemini model (`gemini-1.5-flash`, `gemini-1.5-pro`) |
| `byte.anthropic.apiKey` | Anthropic API key |
| `byte.anthropic.model` | Anthropic model (`claude-3-haiku`, `claude-3-sonnet`, `claude-3-opus`) |
| `byte.local.endpoint` | Ollama service endpoint URL |
| `byte.local.model` | Local model name (`llama3`, `codellama`, `mistral`, `mixtral`, `neural-chat`, `phi`) |
| `byte.saveHistory` | Save chat history between sessions |
| `byte.cache.enabled` | Enable API response caching to reduce token usage |
| `byte.autoSwitch` | Enable automatic model switching based on task complexity |
You can configure these settings through:

- The VS Code Settings UI
- The `/configure` command in the chat
- Editing `settings.json` directly
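For example, a `settings.json` fragment might look like this (illustrative values only; API keys are better entered via the `/configure` command so they end up in VS Code's secret storage):

```jsonc
{
  // Hypothetical example values
  "byte.provider": "openai",
  "byte.openai.model": "gpt-4-turbo",
  "byte.cache.enabled": true,
  "byte.saveHistory": true
}
```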
## Development

### Prerequisites

- Node.js (v16+)
- npm or yarn
- Visual Studio Code
### Project Structure

The codebase follows a modular architecture:

```
src/
├── commands/            # Command handling and management
│   ├── handlers/        # Command handler implementations
│   ├── utils/           # Command-related utilities
│   ├── index.ts         # CommandManager that registers all commands
│   └── types.ts         # Command-related type definitions
│
├── services/            # Core services
│   ├── ai/              # AI service and providers
│   │   ├── providers/   # Implementations for different AI providers
│   │   ├── types/       # Type definitions for AI-related features
│   │   └── utils/       # AI-related utility functions
│   │
│   └── bug-finder/      # Bug finder functionality
│       ├── utils/       # Error parsing and analysis utilities
│       └── types.ts     # Bug finder type definitions
│
├── views/               # UI components and panels
│   ├── chat/            # Main chat panel implementation
│   └── inline-chat/     # Inline code chat functionality
│
├── extension.ts         # Main extension entry point
└── utils.ts             # Shared utility functions
```
- extension.ts: The main entry point that activates the extension and initializes all services
- AI Service: Manages communication with different AI providers (OpenAI, Gemini, Claude, Local)
- Command Manager: Registers and handles all VS Code commands
- Chat Panel: Implements the main chat interface in the sidebar
- Inline Code Chat: Implements the focused code analysis chat interface
- Bug Finder Service: Monitors terminal output for errors and provides AI-powered solutions
### Development Commands

```bash
# Watch mode for development
npm run watch

# Build for production
npm run compile

# Lint the code
npm run lint

# Run tests
npm run test
```
### Debugging

To debug the extension:
- Open the project in VS Code
- Press F5 to launch a new instance of VS Code with the extension loaded
- You can set breakpoints in your code for debugging
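The F5 workflow assumes a `.vscode/launch.json` with an extension-host configuration. A typical one looks like the following (illustrative; the repository's actual file may differ):

```jsonc
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Run Extension",
      "type": "extensionHost",
      "request": "launch",
      "args": ["--extensionDevelopmentPath=${workspaceFolder}"],
      "outFiles": ["${workspaceFolder}/out/**/*.js"]
    }
  ]
}
```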
For features like the Bug Finder that use proposed VS Code APIs, you need to enable them in development:
```bash
code --enable-proposed-api byte.byte
```
## Privacy and Security

- Your code is processed according to the privacy policy of the AI provider you choose
- Code is sent to AI providers only when you explicitly request analysis
- No code is stored or logged by the extension itself
- API keys are stored securely in the VS Code secret storage
- You can use local models via Ollama for complete privacy
- The extension implements rate limiting and caching to reduce API usage and costs
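As a rough illustration of the caching idea (hypothetical code, not the extension's actual implementation), a time-bounded prompt-response cache avoids repeated API calls for identical requests:

```typescript
// Hypothetical sketch: caching AI responses keyed by prompt, with a TTL.
interface CacheEntry {
    response: string;
    expiresAt: number; // epoch milliseconds
}

class ResponseCache {
    private entries = new Map<string, CacheEntry>();

    constructor(private ttlMs: number) {}

    // Returns the cached response, or undefined on a miss or expiry.
    get(prompt: string): string | undefined {
        const entry = this.entries.get(prompt);
        if (!entry || Date.now() > entry.expiresAt) {
            this.entries.delete(prompt); // drop stale entries lazily
            return undefined;            // caller must hit the API
        }
        return entry.response;
    }

    set(prompt: string, response: string): void {
        this.entries.set(prompt, {
            response,
            expiresAt: Date.now() + this.ttlMs,
        });
    }
}
```

Each cache hit saves one provider round trip and its token cost, which is why caching and rate limiting pair well together.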
## License

This project is licensed under the MIT License. See the LICENSE file for details.
## Contributing

Contributions are welcome! Please feel free to submit a pull request, or open an issue for bug reports and feature requests.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Run the linter (`npm run lint`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project follows these coding conventions:
- Use TypeScript for all code
- Follow the ESLint configuration provided
- Use async/await for asynchronous operations
- Add appropriate JSDoc comments to public APIs
- Write unit tests for new functionality
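Following those conventions, a new public helper might look like this (a hypothetical function, shown only to illustrate the TypeScript and JSDoc style):

```typescript
/**
 * Truncates a prompt to a maximum character length, appending "..."
 * when content is cut off.
 *
 * @param prompt - The raw prompt text to truncate.
 * @param maxLength - Maximum number of characters to keep.
 * @returns The original prompt, or a truncated copy ending in "...".
 */
function truncatePrompt(prompt: string, maxLength: number): string {
    if (prompt.length <= maxLength) {
        return prompt;
    }
    // Reserve three characters for the ellipsis marker.
    return prompt.slice(0, Math.max(0, maxLength - 3)) + "...";
}
```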
Please ensure your contributions include tests. Run the existing test suite to make sure your changes don't break existing functionality:
```bash
npm run test
```
## Acknowledgments

- Thanks to the VS Code team for their excellent extension API
- Thanks to the AI provider teams at OpenAI, Google, and Anthropic
- Thanks to all contributors and users for helping improve this extension
## Contact

We welcome feedback and suggestions! There are several ways to reach out:
- Bug Reports and Feature Requests: Open an issue on GitHub
- Questions and Discussions: Start a discussion in the GitHub repository
- Direct Contact: Reach out to the maintainer via email at tuncerbostancibasi@gmail.com
See the CHANGELOG.md file for details about version updates and changes.