A boilerplate project designed to enhance LLM usability directly within your CLI.
Stop context-switching and bring the power of LLMs to your terminal workflows. cli-llm
provides a structured starting point to build personalized LLM tools tailored to your needs.
Working in the terminal often involves repetitive text manipulation or generation tasks: asking quick questions, refining technical writing, formatting commit messages, and more. cli-llm
aims to streamline these tasks by integrating LLM capabilities directly into your CLI environment.
Instead of copying text to a separate web UI or application, you can use pre-defined modes within cli-llm
to instantly apply LLM processing to your input, keeping you focused and efficient.
- Supercharge CLI Workflows: Integrate LLM assistance without leaving your terminal.
- Rich CLI Experience: Uses the `rich` library for a modern, visually appealing interface with formatted panels and colors.
- Customizable Presets: Easily define and switch between frequently used LLM interaction modes. The included example demonstrates modes for:
  - Asking (`ask`)
  - Formal Writing (`writing`)
  - Commit Messages (`commit`)
- Mode Persistence: Stay in a specific interaction mode for multiple messages until you explicitly switch.
- DSPy Powered: Leverages the DSPy framework for robust LLM interaction, allowing structured programming over simple prompting.
- Modular & Extensible: The codebase is broken into logical modules, making it easy to add new features or change existing ones.
- DSPy: The core framework for programming foundation models.
- Rich: For beautiful and readable formatting in the terminal.
- LLM Backend: Configurable to use various LLM providers (e.g., Gemini, OpenAI) supported by DSPy.
The project is organized into several modules to separate concerns and make it easier to extend:
/
├── main.py # Main application entry point, handles user interaction and mode switching.
├── config.py # Central configuration for the application (e.g., LM settings).
├── features.py # Defines the available features (modes) and their configurations.
├── signatures.py # Contains all DSPy signatures for different tasks.
├── handlers.py # Functions to process and format the responses from the LLM.
└── display.py # Manages the CLI display using the `rich` library for better UI.
Follow these steps to get a local copy up and running.
- uv (Python package installer)
- Clone the repository:

  ```sh
  git clone https://github.com/jgkym/cli-llm
  cd cli-llm
  ```

- Install dependencies:

  ```sh
  uv sync --upgrade
  ```

- Set up Environment Variables:

  - Create an environment file:

    ```sh
    touch .env
    ```

  - Edit the `.env` file and add your API key. For Gemini:

    ```sh
    # .env
    GEMINI_API_KEY=YOUR_API_KEY_HERE
    ```

- Run the main script:

  ```sh
  make run  # or: uv run python3 -m main
  ```
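The `GEMINI_API_KEY` set above has to reach the Python process somehow. As a purely illustrative sketch (the project may instead rely on `python-dotenv` or on variables exported by your shell), here is a minimal stdlib-only loader for a dotenv-style file:

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Read KEY=VALUE lines from a dotenv-style file into os.environ.

    Variables already present in the environment are not overwritten.
    """
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```

In practice a library like `python-dotenv` handles quoting and edge cases more robustly; this sketch only shows the idea.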
- Select an initial mode: Enter the number corresponding to the feature you want to use (e.g., `0` for Summarize, `1` for Refine, `2` for YourNewFeature).
- Enter your text: Type or paste the text you want the LLM to process.
- View the output: The script will print the refined text in a clean, formatted panel.
- Switch modes: Enter `0`, `1`, or `2` at the prompt to change the active mode.
- Exit: Press `Enter` on an empty line.
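The interaction flow above (a bare number switches modes, an empty line exits, anything else is sent to the LLM) could be expressed as a small dispatch helper. This is a hypothetical sketch, not the actual logic in `main.py`:

```python
def classify_input(line: str, num_features: int) -> tuple[str, str]:
    """Classify one line of REPL input.

    Returns ("exit", ""), ("switch", "<n>"), or ("text", line),
    mirroring the interaction flow described above.
    """
    stripped = line.strip()
    if not stripped:
        return ("exit", "")  # empty line ends the session
    if stripped.isdigit() and int(stripped) < num_features:
        return ("switch", stripped)  # a bare feature number switches modes
    return ("text", line)  # everything else is processed by the active mode
```

Keeping this decision in one pure function makes the loop in `main.py` easy to test without mocking any LLM calls.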
The modular design makes it easy to add your own features. If you're new to DSPy, there's no need to worry! It's very straightforward to learn, and the official guide is provided to help you get started.
You can configure LLMs and other settings in `config.py`. See this link for details on LLM settings.
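As a rough illustration of what such settings might look like (the model name and structure are placeholders, not the project's actual `config.py`; consult the DSPy documentation for your provider):

```python
# config.py (illustrative sketch)
import os

import dspy

# DSPy accepts provider-prefixed model names, e.g. "gemini/..." or "openai/...".
lm = dspy.LM(
    "gemini/gemini-1.5-flash",  # placeholder model name
    api_key=os.environ["GEMINI_API_KEY"],
)
dspy.configure(lm=lm)  # make this LM the default for all DSPy modules
```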
In `signatures.py`, create a new class inheriting from `dspy.Signature` that defines the input and output fields for your new feature.

```python
# signatures.py
import dspy

class YourNewSignature(dspy.Signature):
    """A brief description of what this signature does."""

    input_text: str = dspy.InputField(desc="Description of the input")
    output_text: str = dspy.OutputField(desc="Description of the output")
```
In `handlers.py`, add a new function to format and display the output from your new signature. Use the functions from `display.py` to maintain a consistent UI. You can also pass a `lambda` directly to handle the response.

```python
# handlers.py
from typing import Any

from display import print_refined_output

def handle_your_new_feature(response: Any) -> None:
    """Formats and prints the response for your new feature."""
    print_refined_output("Title for Your Feature", response.output_text)
```
In `features.py`, import your new signature and handler. Then, in the `activate_features` function, add a new `Feature` instance to the list. The order in the list determines the selection number.

```python
# features.py
from typing import List

from handlers import handle_your_new_feature
from signatures import YourNewSignature

def activate_features() -> List[Feature]:
    features = [
        # ... existing features
        Feature(
            name="your_feature_name",
            description="A short description for the menu",
            signature=YourNewSignature,
            input_field="input_text",  # must match the InputField in your signature
            response_handler=handle_your_new_feature,
            number=777,  # the number for selection in the menu
        ),
    ]
    # ... rest of the function
```
That's it! The application will automatically pick up the new feature the next time you run it.
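The snippet above constructs `Feature` instances, but the container's definition is not shown in this excerpt. Judging from the fields used, it presumably looks something like this hypothetical dataclass:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Feature:
    """One selectable mode: a DSPy signature plus how to render its output."""

    name: str                                # internal identifier
    description: str                         # short text shown in the menu
    signature: type                          # a dspy.Signature subclass
    input_field: str                         # name of the signature's InputField
    response_handler: Callable[[Any], None]  # formats and prints the LLM response
    number: int                              # menu selection number
```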
To fully leverage the power of cli-llm, you can create a shortcut to quickly bring up a background terminal using Hammerspoon. Here's a sample configuration that uses Warp and binds `cmd + e + e` to instantly bring the terminal to the front.
Contributions are welcome! Please feel free to submit pull requests or open issues to suggest improvements or report bugs.
Distributed under the MIT License. See the `LICENSE` file for more information.