A boilerplate project designed to enhance LLM usability directly within your CLI.
Stop context-switching and bring the power of LLMs to your terminal workflows. cli-llm
provides a structured starting point to build personalized LLM tools tailored to your needs.
Working in the terminal often involves repetitive text manipulation or generation tasks: asking quick questions, refining technical writing, formatting commit messages, and more. cli-llm
aims to streamline these tasks by integrating LLM capabilities directly into your CLI environment.
Instead of copying text to a separate web UI or application, you can use pre-defined modes within cli-llm
to instantly apply LLM processing to your input, keeping you focused and efficient.
- Supercharge CLI Workflows: Integrate LLM assistance without leaving your terminal.
- Customizable Presets: Easily define and switch between frequently used LLM interaction modes (e.g., question & answer, technical writing improvement, commit message generation). The included example demonstrates modes for:
  - Asking (`ask`)
  - Formal Writing (`writing`)
  - Commit Messages (`commit`)
- Mode Persistence: Stay in a specific interaction mode (like `writing` or `commit`) for multiple messages until you explicitly switch.
- DSPy Powered: Leverages the DSPy framework for robust LLM interaction, allowing structured programming over simple prompting.
- Boilerplate Structure: Provides a clean foundation to add your own custom LLM-powered CLI features.
- DSPy: The core framework for programming foundation models. It allows for structured interaction, optimization ("compilation"), and composability of LLM modules (sketched briefly after this list).
- LLM Backend: Configurable to use various LLM providers (e.g., Gemini, OpenAI, Anthropic) supported by DSPy via environment variables. The current example uses Gemini.
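To give a flavor of the DSPy setup just described, here is a minimal sketch. The signature name and fields are illustrative, not the ones defined in this repo:

```python
import dspy

# Configure a backend; the model string follows LiteLLM conventions.
lm = dspy.LM("gemini/gemini-2.5-flash-preview-04-17")
dspy.configure(lm=lm)

# A Signature declares typed inputs and outputs instead of a free-form prompt.
class ImproveWriting(dspy.Signature):
    """Rewrite the text in a clear, formal style."""
    text = dspy.InputField()
    improved = dspy.OutputField()
    explanation = dspy.OutputField(desc="why the changes were made")

writer = dspy.ChainOfThought(ImproveWriting)
result = writer(text="teh results was good")
print(result.improved)
```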
Follow these steps to get a local copy up and running.
- uv (a fast Python package and project manager)
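If you don't already have uv, one option is Astral's standalone installer:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```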
- Clone the repository (or download the files):

  ```bash
  git clone https://github.com/jgkym/cli-llm
  cd cli-llm
  ```
- Install dependencies:

  ```bash
  uv sync
  ```
- Set up Environment Variables:
  - Create an environment file:

    ```bash
    touch .env
    ```

  - Edit the `.env` file and add your API key(s). For the current example using Gemini (adapt the variable names if you use a different provider/configuration in DSPy):

    ```
    # .env
    GEMINI_API_KEY=YOUR_API_KEY_HERE
    ```
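To sanity-check that the key is actually visible to Python before running the script, a snippet like the following can help. It assumes the project loads `.env` via python-dotenv, which is a common pattern but not confirmed here:

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is available

load_dotenv()  # read variables from .env into the process environment
if not os.getenv("GEMINI_API_KEY"):
    raise SystemExit("GEMINI_API_KEY is missing; check your .env file")
```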
- Run the main script:

  ```bash
  python main.py
  ```

- Select an initial mode: Enter `1`, `2`, or `3` when prompted.
- Enter your text: Type or paste the text you want the LLM to process according to the current mode.
- View the output: The script will print the refined text, an explanation, and potentially suggestions.
- Continue messaging: Enter more text to process using the same mode.
- Switch modes: Enter `1`, `2`, or `3` at the prompt to change the active LLM processing mode.
- Exit: Press `Enter` on an empty line.
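For orientation, a session might look roughly like this. The exact prompts and output formatting depend on `main.py`, so treat this as a hypothetical transcript rather than captured output:

```text
$ python main.py
Select a mode: 1) ask  2) writing  3) commit
> 2
[writing] Enter text: teh results was good
Refined: The results were good.
Explanation: Corrected the typo "teh" and fixed subject-verb agreement.
[writing] Enter text:
$
```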
- API Keys: Managed via the `.env` file.
- LLM Models: Model names (e.g., `gemini/gemini-2.5-flash-preview-04-17`) are specified directly within the `main.py` script when initializing `dspy.LM`. You can change these to other models supported by your provider and DSPy.
- DSPy Settings: Temperature and other LM parameters are also set during `dspy.LM` initialization in `main.py`.
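As a sketch, swapping the model or adjusting sampling parameters would look something like the following; the exact line in `main.py` may differ, and the alternative model strings are just examples of LiteLLM-style identifiers:

```python
import dspy

# The model string follows LiteLLM conventions, e.g. "openai/gpt-4o-mini"
# or "anthropic/claude-3-5-sonnet-20240620" for other providers.
lm = dspy.LM(
    "gemini/gemini-2.5-flash-preview-04-17",
    temperature=0.7,  # illustrative value; tune per task
)
dspy.configure(lm=lm)  # make this LM the default for all DSPy modules
```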
Adding a new feature/mode is straightforward (a sketch of all four steps follows the list):

- Define a Signature: Create a new class inheriting from `dspy.Signature` in `main.py` (or a separate file) that defines the input and output fields for your new feature.
- Create a Predictor: Initialize a DSPy module (e.g., `dspy.ChainOfThought(YourNewSignature)`) in `main.py`.
- Add Mode Mapping: Assign a number to your new mode in the `mode_map` and `mode_descriptions` dictionaries in `main.py`.
- Update Main Loop: Add an `elif` condition in the `while True` loop in `main.py` to handle the new mode number, call your new predictor, and use `print_response` (or custom logic) to display the results.
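Here is what that could look like for a hypothetical `summarize` mode. The signature, the mode number `4`, and the loop shape are all illustrative; the actual `mode_map`, loop structure, and `print_response` in `main.py` may differ:

```python
import dspy

lm = dspy.LM("gemini/gemini-2.5-flash-preview-04-17")
dspy.configure(lm=lm)

# Step 1: define a Signature with the new feature's inputs and outputs.
class Summarize(dspy.Signature):
    """Summarize the given text in a few sentences."""
    text = dspy.InputField()
    summary = dspy.OutputField()

# Step 2: create a predictor from the signature.
summarizer = dspy.ChainOfThought(Summarize)

# Step 3: register the new mode (numbers and descriptions are illustrative).
mode_map = {"1": "ask", "2": "writing", "3": "commit", "4": "summarize"}
mode_descriptions = {"4": "Summarize text"}

# Step 4: handle the new mode in the main loop (simplified shape).
mode = "4"
while True:
    user_text = input(f"[{mode_map[mode]}] > ").strip()
    if not user_text:
        break  # empty input exits, matching the Usage section
    if user_text in mode_map:
        mode = user_text  # entering a mode number switches the active mode
        continue
    if mode == "4":
        result = summarizer(text=user_text)
        print(result.summary)
    # ... elif branches for the existing modes would go here ...
```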
Contributions are welcome! Please feel free to submit pull requests or open issues to suggest improvements or report bugs.
Distributed under the MIT License. See the `LICENSE` file for more information.