Create AI-powered endpoints the easy way.
Use Vasto to instantly turn your local Ollama models into versatile, custom HTTP APIs for development, testing, and local integration.
Vasto is a desktop application designed to bridge the gap between your locally running Ollama Large Language Models (LLMs) and your development projects. Instead of writing boilerplate server code to interact with Ollama's API for every different task, Vasto provides a simple graphical interface to:
- Define Endpoints: Specify custom routes (e.g., `/summarize`, `/generate-blog`), HTTP methods (GET, POST), and the Ollama model to use.
- Structure I/O: Define the expected JSON input structure (from URL parameters, query strings, or request body) and the desired JSON output structure.
- Activate & Serve: Vasto runs a local HTTP server that listens for requests on your defined endpoints, interacts with Ollama using your specified model and prompt structure (derived from the I/O definitions), and returns the structured JSON output.
Essentially, Vasto acts as a configurable proxy and management layer for your local Ollama instance, exposing specific model capabilities as standardized web APIs.
- ⏱️ Rapid Prototyping: Go from idea to a working AI endpoint in minutes. Perfect for quickly testing concepts that leverage local LLMs without complex setup.
- 🧩 No More Boilerplate: Stop writing repetitive server code for common LLM tasks. Vasto handles the HTTP server, request parsing, Ollama interaction, and response formatting.
- 🎯 Standardized I/O: Define clear JSON schemas for inputs and outputs. Ensures consistent and predictable API behavior, making integration easier.
- 🏠 100% Local & Private: Runs entirely on your machine, connecting directly to your local Ollama instance. Your models, prompts, and data stay completely private.
- ⚙️ Easy Management: Create, update, activate/deactivate, and delete endpoints through a user-friendly interface.
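To make the boilerplate point concrete, here is roughly what one hand-rolled task against Ollama's `/api/generate` API looks like in Python; Vasto manages this request/response plumbing for every endpoint you define. The model name and prompt wording below are illustrative assumptions, not Vasto's own code.

```python
import json
import urllib.request

# A sketch of the per-task boilerplate Vasto replaces: calling Ollama's
# /api/generate endpoint by hand for one summarization task. The model
# name and prompt wording are illustrative assumptions.
def build_generate_payload(text, model="llama3"):
    """Build the JSON body for a single non-streaming Ollama generation."""
    return {
        "model": model,
        "prompt": f"Summarize the following text:\n\n{text}",
        "stream": False,  # ask Ollama for one complete JSON response
    }

def summarize_by_hand(text, model="llama3"):
    """POST the payload to a locally running Ollama instance."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default address
        data=json.dumps(build_generate_payload(text, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Multiply this by every route, method, and schema your project needs, and the savings add up quickly.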
- 🔌 Direct connection to your local Ollama instance.
- 🧠 Compatible with any Ollama model you have installed (`ollama list`).
- ↔️ Define custom routes and HTTP methods (GET, POST).
- 📝 Specify precise JSON input and output structures.
- 📥 Handles input from URL parameters, query strings, and request bodies.
- ⚙️ Manage endpoints: create, update, toggle activation, delete.
- 🔒 Secure endpoints with optional API Key (Bearer Token) authentication.
- 💾 Persistent configuration stored locally in `endpoints.json`.
- 📦 Works as a packaged application (Installer/Portable).
- 📋 Quickly list available Ollama models directly within Vasto's UI.
- Ollama installed and running on your local machine. Vasto needs Ollama to function.
Head over to the Releases Page to download the latest version for your operating system.
- Look for the appropriate asset (e.g., `.exe` installer or `.zip` portable for Windows).
- Installer: Run the downloaded `.exe` file and follow the prompts.
- Portable: Unzip the downloaded file and run the `Vasto.exe` (or similar) executable inside the folder.
- Launch Vasto: Start the application after installation.
- Ensure Ollama is Running: Vasto needs to connect to a running Ollama instance.
- Create an Endpoint:
  - Click the "Create Endpoint" (or similar) button.
  - Define the Route (e.g., `/translate`).
  - Select the HTTP Method (e.g., `POST`).
  - Choose the Ollama Model to use (Vasto can fetch the list from `ollama list`).
  - Define the Input JSON Schema (what data the endpoint expects).
  - Define the Output JSON Schema (how the LLM response should be structured).
  - (Optional) Configure an API Key.
  - Save the endpoint.
- Activate the Endpoint: Toggle the endpoint to "Active" in the endpoint list.
- Test the Endpoint: Use an HTTP client like `curl`, Postman, or integrate it into your application.

```bash
# Example for a POST endpoint at /translate expecting JSON input.
# Add -H "Authorization: Bearer YOUR_API_KEY" if an API key is set.
curl -X POST http://localhost:YOUR_VASTO_PORT/translate \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Hello world",
    "target_language": "Spanish"
  }'

# Expected response (based on the output schema definition):
# {
#   "translated_text": "Hola mundo"
# }
```
(Note: Replace `YOUR_VASTO_PORT` with the port Vasto is running on, likely shown in the UI or logs. Replace `YOUR_API_KEY` if you configured one.)
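For integrating an endpoint into application code rather than testing with `curl`, a small helper like the following works; it is a sketch only, and the port, route, and payload fields are assumptions that you should match to your own endpoint definition.

```python
import json
import urllib.request

# Sketch of calling a Vasto endpoint from application code. The port,
# route, and payload fields are hypothetical -- match them to your own
# endpoint definition and to the port shown in Vasto's UI or logs.
def build_vasto_request(route, payload, port=8080, api_key=None):
    """Build a POST request for a Vasto endpoint."""
    headers = {"Content-Type": "application/json"}
    if api_key:  # only needed if the endpoint has an API key configured
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        f"http://localhost:{port}{route}",
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )

def call_vasto(route, payload, port=8080, api_key=None):
    """Send the request and decode the structured JSON response."""
    req = build_vasto_request(route, payload, port=port, api_key=api_key)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (assumes an active /translate endpoint on port 8080):
# result = call_vasto("/translate",
#                     {"text": "Hello world", "target_language": "Spanish"})
```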
Your endpoint definitions are saved locally in a file named `endpoints.json` within Vasto's application data directory. While you can view this file, it's recommended to manage endpoints through the Vasto UI to avoid formatting errors.
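For orientation, an entry in `endpoints.json` might look roughly like the fragment below. This is a guess at the shape, not the actual schema: the real field names are defined by Vasto's implementation, which is another reason to edit endpoints through the UI rather than by hand.

```json
{
  "endpoints": [
    {
      "route": "/translate",
      "method": "POST",
      "model": "llama3",
      "active": true,
      "input_schema": { "text": "string", "target_language": "string" },
      "output_schema": { "translated_text": "string" },
      "api_key": null
    }
  ]
}
```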
- macOS Support (Packaged Application)
- Linux Support (Packaged Application / AppImage)
- Pre-built Endpoint Templates (e.g., Summarization, Translation, Chat)
- More Authentication Options
- Improved UI/UX based on feedback
- Streaming responses support
Contributions are welcome! Please feel free to submit a Pull Request or open an Issue for bugs, feature requests, or suggestions.
- Fork the repository.
- Create your feature branch (`git checkout -b feature/AmazingFeature`).
- Commit your changes (`git commit -m 'Add some AmazingFeature'`).
- Push to the branch (`git push origin feature/AmazingFeature`).
- Open a Pull Request.
This project is licensed under the GNU Affero General Public License v3.0 - see the LICENSE file for details, or visit https://www.gnu.org/licenses/agpl-3.0.html.