The AI agent script CLI for Programmable Prompt Engine.
Updated Apr 5, 2025 - TypeScript
OfflineAI is an artificial intelligence that operates offline and uses machine learning to perform various tasks based on the code provided. It is built using two powerful AI models by Mistral AI.
AscendAI is a local-first, self-mutating AGI skeleton—rigged with sovereign agents, recursive autonomy, and a will to adapt. GremlinGPT is its chaotic heart: learning, building, and evolving into AscendAI as it survives testing and bonds with its user. Not just a wrapper—a battle map with agency.
An excellent local AI chat client application: cross-platform and compatible with all large models that support the Ollama and OpenAI APIs. Local deployment protects your data privacy, and it can be used as both an Ollama client and an OpenAI client.
Demo project showcasing the integration and usage of Chrome’s built-in Gemini Nano AI through the window.ai interface.
A voice assistant with local LLM as a backend
Efficient on-device offline AI model inference using MediaPipe with optimized model screening.
🦙 chat-o-llama: A lightweight yet powerful web interface for Ollama with markdown rendering, syntax highlighting, and intelligent conversation management. Zero external dependencies, perfect for privacy-focused local AI development.
AronaOS is an offline AI-powered personal assistant that helps you manage tasks, set reminders, and recall context-aware conversations. Built with Python, Flask, and local LLMs like Phi-3 Mini, it's designed for both privacy and productivity.
HeyGem — Your AI face, made free
Mobile-first frontend for TaskWizard — a smart AI-powered task breakdown planner. Built with Expo + React Native, connecting to a FastAPI + Ollama backend to generate actionable step-by-step roadmaps from any goal.
A real-time offline voice-to-voice AI assistant built for Raspberry Pi
Conversational AI: a local, low-latency voice assistant for Raspberry Pi 5 with LED, gesture control, and streaming LLM replies.
AronaOS is your offline study and productivity assistant, designed to streamline task management and scheduling. This repository contains the source code for AronaOS Fragment, enabling a web-based experience without needing an internet connection. 🖥️🌐
A privacy-first VS Code extension that integrates with any local LLM through Ollama. Chat with AI models directly in your editor without sending data to the cloud.
AI OCR Tool | Webcam & Image Text Recognition with Astra | Offline Summarization
A real-time, offline voice assistant for Linux and Raspberry Pi. Uses local LLMs (via Ollama), speech-to-text (Vosk), and text-to-speech (Piper) for fast, wake-free voice interaction. No cloud. No APIs. Just Python, a mic, and your voice.
This is my own custom-built offline AI bot that lets you chat with PDFs and web pages using **local embeddings** and **local LLMs** like LLaMA 3. I built it step by step using LangChain, FAISS, HuggingFace, and Ollama, without relying on OpenAI or DeepSeek APIs anymore (they just kept failing or costing too much).
Official site for OmniBot - Run LLMs natively & privately in your browser
A private, local RAG (Retrieval-Augmented Generation) system using Flowise, Ollama, and open-source LLMs to chat with your documents securely and offline.
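Several of the projects above (the PDF chat bot, the Flowise RAG system) are built around the same retrieval step: embed the query, rank document chunks by similarity, and feed the best matches to a local LLM. A minimal sketch of that retrieval step in plain Python, using toy bag-of-words vectors in place of a real embedding model (a real system would use embeddings served by Ollama and an index such as FAISS):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words token counts. A real RAG pipeline
    # would call a local embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and return the top k.
    # In a full pipeline these chunks are prepended to the LLM prompt.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Piper is a fast local text-to-speech engine.",
    "FAISS indexes embeddings for fast similarity search.",
    "Ollama serves large language models on your own machine.",
]
print(retrieve("which tool runs language models locally?", docs))
# → ['Ollama serves large language models on your own machine.']
```

Everything stays on-device: no API keys, no network calls, which is the common thread of the offline-ai projects listed here.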