- Melbourne, Australia
- poita66
Stars
📚 Benchmark your browser agent on ~2.5k READ- and ACTION-based tasks
An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs
Awesome list for LLM quantization
BOM, STL files and instructions for PAROL6 3D printed robot arm
mbake is a Makefile formatter and linter. It only took 50 years!
AI (Aaditri Informatics) is a system prompt named after my cherished daughter, Aaditri Anand. Its behavior is modeled on the collaborative learning approach I share with her, reflecting our bond an…
Code for SIGGRAPH 2022 paper: ComplexGen: CAD Reconstruction by B-Rep Chain Complex Generation
Fully local Manus AI. No APIs, no $200 monthly bills. Enjoy an autonomous agent that thinks, browses the web, and codes for the sole cost of electricity. 🔔 Official updates only via twitter @Martin9…
Text mode window environment. A terminal emulator and multiplexer with mouse support, overlapped windows and networked clients. Text-mode equivalent of X11 server + VNC server
OpenRecall is a fully open-source, privacy-first alternative to proprietary solutions like Microsoft's Windows Recall. With OpenRecall, you can easily access your digital history, enhancing your me…
Zero-config application builder that automatically analyzes and turns your code into an image
⚡ Lingo.dev - open-source, AI-powered i18n toolkit for instant localization with LLMs. Bring your own LLM or use Lingo.dev engine. Join discord: https://lingo.dev/go/discord
Official PyTorch implementation for "Large Language Diffusion Models"
Simple, hackable offline speech to text - using the VOSK-API.
Context Portal (ConPort): A memory bank MCP server building a project-specific knowledge graph to supercharge AI assistants. Enables powerful Retrieval Augmented Generation (RAG) for context-aware …
Minimal Linux OS with a Model Context Protocol (MCP) gateway to expose local capabilities to LLMs.
A curated list of awesome resources, tools, research papers, and projects related to the concept of Large Language Model Operating Systems (LLM-OS).
A cache for AI agents to learn and replay complex behaviors.
Real-time data transformation framework for AI. Ultra performant, with incremental processing.
Connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference.