LLM Stack

An all-in-one Docker Compose config for providing access to local and external LLMs with multiple chat interfaces.

Components

  • Caddy: Acts as the central entrypoint for the whole stack (a minimal compose sketch of how the services fit together follows this list)
  • Ollama: Provides access to locally hosted LLM models
  • LiteLLM: OpenAI-compatible API proxy for the models served locally by Ollama and for upstream hosted models
  • Multiple ChatGPT-style web interfaces for interacting with the models
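
The repository's docker-compose.yml defines how these pieces connect. The sketch below is only a minimal illustration of that layout; the image tags, volume paths, LiteLLM config file name, and port mapping are assumptions (the port follows the http://localhost:3000 entrypoint mentioned in the setup steps), not excerpts from the repository.

```yaml
# Minimal sketch of the stack layout described above.
# Image tags, volume paths, and the LiteLLM config file name are assumptions.
services:
  caddy:
    image: caddy:2                      # central entrypoint / reverse proxy
    ports:
      - "3000:3000"                     # stack is reached at http://localhost:3000
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro

  ollama:
    image: ollama/ollama                # serves the local models
    volumes:
      - ollama-data:/root/.ollama       # keep downloaded model weights across restarts

  litellm:
    image: ghcr.io/berriai/litellm:main-latest  # OpenAI-compatible proxy
    command: ["--config", "/app/config.yaml"]
    env_file: .env                      # provider API keys
    volumes:
      - ./litellm-config.yaml:/app/config.yaml:ro

volumes:
  ollama-data:
```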

Models

  • Local (via Ollama)
    • local-google-gemma3
    • local-llama3-8b
  • OpenAI
    • openai-gpt-4-turbo
    • openai-gpt-4o
    • openai-o3
    • openai-o4-mini
  • Google
    • google-gemini-1.5-pro
  • Anthropic
    • anthropic-claude-3-7-sonnet
    • anthropic-claude-3-5-sonnet
  • Groq
    • groq-llama-3.3-70b
    • groq-llama-4-maverick
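
These names are the model aliases that LiteLLM exposes to the chat frontends. The actual mapping lives in the repository's LiteLLM configuration; the snippet below is only a sketch of how two of the aliases could be wired up, and the config file name, the Ollama endpoint, and the upstream model identifiers are assumptions.

```yaml
# Sketch of a LiteLLM proxy model_list mapping two of the aliases above.
# Upstream model IDs and the Ollama endpoint are assumptions.
model_list:
  - model_name: local-llama3-8b             # alias shown to the chat frontends
    litellm_params:
      model: ollama/llama3:8b               # served by the local Ollama container
      api_base: http://ollama:11434
  - model_name: openai-gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY    # read from the .env file
```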

Chat Frontends

Getting Started

Prerequisites

  • Docker
  • Docker Compose
  • Git

Setup

  1. Clone this repository
  2. Copy the default configs: cp env.default .env and cp anythingllm.env.default .anythingllm.env
  3. Edit .env and add the relevant API keys (a sketch of typical entries follows this list)
  4. Start the Docker Compose configuration: docker-compose up
  5. Access the Caddy webserver at http://localhost:3000

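Step 3 only needs keys for the providers you intend to use. The following is a sketch of what the edited .env might contain; the variable names are assumptions rather than quotes from env.default, so check the shipped defaults for the exact names.

```
# Hypothetical .env entries for the hosted providers listed above.
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...
ANTHROPIC_API_KEY=...
GROQ_API_KEY=...
```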