Fix docker orphan warnings, hanging localai_default network and ollama containers, and optimize start time through dependencies. by ytkaczyk · Pull Request #35 · coleam00/local-ai-packaged


Status: Open · wants to merge 4 commits into `main`
5 changes: 3 additions & 2 deletions README.md
@@ -254,10 +254,10 @@ To update all containers to their latest versions (n8n, Open WebUI, etc.), run t

```bash
# Stop all services
docker compose -p localai -f docker-compose.yml -f supabase/docker/docker-compose.yml down
docker compose -p localai -f docker-compose.yml --profile <your-profile> down

# Pull latest versions of all containers
docker compose -p localai -f docker-compose.yml -f supabase/docker/docker-compose.yml pull
docker compose -p localai -f docker-compose.yml --profile <your-profile> pull

# Start services again with your desired profile
python start_services.py --profile <your-profile>
@@ -266,6 +266,7 @@ python start_services.py --profile <your-profile>
Replace `<your-profile>` with one of: `cpu`, `gpu-nvidia`, `gpu-amd`, or `none`.

Note: The `start_services.py` script itself does not update containers; it only restarts them, or pulls them if you are downloading them for the first time. To get the latest versions, you must explicitly run the commands above.
Note: Adding `--profile <your-profile>` to the docker compose commands ensures a proper shutdown of the ollama containers, disposal of the `localai_default` network, and an update of the ollama docker images.

## Troubleshooting

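As a quick sanity check of the shutdown behavior this section describes, the following should print nothing once the profile-aware `down` has completed. This is a sketch, not part of the PR: the `cpu` profile and the default `localai` project name are assumed here.

```shell
# Stop everything, including profile-gated services (profile name assumed).
docker compose -p localai -f docker-compose.yml --profile cpu down

# Both filters should print nothing after a clean shutdown:
# no leftover ollama containers, no lingering localai_default network.
docker ps --filter "name=ollama" --format "{{.Names}}"
docker network ls --filter "name=localai_default" --format "{{.Name}}"
```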
82 changes: 81 additions & 1 deletion docker-compose.yml
@@ -1,3 +1,6 @@
include:
- ./supabase/docker/docker-compose.yml

volumes:
n8n_storage:
ollama_storage:
@@ -29,6 +32,14 @@ x-ollama: &service-ollama
- 11434:11434
volumes:
- ollama_storage:/root/.ollama
healthcheck:
test:
- CMD-SHELL
- bash -c '</dev/tcp/localhost/11434' || exit 1
interval: 30s
timeout: 5s
retries: 5
start_period: 15s

x-init-ollama: &init-ollama
image: ollama/ollama:latest
@@ -58,6 +69,15 @@ services:
volumes:
- ~/.flowise:/root/.flowise
entrypoint: /bin/sh -c "sleep 3; flowise start"
healthcheck:
test:
[
"CMD", "wget", "--spider", "--no-verbose", "-T", "1", "http://flowise:3001/", "-O", "/dev/null"
]
interval: 30s
timeout: 5s
retries: 5
start_period: 15s

open-webui:
image: ghcr.io/open-webui/open-webui:main
@@ -69,6 +89,18 @@ services:
- "host.docker.internal:host-gateway"
volumes:
- open-webui:/app/backend/data
healthcheck:
test:
[
"CMD", "curl", "-sSfL", "--head", "-o", "/dev/null", "http://open-webui:8080/health"
]
interval: 30s
timeout: 5s
retries: 5
start_period: 15s
depends_on:
n8n:
condition: service_healthy

n8n-import:
<<: *service-n8n
@@ -79,6 +111,10 @@ services:
- "n8n import:credentials --separate --input=/backup/credentials && n8n import:workflow --separate --input=/backup/workflows"
volumes:
- ./n8n/backup:/backup
depends_on:
# Wait for supabase postgres to be ready.
db:
condition: service_healthy

n8n:
<<: *service-n8n
@@ -90,9 +126,24 @@ services:
- n8n_storage:/home/node/.n8n
- ./n8n/backup:/backup
- ./shared:/data/shared
healthcheck:
test:
[
"CMD", "wget", "--spider", "--no-verbose", "-T", "1", "http://n8n:5678/", "-O", "/dev/null"
]
interval: 30s
timeout: 5s
retries: 5
start_period: 15s
depends_on:
n8n-import:
condition: service_completed_successfully
# Wait for supabase postgres to be ready.
db:
condition: service_healthy
# Wait for qdrant to be ready.
qdrant:
condition: service_healthy

qdrant:
image: qdrant/qdrant
@@ -102,6 +153,14 @@ services:
- 6333:6333
volumes:
- qdrant_storage:/qdrant/storage
healthcheck:
test:
- CMD-SHELL
- bash -c '</dev/tcp/localhost/6333' || exit 1
interval: 30s
timeout: 5s
retries: 5
start_period: 15s

caddy:
container_name: caddy
@@ -129,6 +188,15 @@ services:
options:
max-size: "1m"
max-file: "1"
healthcheck:
test:
[
"CMD", "wget", "--spider", "--no-verbose", "-T", "1", "http://localhost:8001/", "-O", "/dev/null"
]
interval: 30s
timeout: 5s
retries: 5
start_period: 15s

redis:
container_name: redis
@@ -148,6 +216,12 @@ services:
options:
max-size: "1m"
max-file: "1"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 30s
timeout: 5s
retries: 3
start_period: 15s

searxng:
container_name: searxng
@@ -171,7 +245,13 @@ services:
driver: "json-file"
options:
max-size: "1m"
max-file: "1"
max-file: "1"
healthcheck:
test: ["CMD", "wget", "--spider", "--no-verbose", "-T", "1", "http://localhost:8080/healthz", "-O", "/dev/null"]
interval: 30s
timeout: 5s
retries: 3
start_period: 15s

ollama-cpu:
profiles: ["cpu"]
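The ollama and qdrant healthchecks in the diff above probe a TCP port with bash's `/dev/tcp` pseudo-device rather than an HTTP client, presumably because those images may not ship `wget` or `curl`. A standalone sketch of the same probe, using port 11434 as in the ollama healthcheck:

```shell
# Same probe the compose healthchecks use, run standalone.
# /dev/tcp/<host>/<port> is a bash-only pseudo-device: the redirection
# succeeds only if a TCP connection to that port can be opened.
if bash -c '</dev/tcp/localhost/11434' 2>/dev/null; then
  echo "port open"
else
  echo "port closed"
fi
```

Note that `/dev/tcp` is a bash feature, not a real device file, which is why the healthcheck wraps it in an explicit `bash -c`; run under plain `sh`, the redirection would fail even when the port is open.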
33 changes: 10 additions & 23 deletions start_services.py
@@ -46,27 +46,21 @@ def prepare_supabase_env():
print("Copying .env in root to .env in supabase/docker...")
shutil.copyfile(env_example_path, env_path)

def stop_existing_containers():
def stop_existing_containers(profile=None):
"""Stop and remove existing containers for our unified project ('localai')."""
print("Stopping and removing existing containers for the unified project 'localai'...")
run_command([
"docker", "compose",
"-p", "localai",
cmd = ["docker", "compose", "-p", "localai"]
if profile and profile != "none":
cmd.extend(["--profile", profile])
cmd.extend([
"-f", "docker-compose.yml",
"-f", "supabase/docker/docker-compose.yml",
"down"
])

def start_supabase():
"""Start the Supabase services (using its compose file)."""
print("Starting Supabase services...")
run_command([
"docker", "compose", "-p", "localai", "-f", "supabase/docker/docker-compose.yml", "up", "-d"
])
run_command(cmd)

def start_local_ai(profile=None):
"""Start the local AI services (using its compose file)."""
print("Starting local AI services...")
print("Starting all local AI services...")
cmd = ["docker", "compose", "-p", "localai"]
if profile and profile != "none":
cmd.extend(["--profile", profile])
@@ -226,16 +220,9 @@ def main():
generate_searxng_secret_key()
check_and_fix_docker_compose_for_searxng()

stop_existing_containers()

# Start Supabase first
start_supabase()

# Give Supabase some time to initialize
print("Waiting for Supabase to initialize...")
time.sleep(10)

# Then start the local AI services
stop_existing_containers(args.profile)

# Start all local AI services
start_local_ai(args.profile)

if __name__ == "__main__":
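Since the rewritten `start_services.py` drops the fixed 10-second sleep for Supabase and instead lets `depends_on` with `condition: service_healthy` gate startup order, the health state that compose consults can be inspected directly. A sketch, assuming the container name `n8n` matches the compose service:

```shell
# Print the health status docker compose evaluates for
# `condition: service_healthy`; dependents start only once this
# reports "healthy" (possible values: starting, healthy, unhealthy).
docker inspect --format '{{.State.Health.Status}}' n8n
```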