Could not fetch Ollama models. Make sure the Ollama base URL is accessible with RAGapp. #234
-
@richardstevenhack please note that you want to reach your local host network from inside a Docker container, so:
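For example, something like this (a sketch of the usual options, assuming a standard Docker setup):

```
# On Docker Desktop (macOS/Windows), the host is reachable via a built-in DNS name:
curl http://host.docker.internal:11434

# On plain Linux Docker, that name only exists if you map it yourself (Docker 20.10+):
docker run --add-host=host.docker.internal:host-gateway -it ragapp/ragapp bash

# Otherwise the default bridge gateway (usually 172.17.0.1) points back at the host:
curl http://172.17.0.1:11434
```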
-
I read the linked piece. It basically says to use any random IP. I used http://172.17.0.1 because that's what worked with AnythingLLM and it's what they recommend: https://docs.useanything.com/installation/self-hosted/local-docker
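For what it's worth, 172.17.0.1 isn't a random IP; it's normally the gateway address of Docker's default bridge network, which can be confirmed on the host rather than guessed:

```
# Gateway of Docker's default bridge network (typically 172.17.0.1)
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'

# Or look at the docker0 interface directly
ip addr show docker0
```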
-
Here is an attempt to use the requested URL, http://host.docker.internal:11434. It didn't work. So what does work in Docker on Linux?
-
Ah, now I got your problem. Sorry for the confusion. On my Mac the following works:

```
# docker run -it ragapp/ragapp bash
root@431f021b67e8:/app# curl -v http://host.docker.internal:11434
* Trying 192.168.65.254:11434...
* Connected to host.docker.internal (192.168.65.254) port 11434 (#0)
> GET / HTTP/1.1
> Host: host.docker.internal:11434
> User-Agent: curl/7.88.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/plain; charset=utf-8
< Date: Thu, 27 Jun 2024 12:05:20 GMT
< Content-Length: 17
<
* Connection #0 to host host.docker.internal left intact
Ollama is running
```

Can you try:

```
# docker run -it ragapp/ragapp bash
root@ecaf3d0b4104:/app# curl -v http://172.17.0.1:11434
```

on your Linux machine? Anyone else having the same issue using Linux?
-
I got:
-
curl has no problem connecting to Ollama from the command line outside the Docker container:
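With a default Ollama install, that host-side check looks roughly like this (a sketch, not the captured output):

```
$ curl http://127.0.0.1:11434
Ollama is running
```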
-
I've seen references to something called CORS (Cross-Origin Resource Sharing) somewhere in the documentation for one of my other GUI front ends. I never did anything about it even with that front end, and it worked. Perhaps that's related? UPDATE: I just did what this article said to do on Linux and reran RAGapp, but no change. It still won't accept host.docker.internal or the 172.17.0.1 IP.
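For context, the advice such articles usually give for Linux is to make the Ollama service listen on all interfaces instead of only 127.0.0.1. A sketch, assuming Ollama runs as a systemd service (this may or may not be exactly what that article described):

```
# Add a systemd override so Ollama binds to all interfaces:
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
#   Environment="OLLAMA_ORIGINS=*"
sudo systemctl daemon-reload
sudo systemctl restart ollama
```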
-
A lot of Python tracebacks are generated while the app sits there doing its 120-second timeout thing trying to get the models list. This is a portion of what that looks like, if it's any help: `Traceback (most recent call last): ... The above exception was the direct cause of the following exception: Traceback (most recent call last): ...` This looks like the relevant section:
-
I'm going to bed. I'll check in later.
-
Seems like a Docker network issue. Can you try it with the ubuntu image:
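Something along these lines, for example (a sketch; the stock ubuntu image has no curl preinstalled, so it has to be added first):

```
docker run -it --rm ubuntu bash
# inside the container:
apt-get update && apt-get install -y curl
curl -v http://172.17.0.1:11434
```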
-
Didn't work: `root@a48ff3649547:/# curl -v http://172.17.0.1:11434`
-
Here's a comparison: I run Lobe Chat as one of the GUIs I'm testing. I open the browser and connect to http://localhost:3210. No problem, Lobe Chat opens. I go to Lobe Chat settings, select Ollama as the provider, and enter the URL as https://127.0.0.1:11434. This was working before: I could go to Just Chat, select Qwen2, and connect. Today it doesn't work, so something has gone wrong with Docker, I assume. I'm going to try to rebuild the Lobe Chat image from scratch and see if I can figure out what's going on with that. It might help in figuring out RAGapp.
-
Doing some research on Google, it appears almost no one can get Docker to connect to Ollama. Do a search for "docker can not connect to Ollama" and look at the number of results. Clearly this is a mess. This is why I hate using Docker - it's a complicated mess if you want to connect to anything outside of it. A Flatpak or an AppImage is better. Based on an article on the Open-WebUI GitHub, they suggest: `If you're experiencing connection issues, it's often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container. Use the --network=host flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: http://localhost:8080. Example Docker Command: docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main`
-
Holy shit, I got it working! This is what I did:
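In short, run the container on the host network, as in the Open-WebUI suggestion quoted above. A sketch (the exact flags and port here are assumptions, not the literal steps from this comment):

```
# 1. Run RAGapp on the host network, so 127.0.0.1 inside the container is the host itself:
docker run -d --network=host ragapp/ragapp

# 2. In the RAGapp admin UI, set the Ollama base URL to http://127.0.0.1:11434
#    (with --network=host, no -p mapping is needed; RAGapp listens on its usual
#     port directly on localhost - 8000 per the README's default, assumed here).
```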
-
That also worked for Lobe Chat! I don't know if connecting the Docker network to the host network is a security risk, but since I'm running on my own machine with the usual firewalls, I don't really care as long as this works. Also, I'm running on openSUSE Tumbleweed Linux, so on Windows or another Linux distribution your mileage may vary.
-
Does it also work if, in step 1, you just set:
(because step 2 will set the OLLAMA_BASE_URL parameter)?
-
Yup, that works, too. Just tested it.
-
closing (referenced in README)
-
Hi, lately I've had some issues when running RAGapp with an Ollama model. Here's what you have to update, based on @richardstevenhack's comment:
-
Describe the bug
On fresh install, got this message:
Could not fetch Ollama models. Make sure the Ollama base URL is accessible with RAGapp.
A popup also says "Failed to fetch Ollama models".
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Expected a connection to Ollama and a list of available Ollama models to be displayed.
Additional context
Nothing to add except that I have gotten every one of several other GUI front ends to Ollama working, with either Docker, AppImages, or Flatpaks. A couple did not work due to issues with their implementation.