Ollama returning wrong output #168
Unanswered
cedricbulteel asked this question in Q&A
Replies: 2 comments
- Which model are you using for Ollama? You'll need to use one that is fine-tuned on function calling; that makes all the difference.
- I used Mistral 7B, but funnily enough, the problem solved itself when I switched to Llama 3.1 8B. So it does indeed depend on the model (a quick sanity check is sketched below).
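Following the advice above, here is a minimal sketch for checking whether a locally served model actually emits tool calls. It assumes the default Ollama endpoint on localhost:11434, the llama3.1:8b model mentioned in the second comment, and a made-up get_current_weather tool; none of these names come from the original report.

```python
# Minimal sanity check: does the locally served model emit a tool call?
# Assumes Ollama is running on the default port and `ollama pull llama3.1:8b`
# has already been done. The weather tool below is purely illustrative.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama endpoint
MODEL = "llama3.1:8b"                           # model that worked in this thread

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",          # hypothetical tool for the test
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "What is the weather in Ghent right now?"}],
    "tools": tools,
    "stream": False,
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
message = resp.json()["message"]

# A function-calling-capable model should return structured tool_calls here;
# a model without tool support typically answers in plain text instead.
if message.get("tool_calls"):
    for call in message["tool_calls"]:
        fn = call["function"]
        print("tool call:", fn["name"], json.dumps(fn["arguments"]))
else:
    print("no tool call, raw content:", message.get("content"))
```

If the plain-text branch fires, the model rather than the surrounding framework is the likely culprit, which matches how this thread was resolved.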
- Describe the bug
When I use Ollama as the model provider and run a model locally, I get the following output:
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The correct output.
Example when using Groq:

Desktop (please complete the following information):