Ollama #37
Comments
May I ask which version you're using, 0.1.x? What error did you encounter?
Hi, thanks for your answer. The Ollama version I'm using is 0.6.5, the latest release. I just want to use it for completion and chat. I use many apps that work great with the capabilities Ollama offers; it's pretty amazing with recent models like Gemma 3 and DeepSeek. In Obsidian, I'm using Companion with great results for completion, but that plugin doesn't have a chat function, and the completion context can't easily be enlarged. I write stories and code, so I need the maximum context for the output to stay coherent. :) Thanks again!
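
For reference, here's a minimal sketch (TypeScript, assuming a local Ollama server on the default port and a `gemma3:12b` tag, which is a placeholder) of how a plugin could request a larger context window through Ollama's documented `/api/chat` endpoint using the `num_ctx` option:

```typescript
// Minimal sketch: call a local Ollama server's chat endpoint with a larger
// context window. Model tag and num_ctx value are assumptions; adjust them
// to whatever your hardware can handle.
async function chatWithOllama(prompt: string): Promise<string> {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma3:12b",          // assumed model tag
      messages: [{ role: "user", content: prompt }],
      stream: false,                // return one JSON object instead of a stream
      options: { num_ctx: 8192 },   // request a larger context window
    }),
  });
  const data = await response.json();
  return data.message.content;
}

chatWithOllama("Continue this story: ...").then(console.log);
```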
Got it, I'll go learn about it. I've also tested Gemma 3 and DeepSeek R1, but the results don't seem ideal. Could you share which specific model you're using? Is it 30b or 70b?
I'm certainly not saying I get the same results as Gemini, ChatGPT, or Claude... :) My GPU can only handle up to 12b-13b models at a usable speed. For programming I'm using Yi-coder 9b; for writing and document analysis, Gemma 3 12b (and 4b for multimodal); DeepSeek R1 7b for thinking; and Fireblossom, Aya-expanse, and Llama 3.1, all 8b, for writing and translating. My use is mainly around language, writing, and coding, and I rely on my own information for knowledge. For me, having Ollama as an option is great for privacy and when I'm on the move with no Internet connection. It's also great at the bank and when I want to write a story that includes NSFW sections.
Great plugin! I can't seem to get Ollama working.
Thanks.
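
In case it helps with debugging, here's a minimal sketch (same assumption of a local Ollama server on the default port 11434) that checks whether the server is reachable by listing the installed models via the documented `/api/tags` endpoint:

```typescript
// Minimal sketch of a connectivity check against a local Ollama server.
// /api/tags lists the locally installed models, so a successful response
// confirms the server is running and reachable from the plugin's environment.
async function checkOllama(baseUrl = "http://localhost:11434"): Promise<void> {
  const response = await fetch(`${baseUrl}/api/tags`);
  if (!response.ok) {
    throw new Error(`Ollama responded with HTTP ${response.status}`);
  }
  const data = await response.json();
  const names = data.models.map((m: { name: string }) => m.name);
  console.log("Ollama is reachable. Installed models:", names.join(", "));
}

checkOllama().catch((err) => console.error("Could not reach Ollama:", err));
```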