Update README.md #1
base: master
Conversation
Doesn't build in Linux yet.
Thanks for testing it! I briefly switched development to Windows because that was the only machine I had a working CUDA setup on. I'll check shortly whether your instructions work for Linux as written. The patch was supposed to fix the space handling for SPM tokenization that the README talks about, but I don't think it works against the latest llama.cpp versions anymore. I am trying to find a more durable solution.
Where are we supposed to clone imgui and llama.cpp? This is the best I found so far (it doesn't work):
Actually, a better way might be to use git submodules. What do you think?
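For reference, pinning both dependencies as submodules could look roughly like the sketch below; the `third_party/` paths are only an illustration, not the project's actual layout.

```sh
# Hedged sketch: track imgui and llama.cpp at fixed commits inside the repo.
git submodule add https://github.com/ocornut/imgui.git third_party/imgui
git submodule add https://github.com/ggerganov/llama.cpp.git third_party/llama.cpp
git commit -m "Add imgui and llama.cpp as submodules"

# Users would then clone with:
#   git clone --recurse-submodules <autopen-url>
# or, after a plain clone:
#   git submodule update --init --recursive
```

A side benefit is that the submodule pins the exact llama.cpp commit the code was tested against, which also sidesteps the "works only against older llama.cpp" problem mentioned above.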
As per the current README I get this error:
The build error should hopefully be fixed and the build instructions updated. Thanks for pointing me at
The instructions work fine, but there is a build error:

```
autopen/tokentree.cpp:683:57: error: ‘max_element’ is not a member of ‘std’; did you mean ‘tuple_element’?
  683 |     float max_logit = *std::max_element(logits, logits+n_vocab);
      |                             ^~~~~~~~~~~
      |                             tuple_element
```

That is solved by adding `#include <algorithm>`. However, there are still many library linking (ld) issues with it.
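For context, ld errors at this stage usually come down to how the llama.cpp libraries were built and where they ended up. A generic static build looks roughly like this; the flags and directory names are assumptions, not autopen's documented procedure:

```sh
# Hedged sketch: build llama.cpp as static libraries in a local build/ directory.
cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=OFF   # standard CMake switch; the default varies by llama.cpp version
cmake --build build -j
# The resulting libllama.a / libggml*.a then need to be on autopen's linker search path.
```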
Closing it now.
I added that include (to tokentree.cpp, since it doesn't seem necessary in the .h). What linking issues are you getting? Are you installing llama.cpp with
I tried the new instructions from the updated README, but then it threw the error about libvulkan. Other than that, I have tried building llama.cpp statically and otherwise, for both CUDA and CPU; I just cannot get it to work. If I may suggest: try using Docker for building. I did that for a C project here: https://github.com/IAmAnubhavSaini/kilo-text/blob/main/Dockerfile
That may be a way to contain the variability and uncertainties.
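To make the Docker suggestion concrete, one lightweight variant is to run the build inside a throwaway container rather than writing a full Dockerfile; the image tag, package list, and cmake invocation below are assumptions, not something tested against autopen:

```sh
# Hedged sketch: build in a clean Debian container so the host toolchain doesn't matter.
docker run --rm -it -v "$PWD":/src -w /src debian:bookworm bash -c '
  apt-get update &&
  apt-get install -y build-essential cmake git &&
  cmake -B build &&
  cmake --build build -j
'
```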
Huh. But I already added an include for `<algorithm>`. Regarding libvulkan, could you try with a completely fresh checkout of llama.cpp? I can't reproduce your problem on a fresh Debian machine, and it might be that you have LLAMA_VULKAN or something similar stuck on in CMakeCache.txt from a previous build. Thanks!

A dependency as heavy as Docker for building seems like overkill to me. One of the big benefits of llama.cpp is precisely that it doesn't have the overabundance of transitive dependencies that typical "industrial-strength" ML frameworks require...
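A quick way to check for the stale-cache scenario described above (the grep is only a diagnostic, and the paths assume llama.cpp sits in a subdirectory with a build/ folder; the configure step is generic, not autopen-specific):

```sh
# Hedged sketch: look for Vulkan-related options left over from an earlier configure.
grep -i vulkan llama.cpp/build/CMakeCache.txt

# If anything is stuck on, wipe the build directory (or re-clone) and configure again:
rm -rf llama.cpp/build
cmake -S llama.cpp -B llama.cpp/build
cmake --build llama.cpp/build -j
```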
On a fresh clone, following the instructions:
That's very strange. Can you run the following commands in the autopen root folder and tell me their output?

```
find -name libllama.a -print
find -name libggml-base.a -print
```

(I expect: ) Additionally, the output of