8000 Update README.md by IAmAnubhavSaini · Pull Request #1 · blackhole89/autopen · GitHub

Update README.md #1


Open · wants to merge 1 commit into master
Conversation

@IAmAnubhavSaini

Doesn't build on Linux yet.

@blackhole89 (Owner)

Thanks for testing it! I briefly switched development to Windows because that was the only machine I had a working CUDA setup on. I'll check shortly whether your instructions work for Linux as written.

The patch was supposed to fix the space handling for SPM tokenization that the README talks about, but I don't think it works against the latest llama.cpp versions anymore. I am trying to find a more durable solution.

@IAmAnubhavSaini (Author)

Where are we supposed to clone imgui and llama.cpp?

The best layout I've found so far (it still doesn't work) is the following; rough clone commands are sketched below the list:

  • autopen (say, ~/src/autopen)
    • imgui (~/src/autopen/imgui)
  • llama.cpp (~/src/llama.cpp)
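
Concretely, that layout corresponds to something like this (the upstream URLs are my guesses, and the build still fails afterwards):

cd ~/src
git clone https://github.com/blackhole89/autopen
git clone https://github.com/ocornut/imgui autopen/imgui
git clone https://github.com/ggerganov/llama.cpp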

@IAmAnubhavSaini (Author)

Actually, a better way might be to use git-submodules. What do you think?
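
Roughly, I mean something like this (illustrative only; the URLs and paths would be whatever you prefer):

git submodule add https://github.com/ocornut/imgui imgui
git submodule add https://github.com/ggerganov/llama.cpp llama.cpp
git submodule update --init --recursive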

@IAmAnubhavSaini (Author)

As per the current README, I get this error:

autopen/llama.cpp/common/common.h:5:10: fatal error: llama-cpp.h: No such file or directory
    5 | #include "llama-cpp.h"
      |          ^~~~~~~~~~~~~
compilation terminated.
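
The header does seem to exist somewhere in llama.cpp's own tree, so I suspect an include path is missing from the build flags (just a guess; run this from wherever llama.cpp is cloned):

find llama.cpp -name 'llama-cpp.h' -print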

@blackhole89 (Owner)

The build error should hopefully be fixed and the build instructions updated. Thanks for pointing me at git submodules - I hadn't tried using them before, but the approach seems reasonable and I hope I set it up correctly.

@IAmAnubhavSaini (Author)

The instructions themselves work fine now, but there is a build error:

autopen/tokentree.cpp:683:57: error: ‘max_element’ is not a member of ‘std’; did you mean ‘tuple_element’?
  683 |                                 float max_logit = *std::max_element(logits, logits+n_vocab);
      |                                                         ^~~~~~~~~~~
      |                                                         tuple_element

@IAmAnubhavSaini (Author)

That is solved by adding #include <algorithm> at the top of tokentree.h.
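
For example, something like this applies it quickly (GNU sed; purely illustrative):

sed -i '1i #include <algorithm>' tokentree.h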

However, there are still many linker (ld) issues after that. libvulkan-dev needs to be installed, and llama.cpp seems to need a sudo make install or something similar...
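
What I ended up trying, roughly (Debian/Ubuntu package name; the install step may well be the wrong approach):

sudo apt install libvulkan-dev
cd llama.cpp
cmake -B build && cmake --build build
sudo cmake --install build    # or: sudo make install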

@IAmAnubhavSaini (Author)

Closing it now.

@blackhole89 (Owner)

I added that include (to tokentree.cpp, since it doesn't seem necessary in the .h).

What linking issues are you getting? Are you installing llama.cpp via the git submodule, as in the new instructions I wrote? I thought its default configuration in the repo creates static libraries on Linux, which end up in folders my CMakeLists should already pick up, and does not enable (and hence does not depend on) Vulkan.
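
For reference, the flow I had in mind is roughly this (paraphrasing from memory; the README is authoritative and the exact CMake flags may differ):

git submodule update --init
cd llama.cpp
cmake . -DBUILD_SHARED_LIBS=OFF    # static libs should land under src/ and ggml/src/
make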

@IAmAnubhavSaini (Author)

I tried the new instructions from the updated README, but it then threw the errors about <algorithm> and libvulkan.

Other than that, I have tried building llama.cpp both statically and dynamically, for CUDA and for CPU; I just cannot get it to work.

If I may suggest: try using Docker for the build. I did that for a C project here: https://github.com/IAmAnubhavSaini/kilo-text/blob/main/Dockerfile

That may be a way to contain the variability and uncertainties.
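
Something along these lines, purely as a sketch (the image name and mount are made up):

docker build -t autopen-build .
docker run --rm -v "$PWD:/src" -w /src autopen-build make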

@blackhole89 (Owner) commented Feb 16, 2025

Huh. But I already added an include for <algorithm> in tokentree.cpp last time. Are you up to date with the master branch?

Regarding libvulkan, could you try with a completely fresh checkout of llama.cpp? I can't reproduce your problem on a fresh Debian machine, and it might be the case that you have LLAMA_VULKAN or something stuck on in CMakeCache.txt from a previous build. Thanks!
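
Something like this should show whether a stale option is still set (the variable may be called LLAMA_VULKAN or GGML_VULKAN depending on the llama.cpp version):

grep -i vulkan llama.cpp/CMakeCache.txt
rm -rf llama.cpp/CMakeCache.txt llama.cpp/CMakeFiles    # wipe stale build state if it is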

A dependency as heavy as Docker seems like overkill for the build. One of the big benefits of llama.cpp is precisely that it doesn't have the overabundance of transitive dependencies that typical "industrial-strength" ML frameworks require...

@IAmAnubhavSaini (Author)

On a fresh clone, following the instructions:

[100%] Linking CXX executable cmake-build-Debug/output/autopen
/usr/bin/ld: cannot find -lllama: No such file or directory
/usr/bin/ld: have you installed the static version of the llama library ?
/usr/bin/ld: cannot find -lggml: No such file or directory
/usr/bin/ld: have you installed the static version of the ggml library ?
/usr/bin/ld: cannot find -lggml-base: No such file or directory
/usr/bin/ld: have you installed the static version of the ggml-base library ?
/usr/bin/ld: cannot find -lggml-cpu: No such file or directory
/usr/bin/ld: have you installed the static version of the ggml-cpu library ?
collect2: error: ld returned 1 exit status

@blackhole89 (Owner)

That's very strange. Can you run the following commands in the autopen root folder and tell me their output?

find -name libllama.a -print
find -name libggml-base.a -print

(I expect:

./llama.cpp/src/libllama.a
./llama.cpp/ggml/src/libggml-base.a

)

Additionally, the output of make VERBOSE=1 (to see the exact build commands) may be useful.

@blackhole89 reopened this Feb 24, 2025