Fedora 42 install/build fails #3356
Open

Description

@mhrivnak

Describe the bug
The instructions for installing on Linux with an AMD GPU fail.

To Reproduce
Steps to reproduce the behavior:
Per #2422, I am installing instructlab instead of instructlab[rocm].

I run this:

python3.11 -m venv --upgrade-deps venv
source venv/bin/activate
pip cache remove llama_cpp_python
pip install 'instructlab' \
-C cmake.args="-DLLAMA_HIPBLAS=on" \
-C cmake.args="-DAMDGPU_TARGETS=all" \
-C cmake.args="-DCMAKE_C_COMPILER=clang-17" \
-C cmake.args="-DCMAKE_CXX_COMPILER=clang++-17" \
-C cmake.args="-DLLAMA_NATIVE=off"
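Aside: the configure log below reports GNU 15.1.1 rather than clang-17, so the repeated -C cmake.args flags may not all be reaching CMake (older pip releases keep only the last --config-settings value for a given key). A variant worth trying, assuming scikit-build-core accepts a single semicolon-separated cmake.args string, passes everything in one flag; untested on this setup:

pip install 'instructlab' \
  -C cmake.args="-DLLAMA_HIPBLAS=on;-DAMDGPU_TARGETS=all;-DCMAKE_C_COMPILER=clang-17;-DCMAKE_CXX_COMPILER=clang++-17;-DLLAMA_NATIVE=off"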

Expected behavior
It works!

Actual behavior
It gets as far as building a llama_cpp_python wheel and then fails.

Building wheels for collected packages: llama_cpp_python
  Building wheel for llama_cpp_python (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Building wheel for llama_cpp_python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [154 lines of output]
      *** scikit-build-core 0.11.2 using CMake 3.31.6 (wheel)
      *** Configuring CMake...
      loading initial cache file /tmp/tmpsg8h9u5g/build/CMakeInit.txt
      -- The C compiler identification is GNU 15.1.1
      -- The CXX compiler identification is GNU 15.1.1
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/gcc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/g++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Git: /usr/bin/git (found version "2.49.0")
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
      -- Found Threads: TRUE
      -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
      -- CMAKE_SYSTEM_PROCESSOR: x86_64
      -- Including CPU backend
      -- Found OpenMP_C: -fopenmp (found version "4.5")
      -- Found OpenMP_CXX: -fopenmp (found version "4.5")
      -- Found OpenMP: TRUE (found version "4.5")
      -- x86 detected
      -- Adding CPU backend variant ggml-cpu: -march=native
      CMake Warning (dev) at CMakeLists.txt:13 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      Call Stack (most recent call first):
        CMakeLists.txt:97 (llama_cpp_python_install_target)
      This warning is for project developers.  Use -Wno-dev to suppress it.
      
      CMake Warning (dev) at CMakeLists.txt:21 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      Call Stack (most recent call first):
        CMakeLists.txt:97 (llama_cpp_python_install_target)
      This warning is for project developers.  Use -Wno-dev to suppress it.
      
      CMake Warning (dev) at CMakeLists.txt:13 (install):
        Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      Call Stack (most recent call first):
        CMakeLists.txt:98 (llama_cpp_python_install_target)
      This warning is for project developers.  Use -Wno-dev to suppress it.
      
      CMake Warning (dev) at CMakeLists.txt:21 (install):
        Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      Call Stack (most recent call first):
        CMakeLists.txt:98 (llama_cpp_python_install_target)
      This warning is for project developers.  Use -Wno-dev to suppress it.
      
      -- Configuring done (0.4s)
      -- Generating done (0.0s)
      -- Build files have been written to: /tmp/tmpsg8h9u5g/build
      *** Building project with Ninja...
      Change Dir: '/tmp/tmpsg8h9u5g/build'
      
      Run Build Command(s): ninja -v
      [1/60] /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-hbm.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-hbm.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-hbm.cpp.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-hbm.cpp
      [2/60] /usr/bin/g++ -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-threading.cpp
      [3/60] /usr/bin/gcc -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-alloc.c
      [4/60] /usr/bin/g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/../include -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-hparams.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-hparams.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-hparams.cpp.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-hparams.cpp
      [5/60] /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-traits.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-traits.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-traits.cpp.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-traits.cpp
      [6/60] /usr/bin/gcc -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wdouble-promotion -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-quants.c.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-quants.c.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/ggml-cpu-quants.c.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-cpu/ggml-cpu-quants.c
      [7/60] /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/amx.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/amx.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/amx.cpp.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-cpu/amx/amx.cpp
      [8/60] /usr/bin/g++ -DGGML_BACKEND_BUILD -DGGML_BACKEND_SHARED -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -DGGML_USE_CPU_AARCH64 -DGGML_USE_LLAMAFILE -DGGML_USE_OPENMP -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_cpu_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/.. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-cpu -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -march=native -fopenmp -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-cpu/amx/mmq.cpp
      [9/60] /usr/bin/g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/../include -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-arch.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-arch.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-arch.cpp.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-arch.cpp
      [10/60] /usr/bin/g++ -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-opt.cpp
      [11/60] /usr/bin/g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/../include -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-impl.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-impl.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-impl.cpp.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-impl.cpp
      [12/60] /usr/bin/g++ -DGGML_BUILD -DGGML_SCHED_MAX_COPIES=4 -DGGML_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -Dggml_base_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -MF vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o.d -o vendor/llama.cpp/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/ggml-backend.cpp
      [13/60] /usr/bin/g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/../include -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-mmap.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-mmap.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-mmap.cpp.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.cpp
      FAILED: vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-mmap.cpp.o
      /usr/bin/g++ -DGGML_BACKEND_SHARED -DGGML_SHARED -DGGML_USE_CPU -DLLAMA_BUILD -DLLAMA_SHARED -Dllama_EXPORTS -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/. -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/../include -I/tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/ggml/src/../include -O3 -DNDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wextra-semi -MD -MT vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-mmap.cpp.o -MF vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-mmap.cpp.o.d -o vendor/llama.cpp/src/CMakeFiles/llama.dir/llama-mmap.cpp.o -c /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.cpp
      In file included from /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.cpp:1:
      /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.h:26:5: error: ‘uint32_t’ does not name a type
         26 |     uint32_t read_u32() const;
            |     ^~~~~~~~
      /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.h:5:1: note: ‘uint32_t’ is defined in header ‘<cstdint>’; this is probably fixable by adding ‘#include <cstdint>’
          4 | #include <vector>
        +++ |+#include <cstdint>
          5 |
      /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.h:29:20: error: ‘uint32_t’ has not been declared
         29 |     void write_u32(uint32_t val) const;
            |                    ^~~~~~~~
      /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.h:29:20: note: ‘uint32_t’ is defined in header ‘<cstdint>’; this is probably fixable by adding ‘#include <cstdint>’
      /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.cpp:259:10: error: no declaration matches ‘uint32_t llama_file::read_u32() const’
        259 | uint32_t llama_file::read_u32() const { return pimpl->read_u32(); }
            |          ^~~~~~~~~~
      /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.cpp:259:10: note: no functions named ‘uint32_t llama_file::read_u32() const’
      /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.h:14:8: note: ‘struct llama_file’ defined here
         14 | struct llama_file {
            |        ^~~~~~~~~~
      /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.cpp:262:6: error: no declaration matches ‘void llama_file::write_u32(uint32_t) const’
        262 | void llama_file::write_u32(uint32_t val) const { pimpl->write_u32(val); }
            |      ^~~~~~~~~~
      /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.h:29:10: note: candidate is: ‘void llama_file::write_u32(int) const’
         29 |     void write_u32(uint32_t val) const;
            |          ^~~~~~~~~
      /tmp/pip-install-gy_yobwa/llama-cpp-python_1b37f80ca3994882befad1141843605b/vendor/llama.cpp/src/llama-mmap.h:14:8: note: ‘struct llama_file’ defined here
         14 | struct llama_file {
            |        ^~~~~~~~~~
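
The root cause is that GCC 15's libstdc++ no longer pulls in <cstdint> transitively, so the vendored llama-mmap.h uses uint32_t without declaring it; the diagnostic itself says this is "probably fixable by adding #include <cstdint>". Until llama_cpp_python ships a fixed vendored llama.cpp, a possible workaround, assuming the missing <cstdint> declarations are the only compile failures, is to force that header into every C++ translation unit via CXXFLAGS, which CMake reads at configure time (a sketch, untested on this exact setup):

# Same install command as above, but prepend <cstdint> to each C++ file
CXXFLAGS="-include cstdint" pip install 'instructlab' \
  -C cmake.args="-DLLAMA_HIPBLAS=on" \
  -C cmake.args="-DAMDGPU_TARGETS=all" \
  -C cmake.args="-DCMAKE_C_COMPILER=clang-17" \
  -C cmake.args="-DCMAKE_CXX_COMPILER=clang++-17" \
  -C cmake.args="-DLLAMA_NATIVE=off"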

Device Info (please complete the following information):

  • Hardware Specs: Intel Core Ultra 7 265K, Radeon 6750 XT
  • OS Version: Fedora 42
  • Python Version: Python 3.11.12
  • InstructLab Version: instructlab 0.26.0 (pip used the cached instructlab-0.26.0-py3-none-any.whl)

Additional context

Metadata

Assignees: none
Labels: bug (Something isn't working)
Type: none
Projects: none
Milestone: none
Development: no branches or pull requests