Problems building `triton` v3.2.0 in offline mode · Issue #6919 · triton-lang/triton · GitHub
Problems building triton v3.2.0 in offline mode #6919
Closed

@alpetukhov

Description

Dear authors,

I'm trying to build triton on a server with constrained internet access.
I've had no problems building the current main-branch version with the following commands:

export LLVM_BUILD_DIR=/home2/apetukho/llm-server/llvm-project/build
export TRITON_PTXAS_PATH=/usr/local/cuda-11.8/bin/ptxas
export TRITON_CUOBJDUMP_PATH=/usr/local/cuda-11.8/bin/cuobjdump                     
export TRITON_NVDISASM_PATH=/usr/local/cuda-11.8/bin/nvdisasm  
LLVM_INCLUDE_DIRS=$LLVM_BUILD_DIR/include \
  LLVM_LIBRARY_DIR=$LLVM_BUILD_DIR/lib \
  LLVM_SYSPATH=$LLVM_BUILD_DIR \
  TRITON_OFFLINE_BUILD=TRUE \
  JSON_SYSPATH=/home2/apetukho/llm-server/json \
  TRITON_CUPTI_INCLUDE_PATH=/usr/local/cuda-11.8/include \
  TRITON_CUDACRT_PATH=/usr/local/cuda-11.8/include \
  TRITON_CUDART_PATH=/usr/local/cuda-11.8/include \
  pip install -e .

LLVM is already built, and the JSON library is unpacked in the corresponding directories.
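Before the long compile starts, it may be worth sanity-checking that every path the build environment references actually exists. A minimal shell sketch (the paths are the ones from the commands above; `check_paths` is a hypothetical helper of mine, not part of triton):

```shell
# Hypothetical pre-flight check: report any referenced path that does
# not exist, so a typo is caught before the build runs for minutes.
check_paths() {
  missing=0
  for p in "$@"; do
    if [ ! -e "$p" ]; then
      echo "missing: $p"
      missing=1
    fi
  done
  return $missing
}

check_paths \
  "$LLVM_BUILD_DIR/include" \
  "$LLVM_BUILD_DIR/lib" \
  "$TRITON_PTXAS_PATH" \
  "$JSON_SYSPATH" \
  /usr/local/cuda-11.8/include \
  || echo "fix the missing paths before running pip install"
```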

However, if I try to build version 3.2.0 (from the release/3.2.x branch) with the same commands (replacing `pip install -e .` with `pip install -e python`), I get the following error message:

[70/230] /usr/bin/c++  -I/home2/apetukho/llm-server/triton/python/build/cmake.linux-x86_64-cpython-3.12/third_party/nvidia -I/home2/apetukho/llm-server/triton/third_party/nvidia -I/home2/apetukho/llm-server/triton/include -I/home2/apetukho/llm-server/triton/. -I/home2/apetukho/llm-server/llvm-project/mlir/include -I/home2/apetukho/llm-server/llvm-project/build/tools/mlir/include -I/home2/apetukho/llm-server/llvm-project/llvm/include -I/home2/apetukho/llm-server/llvm-project/build/include -I/home2/apetukho/llm-server/triton/python/build/cmake.linux-x86_64-cpython-3.12/include -I/home2/apetukho/llm-server/triton/third_party -I/home2/apetukho/llm-server/triton/python/build/cmake.linux-x86_64-cpython-3.12/third_party -I/home2/apetukho/llm-server/triton/python/src -I/usr/local/include/python3.12 -I/tmp/pip-build-env-vs6_kj04/overlay/lib/python3.12/site-packages/pybind11/include -I/home2/apetukho/llm-server/triton/third_party/nvidia/include -I/home2/apetukho/llm-server/triton/python/build/cmake.linux-x86_64-cpython-3.12/third_party/nvidia/include -D__STDC_FORMAT_MACROS  -fPIC -std=gnu++17 -Werror -Wno-covered-switch-default -fvisibility=hidden -O2 -g -std=gnu++17 -MD -MT third_party/nvidia/CMakeFiles/TritonNVIDIA.dir/triton_nvidia.cc.o -MF third_party/nvidia/CMakeFiles/TritonNVIDIA.dir/triton_nvidia.cc.o.d -o third_party/nvidia/CMakeFiles/TritonNVIDIA.dir/triton_nvidia.cc.o -c /home2/apetukho/llm-server/triton/third_party/nvidia/triton_nvidia.cc
      FAILED: third_party/nvidia/CMakeFiles/TritonNVIDIA.dir/triton_nvidia.cc.o
      /usr/bin/c++  -I/home2/apetukho/llm-server/triton/python/build/cmake.linux-x86_64-cpython-3.12/third_party/nvidia -I/home2/apetukho/llm-server/triton/third_party/nvidia -I/home2/apetukho/llm-server/triton/include -I/home2/apetukho/llm-server/triton/. -I/home2/apetukho/llm-server/llvm-project/mlir/include -I/home2/apetukho/llm-server/llvm-project/build/tools/mlir/include -I/home2/apetukho/llm-server/llvm-project/llvm/include -I/home2/apetukho/llm-server/llvm-project/build/include -I/home2/apetukho/llm-server/triton/python/build/cmake.linux-x86_64-cpython-3.12/include -I/home2/apetukho/llm-server/triton/third_party
-I/home2/apetukho/llm-server/triton/python/build/cmake.linux-x86_64-cpython-3.12/third_party -I/home2/apetukho/llm-server/triton/python/src -I/usr/local/include/python3.12 -I/tmp/pip-build-env-vs6_kj04/overlay/lib/python3.12/site-packages/pybind11/include -I/home2/apetukho/llm-server/triton/third_party/nvidia/include -I/home2/apetukho/llm-server/triton/python/build/cmake.linux-x86_64-cpython-3.12/third_party/nvidia/include -D__STDC_FORMAT_MACROS  -fPIC -std=gnu++17 -Werror -Wno-covered-switch-default -fvisibility=hidden -O2 -g -std=gnu++17 -MD -MT third_party/nvidia/CMakeFiles/TritonNVIDIA.dir/triton_nvidia.cc.o -MF third_party/nvidia/CMakeFiles/TritonNVIDIA.dir/triton_nvidia.cc.o.d -o third_party/nvidia/CMakeFiles/TritonNVIDIA.dir/triton_nvidia.cc.o -c /home2/apetukho/llm-server/triton/third_party/nvidia/triton_nvidia.cc
      In file included from /home2/apetukho/llm-server/triton/third_party/nvidia/include/cublas_instance.h:4,
                       from /home2/apetukho/llm-server/triton/third_party/nvidia/triton_nvidia.cc:4:
      /home2/apetukho/llm-server/triton/third_party/nvidia/include/cublas_types.h:6:10: fatal error: backend/include/cuda.h: No such file or directory
          6 | #include "backend/include/cuda.h"
            |          ^~~~~~~~~~~~~~~~~~~~~~~~
      compilation terminated.

I've tried changing both of the includes at
https://github.com/triton-lang/triton/blob/9641643da6c52000c807b5eeed05edaec4402a67/third_party/nvidia/include/cublas_types.h#L6-7
to

#include <cuda.h>
#include <driver_types.h>

hoping they would be picked up from my local CUDA installation, but the build still fails with the same error.
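One more thing I could try (a sketch, under the assumption that after the edit the only remaining problem is the compiler's header search path; `CPLUS_INCLUDE_PATH` is treated as an implicit `-I` by g++):

```shell
# Assumption: after switching cublas_types.h to <cuda.h> and
# <driver_types.h>, g++ just needs the local CUDA headers on its
# search path. CPLUS_INCLUDE_PATH acts as an implicit -I for g++.
export CUDA_HOME=/usr/local/cuda-11.8
export CPLUS_INCLUDE_PATH="$CUDA_HOME/include${CPLUS_INCLUDE_PATH:+:$CPLUS_INCLUDE_PATH}"
# then rerun:  pip install -e python
```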

I've seen in #5449 (comment) that in offline mode I'm responsible for downloading the packages and placing them in the correct folders, but I don't see why this way of building works for 3.3.0 but not for 3.2.0.

Could you please advise me on how to overcome this issue and build triton 3.2.0 in offline mode?

Thanks in advance,
Aleksandr.
