Rollback CDI support for ECS by arnaldo2792 · Pull Request #480 · bottlerocket-os/bottlerocket-core-kit · GitHub

Rollback CDI support for ECS #480


Merged
arnaldo2792 merged 3 commits into bottlerocket-os:develop on Apr 22, 2025

Conversation

@arnaldo2792 (Contributor) commented Apr 22, 2025

Description of changes:

Roll back the three commits that enabled CDI for ECS. There is a bug in nvidia-ctk: it fails to find libcuda with this error:

pattern libcuda.so.535.247.01 not found
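
For context, the failing step is nvidia-ctk's CDI spec generation; a minimal reproduction sketch, assuming the NVIDIA container toolkit and driver are installed (the output path is illustrative):

```sh
# Ask nvidia-ctk to generate a CDI spec for the installed NVIDIA devices.
# With the affected nvidia-ctk version, driver-library discovery fails to
# resolve libcuda, e.g. "pattern libcuda.so.<driver-version> not found".
nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
```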

Testing done:

  • Launched aws-ecs-2-nvidia on aarch64 (instance type g5g.2xlarge) and confirmed that it booted:
[root@admin]# apiclient get os
{
  "os": {
    "arch": "aarch64",
    "build_id": "1e4b15179-dirty",
    "pretty_name": "Bottlerocket OS 1.37.0 (aws-ecs-2-nvidia)",
    "variant_id": "aws-ecs-2-nvidia",
    "version_id": "1.37.0"
  }
}
[root@admin]# uname -a
Linux ip-172-31-31-172.us-west-2.compute.internal 6.1.132 #1 SMP Mon Apr 21 16:58:15 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
  • Confirmed that the faulty service isn't present anymore:
[root@admin]# sheltie systemctl status generate-cdi-specs.service
Unit generate-cdi-specs.service could not be found.
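
As a supplementary check (not part of the original output), one could also confirm that no generated CDI specs were left behind; a sketch, assuming the conventional CDI spec directories:

```sh
# From the admin container: CDI specs conventionally live under /etc/cdi
# and /var/run/cdi on the host; after the rollback neither should exist.
ls /.bottlerocket/rootfs/etc/cdi /.bottlerocket/rootfs/var/run/cdi 2>/dev/null \
  || echo "no CDI spec directories found"
```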
  • Confirmed that the Docker configuration doesn't include CDI enablement:
[root@admin]# cat /.bottlerocket/rootfs/etc/docker/daemon.json
{
  "log-driver": "journald",
  "live-restore": true,
  "max-concurrent-downloads": 10,
  "storage-driver": "overlay2",
  "data-root": "/var/lib/docker",
  "runtimes": { "nvidia": { "path": "nvidia-container-runtime" } },
  "default-capabilities": [
    "CAP_AUDIT_WRITE",
    "CAP_CHOWN",
    "CAP_DAC_OVERRIDE",
    "CAP_FOWNER",
    "CAP_FSETID",
    "CAP_KILL",
    "CAP_MKNOD",
    "CAP_NET_BIND_SERVICE",
    "CAP_NET_RAW",
    "CAP_SETFCAP",
    "CAP_SETGID",
    "CAP_SETPCAP",
    "CAP_SETUID",
    "CAP_SYS_CHROOT"
  ],
  "default-ulimits": {
    "nofile": { "Name": "nofile", "Hard": 4096, "Soft": 1024 }
  },
  "selinux-enabled": true
}
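
For reference, CDI enablement in Docker corresponds to documented daemon.json options; a sketch of what such an entry generally looks like (the exact shape used by the reverted commits isn't shown here):

```json
{
  "features": { "cdi": true },
  "cdi-spec-dirs": ["/etc/cdi", "/var/run/cdi"]
}
```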
  • Confirmed that the nvidia-container-runtime section isn't present:
[root@admin]# cat /.bottlerocket/rootfs/etc/nvidia-container-runtime/config.toml
[nvidia-container-cli]
root = "/"
path = "/usr/bin/nvidia-container-cli"
environment = []
ldconfig = "@/sbin/ldconfig"
[root@admin]#
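
For comparison, when CDI is enabled the toolkit's config.toml selects it through an [nvidia-container-runtime] section; a sketch of what the removed section generally looks like (value illustrative):

```toml
[nvidia-container-runtime]
mode = "cdi"
```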
  • Confirmed that the CUDA samples work as expected:
[root@48f9c38e0ccd samples]# for s in $(ls); do ./${s}; done
GPU Device 0: "Turing" with compute capability 7.5

Running ........................................................

Overall Time For matrixMultiplyPerf

Printing Average of 20 measurements in (ms)
Size_KB  UMhint UMhntAs  UMeasy   0Copy MemCopy CpAsync CpHpglk CpPglAs
4         0.192   0.199   0.310   0.023   0.039   0.030   0.044   0.031
16        0.188   0.221   0.479   0.045   0.059   0.049   0.065   0.065
64        0.336   0.343   0.711   0.140   0.149   0.138   0.145   0.134
256       0.840   0.797   1.255   0.741   0.545   0.522   0.499   0.491
1024      3.081   2.799   3.532   4.642   2.196   2.117   2.031   2.022
4096     11.851  10.287  13.522  34.028   8.728   8.662   8.652   8.643
16384    50.883  47.651  60.822 307.175  42.563  42.538  42.655  42.613

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA T4G"
  CUDA Driver Version / Runtime Version          12.2 / 11.4
  CUDA Capability Major/Minor version number:    7.5
  Total amount of global memory:                 14931 MBytes (15655829504 bytes)
  (040) Multiprocessors, (064) CUDA Cores/MP:    2560 CUDA Cores
  GPU Max Clock rate:                            1590 MHz (1.59 GHz)
  Memory Clock rate:                             5001 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 4194304 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        65536 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1024
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 31
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.2, CUDA Runtime Version = 11.4, NumDevs = 1
Result = PASS
[globalToShmemAsyncCopy] - Starting...
GPU Device 0: "Turing" with compute capability 7.5

MatrixA(1280,1280), MatrixB(1280,1280)
Running kernel = 0 - AsyncCopyMultiStageLargeChunk
Computing result using CUDA Kernel...
done
Performance= 337.35 GFlop/s, Time= 12.433 msec, Size= 4194304000 Ops, WorkgroupSize= 256 threads/block
Checking computed result for correctness: Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
Initializing...
GPU Device 0: "Turing" with compute capability 7.5

M: 4096 (16 x 256)
N: 4096 (16 x 256)
K: 4096 (16 x 256)
Preparing data for GPU...
Required shared memory size: 64 Kb
Computing... using high performance kernel compute_gemm_imma
Time: 5.038272 ms
TOPS: 27.28
reductionMultiBlockCG Starting...

GPU Device 0: "Turing" with compute capability 7.5

33554432 elements
numThreads: 1024
numBlocks: 40

Launching SinglePass Multi Block Cooperative Groups kernel
Average time: 0.891541 ms
Bandwidth:    150.545852 GB/s

GPU result = 1.992401361465
CPU result = 1.992401361465
Starting shfl_scan
GPU Device 0: "Turing" with compute capability 7.5

> Detected Compute SM 7.5 hardware with 40 multi-processors
Starting shfl_scan
GPU Device 0: "Turing" with compute capability 7.5

> Detected Compute SM 7.5 hardware with 40 multi-processors
Computing Simple Sum test
---------------------------------------------------
Initialize test data [1, 1, 1...]
Scan summation for 65536 elements, 256 partial sums
Partial summing 256 elements with 1 blocks of size 256
Test Sum: 65536
Time (ms): 0.022848
65536 elements scanned in 0.022848 ms -> 2868.347412 MegaElements/s
CPU verify result diff (GPUvsCPU) = 0
CPU sum (naive) took 0.053440 ms

Computing Integral Image Test on size 1920 x 1080 synthetic data
---------------------------------------------------
Method: Fast  Time (GPU Timer): 0.052192 ms Diff = 0
Method: Vertical Scan  Time (GPU Timer): 0.109728 ms
CheckSum: 2073600, (expect 1920x1080=2073600)
./simpleAWBarrier starting...
GPU Device 0: "Turing" with compute capability 7.5

Launching normVecByDotProductAWBarrier kernel with numBlocks = 40 blockSize = 1024
Result = PASSED
./simpleAWBarrier completed, returned OK
simpleAtomicIntrinsics starting...
GPU Device 0: "Turing" with compute capability 7.5

Processing time: 116.276001 (ms)
simpleAtomicIntrinsics completed, returned OK
[simpleVoteIntrinsics]
GPU Device 0: "Turing" with compute capability 7.5

> GPU device has 40 Multi-Processors, SM 7.5 compute capabilities

[VOTE Kernel Test 1/3]
        Running <<Vote.Any>> kernel1 ...
        OK

[VOTE Kernel Test 2/3]
        Running <<Vote.All>> kernel2 ...
        OK

[VOTE Kernel Test 3/3]
        Running <<Vote.Any>> kernel3 ...
        OK
        Shutting down...
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
GPU Device 0: "Turing" with compute capability 7.5

CPU max matches GPU max

Warp Aggregated Atomics PASSED
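
The samples above were run inside a container; a minimal sketch of launching such a container through the legacy nvidia runtime registered in daemon.json (the image name is a placeholder, not the image actually used):

```sh
# Request all GPUs via the legacy nvidia-container-runtime path; the image
# name below is hypothetical.
docker run --rm -it --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all cuda-samples:latest
```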

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

@arnaldo2792 arnaldo2792 requested review from KCSesh and yeazelm April 22, 2025 18:49
@koooosh (Contributor) left a comment
Looks like we are keeping this commit from the original PR: #471

Just a callout; I don't have an issue with it.

@arnaldo2792 (Contributor, Author)

Yes, I kept that commit. I can roll it back if you want, but it doesn't have anything to do with the CDI support.

@koooosh (Contributor) commented Apr 22, 2025

I agree, there's no need

@arnaldo2792 arnaldo2792 merged commit f9270e9 into bottlerocket-os:develop Apr 22, 2025
2 checks passed