Missing CPU node attributes with topology injection · Issue #379 · NVIDIA/nccl · GitHub

Missing CPU node attributes with topology injection #379


Closed
rashikakheria opened this issue Aug 28, 2020 · 1 comment

@rashikakheria
Contributor

NCCL detects a CPU node only when one of its underlying GPU nodes is in use. So on a 2-socket system with 4 GPUs per socket, if fewer than 5 GPUs are used, the second CPU node and its attributes are never detected. This triggers the following error when a topology is injected via NCCL_TOPO_FILE:

graph/xml.h:77 NCCL WARN Attribute arch of node cpu not found
NCCL INFO graph/topo.cc:380 -> 3
NCCL INFO graph/topo.cc:466 -> 3
NCCL INFO graph/topo.cc:569 -> 3
NCCL INFO init.cc:581 -> 3
NCCL INFO init.cc:840 -> 3
NCCL INFO group.cc:73 -> 3 [Async thread]

To fix this, NCCL should read the information for all CPUs on the system whenever the injected topology provides it.
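
As a quick diagnostic before launching, one can scan the injected topology file for `<cpu>` nodes that lack the attribute named in the warning. Below is a minimal sketch; the attribute list is an assumption, not NCCL's canonical set (`arch` is the attribute from the warning above, while `affinity` and `numaid` may or may not be required depending on the NCCL version):

```python
# check_topo.py -- scan an NCCL topology XML for <cpu> nodes with missing
# attributes. A diagnostic sketch only; EXPECTED is an assumption, not
# NCCL's canonical attribute set.
import sys
import xml.etree.ElementTree as ET

# 'arch' is the attribute named in the warning above; the others are guesses
# that may vary across NCCL versions.
EXPECTED = ("arch", "affinity", "numaid")

def check(path):
    root = ET.parse(path).getroot()
    ok = True
    for cpu in root.iter("cpu"):  # finds <cpu> nodes at any depth
        missing = [a for a in EXPECTED if a not in cpu.attrib]
        if missing:
            ok = False
            print(f"cpu node {cpu.attrib} missing: {', '.join(missing)}")
    return ok

if __name__ == "__main__":
    sys.exit(0 if check(sys.argv[1]) else 1)
```

Running it against the file passed in NCCL_TOPO_FILE (e.g. `python check_topo.py topo.xml`) helps distinguish a genuinely incomplete injected topology from the detection gap described above.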

@sjeaugey
Member
sjeaugey commented Oct 7, 2020

This should be fixed in 2.8 (see preview branch).
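
For anyone checking whether their build already carries the fix, a quick version check works; here is a minimal sketch using PyTorch's NCCL binding (this assumes PyTorch is installed and is just one convenient way to query the linked NCCL version):

```python
# Print the NCCL version linked into PyTorch; a quick way to confirm
# whether the running build is >= 2.8, where this issue is fixed.
import torch

v = torch.cuda.nccl.version()
if isinstance(v, int):
    # Older PyTorch returns NCCL's packed integer, e.g. 2708 for 2.7.8.
    v = (v // 1000, (v % 1000) // 100, v % 100)
print("NCCL version:", ".".join(map(str, v)))
print("has the fix (>= 2.8):", v[:2] >= (2, 8))
```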

sjeaugey added a commit that referenced this issue Nov 6, 2020
Optimization for Tree allreduce on A100.
Improve aggregation performance.
Use shared buffers for inter-node send/recv.
Add NVTX profiling hooks.
Accelerate alltoall connections by merging communication for all channels.
Add support for one-hop communication through NVLink, for faster send/recv communication on cube-mesh topologies like DGX-1.
Improve alltoall scheduling to better balance intra/inter-node communication.
Increase send/recv parallelism by 8x, each warp sending or receiving to a different peer.
Net: move to v4.
Net: make flush operation asynchronous to accelerate alltoall.
Net: define maximum number of requests.
Fix hang when using LL128 protocol after 2^31 steps.
Fix #379: topology injection failing when using fewer GPUs than described in the XML.
Fix #394: protocol mismatch causing hangs or crashes when using one GPU per node.
mackrorysd pushed a commit to mackrorysd/nccl that referenced this issue Apr 13, 2021 (same commit message as above).
yinwaii pushed a commit to yinwaii/nccl that referenced this issue Nov 17, 2022 (same commit message as above).