Explicitly set device id for each MPI call #1054
Conversation
LGTM! Some minor comments, but they shouldn't hold things up
core/device_hooks/cuda_hooks.cpp
Outdated
```cpp
scoped_device_id CudaExecutor::get_scoped_device_id() const
{
    GKO_NOT_COMPILED(cuda);
    return {static_cast<OmpExecutor*>(nullptr), 0};
```
Is this even necessary? With the previous `throw`, does the compiler still complain?
You're right, at least on my system it doesn't complain, so I will remove that and the others.
cuda/test/base/scoped_device_id.cu
Outdated
```cpp
TEST_F(ScopedDeviceId, SetsId)
{
    auto new_device_id = std::max(hip->get_num_devices() - 1, 0);
```
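The quoted hunk is truncated here; presumably the body then installs the guard and checks the active device. A hedged sketch of how it could continue (the `cudaGetDevice` assertion is an assumption, and the fixture member is written as `cuda` even though the quoted line reads `hip`, which looks like a leftover from the HIP twin of this test):

```cpp
TEST_F(ScopedDeviceId, SetsId)
{
    auto new_device_id = std::max(cuda->get_num_devices() - 1, 0);

    gko::detail::cuda_scoped_device_id guard(new_device_id);

    int current_device = 0;
    cudaGetDevice(&current_device);  // query the device made active by the guard
    ASSERT_EQ(current_device, new_device_id);
}
```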
we only run CI on systems with at most one GPU of a kind ATM, so this will probably not show anything?
I will add a CI job on Horeka using multiple GPUs, since that is also interesting for our GPU-aware MPI stuff. Then these tests would make more sense.
Sounds great! Could probably do the same thing on our AMD machine, since we already had gitlab-runner set up there before. cc @tcojean
This should indeed be usable, since yesterday I fixed the machines and runners and re-enabled nla-gpu1 as a target.
include/ginkgo/core/base/mpi.hpp
Outdated
```cpp
 * @param num_devices the number of devices per node.
 * @return device id that this rank should use.
 */
inline int map_rank_to_device_id(MPI_Comm comm, const int num_devices)
```
IMO that should move to a source file
include/ginkgo/core/base/mpi.hpp
Outdated
```cpp
{
    unsigned size = num_elems * sizeof(ValueType);
    if (c_type == create_type::create) {
        auto guard = this->exec_->get_scoped_device_id();
```
even with GKO_NOT_IMPLEMENTED in the other branch, we could avoid the duplication by moving it up the scope
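To spell the suggestion out with a reduced, self-contained stand-in (names invented for the sketch; in the PR the guard is the executor's scoped device id and the branch wraps `MPI_Win_create`):

```cpp
#include <iostream>
#include <stdexcept>

// A stand-in scope guard; the PR's guard sets/restores the CUDA/HIP device.
struct scope_guard {
    scope_guard() { std::cout << "device id set\n"; }
    ~scope_guard() { std::cout << "device id restored\n"; }
};

void create(bool supported)
{
    // hoisted above the branch: one guard now covers both arms
    scope_guard guard;
    if (supported) {
        std::cout << "create the MPI window under the guard\n";
    } else {
        // stands in for GKO_NOT_IMPLEMENTED
        throw std::runtime_error("not implemented");
    }
}
```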
test/utils/executor.hpp
Outdated
```diff
@@ -66,92 +64,41 @@ void init_executor(std::shared_ptr<gko::ReferenceExecutor> ref,
                    std::shared_ptr<gko::CudaExecutor>& exec)
 {
     ASSERT_GT(gko::CudaExecutor::get_num_devices(), 0);
-    exec = gko::CudaExecutor::create(0, ref);
+    exec = gko::CudaExecutor::create(
```
I think it would be better if we had MPI-specific test utils, otherwise running the tests manually might have unexpected side-effects
Some minor comments, but otherwise looks good.
include/ginkgo/core/base/mpi.hpp
Outdated
```cpp
if (auto str = std::getenv("MV2_COMM_WORLD_LOCAL_RANK")) {
    local_rank = std::stoi(str);
} else if (auto str = std::getenv("OMPI_COMM_WORLD_LOCAL_RANK")) {
    local_rank = std::stoi(str);
} else if (auto str = std::getenv("MPI_LOCALRANKID")) {
    local_rank = std::stoi(str);
} else if (auto str = std::getenv("SLURM_LOCALID")) {
    local_rank = std::stoi(str);
} else {
    local_rank = mpi_node_local_rank(comm);
}
```
The problem with this approach is that this list is not exhaustive. I am not sure this will be easy to maintain with different systems possibly having different env vars
Yes, that's true, and that's why I've added the default case. So if we can't determine the local rank from environment variables (which I think is preferable), then we use the MPI construct for that.
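For reference, a minimal sketch of what such an MPI-based fallback can look like, assuming `mpi_node_local_rank` splits the communicator by shared-memory domain (MPI-3); the actual implementation in the PR may differ:

```cpp
#include <mpi.h>

// Compute the rank within the node by splitting the communicator into
// per-node (shared-memory) groups. Sketch only; the name follows the diff above.
int mpi_node_local_rank(MPI_Comm comm)
{
    MPI_Comm local_comm;
    MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL,
                        &local_comm);
    int local_rank = 0;
    MPI_Comm_rank(local_comm, &local_rank);
    MPI_Comm_free(&local_comm);
    return local_rank;
}
```

The resulting node-local rank would then presumably be mapped to a device via something like `local_rank % num_devices`.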
I feel this might create behaviour that the user did not intend or want. For example, Summit uses variables from BSUB, I think. So if the user sets it using those variables, then we just disregard them. In general, I think taking user settings from different places into account is a hard problem, and I don't have a solution right now. But that might be something we have to think about when interfacing with other applications.
In your Summit example, we would not be able to handle it anyway, so I guess nothing is lost.
For interfacing with external libraries, if they already have MPI set up, they can just pass us the local rank. I guess there needs to be a nice way to insert that information.
include/ginkgo/core/base/mpi.hpp
Outdated
```cpp
struct MPI_Comm {};


constexpr MPI_Comm MPI_COMM_WORLD{};
```
This might conflict with other libraries which have similar definitions, for example deal.II or MFEM.
I will remove this. It was only necessary for the tests, to have the same setup for the mpi/non-mpi tests, but I've split that up now.
hip/base/scoped_device_id.hip.cpp
Outdated
```diff
-device_guard& operator=(const device_guard& other) = delete;
-
-device_guard(device_guard&& other) = delete;
+hip_scoped_device_id::hip_scoped_device_id(int device_id)
```
I guess this is not an interface break, because this is not a user-facing header file. It would break the ABI, though.
LGTM! Only minor suggestions
```cpp
 * @param num_devices the number of devices per node.
 * @return device id that this rank should use.
 */
int map_rank_to_device_id(MPI_Comm comm, int num_devices);
```
This should hopefully find its way into #704 at some point
```cpp
class noop_scoped_device_id : public generic_scoped_device_id {};


/**
 * A scoped device id for CUDA.
 */
class cuda_scoped_device_id : public generic_scoped_device_id {
public:
    explicit cuda_scoped_device_id(int device_id);

    ~cuda_scoped_device_id() noexcept(false) override;

    cuda_scoped_device_id(cuda_scoped_device_id&& other) noexcept;

    cuda_scoped_device_id& operator=(cuda_scoped_device_id&& other) noexcept;

private:
    int original_device_id_;
    bool need_reset_;
};


/**
 * A scoped device id for HIP.
 */
class hip_scoped_device_id : public generic_scoped_device_id {
public:
    explicit hip_scoped_device_id(int device_id);

    ~hip_scoped_device_id() noexcept(false) override;

    hip_scoped_device_id(hip_scoped_device_id&& other) noexcept;

    hip_scoped_device_id& operator=(hip_scoped_device_id&& other) noexcept;

private:
    int original_device_id_;
    bool need_reset_;
};


}  // namespace detail
```
these could be moved to the .cpp file
the class definition has to be in some header, because the implementation happens both in core and in cuda/hip. So we could keep it in the public headers or in a new private header. I would lean towards the private header
agree :)
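For completeness: the `generic_scoped_device_id` base class referenced in the declarations above is not shown in the quoted diff. Presumably it is little more than an interface with a throwing destructor, along these lines (assumption, not PR code):

```cpp
// Presumable shape of the base class (sketch): an interface whose destructor
// is allowed to throw, since restoring the original device id can fail.
class generic_scoped_device_id {
public:
    generic_scoped_device_id() = default;

    virtual ~generic_scoped_device_id() noexcept(false) {}
};
```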
```cpp
scoped_device_id(const OmpExecutor* exec, int device_id)
    : scope_(std::make_unique<detail::noop_scoped_device_id>())
{}

/**
 * Create a scoped device id from an CudaExecutor.
 *
 * This will pick the cuda_scoped_device_id.
 *
 * @param exec Not used.
 * @param device_id The device id to use within the scope.
 */
scoped_device_id(const CudaExecutor* exec, int device_id)
    : scope_(std::make_unique<detail::cuda_scoped_device_id>(device_id))
{}

/**
 * Create a scoped device id from an HipExecutor.
 *
 * This will pick the hip_scoped_device_id.
 *
 * @param exec Not used.
 * @param device_id The device id to use within the scope.
 */
scoped_device_id(const HipExecutor* exec, int device_id)
    : scope_(std::make_unique<detail::hip_scoped_device_id>(device_id))
{}

/**
 * Create a scoped device id from an OmpExecutor.
 *
 * This will pick the noop_scoped_device_id.
 *
 * @param exec Not used.
 * @param device_id Not used.
 */
scoped_device_id(const DpcppExecutor* exec, int device_id)
    : scope_(std::make_unique<detail::noop_scoped_device_id>())
{}
```
implementations don't need to live in the header
I think this doesn't work. I've put the implementation of `EXEC::get_scoped_device_id` into the `ginkgo_*_device` library, so these need the constructor to be available. But since these libraries are not linked against core, this results in linker errors.
I think we have two options:
1. `scoped_device_id` in core, `EXEC::get_scoped_device_id` in header
2. `scoped_device_id` in header, `EXEC::get_scoped_device_id` in devices

The easiest would be 2., because then I would just drop the last commit. Not sure if there is an actual benefit to one of them.
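For context, the quoted constructors belong to a dispatching wrapper that stores the backend-specific guard behind the `detail` interface. A hedged sketch of its presumable shape (only the constructors and the `scope_` member appear in the diff; the rest is assumption):

```cpp
#include <memory>

// Sketch; CudaExecutor and the detail:: classes are the Ginkgo types quoted
// above. Each constructor overload picks the matching backend guard.
class scoped_device_id {
public:
    scoped_device_id(const CudaExecutor* exec, int device_id)
        : scope_(std::make_unique<detail::cuda_scoped_device_id>(device_id))
    {}

    // analogous overloads for Omp/Hip/DpcppExecutor, as quoted above

private:
    // owns the backend-specific guard; destroying it restores the device id
    std::unique_ptr<detail::generic_scoped_device_id> scope_;
};
```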
core/base/scoped_device_id.hpp
Outdated
```diff
@@ -0,0 +1,63 @@
+#ifndef GINKGO_CORE_BASE_SCOPED_DEVICE_ID_HPP
```
missing license header
```diff
-scoped_device_id(const OmpExecutor* exec, int device_id)
-    : scope_(std::make_unique<detail::noop_scoped_device_id>())
-{}
+scoped_device_id(const OmpExecutor* exec, int device_id);
```
Since this type of operation is sometimes called a scope guard, I would suggest trying to get `guard` into the class name.
You're right, I also thought about that, but I seem to have forgotten about it again.
LGTM!
At least AppleClang complains that the linker can't find OmpExecutor functions for the vtable of ReferenceExecutor.
Note: This PR changes the Ginkgo ABI. For details, check the full ABI diff under the CI artifacts.
This PR will add basic distributed data structures (matrix and vector) and enable some solvers for these types. This PR contains the following PRs: #961, #971, #976, #985, #1007, #1030, #1054

# Additional Changes
- moves new types into the experimental namespace
- moves the existing Partition class into the experimental namespace
- moves the existing mpi namespace into the experimental namespace
- makes the generic_scoped_device_id_guard destructor noexcept by terminating if restoring the original device id fails
- switches to blocking communication in the SpMV if OpenMPI version 4.0.x is used
- disables the Horeka MPI tests and uses nla-gpu instead

Related PR: #1133
Advertise release 1.5.0 and last changes
+ Add changelog
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers;
- full DPC++/SYCL support;
- functionality and interface for GPU-resident sparse direct solvers;
- an interface for wrapping solvers with scaling and reordering applied;
- a new algebraic Multigrid solver/preconditioner;
- improved mixed-precision support;
- support for device matrix assembly;
and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.

Algorithm and important feature additions:
+ Add MPI-based multi-node support for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Port the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110), [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation-based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982), [#986](#986))
+ Add infrastructure for unified, lambda-based, backend-agnostic kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrices, including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))

Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name ([#1149](#1149))
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184))
+ Drop official support for old CUDA < 9.2 ([#887](#887))

Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967))
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))

Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))

Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136))
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903), [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))
This PR abstracts the device guard mechanic from the cuda/hip module and makes it available in other modules. The `Executor` gets a new abstract method `get_scoped_device_id` (name not finalized), which returns an object with the same semantics as the older cuda/hip device guards. This means that during the lifetime of the returned object, the device id is set to the one stored in the executor. If that concept does not apply (reference/omp/dpcpp?), then the object has no effect.

In the MPI wrapper, the `communicator` now stores an executor. As a result, it is no longer possible to implicitly convert from `MPI_Comm` to `communicator`. The `window` and `request` classes also store an executor. Using the stored executor, all MPI calls are guarded to use the correct device id, if necessary.

Note: Currently this is based on `distributed-solvers`, but I would rebase it onto `distributed-develop` after it has been reviewed, to push forward merging the distributed stuff into develop.
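A hedged sketch of the usage pattern this enables (the free function and buffer setup are illustrative assumptions; only `get_scoped_device_id` and its semantics come from the PR, and the wrapped `communicator` performs this guarding internally):

```cpp
#include <mpi.h>

#include <ginkgo/ginkgo.hpp>

// Send a device buffer, making sure the executor's device is active for the
// duration of the MPI call.
void guarded_send(std::shared_ptr<const gko::CudaExecutor> exec,
                  const double* device_buf, int count, int dest, MPI_Comm comm)
{
    // while `guard` is alive, the active CUDA device is exec's device id
    auto guard = exec->get_scoped_device_id();
    MPI_Send(device_buf, count, MPI_DOUBLE, dest, 0, comm);
}   // leaving the scope restores the previously active device
```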