Explicitly set device id for each MPI call by MarcelKoch · Pull Request #1054 · ginkgo-project/ginkgo · GitHub

Explicitly set device id for each MPI call #1054


Merged
merged 32 commits into distributed-develop on Oct 5, 2022

Conversation

MarcelKoch
Member
@MarcelKoch MarcelKoch commented Jun 14, 2022

This PR abstracts the device guard mechanism from the cuda/hip module and makes it available in other modules. The Executor gets a new abstract method get_scoped_device_id (name not finalized) which returns an object with the same semantics as the older cuda/hip device guards. This means that during the lifetime of the returned object, the device id is set to the one stored in the executor. If that concept does not apply (reference/omp/dpcpp?), the object has no effect.
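For illustration, a minimal usage sketch of the proposed interface (the method name is not finalized, and do_device_work is a hypothetical placeholder):

{
    auto guard = exec->get_scoped_device_id();
    // while `guard` is alive, the active device id is the one stored in `exec`
    do_device_work();
}  // leaving the scope restores the previously active device id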

In the mpi wrapper, the communicator now stores an executor. As a result, it is no longer possible to implicitly convert from MPI_Comm to communicator. The window and request classes also store an executor. Using the stored executor, all MPI calls are guarded to use the correct device id, if necessary.
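Sketched, a guarded call in the wrapper could look as follows (illustrative only; the member names exec_ and get() are assumptions based on the excerpts further down):

void communicator::synchronize() const
{
    // set the device id stored in the executor for the duration of the call
    auto guard = this->exec_->get_scoped_device_id();
    MPI_Barrier(this->get());
}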

Note: Currently this is based on distributed-solvers, but I would rebase it onto distributed-develop after it has been reviewed, to push forward merging the distributed stuff into develop.

@MarcelKoch MarcelKoch added the 1:ST:WIP This PR is a work in progress. Not ready for review. label Jun 14, 2022
@MarcelKoch MarcelKoch added this to the Ginkgo 1.5.0 milestone Jun 14, 2022
@MarcelKoch MarcelKoch self-assigned this Jun 14, 2022
@ginkgo-bot ginkgo-bot added mod:core This is related to the core module. mod:cuda This is related to the CUDA module. mod:dpcpp This is related to the DPC++ module. mod:hip This is related to the HIP module. reg:benchmarking This is related to benchmarking. reg:build This is related to the build system. reg:testing This is related to testing. type:solver This is related to the solvers labels Jun 14, 2022
@MarcelKoch
Member Author

format!

@MarcelKoch MarcelKoch force-pushed the distributed-solvers branch from 3ce00b4 to 3bb70a2 Compare July 11, 2022 12:25
@MarcelKoch MarcelKoch force-pushed the communicator-device-guard branch from 3ada760 to befb2c1 Compare July 11, 2022 15:06
@MarcelKoch MarcelKoch added the 1:ST:need-feedback The PR is somewhat ready but feedback on a blocking topic is required before a proper review. label Jul 12, 2022
@upsj upsj self-requested a review July 12, 2022 08:17
@MarcelKoch
Member Author

format!

@MarcelKoch MarcelKoch force-pushed the communicator-device-guard branch from 1400001 to e1b0025 Compare July 19, 2022 09:01
@MarcelKoch MarcelKoch requested a review from a team July 19, 2022 14:07
@MarcelKoch MarcelKoch added 1:ST:ready-for-review This PR is ready for review and removed 1:ST:WIP This PR is a work in progress. Not ready for review. 1:ST:need-feedback The PR is somewhat ready but feedback on a blocking topic is required before a proper review. labels Jul 19, 2022
@MarcelKoch MarcelKoch force-pushed the communicator-device-guard branch from 9740cdb to ef2735a Compare July 19, 2022 14:39
@MarcelKoch MarcelKoch changed the title WIP: Explicitly set device id for each MPI call Explicitly set device id for each MPI call Jul 20, 2022
@MarcelKoch MarcelKoch force-pushed the communicator-device-guard branch 2 times, most recently from 37fb251 to 1564e23 Compare July 21, 2022 15:01
@sonarqubecloud

SonarCloud Quality Gate failed.

Bugs: 7 (rating E)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 26 (rating A)

Coverage: 45.2%
Duplication: 1.7%

upsj
upsj previously approved these changes Jul 22, 2022
Member
@upsj upsj left a comment

LGTM! Some minor comments, but they shouldn't hold things up

scoped_device_id CudaExecutor::get_scoped_device_id() const
{
GKO_NOT_COMPILED(cuda);
return {static_cast<OmpExecutor*>(nullptr), 0};
Member

Is this even necessary? With the previous throw, does the compiler still complain?

Member Author

You're right, at least on my system it doesn't complain, so I will remove that and the others.


TEST_F(ScopedDeviceId, SetsId)
{
auto new_device_id = std::max(hip->get_num_devices() - 1, 0);
Member

we only run CI on systems with at most one GPU of a kind ATM, so this will probably not show anything?

Member Author

I will add a CI job on horeka using multiple GPUs, since that is also interesting for our GPU-aware MPI functionality. Then these tests would make more sense.

Member

Sounds great! Could probably do the same thing on our AMD machine, since we already had gitlab-runner set up there before. cc @tcojean

Member

This should indeed be usable, since I fixed the machines and runners yesterday and re-enabled nla-gpu1 as a target.

* @param num_devices the number of devices per node.
* @return device id that this rank should use.
*/
inline int map_rank_to_device_id(MPI_Comm comm, const int num_devices)
Member

IMO that should move to a source file
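For context, a hypothetical implementation of such a mapping (get_node_local_rank is a placeholder; the actual code may differ):

int map_rank_to_device_id(MPI_Comm comm, const int num_devices)
{
    // determine the rank within the node, e.g. from environment variables
    // or from MPI_Comm_split_type (see the discussion further down)
    auto local_rank = get_node_local_rank(comm);
    // distribute node-local ranks round-robin over the available devices
    return local_rank % num_devices;
}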

{
unsigned size = num_elems * sizeof(ValueType);
if (c_type == create_type::create) {
auto guard = this->exec_->get_scoped_device_id();
Member

even with GKO_NOT_IMPLEMENTED in the other branch, we could avoid the duplication by moving it up the scope
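The suggested deduplication, sketched (the elided comment stands for the original window-creation call):

unsigned size = num_elems * sizeof(ValueType);
// hoisted out of the branch, so one guard covers both cases
auto guard = this->exec_->get_scoped_device_id();
if (c_type == create_type::create) {
    // ... create the MPI window as before ...
} else {
    GKO_NOT_IMPLEMENTED;
}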

@@ -66,92 +64,41 @@ void init_executor(std::shared_ptr<gko::ReferenceExecutor> ref,
std::shared_ptr<gko::CudaExecutor>& exec)
{
ASSERT_GT(gko::CudaExecutor::get_num_devices(), 0);
exec = gko::CudaExecutor::create(0, ref);
exec = gko::CudaExecutor::create(
Member

I think it would be better if we had MPI-specific test utils, otherwise running the tests manually might have unexpected side-effects
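A sketch of what such an MPI-specific test utility could look like (hypothetical; init_mpi_executor and the unqualified map_rank_to_device_id are assumptions):

void init_mpi_executor(std::shared_ptr<gko::ReferenceExecutor> ref,
                       std::shared_ptr<gko::CudaExecutor>& exec)
{
    ASSERT_GT(gko::CudaExecutor::get_num_devices(), 0);
    // pick the device from the node-local rank instead of hard-coding it,
    // so running the tests under mpirun has no surprising side-effects
    auto device_id = map_rank_to_device_id(
        MPI_COMM_WORLD, gko::CudaExecutor::get_num_devices());
    exec = gko::CudaExecutor::create(device_id, ref);
}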

@pratikvn pratikvn self-requested a review July 22, 2022 07:59
pratikvn
pratikvn previously approved these changes Jul 22, 2022
Member
@pratikvn pratikvn left a comment

Some minor comments, but otherwise looks good.

Comment on lines 121 to 131
if (auto str = std::getenv("MV2_COMM_WORLD_LOCAL_RANK")) {
local_rank = std::stoi(str);
} else if (auto str = std::getenv("OMPI_COMM_WORLD_LOCAL_RANK")) {
local_rank = std::stoi(str);
} else if (auto str = std::getenv("MPI_LOCALRANKID")) {
local_rank = std::stoi(str);
} else if (auto str = std::getenv("SLURM_LOCALID")) {
local_rank = std::stoi(str);
} else {
local_rank = mpi_node_local_rank(comm);
}
Member

The problem with this approach is that this list is not exhaustive. I am not sure this will be easy to maintain, with different systems possibly having different env vars.

Member Author

Yes, that's true, and that's why I've added the default case. So if we can't determine the local rank from environment variables (which I think is preferable), then we use the plain MPI construct for that.
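For reference, the MPI-based fallback mentioned here can be built from standard MPI-3 calls; a sketch of what mpi_node_local_rank could look like (the actual implementation may differ):

int mpi_node_local_rank(MPI_Comm comm)
{
    int rank{};
    MPI_Comm_rank(comm, &rank);
    // group the ranks that can share memory, i.e. the ranks on this node
    MPI_Comm node_comm{};
    MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, rank, MPI_INFO_NULL,
                        &node_comm);
    int local_rank{};
    MPI_Comm_rank(node_comm, &local_rank);
    MPI_Comm_free(&node_comm);
    return local_rank;
}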

Member

I feel this might create behaviour that the user does not intend or want. For example, Summit uses variables from BSUB, I think. So if the user sets it using those variables, then we just disregard them. In general, I think taking into account user settings from different places is a hard problem, and I don't have a solution right now. But that might be something we have to think about when interfacing with other applications.

Member Author

In your Summit example, we would not be able to handle it anyway, so I guess nothing is lost.
For interfacing with external libraries, if they have MPI already set up, they can just pass us the local rank. I guess there needs to be a nice way to insert that information.

Comment on lines 1976 to 1979
struct MPI_Comm {};


constexpr MPI_Comm MPI_COMM_WORLD{};
Member

This might conflict with other libraries which have similar definitions. For example deal.ii, mfem.

Member Author

I will remove this. It was only necessary for the tests, to have the same setup for the mpi/non-mpi tests, but I've split that up now.

device_guard& operator=(const device_guard& other) = delete;

device_guard(device_guard&& other) = delete;
hip_scoped_device_id::hip_scoped_device_id(int device_id)
Member

I guess this is not an interface break, because this is not a user-facing header file. It would break ABI, though.

Member
@upsj upsj left a comment

LGTM! Only minor suggestions

* @param num_devices the number of devices per node.
* @return device id that this rank should use.
*/
int map_rank_to_device_id(MPI_Comm comm, int num_devices);
Member

This should hopefully find its way into #704 at some point

Comment on lines 83 to 124
class noop_scoped_device_id : public generic_scoped_device_id {};


/**
* A scoped device id for CUDA.
*/
class cuda_scoped_device_id : public generic_scoped_device_id {
public:
explicit cuda_scoped_device_id(int device_id);

~cuda_scoped_device_id() noexcept(false) override;

cuda_scoped_device_id(cuda_scoped_device_id&& other) noexcept;

cuda_scoped_device_id& operator=(cuda_scoped_device_id&& other) noexcept;

private:
int original_device_id_;
bool need_reset_;
};


/**
* A scoped device id for HIP.
*/
class hip_scoped_device_id : public generic_scoped_device_id {
public:
explicit hip_scoped_device_id(int device_id);

~hip_scoped_device_id() noexcept(false) override;

hip_scoped_device_id(hip_scoped_device_id&& other) noexcept;

hip_scoped_device_id& operator=(hip_scoped_device_id&& other) noexcept;

private:
int original_device_id_;
bool need_reset_;
};


} // namespace detail
Member

these could be moved to the .cpp file

Member Author

The class definitions have to be in some header, because the implementation happens both in core and in cuda/hip. So we could keep them in the public headers or in a new private header. I would lean towards the private header.

Member

agree :)

Comment on lines 163 to 201
scoped_device_id(const OmpExecutor* exec, int device_id)
: scope_(std::make_unique<detail::noop_scoped_device_id>())
{}

/**
* Create a scoped device id from a CudaExecutor.
*
* This will pick the cuda_scoped_device_id.
*
* @param exec Not used.
* @param device_id The device id to use within the scope.
*/
scoped_device_id(const CudaExecutor* exec, int device_id)
: scope_(std::make_unique<detail::cuda_scoped_device_id>(device_id))
{}

/**
* Create a scoped device id from a HipExecutor.
*
* This will pick the hip_scoped_device_id.
*
* @param exec Not used.
* @param device_id The device id to use within the scope.
*/
scoped_device_id(const HipExecutor* exec, int device_id)
: scope_(std::make_unique<detail::hip_scoped_device_id>(device_id))
{}

/**
* Create a scoped device id from a DpcppExecutor.
*
* This will pick the noop_scoped_device_id.
*
* @param exec Not used.
* @param device_id Not used.
*/
scoped_device_id(const DpcppExecutor* exec, int device_id)
: scope_(std::make_unique<detail::noop_scoped_device_id>())
{}
Member

implementations don't need to live in the header

Member Author

I think this doesn't work. I've put the implementation of EXEC::get_scoped_device_id into the ginkgo_*_device library, so these need the constructor to be available. But since these libraries are not linked against core, this results in linker errors.
I think we have two options:

  1. scoped_device_id in core, EXEC::get_scoped_device_id in header
  2. scoped_device_id in header, EXEC::get_scoped_device_id in devices

The easiest would be 2., because then I would just drop the last commit. Not sure if there is an actual benefit to one of them.

MarcelKoch and others added 2 commits September 29, 2022 14:51
- move implementation details into core

Co-authored-by: Tobias Ribizel <ribizel@kit.edu>
Co-authored-by: Tobias Ribizel <ribizel@kit.edu>
@MarcelKoch MarcelKoch force-pushed the communicator-device-guard branch from b97ef22 to 7c880d7 Compare September 30, 2022 07:57
@MarcelKoch
Member Author

format!

Co-authored-by: Marcel Koch <marcel.koch@kit.edu>
@@ -0,0 +1,63 @@
#ifndef GINKGO_CORE_BASE_SCOPED_DEVICE_ID_HPP
Member

missing license header

scoped_device_id(const OmpExecutor* exec, int device_id)
: scope_(std::make_unique<detail::noop_scoped_device_id>())
{}
scoped_device_id(const OmpExecutor* exec, int device_id);
Member

Since this type of operation is sometimes called scope guard, I would suggest trying to get guard into the class name

Member Author

you're right, I also thought about that, but I seem to have forgotten about it again.

Member
@pratikvn pratikvn left a comment

LGTM!

@MarcelKoch MarcelKoch force-pushed the communicator-device-guard branch 2 times, most recently from 7dda6a7 to ecf67e3 Compare October 4, 2022 09:07
at least AppleClang complains that the linker can't find OmpExecutor functions for the vtable of ReferenceExecutor
@MarcelKoch MarcelKoch force-pushed the communicator-device-guard branch 2 times, most recently from f2a1f12 to 6dc1bc7 Compare October 4, 2022 12:22
@MarcelKoch MarcelKoch force-pushed the communicator-device-guard branch from 6dc1bc7 to 46b71cc Compare October 4, 2022 12:23
@MarcelKoch
Member Author

format!

Co-authored-by: Marcel Koch <marcel.koch@kit.edu>
@ginkgo-bot
Member

Note: This PR changes the Ginkgo ABI:

Functions changes summary: 8 Removed, 624 Changed (42832 filtered out), 1184 Added functions
Variables changes summary: 0 Removed, 0 Changed, 0 Added variable

For details check the full ABI diff under Artifacts here

@MarcelKoch MarcelKoch merged commit b68b8d3 into distributed-develop Oct 5, 2022
@MarcelKoch MarcelKoch deleted the communicator-device-guard branch October 5, 2022 06:51
MarcelKoch added a commit that referenced this pull request Oct 31, 2022
This PR adds basic distributed data structures (matrix and vector) and enables some solvers for these types. This PR contains the following PRs:
- #961
- #971 
- #976 
- #985 
- #1007 
- #1030 
- #1054

# Additional Changes

- moves new types into experimental namespace
- moves existing Partition class into experimental namespace
- moves existing mpi namespace into experimental namespace
- makes generic_scoped_device_id_guard destructor noexcept by terminating if restoring the original device id fails (see the sketch after this list)
- switches to blocking communication in the SpMV if OpenMPI version 4.0.x is used
- disables Horeka mpi tests and uses nla-gpu instead
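
A sketch of the noexcept destructor behaviour described above (restore_original_device_id is a hypothetical placeholder):

generic_scoped_device_id_guard::~generic_scoped_device_id_guard()
{
    try {
        restore_original_device_id();
    } catch (...) {
        // the destructor is noexcept, so a failure to restore the original
        // device id terminates instead of letting an exception escape
        std::terminate();
    }
}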

Related PR: #1133
tcojean pushed a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog,
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers;
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node support for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Porting the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110),  [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982),  [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))
Labels
1:ST:ready-for-review This PR is ready for review 1:ST:run-full-test mod:core This is related to the core module. mod:cuda This is related to the CUDA module. mod:dpcpp This is related to the DPC++ module. mod:hip This is related to the HIP module. reg:benchmarking This is related to benchmarking. reg:build This is related to the build system. reg:testing This is related to testing. type:solver This is related to the solvers