check for nvidia driver's sufficiency before checking for number of CUDA devices by soumith · Pull Request #1156 · pytorch/pytorch · GitHub


Merged · 1 commit · Mar 31, 2017

Conversation

@soumith (Member) commented Mar 31, 2017

Fixes #1154

@soumith soumith merged commit b13b701 into master Mar 31, 2017
@soumith soumith deleted the cudafix branch March 31, 2017 16:20
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request Nov 5, 2021
Validation of allocations needs to be done only for tensors, so
non-tensor allocations can simply be skipped.
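The commit message above describes a filter during validation: when walking allocation records, anything that does not describe a tensor is passed over. A hedged sketch, with a made-up `Allocation` record type rather than the actual nvfuser data structures:

```python
# Illustrative only: skip non-tensor allocations during validation,
# as described in the commit message above.
from dataclasses import dataclass


@dataclass
class Allocation:
    name: str
    is_tensor: bool
    size: int


def validate_allocations(allocations):
    """Validate tensor allocations only; non-tensor ones are skipped."""
    validated = []
    for alloc in allocations:
        if not alloc.is_tensor:
            continue  # non-tensor allocations need no size validation
        if alloc.size <= 0:
            raise ValueError(f"invalid tensor allocation: {alloc.name}")
        validated.append(alloc.name)
    return validated
```

Skipping rather than rejecting non-tensor records keeps the validation pass focused on the cases where a bad size is actually an error.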
akashveramd pushed a commit to akashveramd/pytorch that referenced this pull request Apr 9, 2025
…duction (pytorch#1156)

* Squashed 'src/composable_kernel/' content from commit f6edda6

git-subtree-dir: src/composable_kernel
git-subtree-split: f6edda6

* add solver ConvIgemmFwdV6r1DlopsNchwKcyxNkhw; rename static ck source files

* Squashed 'src/composable_kernel/' changes from f6edda6..5781adf

5781adf Update develop (pytorch#5) (pytorch#6)
97e6d51 Merge pull request pytorch#4 from ROCmSoftwarePlatform/separate_online_compile
7b1ec41 refactor
49c33aa refactor
54b3e73 rename

git-subtree-dir: src/composable_kernel
git-subtree-split: 5781adf

* fix

* refactor

* remove online compilation from CK

* refactor

* fix

* add ctest

* tidy

* add tidy

* tidy

* tidy

* tidy

* tidy

* tidy

* tidy

* tidy

* tidy

* tidy

* add c-style pointer cast

* vector/scalar pointer cast use c-style pointer cast instead of reinterpret_cast

* fix clang warning suppression

* tidy

* suppress cppcheck

* fix enum issue

* revert changes to hip build

* fix kernel filename

* update CK build script

* rename

* rename

* make inner product compatible on gfx900

* Update src/include/miopen/solver/ck_utility_common.hpp

Co-authored-by: JD <Jehandad.Khan@amd.com>

* compiler parameter use stream

* use int instead of index_t in kernel wrapper

* DynamicBuffer, StaticBuffer, amd_buffer_load support customized value for invalid element

* refactor

* refactor

* change cmakelist

* change ck common utility

* fix

* Squashed 'src/composable_kernel/' changes from 5781adf..31b4035

31b4035 Merge pull request pytorch#16 from ROCmSoftwarePlatform/develop
b62bf8c Merge pull request pytorch#14 from ROCmSoftwarePlatform/miopen_downstream_init_integration
ccc4a1d Merge pull request pytorch#8 from ROCmSoftwarePlatform/miopen_downstream_init_integration
67ad47e refactor
16effa7 refactor
a91b68d DynamicBuffer, StaticBuffer, amd_buffer_load support customized value for invalid element
2cbabbb use int instead of index_t in kernel wrapper
0834bc7 compiler parameter use stream
f2ac783 make inner product compatible on gfx900
4e57b30 rename
c03045c rename
b258995 update CK build script
2c48039 fix kernel filename
d626dcc fix enum issue
643ebd4 tidy
ddd49ec fix clang warning suppression
4f566c6 vector/scalar pointer cast use c-style pointer cast instead of reinterpret_cast
172036d add c-style pointer cast
76f3131 tidy
d184289 tidy
f885c13 tidy
80120f0 tidy
c3efeb5 tidy
56fc084 tidy
54fba51 tidy
e62bae7 tidy
24c8728 add tidy
61487e0 fix
ae98b52 remove online compilation from CK
cb95421 refactor
73ca970 Merge commit '437cc595c6e206dfebb118985b5171bbc1e29eab' into composable_kernel_init_integration_v3
3b86646 Merge pull request pytorch#7 from ROCmSoftwarePlatform/master
d09ea4f Update develop (pytorch#5)
3d32ae9 add solver ConvIgemmFwdV6r1DlopsNchwKcyxNkhw; rename static ck source files

git-subtree-dir: src/composable_kernel
git-subtree-split: 31b4035

* Tiny fix in using data type template parameters in blockwise and direct_threadwise kernel

* Fix with regard to implementing GetZeroVal() in both kernel and host

* Avoid converting to compType from dstDataType before writing the output value

* Add half_t support to NumericLimits and make constexpr GetZeroVal() of binary operator

* Add CONSTANT decorator for descriptor read buffer

* Use get_thread_local_1d_id() for thread local Id

* Rename GetZeroVal() to GetReductionZeroVal() in the kernels

* Remove constexpr from initialized zeroVal and tiny fix in reduction_operator.hpp

* Occasional tiny simplification and update in the kernel files

* Update in src/reducetensor.cpp for consistent IDs passing to the kernel

* Update to re-order tensor dimensions on the host, split second_call kernel wrapper files and simplify reduce_all kernel wrappers

* Update to remove OpenCL tidy checking failures

* Small updates in src/reducetensor.cpp

* Update for better readability

* Remove unused codes and not-needed template parameters in the kernel wrappers

Co-authored-by: Chao Liu <chao.liu2@amd.com>
Co-authored-by: JD <Jehandad.Khan@amd.com>