Add capability to use different allocators in cuML Python benchmarks #6903
base: branch-25.08
Conversation
```python
    "--rmm-allocator",
    choices=["cuda", "managed", "prefetched"],
    default="cuda",
    help="RMM memory resource to use (default: CUDA)",
```
Suggested change:

```diff
-    help="RMM memory resource to use (default: CUDA)",
+    help="RMM memory resource to use (default: cuda)",
```
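For context, the argument above would sit on an argparse parser roughly like the sketch below. Only the `--rmm-allocator` argument itself comes from the diff; the parser object and description are assumptions for illustration.

```python
import argparse

# Hypothetical minimal parser; only the --rmm-allocator argument
# mirrors the PR diff, the rest is scaffolding for the sketch.
parser = argparse.ArgumentParser(description="cuML Python benchmarks (sketch)")
parser.add_argument(
    "--rmm-allocator",
    choices=["cuda", "managed", "prefetched"],
    default="cuda",
    help="RMM memory resource to use (default: cuda)",
)

args = parser.parse_args(["--rmm-allocator", "managed"])
print(args.rmm_allocator)  # managed; omitting the flag yields the default "cuda"
```

Because `choices` is set, argparse rejects any other value with a usage error before the benchmark code runs.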
```python
# Setup RMM allocator based on command line option
if args.rmm_allocator == "cuda":
    dev_resource = rmm.mr.CudaMemoryResource()
    rmm.mr.set_current_device_resource(dev_resource)
    print("Using CUDA Memory Resource...")
elif args.rmm_allocator == "managed":
    managed_resource = rmm.mr.ManagedMemoryResource()
    rmm.mr.set_current_device_resource(managed_resource)
    print("Using Managed Memory Resource...")
elif args.rmm_allocator == "prefetched":
    upstream_mr = rmm.mr.ManagedMemoryResource()
    prefetch_mr = rmm.mr.PrefetchResourceAdaptor(upstream_mr)
    rmm.mr.set_current_device_resource(prefetch_mr)
    print("Using Prefetched Managed Memory Resource...")
```
I think this is a good use case for match/case syntax:
Suggested change:

```diff
 # Setup RMM allocator based on command line option
-if args.rmm_allocator == "cuda":
-    dev_resource = rmm.mr.CudaMemoryResource()
-    rmm.mr.set_current_device_resource(dev_resource)
-    print("Using CUDA Memory Resource...")
-elif args.rmm_allocator == "managed":
-    managed_resource = rmm.mr.ManagedMemoryResource()
-    rmm.mr.set_current_device_resource(managed_resource)
-    print("Using Managed Memory Resource...")
-elif args.rmm_allocator == "prefetched":
-    upstream_mr = rmm.mr.ManagedMemoryResource()
-    prefetch_mr = rmm.mr.PrefetchResourceAdaptor(upstream_mr)
-    rmm.mr.set_current_device_resource(prefetch_mr)
-    print("Using Prefetched Managed Memory Resource...")
+match args.rmm_allocator:
+    case "cuda":
+        dev_resource = rmm.mr.CudaMemoryResource()
+        rmm.mr.set_current_device_resource(dev_resource)
+        print("Using CUDA Memory Resource...")
+    case "managed":
+        managed_resource = rmm.mr.ManagedMemoryResource()
+        rmm.mr.set_current_device_resource(managed_resource)
+        print("Using Managed Memory Resource...")
+    case "prefetched":
+        upstream_mr = rmm.mr.ManagedMemoryResource()
+        prefetch_mr = rmm.mr.PrefetchResourceAdaptor(upstream_mr)
+        rmm.mr.set_current_device_resource(prefetch_mr)
+        print("Using Prefetched Managed Memory Resource...")
+    case _:
+        raise ValueError(f"Unknown RMM allocator type: {args.rmm_allocator}")
```
```python
# Define RMM allocator options
RMM_ALLOCATOR_TYPES = ["cuda", "managed", "prefetched"]
```
We don't use this variable anywhere, so I'd suggest removing it.
This PR adds a new CLI option to run_benchmarks.py for setting up different RMM allocators.