[Dashboard] Add GPU component usage by Bye-legumes · Pull Request #52102 · ray-project/ray · GitHub

[Dashboard] Add GPU component usage #52102


Open · wants to merge 26 commits into base: master

Conversation

@Bye-legumes (Contributor) commented Apr 8, 2025

Why are these changes needed?

Close #45755.
This PR addresses the need for enhanced GPU usage metrics at the task/actor level in the Ray dashboard. Currently, the Ray dashboard provides detailed CPU and memory usage metrics for individual tasks and actors, but lacks similar granularity for GPU metrics. This enhancement aims to fill that gap by introducing per-task/actor GPU utilization and memory usage metrics.


| Area | Change | + / – |
| --- | --- | --- |
| dashboard/agent.py, dashboard/modules/stats_collector.py | Collect per-GPU SM, memory-used, memory-total, and temperature using NVML (falling back to nvidia-smi --query-gpu if NVML is not available). | +307 LOC |
| dashboard/frontend/src/pages/node/Stats.vue | New GPU bars beside the existing CPU/Mem charts; show live %, absolute MiB, and thermals with colour-coded alert gradients. | +62 LOC |
| dashboard/frontend/src/components/ResourceIcon.tsx | Adds gpu-core and gpu-mem icons and tooltip helpers. | +18 LOC |
| python/ray/dashboard/tests/test_gpu_stats.py | E2E integration test that spins up a fake GPU via CUDA_VISIBLE_DEVICES=0 plus mock NVML bindings to assert the Dashboard JSON schema and time-series values. | +20 LOC |
| Misc. | Typo fixes, pylint: disable=c-extension-no-member guards, build-time NVML check in setup.py. | –12 LOC |

(Screenshots of the new GPU usage panels attached.)
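For reference, the collection approach described in the table above can be sketched roughly as below with pynvml. This is only an illustration; `collect_gpu_stats` and the returned dict layout are hypothetical and not the PR's actual code, which reports through the dashboard agent and Ray's metrics pipeline.

```python
import pynvml


def collect_gpu_stats():
    """Illustrative per-GPU / per-process stats collection via NVML."""
    stats = []
    pynvml.nvmlInit()
    try:
        for index in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(index)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # .gpu, .memory (%)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # .used, .total (bytes)
            temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
            name = pynvml.nvmlDeviceGetName(handle)
            procs = pynvml.nvmlDeviceGetComputeRunningProcesses(handle)
            stats.append({
                "index": index,
                "name": name if isinstance(name, str) else name.decode(),
                "utilization_gpu": util.gpu,
                "memory_used": mem.used,
                "memory_total": mem.total,
                "temperature": temp,
                # Per-process memory in bytes; usedGpuMemory can be None on some drivers.
                "processes": [
                    {"pid": p.pid, "gpu_memory_used": p.usedGpuMemory or 0}
                    for p in procs
                ],
            })
    finally:
        pynvml.nvmlShutdown()
    return stats


if __name__ == "__main__":
    for gpu in collect_gpu_stats():
        print(gpu)
```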

Related issue number

Close #45755.

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: zhilong <zhilong.chen@mail.mcgill.ca>
Signed-off-by: zhilong <zhilong.chen@mail.mcgill.ca>
@Bye-legumes (Contributor, Author) commented Apr 8, 2025
import ray
import torch
import os

# Initialize Ray, using all available GPUs
ray.init()

# Check if CUDA is available
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA device count: {torch.cuda.device_count()}")


@ray.remote(num_gpus=1)
class TorchGPUWorker:
    def __init__(self):
        assert torch.cuda.is_available(), "CUDA is not available"
        self.device = torch.device("cuda")
        print(f"Worker running on device: {self.device}")

    def matrix_multiply(self, size=20000):

        # Create two large random tensors on GPU
        a = torch.randn(size, size, device=self.device)
        b = torch.randn(size, size, device=self.device)
        result = torch.matmul(a, b)

        # Return just the norm to reduce transfer cost
        return result.norm().item()


if __name__ == "__main__":
    # Create an actor
    gpu_worker = TorchGPUWorker.remote()

    # Run a GPU task
    result = ray.get(gpu_worker.matrix_multiply.remote(2048))

    print(f"Result norm of matrix multiply on GPU: {result}")

@hainesmichaelc hainesmichaelc added the community-contribution Contributed by the community label Apr 9, 2025
zhaoch23 added 2 commits April 9, 2025 16:22
Signed-off-by: zhaoch23 <c233zhao@uwaterloo.ca>
Signed-off-by: zhaoch23 <c233zhao@uwaterloo.ca>
@jcotant1 jcotant1 added dashboard Issues specific to the Ray Dashboard observability Issues related to the Ray Dashboard, Logging, Metrics, Tracing, and/or Profiling labels Apr 10, 2025
zhaoch23 and others added 4 commits April 10, 2025 16:46
Signed-off-by: zhaoch23 <c233zhao@uwaterloo.ca>
Signed-off-by: zhaoch23 <c233zhao@uwaterloo.ca>
Signed-off-by: zhaoch23 <c233zhao@uwaterloo.ca>
@zhaoch23 (Contributor) commented Apr 10, 2025

(Screenshots attached.)

@zhaoch23 (Contributor) commented:
Script to test:

import ray
import torch
import os
import time
# Initialize Ray, using all available GPUs
ray.init()

# Check if CUDA is available
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA device count: {torch.cuda.device_count()}")


@ray.remote(num_gpus=0.5)
class TorchGPUWorker:
    def __init__(self):
        assert torch.cuda.is_available(), "CUDA is not available"
        print(os.getenv("CUDA_VISIBLE_DEVICES"))
        self.device = torch.device("cuda")
        print(f"Worker running on device: {self.device}")

    def matrix_multiply(self, size=16384):
        a = torch.randn(size, size, device=self.device)
        b = torch.randn(size, size, device=self.device)

        # Warm-up multiply so launch overhead is excluded from the timing
        torch.matmul(a, b)
        torch.cuda.synchronize()

        REPEATS = 6

        # Repeat the multiply to keep the GPU busy, then report the elapsed
        # time together with the norm of the last product
        start = time.time()
        for _ in range(REPEATS):
            c = torch.matmul(a, b)
        torch.cuda.synchronize()
        elapsed = time.time() - start

        return c.norm().item(), elapsed

if __name__ == "__main__":
    # Create four actors; each reserves half a GPU (num_gpus=0.5)
    gpu_workers = [TorchGPUWorker.remote() for _ in range(4)]

    # Keep the GPUs busy so the dashboard has per-actor usage to display
    for i in range(100):
        results = ray.get([worker.matrix_multiply.remote() for worker in gpu_workers])
        print(f"(norm, seconds) per worker: {results}")

Signed-off-by: zhaoch23 <c233zhao@uwaterloo.ca>
@Bye-legumes Bye-legumes changed the title [WIP][Dashboard] Add GPU component usage [Dashboard] Add GPU component usage Apr 11, 2025
Comment on lines 466 to 479
expr="sum(ray_component_gpu_utilization{{{global_filters}}} / 100) by (Component, pid, GpuIndex, GpuDeviceName)",
legend="{{Component}}::{{pid}}, gpu.{{GpuIndex}}, {{GpuDeviceName}}",
),
],
),
Panel(
id=46,
title="Component GPU Memory Usage",
description="GPU memory usage of Ray components.",
unit="bytes",
targets=[
Target(
expr="sum(ray_component_gpu_memory_usage{{{global_filters}}}) by (Component, pid, GpuIndex, GpuDeviceName)",
legend="{{Component}}::{{pid}}, gpu.{{GpuIndex}}, {{GpuDeviceName}}",
Contributor:

let's remove pid to align with the Node CPU component graph.
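For reference, a version of the target without the pid label could look like the following (a sketch of the suggested change, not necessarily the final code):

```python
Target(
    expr="sum(ray_component_gpu_utilization{{{global_filters}}} / 100) by (Component, GpuIndex, GpuDeviceName)",
    legend="{{Component}}, gpu.{{GpuIndex}}, {{GpuDeviceName}}",
),
```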

if pid == "-":  # no process on this GPU
    continue
gpu_id = int(gpu_id)
pinfo = ProcessGPUInfo(
Contributor:

Can we use a different type here? gpu_memory_usage is of type Megabytes, so it's very confusing for this to be a percentage and may introduce tricky bugs later.
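One way to keep the units from being mixed up is to give them distinct types, e.g. with typing.NewType (illustrative only; the actual type names used in the codebase may differ):

```python
from typing import NewType

Megabytes = NewType("Megabytes", int)
Percentage = NewType("Percentage", float)

# Memory stays in Megabytes and utilization stays in Percentage, so a
# percentage can't silently end up in a field documented as megabytes.
gpu_memory_usage: Megabytes = Megabytes(9059)
gpu_utilization: Percentage = Percentage(94.0)
```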

        if nv_process.usedGpuMemory
        else 0
    ),
    gpu_utilization=None,  # Not available in pynvml
Contributor:

What if we match the Ray Dashboard behavior, where we show the total GPU utilization of the GPU that the process attaches to, not necessarily the utilization exclusive to that process?

The nvidia-smi parsing is fragile. I don't know what backwards-compatibility guarantees nvidia-smi provides.

Contributor:

Do you mean we give up nvidia-smi? Or do we implement a fallback strategy that uses pynvml to display the total GPU utilization if nvidia-smi is not available?

Contributor:

Yes, let's remove the usage of nvidia-smi and just use pynvml all the time. We can add nvidia-smi at a later time if there is enough demand, but I think for most use cases the pynvml approach should be good enough.

Contributor:

Discussed offline. We will be adding the nvidia-smi dependency. We will add a test validating the output of nvidia-smi pmon. We will also update the Ray dashboard UI to use the GPU utilization value from nvidia-smi instead of pynvml.
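A rough sketch of what parsing nvidia-smi pmon -c 1 could look like; parse_pmon below is illustrative, and the real parser in the PR (including its error handling and column handling across driver versions) may differ:

```python
import subprocess


def parse_pmon():
    """Parse `nvidia-smi pmon -c 1` into per-process GPU usage records."""
    out = subprocess.check_output(["nvidia-smi", "pmon", "-c", "1"], text=True)
    records = []
    for line in out.splitlines():
        if not line.strip() or line.startswith("#"):
            continue  # skip blank lines and the two header lines
        fields = line.split()
        # Leading columns: gpu_idx, pid, type, sm%, mem%; the command name is last.
        gpu_id, pid, _proc_type, sm, mem = fields[:5]
        if pid == "-":
            continue  # no process running on this GPU
        records.append({
            "gpu_index": int(gpu_id),
            "pid": int(pid),
            "sm_percent": None if sm == "-" else int(sm),
            "mem_percent": None if mem == "-" else int(mem),
            "command": fields[-1],
        })
    return records
```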

zhaoch23 and others added 8 commits April 14, 2025 21:05
Signed-off-by: zhaoch23 <c233zhao@uwaterloo.ca>
Signed-off-by: zhaoch23 <c233zhao@uwaterloo.ca>
Signed-off-by: zhaoch23 <c233zhao@uwaterloo.ca>
Signed-off-by: zhaoch23 <c233zhao@uwaterloo.ca>
Signed-off-by: zhaoch23 <c233zhao@uwaterloo.ca>
@alanwguo (Contributor) commented:

I did some manual testing and didn't seem to get any metrics for component_gpu_utilization even though I got metrics for component_memory_usage

(Screenshot: 2025-05-13 at 5:14:36 PM)

On my worker node, this is the output I get from nvidia-smi:

$ nvidia-smi --query-gpu=index,name,uuid,utilization.gpu,memory.used,memory.total --format=csv,noheader,nounits
0, Tesla T4, GPU-9b0c908a-c921-4693-783f-534cb205ec77, 100, 9059, 15360
1, Tesla T4, GPU-15797035-31fa-a236-16b1-209b8dd896dd, 100, 9059, 15360
2, Tesla T4, GPU-f20b2bc8-2492-322f-551b-23e8ef87130f, 100, 9059, 15360
3, Tesla T4, GPU-2ac2bffc-c188-fa30-3710-21d904c37691, 100, 9059, 15360

$ nvidia-smi pmon -c 1
# gpu         pid   type     sm    mem    enc    dec    jpg    ofa    command 
# Idx           #    C/G      %      %      %      %      %      %    name 
    0       5569     C     94     84      -      -      -      -    ray::RayTrainWo
    1       5718     C     91     80      -      -      -      -    ray::RayTrainWo
    2       5716     C     91     81      -      -      -      -    ray::RayTrainWo
    3       5717     C     90     81      -      -      -      -    ray::RayTrainWo

$ curl localhost:8085/metrics | grep component_gpu
# HELP ray_component_gpu_memory_usage GPU memory usage of all components on the node.
# TYPE ray_component_gpu_memory_usage gauge
ray_component_gpu_memory_usage{Component="ray::RayTrainWorker",GpuDeviceName="Tesla T4",GpuIndex="2",IsHeadNode="",SessionName="session_2025-05-13_16-42-55_367950_2270",Version="3.0.0.dev0",ip="",pid="5716"} 9.495904256e+09
ray_component_gpu_memory_usage{Component="ray::RayTrainWorker",GpuDeviceName="Tesla T4",GpuIndex="3",IsHeadNode="",SessionName="session_2025-05-13_16-42-55_367950_2270",Version="3.0.0.dev0",ip="",pid="5717"} 9.495904256e+09
ray_component_gpu_memory_usage{Component="ray::RayTrainWorker",GpuDeviceName="Tesla T4",GpuIndex="1",IsHeadNode="",SessionName="session_2025-05-13_16-42-55_367950_2270",Version="3.0.0.dev0",ip="",pid="5718"} 9.495904256e+09
ray_component_gpu_memory_usage{Component="ray::RayTrainWorker",GpuDeviceName="Tesla T4",GpuIndex="0",IsHeadNode="",SessionName="session_2025-05-13_16-42-55_367950_2270",Version="3.0.0.dev0",ip="",pid="5569"} 9.495904256e+09

@zhaoch23 (Contributor) commented:

I have fixed some potential parsing errors. This is what it looks like on my side:
(Screenshot attached.)
Please let me know if this fix works in your environment. @alanwguo

@Bye-legumes (Contributor, Author) commented:

Lint failure

It's fixed by merging main. Can you check it? Thanks!

@jjyao (Collaborator) commented May 15, 2025

@Bye-legumes could you update the PR description of what changed and attach a screenshot?

@Bye-legumes (Contributor, Author) commented:

@Bye-legumes could you update the PR description of what changed and attach a screenshot?

updated!

Signed-off-by: zhilong <zhilong.chen@mail.mcgill.ca>
@Bye-legumes (Contributor, Author) commented:

I think it's OK this time, @jjyao.

@@ -993,6 +1127,81 @@ def generate_worker_stats_record(self, worker_stats: List[dict]) -> List[Record]

return records

def generate_worker_gpu_stats_record(
Collaborator:

For the component_cpu_percentage metric, we don't emit one for each worker process; instead we group by task/actor name (in other words, the pid label is not set for the core worker component_cpu_percentage metric).

def generate_worker_stats_record(self, worker_stats: List[dict]) -> List[Record]:
        """Generate a list of Record class for worker proceses.

        This API automatically sets the component_name of record as
        the name of worker processes. I.e., ray::* so that we can report
        per task/actor (grouped by a func/class name) resource usages.

We should do the same thing for component_gpu_percentage
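Something along these lines, i.e. summing per-pid GPU usage into one record per component name before emitting, so the pid label can be dropped (a hypothetical sketch, not the PR's code):

```python
from collections import defaultdict


def aggregate_gpu_usage_by_component(per_pid_usage):
    """Sum per-pid GPU utilization into one value per component name.

    per_pid_usage: iterable of (component_name, pid, gpu_utilization) tuples,
    where component_name is e.g. "ray::RayTrainWorker". Illustrative only.
    """
    totals = defaultdict(float)
    for component_name, _pid, gpu_utilization in per_pid_usage:
        totals[component_name] += gpu_utilization
    return dict(totals)


# Two workers of the same actor class collapse into a single series.
print(aggregate_gpu_usage_by_component([
    ("ray::RayTrainWorker", 5716, 91.0),
    ("ray::RayTrainWorker", 5717, 90.0),
]))
# -> {'ray::RayTrainWorker': 181.0}
```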

Contributor:

Updated

Collaborator:

@zhaoch23 it seems the updated code has not been pushed.

Contributor:

Sorry, pushed just now.

@jjyao jjyao self-assigned this May 31, 2025
@zhaoch23 (Contributor) commented Jun 2, 2025

(Screenshots attached.)


This pull request has been automatically marked as stale because it has not had
any activity for 14 days. It will be closed in another 14 days if no further activity occurs.
Thank you for your contributions.

You can always ask for help on our discussion forum or Ray's public slack channel.

If you'd like to keep this open, just leave any comment, and the stale label will be removed.

@github-actions github-actions bot added the stale The issue is stale. It will be closed within 7 days unless there are further conversation label Jun 24, 2025
Labels: community-contribution (Contributed by the community), dashboard (Issues specific to the Ray Dashboard), go (add ONLY when ready to merge, run all tests), observability (Issues related to the Ray Dashboard, Logging, Metrics, Tracing, and/or Profiling), stale (The issue is stale. It will be closed within 7 days unless there are further conversation)

Projects: None yet

Development: Successfully merging this pull request may close these issues: [Core] Show per task/actor GPU usage metric

6 participants