Random seed with external GPU #47672
Closed
@KarolyPoka

Description

I bought a new Palit GeForce RTX 3070 GPU to speed up my deep learning projects. My laptop is a Dell Latitude 5491 with an Nvidia GeForce MX130 and Intel UHD Graphics 630. I am using the GeForce RTX 3070 in a Razer Core X enclosure via Thunderbolt 3.0.

I would like to make my PyTorch training reproducible, so I am setting:

    import random
    import numpy as np
    import torch

    random.seed(1)
    np.random.seed(1)
    torch.manual_seed(1)
    torch.cuda.manual_seed(1)      # seeds the current CUDA device
    torch.cuda.manual_seed_all(1)  # seeds all visible CUDA devices
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

Symptom: when device="cuda:0", it addresses the MX130, the seeds work, and I get the same result every time. When device="cuda:1", it addresses the RTX 3070 and I do not get the same results; it seems that with the external GPU the random seed has no effect. When device="cuda", it automatically uses the RTX 3070, and again there is no reproducibility. I am working with num_workers=0 and worker_init_fn=np.random.seed(1) in the dataloader. So, practically, changing the executing GPU affects reproducibility. I do not want to use, and am not using, both GPUs in parallel.
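
For illustration, a minimal check along these lines shows what I mean. The tiny linear model is only a stand-in for my real training loop, and seed_everything/run_once are names I made up for this sketch:

    import random
    import numpy as np
    import torch

    def seed_everything(seed=1):
        # Re-seed every RNG that PyTorch training touches.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

    def run_once(device):
        # One small forward/backward pass; returns a scalar to compare.
        seed_everything(1)
        model = torch.nn.Linear(16, 4).to(device)
        x = torch.randn(8, 16, device=device)
        loss = model(x).pow(2).sum()
        loss.backward()
        return loss.item()

    # Two runs on the same device should match bit-for-bit:
    print(run_once("cuda:0") == run_once("cuda:0"))  # True on the MX130
    print(run_once("cuda:1") == run_once("cuda:1"))  # False for my actual training on the RTX 3070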

How can I make training with the external GPU reproducible? I would very much appreciate any help. Thanks in advance!

PyTorch version: 1.7.0
CUDA toolkit: 11.0.221
Anaconda version: 2020.07

According to nvidia-smi:
CUDA version: 11.1
Driver version: 457.09
