Moving computations to GPU

#2
by dominic-biographica - opened

Hi there,

thanks for providing this implementation!

I'm having some trouble performing inference on GPUs. Specifically, despite running on a CUDA-enabled machine, by default the computations in the example take place on the CPU (due to the `gpus=None` argument). However, after setting `gpus=[0]` I find that still none of the computations are moved to the GPU, and indeed when I check with `nvidia-smi` there is no GPU usage. I've been playing around trying to send the models/data to the device by simply adding `.to(device)` calls into the test function, but with no success so far. I was wondering whether I'm missing something obvious that is causing these difficulties.

Thanks!

The code was tested on many GPU-enabled servers and worked fine, so I suspect there might be an issue with the CUDA-enabled PyTorch installation on your end:

  • Do you have PyTorch 2.1+ with CUDA 11.8+?
  • Does `torch.cuda.is_available()` return `True` on that machine? (A quick check is sketched below.)
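For anyone reading along, a minimal check like the following confirms both points (the inline comments show typical expected output, not anything reported in this thread):

```python
import torch

print(torch.__version__)          # should report 2.1+ (e.g. "2.1.0+cu118")
print(torch.version.cuda)         # CUDA build version; None means a CPU-only build
print(torch.cuda.is_available())  # must be True for GPU inference to work
```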

Yep to both of those! Eventually I got the code to utilise CUDA using a mixture of `.to(device)` calls and the `ToDevice` class from `torch_geometric.transforms`.
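For anyone who lands here with the same problem, a minimal sketch of that approach looks something like this (the toy graph and `GCNConv` model are placeholders for illustration, not the repo's actual code):

```python
import torch
import torch_geometric.transforms as T
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Toy graph and model, just to show the device plumbing.
data = Data(
    x=torch.randn(4, 16),
    edge_index=torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]]),
)
model = GCNConv(16, 8).to(device)  # move the model parameters to the GPU

# Option 1: move each batch explicitly inside the test loop.
data = data.to(device)
out = model(data.x, data.edge_index)

# Option 2: attach ToDevice as a transform so loaded samples
# land on the GPU automatically.
transform = T.ToDevice(device)
data_on_gpu = transform(data)
```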

It could potentially be an issue with the VM: it was spun up on GCP from a Deep Learning on Linux machine image running Debian 10. I can post the full specs if useful, but given the fixes above perhaps that's unnecessary.
