The Python Oracle

torch.cuda.is_available() returns false in colab

Become part of the top 3% of the developers by applying to Toptal https://topt.al/25cXVn

--

Music by Eric Matyas
https://www.soundimage.org
Track title: Techno Intrigue Looping

--

Chapters
00:00 Question
00:43 Accepted answer (Score 0)
01:07 Answer 2 (Score 16)
01:25 Answer 3 (Score 1)
01:47 Answer 4 (Score 1)
02:16 Thank you

--

Full question
https://stackoverflow.com/questions/5915...

--

Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...

--

Tags
#python #pytorch #googlecolaboratory

#avk47



ANSWER 1

Score 21


Make sure your Hardware accelerator is set to GPU.

Runtime > Change runtime type > Hardware Accelerator
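After switching the runtime, a quick sanity check (a minimal sketch, not part of the original answer) is to ask PyTorch whether it can see the device:

import torch

print(torch.cuda.is_available())         # should print True on a GPU runtime
if torch.cuda.is_available():
  print(torch.cuda.get_device_name(0))   # e.g. 'Tesla T4' on Colab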




ANSWER 2

Score 5


In case anyone else comes here and makes the same mistake I was making:

If you are trying to check whether a GPU is available and you write:

import torch

# Bug: missing parentheses -- this tests the function object itself,
# which is always truthy, so this branch is always taken.
if torch.cuda.is_available:
  print('GPU available')
else:
  print('Please set GPU via Edit -> Notebook Settings.')

it will always look as though a GPU is available. Note that you need to call torch.cuda.is_available(), not reference torch.cuda.is_available.
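For comparison, the corrected check with the function actually called:

import torch

if torch.cuda.is_available():   # note the parentheses
  print('GPU available')
else:
  print('Please set GPU via Edit -> Notebook Settings.')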




ACCEPTED ANSWER

Score 1


This worked with all the versions mentioned above, and I did not have to downgrade CUDA to 10.0. Restarting my Colab session after the updates had switched the runtime back to CPU; I just had to change it back to GPU.




ANSWER 4

Score 0


A temporary fix may be to try CUDA 10.0, as explained here.

Something like this:

conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

In future versions this may work.
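As a follow-up check after the install (a minimal sketch, not part of the original answer), you can confirm which CUDA toolkit the installed PyTorch build targets:

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA toolkit it was built against, e.g. '10.0'
print(torch.cuda.is_available())  # True once the runtime and build line up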