How to tell if TensorFlow is using GPU acceleration from inside a Python shell?
Rise to the top 3% as a developer or hire one of them at Toptal: https://topt.al/25cXVn
--------------------------------------------------
Music by Eric Matyas
https://www.soundimage.org
Track title: Puzzle Island
--
Chapters
00:00 How To Tell If TensorFlow Is Using GPU Acceleration From Inside Python Shell?
00:45 Accepted Answer Score 448
01:15 Answer 2 Score 302
02:50 Answer 3 Score 207
03:12 Answer 4 Score 124
03:40 Thank you
--
Full question
https://stackoverflow.com/questions/3800...
--
Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...
--
Tags
#python #tensorflow #ubuntu #gpu
#avk47
ACCEPTED ANSWER
Score 448
No, I don't think "open CUDA library" is enough to tell, because different nodes of the graph may be on different devices.
With TensorFlow 2.x:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
With TensorFlow 1.x, to find out which device is used, you can enable log device placement like this:
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
Then run the session and check your console: TensorFlow logs the device each operation was placed on.
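As a self-contained sketch of the TF 2.x check (the tf.test.is_built_with_cuda call is my addition as an extra sanity check, not part of the original answer):

import tensorflow as tf

# How many GPUs can TensorFlow see right now?
gpus = tf.config.list_physical_devices('GPU')
print("Num GPUs Available: ", len(gpus))

# A count of 0 can also mean the installed build was compiled without CUDA support.
print("Built with CUDA: ", tf.test.is_built_with_cuda())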
ANSWER 2
Score 302
Apart from using sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)), which is outlined in other answers as well as in the official TensorFlow documentation, you can try assigning a computation to the GPU and see whether you get an error.
import tensorflow as tf
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
with tf.Session() as sess:
    print(sess.run(c))
Here:
- "/cpu:0": The CPU of your machine.
- "/gpu:0": The GPU of your machine, if you have one.
If you have a GPU and can use it, you will see the result. Otherwise you will see an error with a long stack trace, ending with something like this:
Cannot assign a device to node 'MatMul': Could not satisfy explicit device specification '/device:GPU:0' because no devices matching that specification are registered in this process
Recently a few helpful functions appeared in TF (see the sketch after this list):
- tf.test.is_gpu_available tells you whether a GPU is available
- tf.test.gpu_device_name returns the name of the GPU device
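A minimal sketch putting those two helpers together (hedged: this is the TF 1.x-era API; later releases steer you toward tf.config instead):

import tensorflow as tf

# Ask TF whether a GPU is usable, then print its device name if so.
if tf.test.is_gpu_available():
    print("GPU is available:", tf.test.gpu_device_name())
else:
    print("No GPU available; computations will run on the CPU.")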
You can also check for available devices in the session:
with tf.Session() as sess:
    devices = sess.list_devices()
devices will then contain something like:
[_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:CPU:0, CPU, -1, 4670268618893924978),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 6127825144471676437),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:XLA_GPU:0, XLA_GPU, 17179869184, 16148453971365832732),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 10003582050679337480),
_DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 5678397037036584928)]
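For completeness, a hedged TF 2.x version of the same experiment (sessions are gone there, so this runs eagerly; whether a missing GPU raises an error or falls back to the CPU depends on the soft device placement setting):

import tensorflow as tf

# Log which device each op actually runs on (eager mode).
tf.debugging.set_log_device_placement(True)

with tf.device('/GPU:0'):
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    print(tf.matmul(a, b))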
ANSWER 3
Score 207
The following piece of code should give you all the devices available to TensorFlow.
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
Sample Output
[name: "/cpu:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 4402277519343584096,
name: "/gpu:0" device_type: "GPU" memory_limit: 6772842168 locality { bus_id: 1 } incarnation: 7471795903849088328 physical_device_desc: "device: 0, name: GeForce GTX 1070, pci bus id: 0000:05:00.0" ]
ANSWER 4
Score 124
I think there is an easier way to achieve this.
import tensorflow as tf
if tf.test.gpu_device_name():
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
    print("Please install GPU version of TF")
It usually prints something like:
Default GPU Device: /device:GPU:0
This seems easier to me than reading those verbose logs.
Edit: This was tested with TF 1.x versions. I never had a chance to try TF 2.0 or above, so keep that in mind.
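For anyone on TF 2.x, a hedged equivalent of the same check using tf.config (tf.test.gpu_device_name still exists there, but this is the newer route):

import tensorflow as tf

# TF 2.x: take the first visible GPU, if any.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    print('Default GPU Device: {}'.format(gpus[0].name))
else:
    print("Please install the GPU-enabled build of TF")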
