Computation of test data in TensorFlow tutorial
Become part of the top 3% of the developers by applying to Toptal https://topt.al/25cXVn
--
Music by Eric Matyas
https://www.soundimage.org
Track title: Hypnotic Orient Looping
--
Chapters
00:00 Question
01:19 Accepted answer (Score 3)
02:05 Answer 2 (Score 2)
02:38 Thank you
--
Full question
https://stackoverflow.com/questions/3767...
Question links:
https://www.tensorflow.org/versions/r0.9...
--
Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...
--
Tags
#python #machinelearning #tensorflow #deeplearning
#avk47
--
ACCEPTED ANSWER
Score 3
accuracy depends on correct_prediction, which depends on y.
So when you call sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}), y is computed before accuracy is computed. All of this happens inside the TensorFlow graph.
The TensorFlow graph is the same for training and testing. The only difference is the data you feed to the placeholders x and y_.
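The point above can be sketched without TensorFlow: one forward computation (matmul, add bias, softmax) is reused for both training and test data, and only the input array changes. This is a minimal NumPy sketch; the shapes (784 inputs, 10 classes) follow the MNIST tutorial, and the random batches are hypothetical stand-ins for mnist.train and mnist.test.images.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(x, W, b):
    # the same computation the graph runs, regardless of which data is fed in
    return softmax(x @ W + b)

rng = np.random.default_rng(0)
W = rng.standard_normal((784, 10))
b = np.zeros(10)

train_batch = rng.standard_normal((100, 784))  # stand-in for a training batch
test_batch = rng.standard_normal((100, 784))   # stand-in for mnist.test.images

y_train = forward(train_batch, W, b)  # same function, different data
y_test = forward(test_batch, W, b)
```

Feeding a different array into `forward` plays the same role as passing a different `feed_dict` to `sess.run`: the graph is fixed, only the placeholder values vary.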
ANSWER 2
Score 2
y is computed here:
y = tf.nn.softmax(tf.matmul(x, W) + b) # Line 7
Specifically, what you are looking for within that line is:
tf.matmul(x, W) + b
the output of which is put through the softmax function to identify the class.
This is computed in each of the 1000 passes through the graph. On each pass, the variables W and b are updated by gradient descent, and y is recomputed and compared against y_ to determine the loss.
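That loop can be sketched in plain NumPy: each of the 1000 passes recomputes y = softmax(x·W + b), compares it against the one-hot targets y_ via cross-entropy, and nudges W and b down the gradient. The batch here is a hypothetical random stand-in for MNIST images, and the learning rate is an assumption, not the tutorial's exact value.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
x = rng.standard_normal((100, 784))      # hypothetical batch of flattened images
labels = rng.integers(0, 10, size=100)
y_ = np.eye(10)[labels]                  # one-hot targets, like y_ in the tutorial
W = np.zeros((784, 10))
b = np.zeros(10)
lr = 0.1                                 # assumed learning rate

for step in range(1000):                 # the 1000 passes through the graph
    y = softmax(x @ W + b)               # y is recomputed on every pass
    loss = -np.mean(np.sum(y_ * np.log(y + 1e-12), axis=1))  # cross-entropy
    grad = (y - y_) / len(x)             # gradient of the loss w.r.t. the logits
    W -= lr * (x.T @ grad)               # gradient-descent update of W
    b -= lr * grad.sum(axis=0)           # and of b
```

The loss starts at ln(10) ≈ 2.30 (a uniform prediction over 10 classes) and falls as W and b are updated, which is exactly what `GradientDescentOptimizer` does inside the TensorFlow graph.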