TypeError: Input 'b' of 'MatMul' Op has type float32 that does not match type int32 of argument 'a'
--------------------------------------------------
Rise to the top 3% as a developer or hire one of them at Toptal: https://topt.al/25cXVn
--------------------------------------------------
Music by Eric Matyas
https://www.soundimage.org
Track title: Switch On Looping
--
Chapters
00:00 TypeError: Input 'b' of 'MatMul' Op has type float32 that does not match type int32
01:16 Answer 1 Score 1
01:30 Accepted Answer Score 4
02:30 Thank you
--
Full question
https://stackoverflow.com/questions/4590...
--
Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...
--
Tags
#python #python27 #tensorflow
#avk47
ACCEPTED ANSWER
Score 4
I ran into the same problem using TensorFlow r1.4 with Python 3.4.
Indeed, I think you need to change the code
loss = tf.reduce_mean(tf.nn.nce_loss(nce_weights, nce_biases, embed, train_labels,
                                     num_sampled, vocabulary_size))
into
loss = tf.reduce_mean(tf.nn.nce_loss(nce_weights, nce_biases, train_labels, embed,
                                     num_sampled, vocabulary_size))
or
loss = tf.reduce_mean(tf.nn.nce_loss(
        weights=nce_weights,
        biases=nce_biases,
        inputs=embed,
        labels=train_labels,
        num_sampled=num_sampled,
        num_classes=vocabulary_size))
Meanwhile, you need to change the code back to
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
It is wrong to use tf.cast(..., tf.int32), and there is no need for tf.cast(..., tf.float32) either, because the embeddings are already tf.float32.
P.S.
This fix is also useful when you hit the same problem with tf.nn.sampled_softmax_loss(), because its usage is quite similar to that of nce_loss().
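For reference, here is a minimal sketch (TensorFlow 1.x graph mode) that puts both fixes together: nce_loss called with labels before inputs, and the similarity computed without any casts. The hyperparameter values and names such as train_inputs and valid_examples are assumptions made here for the sake of a runnable example, not code from the question.

import math
import numpy as np
import tensorflow as tf  # TF 1.x, as in the answer

# Assumed hyperparameters, just to make the sketch runnable.
vocabulary_size = 10000
embedding_size = 128
batch_size = 64
num_sampled = 16
valid_examples = np.random.choice(100, 8, replace=False)

train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

# Embeddings and NCE parameters are float32 variables.
embeddings = tf.Variable(
    tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
nce_weights = tf.Variable(
    tf.truncated_normal([vocabulary_size, embedding_size],
                        stddev=1.0 / math.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

embed = tf.nn.embedding_lookup(embeddings, train_inputs)

# labels comes before inputs in tf.nn.nce_loss; keyword arguments make
# the ordering mistake impossible.
loss = tf.reduce_mean(tf.nn.nce_loss(
    weights=nce_weights,
    biases=nce_biases,
    labels=train_labels,
    inputs=embed,
    num_sampled=num_sampled,
    num_classes=vocabulary_size))

# Everything below is already float32, so no tf.cast is needed.
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))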
ANSWER 2
Score 1
Why are you doing the matrix multiplication in integer space? You probably want both of those tf.cast calls to cast to tf.float32.
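To illustrate that point, here is a minimal sketch (the tensors a and b are made up for the example, not taken from the question) showing how the error arises and how casting both operands to tf.float32 avoids it:

import tensorflow as tf  # TF 1.x

a = tf.constant([[1, 2]], dtype=tf.int32)
b = tf.constant([[1.0], [2.0]], dtype=tf.float32)

# tf.matmul(a, b) fails at graph-construction time with:
# TypeError: Input 'b' of 'MatMul' Op has type float32 that does not
# match type int32 of argument 'a'

# Casting both operands to tf.float32 (only 'a' actually changes here)
# makes the dtypes agree:
product = tf.matmul(tf.cast(a, tf.float32), tf.cast(b, tf.float32))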