Difference between numpy dot() and Python 3.5+ matrix multiplication @
--
Music by Eric Matyas
https://www.soundimage.org
Track title: Sunrise at the Stream
--
Chapters
00:00 Question
00:50 Accepted answer (Score 215)
02:10 Answer 2 (Score 24)
02:48 Answer 3 (Score 20)
04:10 Answer 4 (Score 17)
05:34 Thank you
--
Full question
https://stackoverflow.com/questions/3414...
Question links:
[new matrix multiplication operator (@)]: https://docs.python.org/3/whatsnew/3.5.h...
[numpy dot]: http://docs.scipy.org/doc/numpy/referenc...
Accepted answer links:
[np.matmul]: https://numpy.org/doc/stable/reference/g...
[np.dot]: http://docs.scipy.org/doc/numpy/referenc...
Answer 2 links:
[perfplot]: https://github.com/nschloe/perfplot
[image]: https://i.stack.imgur.com/AoEFk.png
Answer 4 links:
[broadcasting]: https://docs.scipy.org/doc/numpy/user/ba...
--
Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...
--
Tags
#python #numpy #matrixmultiplication #python35
#avk47
ACCEPTED ANSWER
Score 230
The @ operator calls the array's __matmul__ method, not dot. This method is also present in the API as the function np.matmul.
>>> a = np.random.rand(8,13,13)
>>> b = np.random.rand(8,13,13)
>>> np.matmul(a, b).shape
(8, 13, 13)
From the documentation:
matmul differs from dot in two important ways.
- Multiplication by scalars is not allowed.
- Stacks of matrices are broadcast together as if the matrices were elements.
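The first point is easy to check directly; a minimal sketch showing that dot accepts a scalar operand while matmul (and therefore @) rejects it:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# np.dot happily multiplies by a scalar, just like a * 2
print(np.dot(a, 2))

# np.matmul refuses scalar operands outright
try:
    np.matmul(a, 2)
except (TypeError, ValueError) as exc:
    print("matmul rejected the scalar:", exc)
```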
The last point makes it clear that dot and matmul behave differently when passed 3-D (or higher-dimensional) arrays. Quoting from the documentation some more:
For matmul:
If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
For np.dot:
For 2-D arrays it is equivalent to matrix multiplication, and for 1-D arrays to inner product of vectors (without complex conjugation). For N dimensions it is a sum product over the last axis of a and the second-to-last of b.
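A quick way to see the second difference is to compare output shapes on stacked matrices; a minimal sketch with random arrays:

```python
import numpy as np

a = np.random.rand(3, 4, 5)
b = np.random.rand(3, 5, 6)

# matmul treats a and b as stacks of three matrices and multiplies
# them pairwise: (3, 4, 5) @ (3, 5, 6) -> (3, 4, 6)
print(np.matmul(a, b).shape)

# dot sums over the last axis of a and the second-to-last of b,
# pairing every matrix in a with every matrix in b -> (3, 4, 3, 6)
print(np.dot(a, b).shape)
```

Slice i of the matmul result is simply the ordinary 2-D product of slice i of a with slice i of b.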
ANSWER 2
Score 31
Just FYI, @ and its numpy equivalents dot and matmul are all equally fast. (Plot created with perfplot, a project of mine.)
Code to reproduce the plot:
import perfplot
import numpy


def setup(n):
    A = numpy.random.rand(n, n)
    x = numpy.random.rand(n)
    return A, x


def at(A, x):
    return A @ x


def numpy_dot(A, x):
    return numpy.dot(A, x)


def numpy_matmul(A, x):
    return numpy.matmul(A, x)


perfplot.show(
    setup=setup,
    kernels=[at, numpy_dot, numpy_matmul],
    n_range=[2 ** k for k in range(15)],
)
ANSWER 3
Score 23
The answer by @ajcr explains how dot and matmul (invoked by the @ symbol) differ. By looking at a simple example, one clearly sees how the two behave differently when operating on 'stacks of matrices' or tensors.
To clarify the differences, take a 4x4 array and return the dot product and matmul product with a 3x4x2 'stack of matrices' or tensor.
import numpy as np

fourbyfour = np.array([
    [1, 2, 3, 4],
    [3, 2, 1, 4],
    [5, 4, 6, 7],
    [11, 12, 13, 14]
])
threebyfourbytwo = np.array([
    [[2, 3], [11, 9], [32, 21], [28, 17]],
    [[2, 3], [1, 9], [3, 21], [28, 7]],
    [[2, 3], [1, 9], [3, 21], [28, 7]],
])

print('4x4*3x4x2 dot:\n {}\n'.format(np.dot(fourbyfour, threebyfourbytwo)))
print('4x4*3x4x2 matmul:\n {}\n'.format(np.matmul(fourbyfour, threebyfourbytwo)))
The products of each operation appear below. Notice how the dot product is
...a sum product over the last axis of a and the second-to-last of b
and how the matmul product is formed by broadcasting the matrices together.
4x4*3x4x2 dot:
[[[232 152]
[125 112]
[125 112]]
[[172 116]
[123 76]
[123 76]]
[[442 296]
[228 226]
[228 226]]
[[962 652]
[465 512]
[465 512]]]
4x4*3x4x2 matmul:
[[[232 152]
[172 116]
[442 296]
[962 652]]
[[125 112]
[123 76]
[228 226]
[465 512]]
[[125 112]
[123 76]
[228 226]
[465 512]]]
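To confirm how the two outputs above relate, here is a small self-contained check (re-declaring the same arrays): each slice of the matmul result is the 4x4 matrix applied to one 4x2 matrix in the stack, while dot produces the same numbers with its axes interleaved differently.

```python
import numpy as np

fourbyfour = np.array([
    [1, 2, 3, 4],
    [3, 2, 1, 4],
    [5, 4, 6, 7],
    [11, 12, 13, 14]
])
stack = np.array([
    [[2, 3], [11, 9], [32, 21], [28, 17]],
    [[2, 3], [1, 9], [3, 21], [28, 7]],
    [[2, 3], [1, 9], [3, 21], [28, 7]],
])

mm = np.matmul(fourbyfour, stack)  # shape (3, 4, 2): one 4x2 product per slice
dd = np.dot(fourbyfour, stack)     # shape (4, 3, 2): stack axis moved to the middle

for i in range(3):
    # matmul slice i is the plain 2-D product fourbyfour @ stack[i] ...
    assert np.array_equal(mm[i], fourbyfour @ stack[i])
    # ... and the same matrix appears in dot as dd[:, i, :]
    assert np.array_equal(dd[:, i, :], fourbyfour @ stack[i])

print(mm.shape, dd.shape)
```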
ANSWER 4
Score 18
In mathematics, I think dot in numpy makes more sense:
dot(a, b)_{i,j,k,a,b,c} = Σ_m a_{i,j,k,m} · b_{a,b,m,c}
since it gives the dot product when a and b are vectors, or the matrix multiplication when a and b are matrices.
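A minimal check of those two special cases:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
# 1-D arguments: dot is the ordinary inner product
print(np.dot(v, w))  # 1*4 + 2*5 + 3*6 = 32.0

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
# 2-D arguments: dot is plain matrix multiplication, identical to A @ B
print(np.dot(A, B))
```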
As for the matmul operation in numpy, it consists of parts of the dot result, and it can be defined as
matmul(a, b)_{i,j,k,c} = Σ_m a_{i,j,k,m} · b_{i,j,m,c}
So you can see that matmul(a, b) returns an array with a smaller shape, which has smaller memory consumption and makes more sense in applications. In particular, combined with broadcasting, you can get
matmul(a, b)_{i,j,k,l} = Σ_m a_{i,j,k,m} · b_{j,m,l}
for example.
From the above two definitions, you can see the requirements for using those two operations. Assume a.shape = (s1, s2, s3, s4) and b.shape = (t1, t2, t3, t4).
- To use dot(a, b) you need
  - t3 = s4
- To use matmul(a, b) you need
  - t3 = s4
  - t2 = s2, or one of t2 and s2 is 1
  - t1 = s1, or one of t1 and s1 is 1
Use the following piece of code to convince yourself.
import numpy as np

for it in range(10000):
    a = np.random.rand(5, 6, 2, 4)
    b = np.random.rand(6, 4, 3)
    c = np.matmul(a, b)  # shape (5, 6, 2, 3)
    d = np.dot(a, b)     # shape (5, 6, 2, 6, 3)
    # print('c shape:', c.shape, 'd shape:', d.shape)
    for i in range(5):
        for j in range(6):
            for k in range(2):
                for l in range(3):
                    if c[i, j, k, l] != d[i, j, k, j, l]:
                        print(it, i, j, k, l, c[i, j, k, l] == d[i, j, k, j, l])  # you will not see them
