Differences between torch.mm, torch.mul, and torch.matmul

1. Element-wise multiplication

  • torch.mul(a, b) multiplies a and b element-wise. The two tensors usually have the same shape; for example, if a has shape (1, 2) and b has shape (1, 2), the result is also a (1, 2) tensor.
>>> import torch
>>> a = torch.rand(1, 2)
>>> b = torch.rand(1, 2)
>>> torch.mul(a, b)  # returns a 1x2 tensor

# multiplying by a column vector (broadcasting)
>>> a = torch.ones(3, 4)
>>> a
tensor([[1., 1., 1., 1.],
        [1., 1., 1., 1.],
        [1., 1., 1., 1.]])
>>> b = torch.Tensor([1, 2, 3]).reshape((3, 1))
>>> b
tensor([[1.],
        [2.],
        [3.]])
>>> torch.mul(a, b)
tensor([[1., 1., 1., 1.],
        [2., 2., 2., 2.],
        [3., 3., 3., 3.]])
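
torch.mul follows PyTorch's standard broadcasting rules, and the * operator performs the same element-wise multiply. A quick check, continuing the session above:

>>> c = a * b  # the * operator is element-wise, like torch.mul
>>> torch.equal(torch.mul(a, b), c)
True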

2. Matrix multiplication

  • torch.mm(a, b) performs matrix multiplication of a and b; for example, if a has shape (3, 4) and b has shape (4, 2), the result is a (3, 2) matrix. torch.mm only accepts 2-D matrices.
>>> a = torch.ones(3,4)
>>> b = torch.ones(4,2)
>>> torch.mm(a, b)
tensor([[4., 4.],
        [4., 4.],
        [4., 4.]])
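Because torch.mm is limited to 2-D inputs, passing batched (3-D) tensors fails; torch.bmm is the batched counterpart for that case. A small sketch, with shapes chosen here only for illustration:

>>> a = torch.ones(5, 3, 4)
>>> b = torch.ones(5, 4, 2)
>>> torch.mm(a, b)         # raises a RuntimeError: mm only accepts 2-D tensors
>>> torch.bmm(a, b).shape  # torch.bmm multiplies each pair of matrices in the batch
torch.Size([5, 3, 2])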
  • torch.matmul(a, b) is generally used for higher-dimensional tensors and broadcasts over batch dimensions; for example, if a has shape (3, 4) and b has shape (5, 4, 2), the result has shape (5, 3, 2), because the last two dimensions (3, 4) and (4, 2) multiply to (3, 2) while a is broadcast across the batch dimension of size 5 (see the check after the snippet below).
>>> a = torch.ones(5,4,2)
>>> b = torch.ones(5,2,3)
>>> torch.matmul(a, b).shape
torch.Size([5, 4, 3])
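
A quick check of the broadcast case described in the bullet above, where a 2-D a is multiplied with a batched b; the @ operator is equivalent to torch.matmul:

>>> a = torch.ones(3, 4)
>>> b = torch.ones(5, 4, 2)
>>> torch.matmul(a, b).shape  # a is broadcast across the batch dimension of b
torch.Size([5, 3, 2])
>>> (a @ b).shape             # @ is equivalent to torch.matmul
torch.Size([5, 3, 2])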
