PyTorch: Tensor Operations

**1. Addition, Subtraction, Multiplication and Division**

torch.add / torch.sub / torch.mul / torch.div
perform element-wise addition, subtraction, multiplication and division on the corresponding elements.
They behave the same as the operators +, -, *, /.
// performs floor (integer) division.

In [1]: import torch

In [2]: a = torch.rand(3,4)

In [3]: b = torch.rand(4)

In [4]: a+b
Out[4]:
tensor([[0.2696, 1.2878, 1.4269, 0.6681],
        [0.6802, 1.0790, 1.8201, 0.7958],
        [1.0932, 1.6674, 0.9889, 0.8394]])

In [5]: torch.add(a,b)
Out[5]:
tensor([[0.2696, 1.2878, 1.4269, 0.6681],
        [0.6802, 1.0790, 1.8201, 0.7958],
        [1.0932, 1.6674, 0.9889, 0.8394]])

In [6]: torch.all(torch.eq(a+b,torch.add(a,b)))
Out[6]: tensor(1, dtype=torch.uint8)

In [7]: torch.all(torch.eq(a-b,torch.sub(a,b)))
Out[7]: tensor(1, dtype=torch.uint8)

In [8]: torch.all(torch.eq(a*b,torch.mul(a,b)))
Out[8]: tensor(1, dtype=torch.uint8)

In [9]: torch.all(torch.eq(a/b,torch.div(a,b)))
Out[9]: tensor(1, dtype=torch.uint8)

In [10]: a//b
Out[10]:
tensor([[0., 0., 0., 3.],
        [1., 0., 1., 4.],
        [3., 0., 0., 4.]])
        
In [11]: a/b
Out[11]:
tensor([[0.1305, 0.3459, 0.6456, 3.5868],
        [1.8521, 0.1277, 1.0990, 4.4633],
        [3.5836, 0.7426, 0.1404, 4.7625]])
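
Note that a has shape (3, 4) while b has shape (4): the element-wise operators broadcast b across the rows of a. A minimal sketch (variable names are illustrative) verifying that this matches expanding b explicitly:

import torch

a = torch.rand(3, 4)
b = torch.rand(4)

# b (shape [4]) is broadcast to shape [3, 4] before the element-wise add,
# so a + b equals adding an explicitly expanded copy of b
expanded = b.unsqueeze(0).expand_as(a)
print(torch.all(torch.eq(a + b, a + expanded)))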

**2. Matrix Multiplication**

torch.mm only works on 2-D matrices.
torch.matmul performs matrix multiplication and works for tensors of any dimension.
@ is the overloaded matrix-multiplication operator, equivalent to torch.matmul.
When multiplying tensors with more than 2 dimensions, only the last two dimensions take part in the matrix product; the leading dimensions are kept unchanged (and broadcast when they differ).

In [14]: a = torch.full([2,2],3)

In [15]: a
Out[15]:
tensor([[3., 3.],
        [3., 3.]])

In [16]: b = torch.ones(2,2)

In [17]: b
Out[17]:
tensor([[1., 1.],
        [1., 1.]])

In [18]: torch.mm(a,b)
Out[18]:
tensor([[6., 6.],
        [6., 6.]])

In [19]: a@b
Out[19]:
tensor([[6., 6.],
        [6., 6.]])

In [20]: torch.all(torch.eq(a@b,torch.mm(a,b)))
Out[20]: tensor(1, dtype=torch.uint8)

In [21]: torch.all(torch.eq(a@b,torch.matmul(a,b)))
Out[21]: tensor(1, dtype=torch.uint8)

In [23]: a = torch.rand(4,3,28,64)  # higher-dim matmul: only the last two dims take part in the matrix product

In [24]: b = torch.rand(4,3,64,32)

In [25]: torch.matmul(a,b).shape
Out[25]: torch.Size([4, 3, 28, 32])

In [27]: a = torch.rand(4,3,28,64)  # higher-dim matmul: when the other dims differ, they are broadcast

In [28]: b = torch.rand(4,1,64,32)

In [29]: torch.matmul(a,b).shape
Out[29]: torch.Size([4, 3, 28, 32])
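
To check the claim that torch.mm is restricted to 2-D matrices, here is a minimal sketch (shapes reused from above) contrasting it with torch.matmul on batched inputs:

import torch

a = torch.rand(4, 3, 28, 64)
b = torch.rand(4, 3, 64, 32)

# matmul batches over the leading dims; mm does not accept them
print(torch.matmul(a, b).shape)   # torch.Size([4, 3, 28, 32])

try:
    torch.mm(a, b)                # expected to fail: mm only takes 2-D matrices
except RuntimeError as e:
    print("torch.mm rejects >2-D input:", e)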

**3. Powers and Roots**

torch.pow raises elements to a power.
** is the overloaded power operator.
sqrt / rsqrt compute the square root and the reciprocal of the square root.
torch.exp computes e raised to each element (exponential with base e).
torch.log computes the natural logarithm (base e), the inverse of torch.exp.

In [30]: a = torch.full([2,2],3)

In [31]: a
Out[31]:
tensor([[3., 3.],
        [3., 3.]])

In [32]: a**2
Out[32]:
tensor([[9., 9.],
        [9., 9.]])

In [33]: a.sqrt()
Out[33]:
tensor([[1.7321, 1.7321],
        [1.7321, 1.7321]])

In [34]: a.rsqrt()
Out[34]:
tensor([[0.5774, 0.5774],
        [0.5774, 0.5774]])

In [35]: a.pow(2)
Out[35]:
tensor([[9., 9.],
        [9., 9.]])

In [36]: torch.pow(a,2)
Out[36]:
tensor([[9., 9.],
        [9., 9.]])

In [37]: a = torch.exp(torch.ones(2,2))

In [38]: a
Out[38]:
tensor([[2.7183, 2.7183],
        [2.7183, 2.7183]])

In [39]: torch.log(a)
Out[39]:
tensor([[1., 1.],
        [1., 1.]])
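
As a quick sanity check on how these functions relate, a minimal sketch (float comparisons use allclose to allow for rounding):

import torch

a = torch.full([2, 2], 3.0)

# ** and torch.pow are the same operation
print(torch.all(torch.eq(a ** 2, torch.pow(a, 2))))

# sqrt is the 0.5 power; rsqrt is its reciprocal
print(torch.allclose(a.sqrt(), a ** 0.5))
print(torch.allclose(a.rsqrt(), 1 / a.sqrt()))

# log (base e) inverts exp
print(torch.allclose(torch.log(torch.exp(a)), a))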

**4. Approximation Operations**

torch.floor rounds down (toward negative infinity).
torch.ceil rounds up (toward positive infinity).
torch.trunc keeps the integer part (truncates toward zero).
torch.frac keeps the fractional part.
torch.round rounds to the nearest integer.
torch.clamp clips values into a range: the first argument is the minimum, the second is the maximum; the maximum may be omitted to clip from below only.

In [40]: a = torch.tensor(3.14)

In [41]: a.floor(),a.ceil(),a.trunc(),a.frac()
Out[41]: (tensor(3.), tensor(4.), tensor(3.), tensor(0.1400))
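
floor and trunc agree for positive inputs (as above), but they differ for negative ones: floor moves toward negative infinity while trunc moves toward zero. A minimal sketch:

import torch

x = torch.tensor(-3.14)
print(x.floor(), x.trunc())   # floor: tensor(-4.), trunc: tensor(-3.)
print(x.round())              # tensor(-3.)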

In [42]: a = torch.tensor(3.499)

In [43]: a.round()
Out[43]: tensor(3.)

In [44]: a = torch.tensor(3.5)

In [45]: a.round()
Out[45]: tensor(4.)

In [48]: grad = torch.rand(2,3)*15

In [49]: grad
Out[49]:
tensor([[ 5.4135,  7.0951, 13.4149],
        [14.5973,  6.1970, 13.8659]])

In [50]: grad.clamp(10)  # clamp with a minimum of 10
Out[50]:
tensor([[10.0000, 10.0000, 13.4149],
        [14.5973, 10.0000, 13.8659]])

In [51]: grad.clamp(0,10)  # clamp to the range [0, 10]
Out[51]:
tensor([[ 5.4135,  7.0951, 10.0000],
        [10.0000,  6.1970, 10.0000]])
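
clamp also accepts min and max as keyword arguments, so values can be clipped from above only; a minimal sketch reusing a grad tensor as above:

import torch

grad = torch.rand(2, 3) * 15

print(grad.clamp(max=10))                 # clip from above only (no lower bound)
print(torch.clamp(grad, min=0, max=10))   # functional form, equivalent to grad.clamp(0, 10)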
