**torch.norm(p, dim)**
In [1]: import torch
In [2]: a = torch.full([8], 1.)
In [3]: a
Out[3]: tensor([1., 1., 1., 1., 1., 1., 1., 1.])
In [4]: b = a.view(2,4)
In [5]: b
Out[5]:
tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.]])
In [6]: c = a.view(2,2,2)
In [7]: c
Out[7]:
tensor([[[1., 1.],
[1., 1.]],
[[1., 1.],
[1., 1.]]])
In [8]: a.norm(1),b.norm(1),c.norm(1)  # the 1-norm is the sum of the absolute values of all elements
Out[8]: (tensor(8.), tensor(8.), tensor(8.))
In [9]: a.norm(2),b.norm(2),c.norm(2)  # the 2-norm is the square root of the sum of the squared absolute values
Out[9]: (tensor(2.8284), tensor(2.8284), tensor(2.8284))
In [10]: b.norm(1,dim=1)  # 1-norm along dimension 1
Out[10]: tensor([4., 4.])
In [11]: b.norm(2,dim=1)  # 2-norm along dimension 1
Out[11]: tensor([2., 2.])
In [12]: c.norm(1,dim=0)  # 1-norm along dimension 0
Out[12]:
tensor([[2., 2.],
[2., 2.]])
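The norms above are easy to verify by hand. A minimal pure-Python sketch of the same reductions (no torch required; the data mirrors the transcript's eight ones viewed as 2x4):

```python
import math

# Same data as the transcript: eight ones, viewed as a 2x4 matrix.
b = [[1.0, 1.0, 1.0, 1.0],
     [1.0, 1.0, 1.0, 1.0]]

def norm_1(xs):
    # 1-norm: sum of absolute values
    return sum(abs(x) for x in xs)

def norm_2(xs):
    # 2-norm: square root of the sum of squared absolute values
    return math.sqrt(sum(abs(x) ** 2 for x in xs))

flat = [x for row in b for x in row]
print(norm_1(flat))                # 8.0, matches a.norm(1)
print(round(norm_2(flat), 4))      # 2.8284, matches a.norm(2)
print([norm_1(row) for row in b])  # [4.0, 4.0], matches b.norm(1, dim=1)
print([norm_2(row) for row in b])  # [2.0, 2.0], matches b.norm(2, dim=1)
```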
**min / max / mean / prod / sum / argmin / argmax**
torch.min()  minimum
torch.max()  maximum
torch.mean() mean
torch.prod() product of all elements
torch.sum()  sum
torch.argmin()    index of the minimum; without dim, the index is into the flattened tensor
torch.argmax()    index of the maximum; without dim, the index is into the flattened tensor
torch.argmin(dim) index of the minimum along the given dimension
torch.argmax(dim) index of the maximum along the given dimension
In [13]: a = torch.arange(8).view(2,4).float()
In [14]: a
Out[14]:
tensor([[0., 1., 2., 3.],
[4., 5., 6., 7.]])
In [15]: a.min(),a.max(),a.mean(),a.prod()
Out[15]: (tensor(0.), tensor(7.), tensor(3.5000), tensor(0.))
In [16]: a.sum()
Out[16]: tensor(28.)
In [17]: a.argmax()
Out[17]: tensor(7)
In [18]: a.argmin()
Out[18]: tensor(0)
In [19]: a.argmin(dim=0),a.argmin(dim=1)
Out[19]: (tensor([0, 0, 0, 0]), tensor([0, 0]))
In [20]: a.argmax(dim=0),a.argmax(dim=1)
Out[20]: (tensor([1, 1, 1, 1]), tensor([3, 3]))
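The flattened-vs-per-dimension argmax semantics can be sketched in plain Python (torch is not needed here; the data mirrors the transcript's 2x4 tensor):

```python
# Pure-Python sketch of argmin/argmax semantics.
a = [[0.0, 1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0, 7.0]]

flat = [x for row in a for x in row]

# Without dim: the index is into the flattened tensor.
argmax_flat = max(range(len(flat)), key=flat.__getitem__)
print(argmax_flat)  # 7, matches a.argmax()

# dim=1: index of the max within each row.
argmax_dim1 = [max(range(len(row)), key=row.__getitem__) for row in a]
print(argmax_dim1)  # [3, 3], matches a.argmax(dim=1)

# dim=0: index of the max within each column.
cols = list(zip(*a))
argmax_dim0 = [max(range(len(col)), key=col.__getitem__) for col in cols]
print(argmax_dim0)  # [1, 1, 1, 1], matches a.argmax(dim=0)
```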
**topk / kthvalue**
By default, topk returns the k largest values and their indices; with largest=False it returns the k smallest values and their indices.
kthvalue returns the k-th smallest value and its index.
In [21]: a = torch.rand(4,10)
In [22]: a
Out[22]:
tensor([[0.8687, 0.9093, 0.3411, 0.6682, 0.6670, 0.4628, 0.7466, 0.5951, 0.7170,
0.9433],
[0.5066, 0.7350, 0.4292, 0.9048, 0.5080, 0.1066, 0.1216, 0.4718, 0.0018,
0.7826],
[0.8845, 0.7324, 0.0309, 0.6284, 0.1359, 0.8983, 0.4142, 0.5262, 0.2525,
0.6233],
[0.3131, 0.3884, 0.1305, 0.0862, 0.1915, 0.6900, 0.0086, 0.5318, 0.2383,
0.1415]])
In [23]: a.shape
Out[23]: torch.Size([4, 10])
In [24]: a.topk(3,dim=1)
Out[24]:
torch.return_types.topk(
values=tensor([[0.9433, 0.9093, 0.8687],
[0.9048, 0.7826, 0.7350],
[0.8983, 0.8845, 0.7324],
[0.6900, 0.5318, 0.3884]]),
indices=tensor([[9, 1, 0],
[3, 9, 1],
[5, 0, 1],
[5, 7, 1]]))
In [25]: a.topk(3,dim=1,largest=False)  # take the 3 smallest
Out[25]:
torch.return_types.topk(
values=tensor([[0.3411, 0.4628, 0.5951],
[0.0018, 0.1066, 0.1216],
[0.0309, 0.1359, 0.2525],
[0.0086, 0.0862, 0.1305]]),
indices=tensor([[2, 5, 7],
[8, 5, 6],
[2, 4, 8],
[6, 3, 2]]))
In [26]: a.kthvalue(8,dim=1)
Out[26]:
torch.return_types.kthvalue(
values=tensor([0.8687, 0.7350, 0.7324, 0.3884]),
indices=tensor([0, 1, 1, 1]))
In [27]: a.kthvalue(3)
Out[27]:
torch.return_types.kthvalue(
values=tensor([0.5951, 0.1216, 0.2525, 0.1305]),
indices=tensor([7, 6, 8, 2]))
In [28]: a.kthvalue(3,dim=1)  # dim defaults to the last dimension, so this matches In [27]
Out[28]:
torch.return_types.kthvalue(
values=tensor([0.5951, 0.1216, 0.2525, 0.1305]),
indices=tensor([7, 6, 8, 2]))
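Under the hood, both operations amount to sorting indices by value. A minimal pure-Python sketch of what topk and kthvalue compute for a single row (torch not required; the row values are made up for illustration):

```python
row = [0.87, 0.91, 0.34, 0.67, 0.67, 0.46, 0.75, 0.60, 0.72, 0.94]

def topk(xs, k, largest=True):
    # Sort indices by value; keep the k largest (or smallest) ones.
    order = sorted(range(len(xs)), key=xs.__getitem__, reverse=largest)
    idx = order[:k]
    return [xs[i] for i in idx], idx

def kthvalue(xs, k):
    # k-th smallest value (k is 1-based) and its index.
    order = sorted(range(len(xs)), key=xs.__getitem__)
    i = order[k - 1]
    return xs[i], i

print(topk(row, 3))                 # ([0.94, 0.91, 0.87], [9, 1, 0])
print(topk(row, 3, largest=False))  # ([0.34, 0.46, 0.6], [2, 5, 7])
print(kthvalue(row, 3))             # (0.6, 7)
```

Note that kthvalue counts from the smallest, so `kthvalue(row, 3)` agrees with the last entry of `topk(row, 3, largest=False)`.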