Main topics
1. Tensor data types: the default type and type conversion.
2. Creating tensors: torch.tensor(), torch.Tensor(), converting between tensors and NumPy arrays, random-number tensors, generator functions, and more.
3. Tensor operations: reshaping, accessing elements, concatenating and splitting, and more.
4. Tensor computation: comparisons, basic arithmetic, statistics, and more.
In mathematics:
Scalar: a single number
Vector: a one-dimensional array (a row or column of numbers)
Matrix: a two-dimensional array
Tensor: an array with more than two dimensions
In PyTorch:
A tensor (Tensor) is a data structure that can be a scalar, a vector, a matrix, or an array of even higher dimension.
PyTorch tensors are therefore very similar to NumPy arrays (ndarray). The sketch below illustrates the four cases.
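A minimal sketch of the four cases as PyTorch tensors (.ndim reports the number of dimensions):
import torch

scalar = torch.tensor(3.14) # 0-D tensor: a single number
vector = torch.tensor([1.0, 2.0, 3.0]) # 1-D tensor
matrix = torch.ones(2, 3) # 2-D tensor
cube = torch.zeros(2, 3, 4) # 3-D tensor
print(scalar.ndim, vector.ndim, matrix.ndim, cube.ndim)
>>>0 1 2 3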
Torch defines eight CPU tensor types and eight GPU tensor types:
Data type | CPU tensor | GPU tensor |
---|---|---|
32-bit floating point | torch.FloatTensor | torch.cuda.FloatTensor |
64-bit floating point | torch.DoubleTensor | torch.cuda.DoubleTensor |
16-bit floating point | torch.HalfTensor | torch.cuda.HalfTensor |
8-bit integer (unsigned) | torch.ByteTensor | torch.cuda.ByteTensor |
8-bit integer (signed) | torch.CharTensor | torch.cuda.CharTensor |
16-bit integer (signed) | torch.ShortTensor | torch.cuda.ShortTensor |
32-bit integer (signed) | torch.IntTensor | torch.cuda.IntTensor |
64-bit integer (signed) | torch.LongTensor | torch.cuda.LongTensor |
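A quick illustration of how these type names relate to dtypes (the .type() method returns the type string; the CUDA line runs only when a GPU is available):
print(torch.zeros(2, dtype=torch.float32).type()) # torch.FloatTensor
print(torch.zeros(2, dtype=torch.int64).type()) # torch.LongTensor
if torch.cuda.is_available():
    print(torch.zeros(2, device="cuda").type()) # torch.cuda.FloatTensor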
The default data type in torch is 32-bit floating point (torch.FloatTensor); by default, torch.Tensor is an alias for torch.FloatTensor.
Setting the default data type: torch.set_default_tensor_type()
Checking a tensor's data type: torch.tensor([1.2, 3.4]).dtype
print(torch.tensor([1.2, 3.4]).dtype)
>>>torch.float32
torch.set_default_tensor_type(torch.DoubleTensor)
print(torch.tensor([1.2, 3.4]).dtype)
>>>torch.float64
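Note that torch.set_default_tensor_type() changes a global setting that affects every tensor created afterwards; the examples below assume the float32 default, so restore it first:
torch.set_default_tensor_type(torch.FloatTensor)
print(torch.tensor([1.2, 3.4]).dtype)
>>>torch.float32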
a = torch.tensor([1.2, 3.4])
print("a.dtype:", a.dtype)
print("a.long():", a.long().dtype)
print("a.int():", a.int().dtype)
print("a.float():", a.float().dtype)
>>>a.dtype: torch.float32
>>>a.long(): torch.int64
>>>a.int(): torch.int32
>>>a.float(): torch.float32
A Python list or sequence can be turned into a tensor with the torch.tensor() function.
A = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(A)
>>>tensor([[1., 2.],
[3., 4.]])
A = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(A)
print("A的维度:", A.shape) # 查看维度
print("A的维度:", A.size()) # 查看维度
print("A的元素数量:", A.numel()) # 查看元素数量
>>>tensor([[1., 2.],
[3., 4.]])
>>>A's shape: torch.Size([2, 2])
>>>A's shape: torch.Size([2, 2])
>>>Number of elements in A: 4
B = torch.tensor((1, 2, 3), dtype=torch.float32, requires_grad=True)
print(B)
>>>tensor([1., 2., 3.], requires_grad=True)
B = torch.tensor((1, 2, 3), dtype=torch.float32, requires_grad=True)
print(B)
Y = B.pow(2).sum()
print(Y)
Y.backward()
print(B.grad)
>>>tensor([1., 2., 3.], requires_grad=True)
>>>tensor(14., grad_fn=<SumBackward0>)
>>>tensor([2., 4., 6.])
The output is the gradient at each position, $2 \times B$. Note: only floating-point tensors can have gradients; computing gradients on other types raises an error.
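A minimal check of that restriction (the exact wording of the error varies across PyTorch versions):
try:
    torch.tensor([1, 2, 3], requires_grad=True) # integer dtype cannot require gradients
except RuntimeError as e:
    print(e) # explains that only floating point (and complex) dtypes support gradients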
torch.Tensor() also converts a list into a tensor, similarly to torch.tensor():
C = torch.Tensor([1.0, 2.0])
print(C)
>>>tensor([1., 2.])
D = torch.Tensor(2, 3) # uninitialized 2x3 tensor (contents are arbitrary memory)
print(D)
>>>tensor([[0.0000e+00, 0.0000e+00, 4.2039e-45],
[0.0000e+00, 1.4013e-45, 0.0000e+00]])
E = torch.Tensor([[1.0, 2.0], [3.0, 4.0]])
print(E)
print(torch.zeros_like(E)) # all-zeros tensor with E's shape
print(torch.ones_like(E)) # all-ones tensor with E's shape
print(torch.rand_like(E)) # random tensor with E's shape
>>>tensor([[1., 2.],
[3., 4.]])
>>>tensor([[0., 0.],
[0., 0.]])
>>>tensor([[1., 1.],
[1., 1.]])
>>>tensor([[0.5060, 0.8078],
[0.0232, 0.1987]])
F = torch.tensor([2, 3], dtype=torch.float16, requires_grad=True)
print(F)
print(F.new_tensor([[1, 2], [3, 4]])) # new tensor with F's dtype
print(F.new_full((3, 3), fill_value=1)) # 3x3 tensor filled with fill_value
print(F.new_zeros((3, 3))) # 3x3 all-zeros tensor
print(F.new_empty((3, 3))) # 3x3 uninitialized tensor
print(F.new_ones((3, 3))) # 3x3 all-ones tensor
>>>tensor([2., 3.], dtype=torch.float16, requires_grad=True)
>>>tensor([[1., 2.],
[3., 4.]], dtype=torch.float16)
>>>tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=torch.float16)
>>>tensor([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]], dtype=torch.float16)
>>>tensor([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]], dtype=torch.float16)
>>>tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=torch.float16)
import numpy as np

Gnp = np.ones((3, 3))
GTensor1 = torch.as_tensor(Gnp)
GTensor2 = torch.from_numpy(Gnp)
print(GTensor1)
print(GTensor2)
>>>tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=torch.float64)
>>>tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=torch.float64)
Here the tensor's dtype is float64, because NumPy arrays are 64-bit floating point by default.
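Also worth knowing: torch.from_numpy() shares memory with the source array rather than copying, so in-place changes to the array are visible in the tensor; call .float() on the result if float32 is needed. A small check:
Gnp = np.ones((3, 3))
GTensor = torch.from_numpy(Gnp)
Gnp[0, 0] = 100.0 # modify the NumPy array in place
print(GTensor[0, 0]) # the tensor sees the change
print(GTensor.float().dtype) # .float() returns a float32 copy
>>>tensor(100., dtype=torch.float64)
>>>torch.float32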
HTensor = torch.Tensor(2, 3)
Hnp = HTensor.numpy()
print(Hnp)
>>>[[7.826805e-03 7.826805e-03 1.261169e-44]
[0.000000e+00 1.401298e-45 0.000000e+00]]
Before generating random numbers, you can call torch.manual_seed() to fix the random seed, so that the generated numbers are reproducible.
torch.manual_seed(111)
print(torch.normal(mean=0.0, std=torch.tensor(1.0))) # mean 0, standard deviation 1
print(torch.normal(mean=0.0, std=torch.arange(1, 5.0))) # mean 0, standard deviations 1, 2, 3, 4
>>>tensor(-0.1222)
>>>tensor([-0.7573, 2.0832, 0.9908, 2.1260])
torch.manual_seed(111)
# means 1, 2, 3, 4 and standard deviations 1, 2, 3, 4
print(torch.normal(mean=torch.arange(1, 5.0), std=torch.arange(1, 5.0)))
>>>tensor([0.8778, 0.4855, 6.1248, 5.3211])
torch.manual_seed(111)
print(torch.rand(3, 4)) # 3x4 tensor of uniform random numbers on [0, 1)
>>>tensor([[0.7156, 0.9140, 0.2819, 0.2581],
[0.6311, 0.6001, 0.9312, 0.2153],
[0.6033, 0.7328, 0.1857, 0.5101]])
# generate a random tensor with the same shape as another tensor
torch.manual_seed(111)
I = torch.ones(2, 3)
print(torch.rand_like(I))
>>>tensor([[0.7156, 0.9140, 0.2819],
[0.2581, 0.6311, 0.6001]])
# generate a tensor of standard-normal random numbers
J = torch.randn(3, 3)
print(J)
print(torch.rand_like(J)) # uniform random numbers with J's shape
>>>tensor([[-1.7776, 0.5832, -0.2682],
[ 0.0241, -1.3542, -1.2677],
[-2.7603, -0.3466, 0.5342]])
>>>tensor([[0.4177, 0.3047, 0.0382],
[0.5805, 0.2089, 0.3964],
[0.3527, 0.5514, 0.3021]])
K = torch.arange(start=0, end=10, step=2)
print(K)
>>>tensor([0, 2, 4, 6, 8])
torch.linspace(): generate a fixed number of evenly spaced values:
L = torch.linspace(start=1, end=10, steps=5)
print(L)
>>>tensor([ 1.0000, 3.2500, 5.5000, 7.7500, 10.0000])
torch.logspace(): generate values evenly spaced on a logarithmic scale, from $10^{\text{start}}$ to $10^{\text{end}}$:
M = torch.logspace(start=0.1, end=1.0, steps=5)
print(M)
>>>tensor([ 1.2589, 2.1135, 3.5481, 5.9566, 10.0000])
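With the default base of 10, torch.logspace(start, end, steps) is equivalent to raising 10 to evenly spaced exponents:
print(10 ** torch.linspace(0.1, 1.0, steps=5))
>>>tensor([ 1.2589,  2.1135,  3.5481,  5.9566, 10.0000])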
This part covers reshaping tensors, getting or changing tensor elements, concatenating and splitting tensors, and more.
A = torch.arange(12.0).reshape(3,4)
print(A)
>>>tensor([[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]])
torch.reshape(): change the shape of the input:
A = torch.arange(12.0).reshape(3, 4)
B = torch.reshape(input=A, shape=(2, -1))
print(B)
>>>tensor([[ 0., 1., 2., 3., 4., 5.],
[ 6., 7., 8., 9., 10., 11.]])
Tensor.resize_(): change the shape in place:
A = torch.arange(12.0).reshape(3, 4)
C = A.resize_(2, 6) # in-place resize; C is A itself
print(C)
>>>tensor([[ 0., 1., 2., 3., 4., 5.],
[ 6., 7., 8., 9., 10., 11.]])
Tensor.resize_as_(): resize in place to match another tensor's shape; extra elements are dropped:
A = torch.arange(12.0).reshape(3, 4)
D = torch.arange(10.0, 19.0).reshape(3, 3)
E = A.resize_as_(D)
print(E)
>>>tensor([[0., 1., 2.],
[3., 4., 5.],
[6., 7., 8.]])
torch.unsqueeze(): insert a new dimension at the specified position:
A = torch.arange(12.0).reshape(3, 4)
F = torch.unsqueeze(A, dim=0)
print(F)
print(F.size())
>>>tensor([[[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.],
[ 8., 9., 10., 11.]]])
>>>torch.Size([1, 3, 4])
torch.squeeze(): remove dimensions of size 1, either all of them or only the specified one:
A = torch.arange(12.0).reshape(3, 4)
B = torch.unsqueeze(A, dim=1)
print(B.size())
C = torch.unsqueeze(A, dim=0)
print(C.size())
print(torch.squeeze(C, dim=0).size())
>>>torch.Size([3, 1, 4])
>>>torch.Size([1, 3, 4])
>>>torch.Size([3, 4])
Tensor.expand(): expand the tensor's dimensions:
A = torch.arange(3)
print(A)
print(A.expand(3, -1))
>>>tensor([0, 1, 2])
>>>tensor([[0, 1, 2],
[0, 1, 2],
[0, 1, 2]])
A = torch.arange(3)
C = torch.arange(6).reshape(2, 3)
print(A.expand_as(C))
>>>tensor([[0, 1, 2],
[0, 1, 2]])
Tensor.repeat(): treat the tensor as a whole and tile it according to the given shape:
A = torch.tensor([1, 2, 3])
D = A.repeat(1, 2, 2) # the three arguments are the repeat counts along three dimensions
print(D)
print(D.size())
>>>tensor([[[1, 2, 3, 1, 2, 3],
[1, 2, 3, 1, 2, 3]]])
>>>torch.Size([1, 2, 6])
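A side note on the difference from expand(): expand() returns a view that shares the original storage, while repeat() copies the data. A small sketch using data_ptr() to compare storage addresses:
A = torch.arange(3)
expanded = A.expand(3, -1) # a view: no data is copied
repeated = A.repeat(3, 1) # new memory: the data is copied
print(expanded.data_ptr() == A.data_ptr())
print(repeated.data_ptr() == A.data_ptr())
>>>True
>>>False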
A = torch.arange(12).reshape(1, 3, 4)
print(A)
print(A[0])
print(A[0][0]) # first row of the matrix at index 0
print(A[0, 0:2, :]) # first two rows of the matrix at index 0
print(A[0, -1, -4:-1]) # last row, columns -4 up to (but not including) -1
>>>tensor([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]]])
>>>tensor([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>>tensor([0, 1, 2, 3])
>>>tensor([[0, 1, 2, 3],
[4, 5, 6, 7]])
>>>tensor([ 8, 9, 10])
torch.where(): where A > 5 is True, the result takes the value from A; where it is False, it takes the value from B:
A = torch.arange(12).reshape(1, 3, 4)
B = -A
print(torch.where(A > 5, A, B))
>>>tensor([[[ 0, -1, -2, -3],
[-4, -5, 6, 7],
[ 8, 9, 10, 11]]])
Get the elements of A that are greater than 5:
A = torch.arange(12).reshape(1, 3, 4)
print(A[A > 5])
>>>tensor([ 6, 7, 8, 9, 10, 11])
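torch.masked_select() is the functional equivalent of this boolean indexing:
A = torch.arange(12).reshape(1, 3, 4)
print(torch.masked_select(A, A > 5))
>>>tensor([ 6,  7,  8,  9, 10, 11])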
torch.tril(): extract the lower-triangular part of a matrix
torch.triu(): extract the upper-triangular part of a matrix
B = torch.arange(1, 10.0).reshape(1, 3, 3)
print(torch.tril(B, diagonal=0)) # diagonal controls which diagonal is the boundary
print(torch.tril(B, diagonal=1))
>>>tensor([[[1., 0., 0.],
[4., 5., 0.],
[7., 8., 9.]]])
>>>tensor([[[1., 2., 0.],
[4., 5., 6.],
[7., 8., 9.]]])
torch.diag(): extract the diagonal elements
A = torch.arange(1, 13.0).reshape(3, 4)
print(A)
print(torch.diag(A, diagonal=0))
print(torch.diag(A, diagonal=1))
>>>tensor([[ 1., 2., 3., 4.],
[ 5., 6., 7., 8.],
[ 9., 10., 11., 12.]])
>>>tensor([ 1., 6., 11.])
>>>tensor([ 2., 7., 12.])
torch.diag(): build a diagonal matrix from the given diagonal elements
print(torch.diag(torch.tensor([1, 2, 3])))
>>>tensor([[1, 0, 0],
[0, 2, 0],
[0, 0, 3]])
torch.cat(): concatenate a sequence of tensors along an existing dimension:
A = torch.arange(6).reshape(2, 3)
B = torch.linspace(1, 10, 6).reshape(2, 3)
# note: the shapes must match on every dimension except the one being concatenated
print(torch.cat((A, B), dim=0))
print(torch.cat((A, B), dim=1))
>>>tensor([[ 0.0000, 1.0000, 2.0000],
[ 3.0000, 4.0000, 5.0000],
[ 1.0000, 2.8000, 4.6000],
[ 6.4000, 8.2000, 10.0000]])
>>>tensor([[ 0.0000, 1.0000, 2.0000, 1.0000, 2.8000, 4.6000],
[ 3.0000, 4.0000, 5.0000, 6.4000, 8.2000, 10.0000]])
torch.stack(): concatenate tensors along a new dimension:
A = torch.arange(6).reshape(2, 3)
B = torch.linspace(1, 10, 6).reshape(2, 3)
print(torch.stack((A, B), dim=1))
print(torch.stack((A, B), dim=2))
>>>tensor([[[ 0.0000, 1.0000, 2.0000],
[ 1.0000, 2.8000, 4.6000]],
[[ 3.0000, 4.0000, 5.0000],
[ 6.4000, 8.2000, 10.0000]]])
>>>tensor([[[ 0.0000, 1.0000],
[ 1.0000, 2.8000],
[ 2.0000, 4.6000]],
[[ 3.0000, 6.4000],
[ 4.0000, 8.2000],
[ 5.0000, 10.0000]]])
torch.chunk(): split a tensor into a given number of chunks along dimension dim (here, 2 chunks):
A = torch.arange(12).reshape(2, 6)
B1, B2 = torch.chunk(A, 2, dim=0)
print(B1, B2)
C1, C2 = torch.chunk(A, 2, dim=1)
print(C1)
print(C2)
>>>tensor([[0, 1, 2, 3, 4, 5]]) tensor([[ 6, 7, 8, 9, 10, 11]])
>>>tensor([[0, 1, 2],
[6, 7, 8]])
>>>tensor([[ 3, 4, 5],
[ 9, 10, 11]])
If the size is not evenly divisible, the last chunk is the smallest:
A = torch.arange(10).reshape(2, 5)
D1, D2, D3 = torch.chunk(A, 3, dim=1)
print(D1)
print(D2)
print(D3)
>>>tensor([[0, 1],
[5, 6]])
>>>tensor([[2, 3],
[7, 8]])
>>>tensor([[4],
[9]])
torch.split(): split a tensor into chunks, with the size of each chunk specified individually:
A = torch.arange(12).reshape(2, 6)
D1, D2, D3 = torch.split(A, [1, 2, 3], dim=1)
print(D1)
print(D2)
print(D3)
>>>tensor([[0],
[6]])
>>>tensor([[1, 2],
[7, 8]])
>>>tensor([[ 3, 4, 5],
[ 9, 10, 11]])
torch.allclose() compares tensors using the criterion $|A - B| \leq \text{atol} + \text{rtol} \times |B|$:
A = torch.tensor([10.0])
B = torch.tensor([10.1])
print(torch.allclose(A, B, rtol=1e-05, atol=1e-08, equal_nan=False))
print(torch.allclose(A, B, rtol=0.1, atol=0.01, equal_nan=False))
>>>False
>>>True
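The criterion can be verified by hand: with rtol=0.1 and atol=0.01 the threshold is 0.01 + 0.1 * 10.1 = 1.02, and |10.0 - 10.1| = 0.1 falls within it:
A = torch.tensor([10.0])
B = torch.tensor([10.1])
print(((A - B).abs() <= 0.01 + 0.1 * B.abs()).all())
>>>tensor(True)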
If equal_nan=True is set, NaN values are treated as close:
C = torch.tensor(float("nan"))
print(torch.allclose(C, C, equal_nan=False))
print(torch.allclose(C, C, equal_nan=True))
>>>False
>>>True
A = torch.tensor([1,2,3,4,5,6])
B = torch.arange(1,7)
C = torch.unsqueeze(B,dim=0)
print(torch.eq(A,B))
print(torch.eq(A,C))
>>>tensor([True, True, True, True, True, True])
>>>tensor([[True, True, True, True, True, True]])
A = torch.tensor([1,2,3,4,5,6])
B = torch.arange(1,7)
C = torch.unsqueeze(B,dim=0)
print(torch.equal(A,B))
print(torch.equal(A,C))
>>>True
>>>False
A = torch.tensor([1,2,3,4,5,6])
B = torch.arange(1,7)
C = torch.unsqueeze(B, dim=0)
print(torch.ge(A, B))
print(torch.ge(A, C))
>>>tensor([True, True, True, True, True, True])
>>>tensor([[True, True, True, True, True, True]])
A = torch.tensor([1,2,3,4,5,6])
B = torch.arange(1,7)
C = torch.unsqueeze(B,dim=0)
print(torch.gt(A, B))
print(torch.gt(A, C))
>>>tensor([False, False, False, False, False, False])
>>>tensor([[False, False, False, False, False, False]])
print(torch.le(A, B))
print(torch.lt(A, B))
print(torch.ne(A, B))
print(torch.isnan(torch.tensor([0, 1, float("nan"), 2])))
>>>tensor([True, True, True, True, True, True])
>>>tensor([False, False, False, False, False, False])
>>>tensor([False, False, False, False, False, False])
>>>tensor([False, False, True, False])
A = torch.arange(6.0).reshape(2, 3)
B = torch.linspace(10, 20, steps=6).reshape(2, 3)
print(A)
print(B)
print(A + B)
print(A - B)
print(A * B)
print(A / B)
>>>tensor([[0., 1., 2.],
[3., 4., 5.]])
>>>tensor([[10., 12., 14.],
[16., 18., 20.]])
>>>tensor([[10., 13., 16.],
[19., 22., 25.]])
>>>tensor([[-10., -11., -12.],
[-13., -14., -15.]])
>>>tensor([[ 0., 12., 28.],
[ 48., 72., 100.]])
>>>tensor([[0.0000, 0.0833, 0.1429],
[0.1875, 0.2222, 0.2500]])
x = torch.tensor([[1.0, 2], [3, 4]])
print(x)
print(sum(x)) # Python's built-in sum iterates over dim 0, summing the rows
print(torch.sum(x)) # torch.sum adds up all elements
>>>tensor([[1., 2.],
[3., 4.]])
>>>tensor([4., 6.])
>>>tensor(10.)
Both approaches work:
A = torch.arange(6.0).reshape(2, 3)
print(torch.pow(A, 3))
print(A ** 3)
>>>tensor([[ 0., 1., 8.],
[ 27., 64., 125.]])
>>>tensor([[ 0., 1., 8.],
[ 27., 64., 125.]])
A = torch.arange(6.0).reshape(2, 3)
print(torch.exp(A))
>>>tensor([[ 1.0000, 2.7183, 7.3891],
[ 20.0855, 54.5981, 148.4132]])
A = torch.arange(6.0).reshape(2, 3)
print(torch.log(A))
>>>tensor([[ -inf, 0.0000, 0.6931],
[1.0986, 1.3863, 1.6094]])
As with exponentiation, two equivalent forms can be used.
A = torch.arange(6.0).reshape(2, 3)
print(torch.sqrt(A))
print(A ** 0.5)
>>>tensor([[0.0000, 1.0000, 1.4142],
[1.7321, 2.0000, 2.2361]])
>>>tensor([[0.0000, 1.0000, 1.4142],
[1.7321, 2.0000, 2.2361]])
As with exponentiation, two equivalent forms can be used.
A = torch.arange(6.0).reshape(2, 3)
print(torch.rsqrt(A))
print(1 / (A ** 0.5))
>>>tensor([[ inf, 1.0000, 0.7071],
[0.5774, 0.5000, 0.4472]])
>>>tensor([[ inf, 1.0000, 0.7071],
[0.5774, 0.5000, 0.4472]])
Elements greater than the maximum are clipped to the maximum.
A = torch.arange(6.0).reshape(2, 3)
print(torch.clamp_max(A, 3))
>>>tensor([[0., 1., 2.],
[3., 3., 3.]])
Elements less than the minimum are clipped to the minimum.
A = torch.arange(6.0).reshape(2, 3)
print(torch.clamp_min(A, 3))
>>>tensor([[3., 3., 3.],
[3., 4., 5.]])
torch.clamp() combines torch.clamp_max() and torch.clamp_min().
A = torch.arange(6.0).reshape(2, 3)
print(torch.clamp(A, 2, 4))
>>>tensor([[2., 2., 2.],
[3., 4., 4.]])
A = torch.arange(6.0).reshape(2, 3)
print(torch.t(A))
>>>tensor([[0., 3.],
[1., 4.],
[2., 5.]])
When computing a matrix product, mind the shape requirement: the number of columns of A must equal the number of rows of B.
A = torch.arange(6.0).reshape(2, 3)
B = torch.t(A)
print(torch.matmul(A, B))
>>>tensor([[ 5., 14.],
[14., 50.]])
torch.inverse(): compute the inverse of a matrix.
A = torch.arange(1, 10.0).reshape(3, 3)
print(torch.inverse(A))
>>>tensor([[ -2796203.0000, 5592406.0000, -2796203.0000],
[ 5592404.5000, -11184812.0000, 5592406.5000],
[ -2796201.7500, 5592406.0000, -2796203.2500]])
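Note that this particular matrix is singular (its determinant is 0), so the huge values above are numerical noise rather than a meaningful inverse. A minimal sanity check with an invertible matrix, where multiplying A by its inverse recovers the identity:
A = torch.tensor([[2.0, 0.0], [0.0, 4.0]])
print(torch.matmul(A, torch.inverse(A)))
>>>tensor([[1., 0.],
[0., 1.]])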
A = torch.tensor([12, 1, 3, 5, 15, 999])
print(A.max())
print(A.argmax())
>>>tensor(999)
>>>tensor(5)
A = torch.tensor([12, 1, 3, 5, 15, 999]).reshape(2, 3)
print(A)
print(A.max(dim=0)) # column-wise maxima: values and indices
print(A.max(dim=1)) # row-wise maxima: values and indices
>>>tensor([[ 12, 1, 3],
[ 5, 15, 999]])
>>>torch.return_types.max(
values=tensor([ 12, 15, 999]),
indices=tensor([0, 1, 1]))
>>>torch.return_types.max(
values=tensor([ 12, 999]),
indices=tensor([0, 2]))
A = torch.tensor([12, 1, 3, 5, 15, 999])
print(A.min())
print(A.argmin())
>>>tensor(1)
>>>tensor(1)
A = torch.tensor([12, 1, 3, 5, 15, 999]).reshape(2, 3)
print(A)
print(A.min(dim=0)) # column-wise minima: values and indices
print(A.min(dim=1)) # row-wise minima: values and indices
>>>tensor([[ 12, 1, 3],
[ 5, 15, 999]])
>>>torch.return_types.min(
values=tensor([5, 1, 3]),
indices=tensor([1, 0, 0]))
>>>torch.return_types.min(
values=tensor([1, 5]),
indices=tensor([1, 0]))
Sorting a 1-D tensor returns the sorted values and their indices.
A = torch.tensor([12, 1, 3, 5, 15, 999])
print(torch.sort(A)) # ascending
print(torch.sort(A, descending=True)) # descending
>>>torch.return_types.sort(
values=tensor([ 1, 3, 5, 12, 15, 999]),
indices=tensor([1, 2, 3, 0, 4, 5]))
>>>torch.return_types.sort(
values=tensor([999, 15, 12, 5, 3, 1]),
indices=tensor([5, 4, 0, 3, 2, 1]))
Sorting a 2-D tensor (along the last dimension by default) returns the values and indices.
A = torch.tensor([12, 1, 3, 5, 15, 999]).reshape(2, 3)
print(torch.sort(A))
>>>torch.return_types.sort(
values=tensor([[ 1, 3, 12],
[ 5, 15, 999]]),
indices=tensor([[1, 2, 0],
[0, 1, 2]]))
torch.topk(): get the k largest values and their indices.
A = torch.tensor([12, 1, 3, 5, 15, 999])
print(torch.topk(A, 3))
>>>torch.return_types.topk(
values=tensor([999, 15, 12]),
indices=tensor([5, 4, 0]))
torch.kthvalue(): get the k-th smallest value and its index.
A = torch.tensor([12, 1, 3, 5, 15, 999])
print(torch.kthvalue(A, 3))
>>>torch.return_types.kthvalue(
values=tensor(5),
indices=tensor(3))
Compute the mean of each row.
keepdim=True: the reduced dimension is kept in the output.
keepdim=False: the result is returned as a 1-D tensor.
A = torch.tensor([12.0, 1, 3, 5, 15, 999]).reshape(2, 3)
print(A)
print(torch.mean(A, dim=1, keepdim=True))
print(torch.mean(A, dim=1, keepdim=False))
>>>tensor([[ 12., 1., 3.],
[ 5., 15., 999.]])
>>>tensor([[ 5.3333],
[339.6667]])
>>>tensor([ 5.3333, 339.6667])
Compute the mean of each column.
keepdim=True: the reduced dimension is kept in the output.
keepdim=False: the result is returned as a 1-D tensor (values unchanged).
A = torch.tensor([12.0, 1, 3, 5, 15, 999]).reshape(2, 3)
print(A)
print(torch.mean(A, dim=0, keepdim=True))
print(torch.mean(A, dim=0, keepdim=False))
>>>tensor([[ 12., 1., 3.],
[ 5., 15., 999.]])
>>>tensor([[ 8.5000, 8.0000, 501.0000]])
>>>tensor([ 8.5000, 8.0000, 501.0000])
Compute the sum of each row.
keepdim=True: the reduced dimension is kept in the output.
keepdim=False: the result is returned as a 1-D tensor.
A = torch.tensor([12.0, 1, 3, 5, 15, 999]).reshape(2, 3)
print(A)
print(torch.sum(A, dim=1, keepdim=True))
print(torch.sum(A, dim=1, keepdim=False))
>>>tensor([[ 12., 1., 3.],
[ 5., 15., 999.]])
>>>tensor([[ 16.],
[1019.]])
>>>tensor([ 16., 1019.])
Compute the sum of each column.
keepdim=True: the reduced dimension is kept in the output.
keepdim=False: the result is returned as a 1-D tensor (values unchanged).
A = torch.tensor([12.0, 1, 3, 5, 15, 999]).reshape(2, 3)
print(A)
print(torch.sum(A, dim=0, keepdim=True))
print(torch.sum(A, dim=0, keepdim=False))
>>>tensor([[ 12., 1., 3.],
[ 5., 15., 999.]])
>>>tensor([[ 17., 16., 1002.]])
>>>tensor([ 17., 16., 1002.])
torch.cumsum(): cumulative sum along a dimension.
A = torch.tensor([12.0, 1, 3, 5, 15, 999]).reshape(2, 3)
print(A)
print(torch.cumsum(A, dim=0)) # cumulative sum down the columns
print(torch.cumsum(A, dim=1)) # cumulative sum along the rows
>>>tensor([[ 12., 1., 3.],
[ 5., 15., 999.]])
>>>tensor([[1.2000e+01, 1.0000e+00, 3.0000e+00],
[1.7000e+01, 1.6000e+01, 1.0020e+03]])
>>>tensor([[ 12., 13., 16.],
[ 5., 20., 1019.]])
Output the median of each row along with its index.
keepdim=True: the output keeps the original number of dimensions.
keepdim=False: the result is returned as a 1-D tensor.
A = torch.tensor([12.0, 1, 3, 5, 15, 999, 4, 6, 8]).reshape(3, 3)
print(A)
print(torch.median(A, dim=1, keepdim=True))
print(torch.median(A, dim=1, keepdim=False))
>>>tensor([[ 12., 1., 3.],
[ 5., 15., 999.],
[ 4., 6., 8.]])
>>>torch.return_types.median(
values=tensor([[ 3.],
[15.],
[ 6.]]),
indices=tensor([[2],
[1],
[1]]))
>>>torch.return_types.median(
values=tensor([ 3., 15., 6.]),
indices=tensor([2, 1, 1]))
Output the median of each column along with its index.
keepdim=True: the output keeps the original number of dimensions.
keepdim=False: the result is returned as a 1-D tensor (values unchanged).
A = torch.tensor([12.0, 1, 3, 5, 15, 999, 4, 6, 8]).reshape(3, 3)
print(A)
print(torch.median(A, dim=0, keepdim=True))
print(torch.median(A, dim=0, keepdim=False))
>>>tensor([[ 12., 1., 3.],
[ 5., 15., 999.],
[ 4., 6., 8.]])
>>>torch.return_types.median(
values=tensor([[5., 6., 8.]]),
indices=tensor([[1, 2, 2]]))
>>>torch.return_types.median(
values=tensor([5., 6., 8.]),
indices=tensor([1, 2, 2]))
torch.prod(): compute the product along a dimension.
A = torch.tensor([12.0, 1, 3, 5, 15, 999, 4, 6, 8]).reshape(3, 3)
print(A)
print(torch.prod(A, dim=0, keepdim=True)) # column-wise product
print(torch.prod(A, dim=1, keepdim=True)) # row-wise product
>>>tensor([[ 12., 1., 3.],
[ 5., 15., 999.],
[ 4., 6., 8.]])
>>>tensor([[ 240., 90., 23976.]])
>>>tensor([[3.6000e+01],
[7.4925e+04],
[1.9200e+02]])
torch.cumprod(): cumulative product along a dimension.
A = torch.tensor([12.0, 1, 3, 5, 15, 999, 4, 6, 8]).reshape(3, 3)
print(A)
print(torch.cumprod(A, dim=0)) # cumulative product down the columns
print(torch.cumprod(A, dim=1)) # cumulative product along the rows
>>>tensor([[ 12., 1., 3.],
[ 5., 15., 999.],
[ 4., 6., 8.]])
>>>tensor([[1.2000e+01, 1.0000e+00, 3.0000e+00],
[6.0000e+01, 1.5000e+01, 2.9970e+03],
[2.4000e+02, 9.0000e+01, 2.3976e+04]])
>>>tensor([[1.2000e+01, 1.2000e+01, 3.6000e+01],
[5.0000e+00, 7.5000e+01, 7.4925e+04],
[4.0000e+00, 2.4000e+01, 1.9200e+02]])
torch.std(): compute the standard deviation.
A = torch.tensor([12.0, 1, 3, 5, 15, 999, 4, 6, 8]).reshape(3, 3)
print(torch.std(A))
>>>tensor(330.7794)
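By default, torch.std() computes the unbiased (sample) standard deviation over all elements, dividing by n-1. Passing dim reduces along a single dimension; a short sketch (values shown to four decimals, subject to float rounding):
print(torch.std(A, dim=1)) # per-row standard deviation
print(torch.std(A, dim=0)) # per-column standard deviation
>>>tensor([  5.8595, 571.0213,   2.0000])
>>>tensor([  4.3589,   7.0946, 573.6029])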