torch.cat([tensor1, tensor2, ...], dim=0)
Concatenates several tensors along an existing dimension; apart from the chosen dim, all other dimensions must have the same size.
For example, tensorA with shape [2, 3, 4] and tensorB with shape [2, 1, 4] can be concatenated on dim=1, but not on dim=0 or dim=2.
Forcing a concatenation on an incompatible dimension raises an error:
import torch
tensorA = torch.tensor([
    [
        [1, 2, 3],
        [4, 5, 6],
    ],
    [
        [2, 4, 6],
        [3, 5, 7],
    ]
])
tensorB = torch.tensor([
    [
        [0, 6, 3],
        [4, 2, 6],
        [2, 1, 1]
    ],
    [
        [1, 4, 6],
        [3, 5, 4],
        [3, 2, 0]
    ]
])
print(f"tensorA.shape:{tensorA.shape}")
print(f"tensorB.shape:{tensorB.shape}")
print("try to cat A with B in dim1:")
tensorC = torch.cat([tensorA, tensorB], dim=1)
print(f"successful! the final tensor.shape:{tensorC.shape}")
print("try to cat A with B in dim0:")
tensorC = torch.cat([tensorA, tensorB], dim=0)
print(f"successful! the final tensor.shape:{tensorC.shape}")
The output is:
tensorA.shape:torch.Size([2, 2, 3])
tensorB.shape:torch.Size([2, 3, 3])
try to cat A with B in dim1:
successful! the final tensor.shape:torch.Size([2, 5, 3])
try to cat A with B in dim0:
Traceback (most recent call last):
File "D:\MachineLearning\pytorchBase\test.py", line 32, in <module>
tensorC = torch.cat([tensorA, tensorB], dim=0)
RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 2 but got size 3 for tensor number 1 in the list.
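The earlier claim about shapes [2, 3, 4] and [2, 1, 4] can be checked the same way; a quick sketch with zero-filled tensors:

```python
import torch

# Only dim=1 differs between the two shapes, so cat is legal there
A = torch.zeros(2, 3, 4)
B = torch.zeros(2, 1, 4)

C = torch.cat([A, B], dim=1)
print(C.shape)  # torch.Size([2, 4, 4]): sizes 3 and 1 add up along dim=1
```

Calling the same cat on dim=0 or dim=2 raises the RuntimeError shown above.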
Concatenation results along different dimensions
import torch
tensorA = torch.tensor([
    [
        [1, 2, 3],
        [4, 5, 6]
    ],
    [
        [2, 4, 6],
        [3, 5, 7]
    ]
])
tensorB = torch.tensor([
    [
        [0, 6, 3],
        [4, 2, 6]
    ],
    [
        [1, 4, 6],
        [3, 2, 0]
    ]
])
print(f"tensorA.shape:{tensorA.shape}")
print(f"tensorB.shape:{tensorB.shape}")
tensorA.shape:torch.Size([2, 2, 3])
tensorB.shape:torch.Size([2, 2, 3])
print("try to cat A with B in dim0:")
tensorC = torch.cat([tensorA, tensorB], dim=0)
print(tensorC)
try to cat A with B in dim0:
tensor([[[1, 2, 3],
         [4, 5, 6]],

        [[2, 4, 6],
         [3, 5, 7]],

        [[0, 6, 3],
         [4, 2, 6]],

        [[1, 4, 6],
         [3, 2, 0]]])
print("try to cat A with B in dim1:")
tensorC = torch.cat([tensorA, tensorB], dim=1)
print(tensorC)
try to cat A with B in dim1:
tensor([[[1, 2, 3],
         [4, 5, 6],
         [0, 6, 3],
         [4, 2, 6]],

        [[2, 4, 6],
         [3, 5, 7],
         [1, 4, 6],
         [3, 2, 0]]])
A simple way to read this: two tensors of shape [2, 2, 3] concatenated on dim=0 go from two stacks of 2 × [2, 3] to one stack of 4 × [2, 3], i.e. a tensor of shape [4, 2, 3]. Concatenated on dim=1, the first dimension is unchanged and the second dimension is merged: the 2 × [2, 3] blocks become 2 × [4, 3] blocks, so each original 2×3 matrix grows from 2 rows to 4, giving shape [2, 4, 3].
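By the same logic, concatenating on the last dimension glues the rows side by side. A short sketch, also showing that negative dims count from the end:

```python
import torch

A = torch.ones(2, 2, 3)
B = torch.ones(2, 2, 3)

# dim=2: the last dimension doubles, the others are unchanged
C = torch.cat([A, B], dim=2)
print(C.shape)  # torch.Size([2, 2, 6])

# dim=-1 refers to the last dimension, so it is the same as dim=2 here
assert torch.equal(C, torch.cat([A, B], dim=-1))
```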
torch.stack(tensors, dim=0, *, out=None) → Tensor
Concatenates a sequence of tensors along a new dimension.
All tensors must have the same size.
stack inserts a new dimension at position dim; the original dimensions keep their shapes.
import torch
tensorA = torch.tensor([
    [1, 2, 3],
    [4, 5, 6]
])
tensorB = torch.tensor([
    [7, 8, 9],
    [3, 2, 1]
])
print(f"tensorA.shape:{tensorA.shape}")
print(f"tensorB.shape:{tensorB.shape}")
tensorA.shape:torch.Size([2, 3])
tensorB.shape:torch.Size([2, 3])
Insert a new dimension before dim 0 of A and B, then stack:
print("try to stack A with B in dim0:")
tensorC = torch.stack([tensorA, tensorB], dim=0)
print(f"tensorC.shape:{tensorC.shape}")
print(tensorC)
try to stack A with B in dim0:
tensorC.shape:torch.Size([2, 2, 3])
tensor([[[1, 2, 3],
         [4, 5, 6]],

        [[7, 8, 9],
         [3, 2, 1]]])
Insert a new dimension before dim 1 of A and B, then stack:
print("try to stack A with B in dim1:")
tensorC = torch.stack([tensorA, tensorB], dim=1)
print(f"tensorC.shape:{tensorC.shape}")
print(tensorC)
try to stack A with B in dim1:
tensorC.shape:torch.Size([2, 2, 3])
tensor([[[1, 2, 3],
         [7, 8, 9]],

        [[4, 5, 6],
         [3, 2, 1]]])
Insert a new dimension before dim 2 of A and B, then stack:
print("try to stack A with B in dim2:")
tensorC = torch.stack([tensorA, tensorB], dim=2)
print(f"tensorC.shape:{tensorC.shape}")
print(tensorC)
try to stack A with B in dim2:
tensorC.shape:torch.Size([2, 3, 2])
tensor([[[1, 7],
         [2, 8],
         [3, 9]],

        [[4, 3],
         [5, 2],
         [6, 1]]])
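A helpful way to relate stack to cat (a sketch of the equivalence, not of the internal implementation): stacking at dim=d gives the same result as unsqueezing each input at d and then concatenating on d:

```python
import torch

A = torch.tensor([[1, 2, 3], [4, 5, 6]])
B = torch.tensor([[7, 8, 9], [3, 2, 1]])

for d in range(3):
    stacked = torch.stack([A, B], dim=d)
    # unsqueeze inserts a size-1 dimension at position d; cat then joins along it
    via_cat = torch.cat([A.unsqueeze(d), B.unsqueeze(d)], dim=d)
    assert torch.equal(stacked, via_cat)
    print(d, stacked.shape)
```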
Note that the original tensors are 2-dimensional, so the insertion position cannot exceed 2. Stacking before a (non-existent) dim 3 raises an error:
print("try to stack A with B in dim3:")
tensorC = torch.stack([tensorA, tensorB], dim=3)
print(f"tensorC.shape:{tensorC.shape}")
print(tensorC)
try to stack A with B in dim3:
Traceback (most recent call last):
File "D:\MachineLearning\pytorchBase\stack.py", line 27, in <module>
tensorC = torch.stack([tensorA, tensorB], dim=3)
IndexError: Dimension out of range (expected to be in range of [-3, 2], but got 3)
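As the error message suggests, negative dims are accepted as well: for these 2-D inputs the valid range is [-3, 2], and dim=-1 inserts the new dimension last, the same as dim=2:

```python
import torch

A = torch.tensor([[1, 2, 3], [4, 5, 6]])
B = torch.tensor([[7, 8, 9], [3, 2, 1]])

# -1 counts from the end of the result's 3 dims, so it maps to dim=2
C = torch.stack([A, B], dim=-1)
print(C.shape)  # torch.Size([2, 3, 2])
assert torch.equal(C, torch.stack([A, B], dim=2))
```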