When the tensors all have the same shape, torch.stack works directly. Note that stack inserts a new dimension, so the leftover size-1 dimensions have to be squeezed away afterwards.
import torch

a = torch.rand([1, 2])
b = torch.rand([1, 2])
z = torch.rand([1, 2])
c = []
c.append(a)
c.append(b)
c.append(z)
# stack turns the list into a [3, 1, 2] tensor; squeeze drops the size-1 dim
ret = torch.stack(c).squeeze()
print(a, b, z, ret)
print(ret.size())
>>>tensor([[0.7365, 0.8111]]) tensor([[0.2135, 0.1070]]) tensor([[0.4466, 0.6659]]) tensor([[0.7365, 0.8111],
        [0.2135, 0.1070],
        [0.4466, 0.6659]])
>>>torch.Size([3, 2])
A scenario you run into all the time is a batch where the tensors have different numbers of rows. cat handles this, and of course it works for the scenario above as well.
# cat the tensors in the list along dim 0
import torch

a = torch.rand([1, 2])
b = torch.rand([2, 2])
z = torch.rand([3, 2])
c = []
c.append(a)
c.append(b)
c.append(z)
# stack would fail here because the row counts differ; cat concatenates along dim 0
ret = torch.cat(c, dim=0)
print(a, b, z, ret)
print(ret.size())
>>>tensor([[0.2152, 0.1298]]) tensor([[0.1018, 0.6687],
        [0.1744, 0.1017]]) tensor([[0.5049, 0.4362],
        [0.1908, 0.4006],
        [0.3071, 0.9557]]) tensor([[0.2152, 0.1298],
        [0.1018, 0.6687],
        [0.1744, 0.1017],
        [0.5049, 0.4362],
        [0.1908, 0.4006],
        [0.3071, 0.9557]])
>>>torch.Size([6, 2])
So why did the first scenario use stack at all? Just to make a point: the stack-then-squeeze recipe pushed everywhere on CSDN misleads its readers. cat along dim 0 covers both scenarios and never introduces the extra dimension you then have to squeeze away.
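For completeness, a quick sketch showing that cat reproduces the first example's result directly, with no extra dimension and no squeeze (same shapes as above):

import torch

a = torch.rand([1, 2])
b = torch.rand([1, 2])
z = torch.rand([1, 2])
ret = torch.cat([a, b, z], dim=0)  # [1,2] + [1,2] + [1,2] -> [3, 2]
print(ret.size())                  # torch.Size([3, 2])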