Part 1: NumPy (reference: https://blog.csdn.net/blog_empire/article/details/39298557)
I. Adding and removing dimensions
1. Modify shape directly
y = np.arange(1, 11)  # shape: (10,)
y.shape = (10, 1)
print(y)
2. Use np.newaxis (np.newaxis == None)
Placing np.newaxis in different positions produces different results.
y = np.arange(1, 11)  # shape: (10,)
y = y[:, np.newaxis]
print(y)
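To show how the position of np.newaxis changes the result, a minimal sketch: placing the new axis after the data axis yields a column vector, before it a row vector.

```python
import numpy as np

y = np.arange(1, 11)        # shape: (10,)
col = y[:, np.newaxis]      # new axis after the data axis -> column vector
row = y[np.newaxis, :]      # new axis before the data axis -> row vector
print(col.shape)            # (10, 1)
print(row.shape)            # (1, 10)
```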
3. Use np.expand_dims()
y = np.arange(1, 11)  # shape: (10,)
y = np.expand_dims(y, axis=1)
print(y)
4. NumPy has np.squeeze() but no np.unsqueeze()
y = np.arange(1, 11)  # shape: (10,)
y = np.expand_dims(y, axis=1)
y = np.squeeze(y, axis=1)
print(y)
II. Concatenating along dimensions
1. np.c_[] concatenates along columns (axis 1); np.r_[] concatenates along rows (axis 0)
a = np.random.randn(2, 3)
b = np.random.randn(2, 3)
c = np.c_[a, b]  # shape: (2, 6)
print(c)
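For comparison, a minimal sketch showing np.r_ next to np.c_ on the same inputs:

```python
import numpy as np

a = np.random.randn(2, 3)
b = np.random.randn(2, 3)

r = np.r_[a, b]   # stack along axis 0 (rows)    -> shape (4, 3)
c = np.c_[a, b]   # stack along axis 1 (columns) -> shape (2, 6)
print(r.shape)    # (4, 3)
print(c.shape)    # (2, 6)
```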
2. np.hstack, np.vstack, np.dstack, np.stack
a = np.random.randn(2, 3)
b = np.random.randn(2, 3)
c = np.hstack((a, b))  # note: the arrays are passed as a single tuple argument
print(c)
# Note: np.stack is very different from np.hstack/np.vstack: np.stack concatenates along a NEW dimension
arrays = [np.random.randn(3, 4) for _ in range(10)]  # list comprehension: a list of ten 3x4 arrays
np.stack(arrays, axis=0).shape  # (10, 3, 4)
3. np.concatenate
a = np.random.randn(2, 3)
b = np.random.randn(2, 3)
c = np.concatenate((a, b), axis=1)
print(c)
4. For behavior similar to Python's list.append(), use np.append()
There are also np.delete() and np.insert()
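A minimal sketch of all three; note that unlike list.append(), these functions do not modify the array in place but return new arrays:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.append(a, [4, 5])   # returns a new array: [1 2 3 4 5]
c = np.delete(b, 0)        # remove element at index 0: [2 3 4 5]
d = np.insert(c, 1, 99)    # insert 99 before index 1: [2 99 3 4 5]
print(b, c, d)
```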
III. Reordering dimensions within an array
1. np.transpose reorders the dimensions of a multi-dimensional array
a = np.random.randn(2, 3, 4)
print(a.shape)
b = np.transpose(a, (2, 0, 1))
print(b.shape)  # (4, 2, 3)
2. For 2-D arrays, use np.transpose or the .T attribute
a = np.random.randn(2, 3)
print(a)
b = a.T  # equivalent to np.transpose(a)
print(b)
IV. Viewing data with a different shape: unlike transposing dimensions, this does not change the layout of the data in memory
1. np.reshape()
a = np.random.randn(2, 3)
print(a)
b = np.reshape(a, (3, 2))
print(b)
2. View the data as 1-D with x.ravel() or x.flatten()
a = np.random.randn(2, 3)
print(a)
b = a.flatten()
print(b)
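The practical difference between the two: ravel() returns a view of the original data when possible, while flatten() always returns a copy. A quick check:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
r = a.ravel()     # view when possible: shares memory with a
f = a.flatten()   # always an independent copy

r[0] = 100
print(a[0, 0])    # 100 -- writing through the ravel view changed a
f[1] = 200
print(a[0, 1])    # 1 -- the flatten copy left a untouched
```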
Part 2: PyTorch
I. Adding and removing dimensions
1. torch.squeeze, torch.unsqueeze
a = torch.randn(2, 3)
print(a.size())
b = torch.unsqueeze(a, 2)  # the argument 2 inserts a new dimension at index 2
print(b.size())  # torch.Size([2, 3, 1])
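torch.squeeze is the inverse operation; a minimal sketch removing the dimension that unsqueeze added:

```python
import torch

b = torch.randn(2, 3, 1)
c = torch.squeeze(b, 2)  # remove the size-1 dimension at index 2
print(c.size())          # torch.Size([2, 3])
d = torch.squeeze(b)     # with no dim argument, removes ALL size-1 dimensions
print(d.size())          # torch.Size([2, 3])
```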
II. Concatenating along dimensions
1. torch.cat()
a = torch.randn(2, 3)
b = torch.randn(2, 3)
print(a)
c = torch.cat((a, b), 1)  # shape: (2, 6)
print(c)
2. torch.stack()
a = torch.randn(2, 3)
b = torch.randn(2, 3)
print(a)
c = torch.stack((a, b), 0)  # adds a new dimension: shape (2, 2, 3)
print(c.size())
print(c)
3. There are also expand, repeat, narrow, etc.
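A minimal sketch of these three: expand broadcasts a size-1 dimension without copying data, repeat tiles the tensor (and does copy), and narrow takes a view of a slice along one dimension.

```python
import torch

a = torch.randn(1, 3)
e = a.expand(4, 3)         # broadcast the size-1 dim; no data copy
r = a.repeat(4, 1)         # tile 4 times along dim 0; copies data
print(e.size(), r.size())  # torch.Size([4, 3]) torch.Size([4, 3])

b = torch.arange(10)
n = b.narrow(0, 2, 5)      # view of 5 elements starting at index 2
print(n)                   # tensor([2, 3, 4, 5, 6])
```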
III. Reordering dimensions within a tensor
1. torch.transpose swaps exactly two dimensions at a time (it works on tensors of any rank, not only 2-D)
a = torch.randn(2, 3)
print(a)
b = a.transpose(1, 0)
print(b)
2. torch.permute reorders all dimensions of a multi-dimensional tensor at once
a = torch.randn(2, 3, 4)
print(a)
b = a.permute(2, 1, 0)  # shape: (4, 3, 2)
print(b)
IV. Viewing data with a different shape: unlike transposing dimensions, this does not change the layout of the data in memory
1. tensor.view()
view only works on contiguous tensors. If transpose, permute, etc. were called before view, call contiguous() first to obtain a contiguous copy.
a = torch.randn(2, 3)
print(a)
b = a.view(3, 2)
print(b)
2. torch.reshape() is similar to numpy.reshape; it is roughly equivalent to tensor.contiguous().view()
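A minimal sketch of the difference: after transpose the tensor is non-contiguous, so view would fail, while contiguous().view() and reshape both succeed.

```python
import torch

a = torch.randn(2, 3)
t = a.transpose(0, 1)      # non-contiguous view, shape (3, 2)
print(t.is_contiguous())   # False

# t.view(6) would raise a RuntimeError here; these two both work:
x = t.contiguous().view(6)
y = t.reshape(6)           # reshape copies only when it has to
print(x.size(), y.size())  # torch.Size([6]) torch.Size([6])
```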
Some handy code snippets
I. NumPy
>>> n_x3 = np.array(x3)
>>> n_x3
array([0.1, 0.2, 0.3])
>>> n_x4 = n_x3[[2,2,1,1,1,2,0]]  # fancy indexing to expand an array; the index can be a list or a numpy array
>>> n_x4
array([0.3, 0.3, 0.2, 0.2, 0.2, 0.3, 0.1])
II. PyTorch
>>> t_x = torch.from_numpy(x)
>>> t_x
tensor([[ 1.5228, -1.5240, -1.3225, 0.5608, -0.8209, 0.0349],
[-1.0126, -1.1155, -1.1779, -0.0450, -0.6270, 0.5286],
[ 0.0656, 0.2860, 0.4819, -0.7166, -0.0462, 0.3427]],
dtype=torch.float64)
>>> x1 = t_x.select(1,0)  # tensor.select(dim, index) usage
>>> x1
tensor([ 1.5228, -1.0126, 0.0656], dtype=torch.float64)
>>> x1.lt(0)  # tensor.lt / gt / ge usage
tensor([0, 1, 0], dtype=torch.uint8)  # produces a 0/1 mask (newer PyTorch versions return torch.bool)
>>> x1.gt(0)
tensor([1, 0, 1], dtype=torch.uint8)
>>> x1.ge(0)
tensor([1, 0, 1], dtype=torch.uint8)
>>> idx = x1.gt(0)  # assign to positions selected by a tensor mask
>>> x1[idx] = 1
>>> x1
tensor([ 1.0000, -1.0126, 1.0000], dtype=torch.float64)
>>> x2 = x1[(2,2,1,1,2,0,0,1)]  # a tuple cannot be used as the index here
Traceback (most recent call last):
File "
IndexError: too many indices for tensor of dimension 1
>>> x2 = x1[[2,2,1,1,2,0,0,1]]  # fancy indexing a tensor with a list index to expand it
>>> x2
tensor([ 1.0000, 1.0000, -1.0126, -1.0126, 1.0000, 1.0000, 1.0000, -1.0126],
dtype=torch.float64)