NumPy functions accept both Python lists and numpy.ndarray inputs.
Most torch functions, by contrast, only accept Tensor inputs.
1. Replacing np.asarray()
import torch
data=[[1,2],[3,4]]
data = torch.Tensor(data)
# data = np.asarray(data)
print(data.shape)
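A minimal round-trip sketch for this conversion: torch.tensor() infers the dtype from the data, torch.Tensor() always yields float32, and .numpy() goes back the other way (CPU tensors only; the returned array shares memory with the tensor):

```python
import numpy as np
import torch

data = [[1, 2], [3, 4]]

# list -> tensor: torch.tensor infers dtype (int64 here),
# while torch.Tensor always produces float32
t_int = torch.tensor(data)
t_float = torch.Tensor(data)

# tensor -> numpy (CPU tensors only); the array shares memory with the tensor
arr = t_int.numpy()

print(t_int.dtype)    # torch.int64
print(t_float.dtype)  # torch.float32
print(arr.shape)      # (2, 2)
```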
2. Replacing np.prod()
# Use torch.prod() in place of np.prod(); the argument to torch.prod() must be a Tensor.
import torch
data=[1,2]
data = torch.Tensor(data)
print(data.shape)
c = torch.prod(data)
c
tensor(2.)
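Like np.prod's axis argument, torch.prod also takes a dim argument for per-row or per-column products; a quick sketch:

```python
import torch

m = torch.Tensor([[1, 2], [3, 4]])
print(torch.prod(m))         # product of all elements -> tensor(24.)
print(torch.prod(m, dim=0))  # column-wise -> tensor([3., 8.])
print(torch.prod(m, dim=1))  # row-wise -> tensor([2., 12.])
```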
3. Replacing np.max()
# Use torch.max() in place of np.max(); the argument to torch.max() must be a Tensor.
import torch
data=[1,2,3,9,4]
data = torch.Tensor(data)
a = torch.max(data)
a
tensor(9.)
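One difference worth noting: with a dim argument, torch.max returns both values and indices, whereas np.max returns only values (indices need a separate np.argmax call). A sketch:

```python
import numpy as np
import torch

m = torch.Tensor([[1, 9], [3, 4]])
# row-wise maximum and the position it came from
values, indices = torch.max(m, dim=1)
print(values)   # tensor([9., 4.])
print(indices)  # tensor([1, 1])
# numpy needs two calls for the same information
print(np.max(np.asarray(m)), np.argmax(np.asarray(m)))  # 9.0 1
```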
4. Converting torch.Size to np.ndarray
import torch
import numpy as np
data=[[1,2],[3,4]]
data = torch.Tensor(data)
print(data.shape)
print(np.asarray(data.shape))
c = torch.prod(torch.Tensor(np.asarray(data.shape)))
c
torch.Size([2, 2])
[2 2]
tensor(4.)
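If the goal of the shape round-trip above is just the total element count, tensor.numel() returns it directly, without going through NumPy:

```python
import torch

data = torch.Tensor([[1, 2], [3, 4]])
# numel() == product of all dimensions
print(data.numel())  # 4
```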
5. ravel in NumPy and torch: the usage is the same; note that tensor.ravel() still returns a Tensor.
import torch
data=[[1,2],[3,4]]
data = torch.Tensor(data)
data = data.ravel()
print("ravel:",data)
ravel: tensor([1., 2., 3., 4.])
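For comparison, a sketch of np.ravel on the same nested list, plus tensor.flatten(), the other common way to flatten in torch:

```python
import numpy as np
import torch

data = [[1, 2], [3, 4]]
print(np.asarray(data).ravel())      # [1 2 3 4]
print(torch.Tensor(data).flatten())  # tensor([1., 2., 3., 4.])
```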
6. ndim in NumPy and torch: the usage is the same.
import torch
import numpy as np
data=[[1,2],[3,4]]
data_numpy = np.asarray(data).ravel()
data = torch.Tensor(data)
data = data.ravel()
print("ravel:",data)
print(data.shape)
print("tensor_ndim:",data.ndim)
print("data_numpy_ndim:",data_numpy.ndim)
ravel: tensor([1., 2., 3., 4.])
torch.Size([4])
tensor_ndim: 1
data_numpy_ndim: 1
7. data.T has the same effect for NumPy arrays and torch tensors.
import torch
import numpy as np
data=[[1,2],[3,4]]
data_numpy = np.asarray(data)
data = torch.Tensor(data)
data_trans_tensor=data.T
print("data:",data)
print("data_trans:",data_trans_tensor)
data_trans_numpy=data_numpy.T
print("data_numpy:",data_numpy)
print("data_trans_numpy:",data_trans_numpy)
data: tensor([[1., 2.], [3., 4.]])
data_trans: tensor([[1., 3.], [2., 4.]])
data_numpy: [[1 2] [3 4]]
data_trans_numpy: [[1 3] [2 4]]
8. np.arange() and torch.arange() have the same usage.
import numpy as np
import torch
timesteps=5
a = np.arange(timesteps)
b = torch.arange(timesteps)
print("a:",a)
print("b:",b)
a: [0 1 2 3 4]
b: tensor([0, 1, 2, 3, 4])
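Both versions also take start/stop/step arguments, and neither includes the endpoint; a sketch:

```python
import numpy as np
import torch

# start=1, stop=10 (exclusive), step=2 in both libraries
print(np.arange(1, 10, 2))     # [1 3 5 7 9]
print(torch.arange(1, 10, 2))  # tensor([1, 3, 5, 7, 9])
```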
9. np.ceil() and torch.ceil() have the same usage, but the input to torch.ceil() must be a Tensor.
import numpy as np
import torch
a=[0.1,1.1,2.1,3.1]
b=np.ceil(a)
c=torch.ceil(torch.Tensor(a))
print("b:",b)
print("c:",c)
b: [1. 2. 3. 4.]
c: tensor([1., 2., 3., 4.])
10. torch.sort() replaces np.sort(); torch.sort() additionally returns the indices of the sorted elements in the original sequence.
import numpy as np
import torch
a=[5,1.1,2.1,3.1]
b=np.sort(a)
value,indices=torch.sort(torch.Tensor(a))
print("b:",b)
print("c:",value,indices)
b: [1.1 2.1 3.1 5. ]
c: tensor([1.1000, 2.1000, 3.1000, 5.0000]) tensor([1, 2, 3, 0])
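torch.sort also supports descending order via a keyword argument; np.sort gets the same effect with reverse slicing. A sketch:

```python
import numpy as np
import torch

a = [5, 1.1, 2.1, 3.1]
# descending sort, with the original positions of each value
desc_vals, desc_idx = torch.sort(torch.Tensor(a), descending=True)
print(desc_vals)         # tensor([5.0000, 3.1000, 2.1000, 1.1000])
print(desc_idx)          # tensor([0, 3, 2, 1])
# numpy: sort ascending, then reverse
print(np.sort(a)[::-1])  # [5.  3.1 2.1 1.1]
```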
11. torch.abs() replaces np.abs()
import torch
import numpy as np
data = [-1 , -2 , 1 , 2]
tensor = torch.FloatTensor(data)
print(
"abs in numpy:\n",np.abs(data),
"\nabs in torch:\n",torch.abs(tensor)
)
abs in numpy:
 [1 2 1 2]
abs in torch:
 tensor([1., 2., 1., 2.])
12. torch.ones_like() replaces np.ones_like(), but the argument to torch.ones_like() must be a Tensor.
import torch
import numpy as np
a=[[1,2],[3,4]]
input = torch.empty(2, 3)
print(np.ones_like(a))
print(torch.ones_like(input))
[[1 1] [1 1]]
tensor([[1., 1., 1.], [1., 1., 1.]])
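The same pattern holds for torch's other *_like constructors, which likewise require a Tensor argument; a sketch:

```python
import torch

t = torch.tensor([[1, 2], [3, 4]])
# same shape and dtype as t, filled with zeros / a constant
print(torch.zeros_like(t))    # tensor([[0, 0], [0, 0]])
print(torch.full_like(t, 7))  # tensor([[7, 7], [7, 7]])
```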
13. torch.split replaces np.split // No rigorous comparison done here: np.split can accept tensors, and np.split and torch.split have different usage but can achieve the same effect; not studied in depth.
import torch
import numpy as np
a=[[1,2],[3,4]]
input = torch.tensor(a)
input_1 = input.clone().detach()  # torch.tensor(input) on an existing tensor raises a copy-construct UserWarning
print(input)
print(input_1)
a = np.split(input,1)
print(a)
b = torch.split(input,1)
print(b)
tensor([[1, 2], [3, 4]])
tensor([[1, 2], [3, 4]])
[tensor([[1, 2], [3, 4]])]
(tensor([[1, 2]]), tensor([[3, 4]]))
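The key usage difference: np.split's second argument is a number of equal sections (or a list of split indices), while torch.split's is a chunk size (or a list of chunk sizes). A sketch:

```python
import numpy as np
import torch

arr = np.arange(6)
t = torch.arange(6)

# numpy: split into 3 equal sections
print(np.split(arr, 3))           # [array([0, 1]), array([2, 3]), array([4, 5])]
# torch: split into chunks of size 3
print(torch.split(t, 3))          # (tensor([0, 1, 2]), tensor([3, 4, 5]))
# torch: explicit chunk sizes
print(torch.split(t, [1, 2, 3]))  # (tensor([0]), tensor([1, 2]), tensor([3, 4, 5]))
```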
14. torch.empty_like() replaces np.empty_like() // torch.empty_like() creates a tensor filled with uninitialized values.
import torch
import numpy as np
a=[[1,2],[3,4]]
input = torch.tensor(a)
print(np.empty_like(a))
print(torch.empty_like(input))
[[1 2] [3 4]]
tensor([[0, 2], [3, 4]])
15. tensor.sum() replaces np.sum()
import torch
import numpy as np
a=[[1,2],[3,4]]
input = torch.tensor(a)
d = np.array(input)
d1=d.sum(axis=0)
print(d1)
e = torch.tensor(d)
# print(e)
d2=e.sum(axis=0)
print(d2)
[4 6]
tensor([4, 6])
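tensor.sum also mirrors np.sum's keepdims option (spelled keepdim in torch), which preserves the reduced dimension; a sketch:

```python
import numpy as np
import torch

a = [[1, 2], [3, 4]]
# sum each row, keeping the reduced axis so the result stays 2-D
print(np.sum(a, axis=1, keepdims=True))          # rows summed: [[3], [7]]
print(torch.tensor(a).sum(dim=1, keepdim=True))  # tensor([[3], [7]])
```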
16. np.transpose (handles arbitrary dimensions) is not equivalent to torch.transpose, which only swaps two dimensions at a time; for general high-dimensional axis reordering, use torch.permute.
To be added.
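A possible sketch for this comparison: np.transpose reorders all axes at once, torch.permute is its equivalent, and torch.transpose swaps exactly two dimensions:

```python
import numpy as np
import torch

x = np.zeros((2, 3, 4))
t = torch.zeros(2, 3, 4)

# np.transpose reorders all axes at once; torch.permute is the equivalent
print(np.transpose(x, (2, 0, 1)).shape)  # (4, 2, 3)
print(t.permute(2, 0, 1).shape)          # torch.Size([4, 2, 3])

# torch.transpose swaps exactly two dimensions (works on any ndim)
print(torch.transpose(t, 0, 2).shape)    # torch.Size([4, 3, 2])
```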
17. torch.squeeze replaces np.squeeze
Writing this up, I suddenly realized that most np functions can process tensors directly; the result is generally a NumPy array, which can simply be converted back to a tensor afterwards.
18. tensor.T and np.ndarray.T have the same effect.
19. A @ B // matrix multiplication
import torch
import numpy as np
a=[[1,2],[3,4]]
input = torch.tensor(a)
d = np.array(input)
print(d)
e = torch.tensor(d)
print(e)
f = torch.squeeze(e)  # use torch.squeeze directly; torch.tensor(np.squeeze(e)) raises a copy-construct UserWarning
print(type(f))
g=f.T
print(g)
k = e @ g
print(k)
print(k.type())
[[1 2] [3 4]]
tensor([[1, 2], [3, 4]])
<class 'torch.Tensor'>
tensor([[1, 3], [2, 4]])
tensor([[ 5, 11], [11, 25]])
torch.LongTensor