The most widely used deep learning frameworks are TensorFlow and PyTorch. When I first taught myself deep learning, TensorFlow 2.0 had just been released; there were few PyTorch resources around at the time, and TensorFlow seemed simpler and faster to pick up, so I chose TensorFlow. As my research went deeper, though, I found that most papers and projects use PyTorch, and I was completely lost! Perhaps I just wasn't skilled enough to use TensorFlow flexibly, and everyone around me had started using PyTorch, so in the end I switched to learning PyTorch. (Given PyTorch's wide adoption across industries in recent years and its high flexibility, I'd recommend that beginners start learning deep learning with PyTorch.)
Advantages of PyTorch:
Installation: a full installation walkthrough will come in a later article; feel free to try installing it yourself first. (A typical environment setup installs Anaconda, PyCharm, and PyTorch.)
PyTorch also provides an official tutorial; take a look if you're interested.
Official documentation
torch.tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor
Official description:
Parameters:
data (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.
Keyword Arguments:
dtype (torch.dtype, optional) – the desired data type of returned tensor. Default: if None, infers data type from data.
device (torch.device, optional) – the device of the constructed tensor. If None and data is a tensor then the device of data is used. If None and data is not a tensor then the result tensor is constructed on the CPU.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
pin_memory (bool, optional) – If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.
Explanation:
Doesn't that look tedious? From here on I won't follow the official documentation style; I'll keep things simple. If you want more detail, just consult the official docs!
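As a quick illustration of the signature above (a minimal sketch; the values, dtype, and shape are arbitrary choices of mine):

```python
import torch

# Build a tensor from a nested list, forcing float32 and enabling autograd
t = torch.tensor([[1, 2], [3, 4]], dtype=torch.float32, requires_grad=True)
print(t.dtype)          # torch.float32
print(t.requires_grad)  # True
print(t.shape)          # torch.Size([2, 2])
```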
torch.is_tensor(obj)
Returns True if obj is a PyTorch tensor.
Parameter: obj — the object to check
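A quick sketch of how it behaves:

```python
import torch

x = torch.zeros(2, 2)
print(torch.is_tensor(x))    # True
print(torch.is_tensor([1]))  # False: a plain Python list is not a tensor
```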
torch.is_storage(obj)
Returns True if obj is a PyTorch storage object.
Parameter: obj — the object to check
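For example (a minimal sketch; `untyped_storage()` assumes PyTorch 2.0 or later — on older versions, `x.storage()` plays the same role):

```python
import torch

x = torch.arange(4)
print(torch.is_storage(x.untyped_storage()))  # True: the tensor's backing storage
print(torch.is_storage(x))                    # False: a tensor is not a storage
```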
torch.set_default_tensor_type(t)
Sets the default floating-point tensor type to t (e.g. torch.FloatTensor). Note: the name has no leading underscores; in recent PyTorch versions, torch.set_default_dtype is the recommended replacement.
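A small sketch using the newer torch.set_default_dtype API, which controls the dtype that new floating-point tensors get by default:

```python
import torch

torch.set_default_dtype(torch.float64)
d = torch.tensor([1.0, 2.0]).dtype
print(d)  # torch.float64

torch.set_default_dtype(torch.float32)  # restore the usual default
```

Remember to restore the default afterwards, since this setting is process-global.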
torch.numel(input) -> int
Returns the total number of elements in the input tensor.
Parameter: input (Tensor) — the input tensor
Example:
a = torch.randn(1,2,3,4,5)
torch.numel(a)
# 120  (= 1*2*3*4*5)
b = torch.ones((4,4))
torch.numel(b)
# 16
torch.eye(3)
tensor([[1., 0., 0.],
        [0., 1., 0.],
        [0., 0., 1.]])
import numpy as np
a = np.array([1,2,3])
torch.from_numpy(a)
tensor([1, 2, 3], dtype=torch.int32)
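Worth noting: torch.from_numpy shares memory with the source array, so modifying one modifies the other. A small sketch:

```python
import numpy as np
import torch

a = np.array([1, 2, 3])
t = torch.from_numpy(a)  # t shares a's underlying memory buffer
a[0] = 100               # modify the NumPy array in place
print(t[0].item())       # 100 -- the tensor sees the change
```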
torch.linspace(-10,10,steps=10)
tensor([-10.0000,  -7.7778,  -5.5556,  -3.3333,  -1.1111,   1.1111,   3.3333,
          5.5556,   7.7778,  10.0000])
torch.ones((2,3))
tensor([[1., 1., 1.],
        [1., 1., 1.]])
torch.arange(1,4)
tensor([1, 2, 3])
x = torch.randn((2,3))
x
tensor([[ 0.2392,  1.6352,  0.3733],
        [ 0.2606, -0.3474, -1.9230]])
torch.cat((x,x),0)
tensor([[ 0.2392,  1.6352,  0.3733],
        [ 0.2606, -0.3474, -1.9230],
        [ 0.2392,  1.6352,  0.3733],
        [ 0.2606, -0.3474, -1.9230]])
torch.cat((x,x),1)
tensor([[ 0.2392,  1.6352,  0.3733,  0.2392,  1.6352,  0.3733],
        [ 0.2606, -0.3474, -1.9230,  0.2606, -0.3474, -1.9230]])
x = torch.zeros((3,1,3,1,3))
x.size()
#torch.Size([3, 1, 3, 1, 3])
torch.squeeze(x,0).size()
#torch.Size([3, 1, 3, 1, 3])
torch.squeeze(x,1).size()
#torch.Size([3, 3, 1, 3])
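Called without a dim argument, torch.squeeze removes every dimension of size 1 at once (a quick sketch):

```python
import torch

x = torch.zeros((3, 1, 3, 1, 3))
s = torch.squeeze(x).size()  # no dim given: drop all size-1 dimensions
print(s)  # torch.Size([3, 3, 3])
```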
x = torch.randn((2,3))
x
tensor([[ 1.7073, -0.1557,  1.4400],
        [ 0.3452,  0.0751,  0.1627]])
torch.t(x)
tensor([[ 1.7073,  0.3452],
        [-0.1557,  0.0751],
        [ 1.4400,  0.1627]])
torch.t(x).size()
torch.Size([3, 2])
x = torch.randn(2,3)
x
tensor([[ 1.6700, 2.3639, -1.5683],
[ 0.0986, 1.5579, 0.9787]])
torch.transpose(x,0,1)
tensor([[ 1.6700,  0.0986],
        [ 2.3639,  1.5579],
        [-1.5683,  0.9787]])
x = torch.Tensor([1,2,3,4])
torch.unsqueeze(x,0)
#tensor([[1., 2., 3., 4.]])
x.size()
#torch.Size([4])
torch.unsqueeze(x,0).size()
#torch.Size([1, 4])
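Unsqueezing at dim 1 instead turns the same vector into a column (a minimal sketch):

```python
import torch

x = torch.tensor([1., 2., 3., 4.])
col = torch.unsqueeze(x, 1)  # insert a new axis at position 1
print(col.size())  # torch.Size([4, 1])
```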