1. The concepts are basically the same as in TensorFlow; only the way the code is written differs. We declare a Tensor and print it, for example:
torch.empty()
torch.zeros()
torch.rand()
torch.tensor([3, 5, 11])
x.size()  # size of tensor x
torch.tensor is similar to numpy.ndarray, but it can use a GPU to accelerate computation.
import torch  # import the torch module
x = torch.empty(5, 3)  # create an uninitialized 5x3 matrix (the values are whatever happens to be in memory)
print(x)
out:
tensor([[0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00],
[0.0000e+00, 6.2218e-43, 0.0000e+00],
[0.0000e+00, 1.9421e+20, 0.0000e+00]])
x = torch.rand(5,3)
print(x)
out:
tensor([[0.1380, 0.3707, 0.7600],
[0.9985, 0.9175, 0.6474],
[0.7655, 0.6301, 0.1842],
[0.8244, 0.6575, 0.4823],
[0.6430, 0.9246, 0.1736]])
x = torch.zeros(5,3,dtype=torch.int32)
print(x)
out:
tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]], dtype=torch.int32)
x = torch.tensor([5, 3, 2.3])
print(x)
out:
tensor([5.0000, 3.0000, 2.3000])
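Because the list mixes integers with a float, the dtype is inferred as floating point. A minimal sketch (the name t and the values are only for illustration) of pinning the dtype explicitly instead of relying on inference:
t = torch.tensor([5, 3, 2.3], dtype=torch.float64)  # force double precision
print(t.dtype)  # torch.float64
t = torch.tensor([5, 3, 11], dtype=torch.int64)  # all integers, explicit int64
print(t.dtype)  # torch.int64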
Create a tensor from an existing tensor:
x = x.new_ones(5, 3, dtype=torch.double) # new_* methods take in sizes
print(x)
x = torch.randn_like(x, dtype=torch.float) # override dtype!
print(x) # result has the same size
out:
tensor([[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]], dtype=torch.float64)
tensor([[ 0.5277, -0.8684, 0.7691],
[ 0.6421, 0.0203, -1.0397],
[-0.7351, -2.4571, 0.1277],
[-0.9801, -2.3145, -2.0558],
[-0.6275, -1.2192, 1.0371]])
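Besides new_ones and randn_like, the other *_like helpers (for example torch.zeros_like and torch.ones_like) follow the same pattern; a short sketch, still using the 5x3 x from above:
y = torch.zeros_like(x)  # same size as x, filled with zeros
z = torch.ones_like(x, dtype=torch.int32)  # same size, but with the dtype overridden
print(y.size(), z.dtype)  # torch.Size([5, 3]) torch.int32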
If you want to create data of a different type, change the function name after torch., for example:
import torch
# define a Tensor matrix
a = torch.Tensor([[1, 2], [3, 4],[5, 6], [7, 8]])
print('{}'.format(a))
b = torch.zeros((4, 2))
print(b)
c = torch.IntTensor([[1, 2], [3, 4],[5, 6], [7, 8]])
print(c)
d = torch.LongTensor([[1, 2], [3, 4],[5, 6], [7, 8]])
print(d)
e = torch.DoubleTensor([[1, 2], [3, 4],[5, 6], [7, 8]])
print(e)
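The typed constructors above (torch.IntTensor, torch.LongTensor, torch.DoubleTensor) correspond to int32, int64 and float64; the same tensors can be built with torch.tensor plus an explicit dtype. A minimal sketch (the names c2, d2, e2 are only for illustration):
c2 = torch.tensor([[1, 2], [3, 4], [5, 6], [7, 8]], dtype=torch.int32)  # same as torch.IntTensor(...)
d2 = torch.tensor([[1, 2], [3, 4], [5, 6], [7, 8]], dtype=torch.int64)  # same as torch.LongTensor(...)
e2 = torch.tensor([[1, 2], [3, 4], [5, 6], [7, 8]], dtype=torch.float64)  # same as torch.DoubleTensor(...)
print(c2.dtype, d2.dtype, e2.dtype)  # torch.int32 torch.int64 torch.float64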
print(x.size())
out:
torch.Size([5, 3])
Note:
torch.Size is in fact a tuple, so it supports all tuple operations.
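A minimal sketch of using torch.Size like a tuple (indexing and unpacking), still assuming the 5x3 x from above:
size = x.size()  # torch.Size([5, 3]), a subclass of tuple
rows, cols = size  # tuple unpacking
print(rows, cols)  # 5 3
print(size[0] * size[1])  # 15, indexing works like any tuple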
print(e[1, 1])
# change an element's value
e[1, 1] = 3
print(e[1, 1])
torch.from_numpy(numpy.ndarray)  # numpy.ndarray -> tensor
x.numpy()  # tensor -> numpy.ndarray (x is a tensor)
A numpy array can be converted directly into a tensor. Assuming x is a numpy.ndarray, each of the following lines, taken on its own, converts it:
x = torch.Tensor(x)  # always float32, the default dtype
x = torch.tensor(x)  # keeps the ndarray's dtype, e.g. float64
inputs = torch.from_numpy(x.astype("float32"))  # numpy to tensor
import numpy as np  # needed for np.array below
list_data = [1, 2, 3, 4]
list2np = np.array(list_data)
print('\nlist to numpy.ndarray: ',list2np)
list2tensor = torch.FloatTensor(list_data)
print('\nlist to torch.tensor: ',list2tensor)
np2tensor = torch.from_numpy(list2np)
print('\nnumpy to torch.tensor: ',np2tensor)
tensor2np = np2tensor.numpy()
print('\ntensor to numpy: ', tensor2np)
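One detail worth noting: torch.from_numpy shares memory with the source array, so modifying one side is visible on the other. A minimal sketch (the names shared and t are only for illustration):
shared = np.ones(3)
t = torch.from_numpy(shared)
shared[0] = 100  # modify the numpy array in place
print(t)  # tensor([100., 1., 1.], dtype=torch.float64), the tensor sees the change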
x = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float)
# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
    device = torch.device("cuda")  # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on GPU
    x = x.to(device)  # or just use strings ``.to("cuda")``
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))  # ``.to`` can also change dtype together!
out:
tensor([[2., 3., 4.],
[5., 6., 7.]], device='cuda:0')
tensor([[2., 3., 4.],
[5., 6., 7.]], dtype=torch.float64)
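A common device-agnostic pattern (a sketch of standard usage, not part of the example above) is to choose the device once and reuse it, so the same code runs with or without a GPU:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float).to(device)
y = torch.ones_like(x)  # created on the same device as x
print((x + y).device)  # cuda:0 if a GPU is available, otherwise cpu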
1. Tensor and Numpy arrays are both matrices; the difference is that a Tensor can run on the GPU, while a Numpy array runs only on the CPU.
2. Tensors and Numpy arrays convert to each other easily, and their types are fairly compatible.
3. Printing a Tensor shows its data type directly, e.g. dtype=torch.float64, whereas printing a Numpy array does not.