A Little Progress Every Day -- PyTorch Learning: Introduction to Tensors and Tensor Creation

Recommended reading: Machine Vision Full Stack | Machine Vision Tutorials | docsify | PyTorch Official Tutorial (Chinese) | OpenCV-Python Official Tutorial (Chinese) | Open3D (0.15.1) Official Tutorial (Chinese)

Tensors can be understood, informally, as arrays and matrices that can also run on the GPU. Since they are arrays and matrices, their elements have specific data types. The available tensor types are listed below; the same dtype maps to a different tensor class depending on the device it runs on:

[Image: table of tensor data types and their corresponding CPU/GPU tensor classes]
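As a quick illustration (a minimal sketch; the GPU branch assumes a CUDA-capable machine), the same data and dtype map to different tensor classes depending on the device:

import torch

t = torch.tensor([1.0, 2.0])
print(t.dtype)   # torch.float32
print(t.type())  # torch.FloatTensor -- 32-bit float tensor on the CPU

# Only on a machine with a CUDA GPU: the same dtype maps to a cuda tensor class
if torch.cuda.is_available():
    print(t.cuda().type())  # torch.cuda.FloatTensor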

Initializing and understanding tensors

(1) Tensor initialization and types

import torch
import numpy as np

data = [[1, 2], [2, 3]]
x_data = torch.tensor(data)

# Python's built-in type() only tells you that x_data is a Tensor; it cannot show the specific tensor type
print(type(data))    # <class 'list'>
print(type(x_data))  # <class 'torch.Tensor'>

# The type() method exists only on Tensor objects
print(x_data.type())  # torch.LongTensor -- the type is automatically inferred as LongTensor
# print(data.type())  # raises an error: a plain list is not a tensor and has no type() method

# Python's built-in isinstance() works on both tensor and non-tensor objects
print(isinstance(x_data, torch.LongTensor))  # True
print(isinstance(data, torch.LongTensor))    # False

# 1-D tensor (vector), e.g. the bias b of a neural network layer
# Create directly
print(torch.tensor([1.1, 2.2]))  # tensor([1.1000, 2.2000])
print(torch.FloatTensor(2))  # tensor([3.5873e-43, 2.0000e+00]) -- 2 means a vector of length 2; the values are uninitialized

data = np.ones(2)  # create a tensor from a numpy array
print(data)  # [1. 1.]
data_tensor = torch.from_numpy(data)
print(data_tensor)  # tensor([1., 1.], dtype=torch.float64)
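One detail worth knowing about torch.from_numpy, shown in the small sketch below: the returned tensor shares memory with the NumPy array, so changing one changes the other.

import numpy as np
import torch

a = np.ones(2)
t = torch.from_numpy(a)  # t shares memory with a
a[0] = 5.0
print(t)  # tensor([5., 1.], dtype=torch.float64) -- the change to a is visible in t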

(2) Getting a tensor's size / dim / shape

The blog posts below give a good introduction to dim; the example that follows shows the difference between size / dim / shape:

pytorch中dim的含义及相关做法_取个名字真难呐的博客-CSDN博客_pytorch的dim

【快速理解张量】通过torch.rand和举例通俗解释张量tensor_Neo很努力的博客-CSDN博客_torch.rand

x = torch.rand(2, 3, 4, 5)
print(x)
       [  # dim 0: 2 elements
        [ # dim 1: 3 elements
         [ # dim 2: 4 elements
          [0.7900, 0.3519, 0.2816, 0.3071, 0.7786], # dim 3: 5 elements
          [0.7649, 0.4460, 0.7359, 0.6503, 0.0960],
          [0.9055, 0.2015, 0.9697, 0.7157, 0.4706],
          [0.3957, 0.3493, 0.0409, 0.1304, 0.0248]
         ],

         [
          [0.3118, 0.0097, 0.4654, 0.0384, 0.5897],
          [0.3590, 0.9403, 0.9745, 0.6553, 0.3921],
          [0.8680, 0.9558, 0.2972, 0.4599, 0.6051],
          [0.8654, 0.5385, 0.0939, 0.6930, 0.4148]
         ],

         [
          [0.4586, 0.4279, 0.1924, 0.4297, 0.9618],
          [0.4627, 0.2492, 0.2574, 0.3442, 0.7872],
          [0.5375, 0.1170, 0.0672, 0.4515, 0.9849],
          [0.8033, 0.5539, 0.1909, 0.9696, 0.6049]
         ]
        ],


        [
         [
          [0.2846, 0.4734, 0.5478, 0.5626, 0.4522],
          [0.4783, 0.4525, 0.3988, 0.5499, 0.5277],
          [0.8804, 0.3285, 0.0620, 0.9264, 0.6451],
          [0.7034, 0.8597, 0.4179, 0.9200, 0.1851]
         ],

         [
          [0.5449, 0.5049, 0.8411, 0.7550, 0.3722],
          [0.0635, 0.7115, 0.0919, 0.0649, 0.9048],
          [0.7832, 0.5186, 0.5093, 0.0201, 0.6582],
          [0.2729, 0.4594, 0.7123, 0.5449, 0.7297]
         ],

         [
          [0.9535, 0.8421, 0.6229, 0.1735, 0.6973],
          [0.5910, 0.1106, 0.5080, 0.2239, 0.1200],
          [0.4912, 0.8273, 0.6190, 0.2189, 0.1312],
          [0.7433, 0.3109, 0.0515, 0.8466, 0.5950]
         ]
        ]
       ]
print(x.shape) # torch.Size([2, 3, 4, 5])
print(x.shape[0]) # 2
print(x.shape[1]) # 3
print(x.shape[2]) # 4
print(x.shape[3]) # 5
print(x.size()) # torch.Size([2, 3, 4, 5])
print(x.size(0)) # 2
print(x.size(1)) # 3
print(x.size(2)) # 4
print(x.size(3)) # 5
print(x.dim()) # 4 -- the number of dimensions
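To connect dim to actual usage (a minimal sketch in the spirit of the blog posts linked above), reducing along a dimension removes that dimension from the shape, and negative dim indices count from the end:

import torch

x = torch.rand(2, 3, 4, 5)
print(x.sum(dim=0).shape)   # torch.Size([3, 4, 5]) -- dim 0 (size 2) is reduced away
print(x.sum(dim=-1).shape)  # torch.Size([2, 3, 4]) -- dim -1 is the last dimension (size 5)
print(x.size(-1))           # 5, negative indices also work with size()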

(3) Total number of elements

print(x.numel())  # 120 = 2*3*4*5 -- total number of elements; different element types use different amounts of memory per element
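To make the memory point concrete, element_size() returns the number of bytes per element, so numel() * element_size() is the size of the tensor's data in bytes (a minimal sketch):

import torch

x = torch.rand(2, 3, 4, 5)           # float32 by default
print(x.numel(), x.element_size())   # 120 4
print(x.numel() * x.element_size())  # 480 bytes of data

y = x.double()                       # same 120 elements, 64-bit floats
print(y.numel() * y.element_size())  # 960 -- twice the memory for the same shape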

For tensor creation in general, you can refer directly to the documentation below; there is no need to repeat it here.

Machine Vision Full Stack | Machine Vision Tutorials | docsify | PyTorch Official Tutorial (Chinese) | OpenCV-Python Official Tutorial (Chinese) | Open3D (0.15.1) Official Tutorial (Chinese)

Note: an uninitialized tensor holds whatever happens to be in memory, so its values can be extremely large or extremely small. Make sure to initialize any uninitialized tensor before using it in later operations.

The default tensor type is FloatTensor; use torch.set_default_tensor_type(torch.DoubleTensor) to change the default.

Random initialization with rand is recommended, since all of its values lie between 0 and 1.

print(torch.Tensor(2, 3))
tensor([[1.1561e+19, 6.8794e+11, 2.7253e+20],
        [3.0866e+29, 1.1547e+19, 4.1988e+07]])

print(torch.FloatTensor(2, 3))
tensor([[1.1561e+19, 6.8794e+11, 2.7253e+20],
        [3.0866e+29, 1.1547e+19, 4.1988e+07]])

print(torch.IntTensor(2, 3))
tensor([[1595961953, 1394617393, 1634492771],
        [1886999666, 1595948901, 1277176882]], dtype=torch.int32)
torch.set_default_tensor_type(torch.DoubleTensor)
print(torch.tensor([2, 3]).type()) # torch.LongTensor -- integer data still infers LongTensor; the default type only affects floating-point tensors (DoubleTensor is common when more precision is needed)
data = [[1, 2], [2, 3]]
print(torch.rand(3, 3))
tensor([[0.3682, 0.3060, 0.5121],
        [0.4005, 0.3738, 0.8740],
        [0.3723, 0.0709, 0.7200]])

print(torch.rand_like(torch.tensor(data), dtype=torch.float))
tensor([[0.1732, 0.5935],
        [0.5803, 0.4345]])

print(torch.randint(1, 10, [3, 3]))  # 1 is the minimum (inclusive), 10 is the maximum (exclusive); the last argument is the shape and can have any number of dimensions
tensor([[7, 8, 2],
        [5, 4, 8],
        [4, 5, 7]])
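Two quick checks on the notes above (a minimal sketch): after switching the default type to DoubleTensor, floating-point data is created as DoubleTensor while integer data is unaffected, and torch.rand values always fall in [0, 1):

import torch

torch.set_default_tensor_type(torch.DoubleTensor)
print(torch.tensor([2.0, 3.0]).type())  # torch.DoubleTensor -- float data follows the new default
print(torch.tensor([2, 3]).type())      # torch.LongTensor  -- integer data is not affected

r = torch.rand(10000)
print(r.min().item() >= 0, r.max().item() < 1)  # True True -- values lie in [0, 1)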

Common initialization functions

print(torch.randn(3, 3))  # 3x3 matrix drawn from the standard normal distribution
print(torch.arange(1, 10, 2))  # values from 1 up to (but not including) 10, with step 2
print(torch.full([3, 3], 7))  # 3x3 matrix filled with the value 7
print(torch.linspace(0, 4, steps=4))  # 4 evenly spaced points from 0 to 4, endpoints included
print(torch.logspace(0, 1, steps=10))  # 10 points evenly spaced on a log scale from 10^0 to 10^1
print(torch.ones(3, 3))   # 3x3 matrix of ones
print(torch.zeros(3, 3))  # 3x3 matrix of zeros
print(torch.eye(3, 3))    # 3x3 identity matrix
print(torch.randperm(10)) # a random permutation of the 10 integers 0-9
