pytorch-tensor-details

1: The shared-memory mechanism of tensor.view:

import torch

a = torch.rand(3, 4)
print("before a\n", a)
# view() returns a new tensor that reinterprets the same underlying storage
b = a.contiguous().view(4, 3)
print("before b\n", b)

# writing into b's second column therefore also changes a
b[:, 1].fill_(0)
print("after a\n", a)
print("after b\n", b)

From the output below you can see the shared-memory effect:

before a
 tensor([[0.0521, 0.9051, 0.5144, 0.9332],
        [0.0840, 0.0737, 0.1924, 0.4252],
        [0.9632, 0.7977, 0.4351, 0.7341]])
before b
 tensor([[0.0521, 0.9051, 0.5144],
        [0.9332, 0.0840, 0.0737],
        [0.1924, 0.4252, 0.9632],
        [0.7977, 0.4351, 0.7341]])
after a
 tensor([[0.0521, 0.0000, 0.5144, 0.9332],
        [0.0000, 0.0737, 0.1924, 0.0000],
        [0.9632, 0.7977, 0.0000, 0.7341]])
after b
 tensor([[0.0521, 0.0000, 0.5144],
        [0.9332, 0.0000, 0.0737],
        [0.1924, 0.0000, 0.9632],
        [0.7977, 0.0000, 0.7341]])
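One way to confirm the sharing directly is to compare the storage pointers of the two tensors. The sketch below also shows why the `.contiguous()` call matters: `view()` only works on contiguous tensors, whereas `reshape()` falls back to copying when it has to (the variable names here are illustrative, not from the original post):

```python
import torch

a = torch.rand(3, 4)
b = a.view(4, 3)  # no copy: b reinterprets a's storage

# Both tensors point at the same underlying buffer.
print(a.data_ptr() == b.data_ptr())  # True

# A transposed tensor is not contiguous, so view() fails on it.
t = a.t()
print(t.is_contiguous())  # False
try:
    t.view(12)
except RuntimeError:
    print("view() refused the non-contiguous tensor")

# reshape() returns a view when possible, otherwise a copy.
c = t.reshape(12)
```

Calling `.contiguous()` first, as in the example above, materializes a contiguous copy when needed, after which `view()` is always safe.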

2: If you slice with a boolean mask and then fill, a new object is created instead.

import torch

b = torch.rand(3, 4)
a = b.contiguous().view(4, 3)
print("a:\n", a)
print("-------after-----")

# boolean-mask indexing returns a copy, so fill_ only modifies that copy
a[a > 0.7].fill_(0)
print("a:\n", a)

The output is as follows:

a:
 tensor([[0.6165, 0.9024, 0.3824],
        [0.5746, 0.4745, 0.6394],
        [0.0516, 0.8359, 0.6028],
        [0.2508, 0.0475, 0.8515]])
-------after-----
a:
 tensor([[0.6165, 0.9024, 0.3824],
        [0.5746, 0.4745, 0.6394],
        [0.0516, 0.8359, 0.6028],
        [0.2508, 0.0475, 0.8515]])

As you can see, this style of fill does not change the tensor itself at all: advanced (boolean) indexing returns a copy of the selected elements, so fill_ zeroes only that temporary copy.
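To actually zero the masked elements in place, put the boolean index on the left-hand side of an assignment, or use `masked_fill_`; a minimal sketch (variable names are illustrative):

```python
import torch

torch.manual_seed(0)
a = torch.rand(4, 3)

# Mask indexing on the LEFT of an assignment writes back into a's storage.
a[a > 0.7] = 0
print((a > 0.7).any())  # tensor(False)

# masked_fill_ does the same thing explicitly, in place.
b = torch.rand(4, 3)
b.masked_fill_(b > 0.7, 0.0)
```

The difference is that `a[mask]` as a standalone expression materializes a new 1-D tensor, while `a[mask] = value` dispatches to an in-place scatter on `a` itself.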
