A detailed look at PyTorch's copy_(), detach(), detach_(), and clone() functions

Reference: copy_(src, non_blocking=False) → Tensor
Reference: detach()
Reference: detach_()
Reference: clone() → Tensor

Summary:

Using clone():
Explanation: Returns a copy of the original tensor without breaking the computation graph, so backpropagation can still compute gradients through it.
The two tensors do not share memory, so a change to the values of one does not affect the other.
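A minimal sketch of this behavior (the small tensors a and b here are illustrative, not the variables from the examples below):

import torch

a = torch.tensor([1.0, 2.0], requires_grad=True)
b = a.clone()                          # copy of a, still attached to the graph
print(b.requires_grad)                 # True: b has a grad_fn, so gradients can flow
print(a.data_ptr() == b.data_ptr())    # False: a and b do not share storage
b.sum().backward()                     # gradients flow back through clone() to a
print(a.grad)                          # tensor([1., 1.])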



Using copy_():
Explanation: For example, x4.copy_(x2) copies the data of x2 into x4 and records the copy in the computation graph,
so that during backpropagation, once the gradient with respect to x4 has been computed,
backpropagation continues on through to x2. Note that after the copy, changes to the values of the two tensors
do not affect each other, because they do not share memory.
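A minimal sketch (illustrative names src and dst; dst is created without requires_grad, just like x4 in the examples below):

import torch

src = torch.tensor([1.0, 2.0], requires_grad=True)
dst = torch.zeros(2)                     # plain tensor, requires_grad=False
dst.copy_(src)                           # copies values; the copy is recorded in the graph
print(dst.data_ptr() == src.data_ptr())  # False: separate storage, later changes stay independent
dst.sum().backward()                     # gradient flows back through copy_ to src
print(src.grad)                          # tensor([1., 1.])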



Using detach():
Explanation: For example, x4 = x2.detach() returns a new tensor x4 that shares memory with the original tensor x2,
so a change to the values of one is visible in the other.
The returned tensor is cut off from the computation graph, i.e. it is no longer associated with the graph,
so no gradients can be backpropagated through it. The tensor is, quite literally,
"detached" from the computation graph.
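A minimal sketch (illustrative names): the detached tensor shares storage with the original, but no gradient can flow through it:

import torch

a = torch.tensor([1.0, 2.0], requires_grad=True)
y = a * 3                              # non-leaf tensor inside the graph
d = y.detach()                         # new tensor, cut off from the graph
print(d.requires_grad)                 # False: backward cannot pass through d
print(d.data_ptr() == y.data_ptr())    # True: d and y share the same storage
d[0] = 100.0                           # an in-place change through d ...
print(y)                               # ... is visible in y as well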




Using detach_():
Explanation: The in-place version of detach(), with essentially the same behavior.
For example, after x4 = x2.detach_(), x2 and x4 are the same object: detach_() returns self,
so x2 and x4 have the same id().
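A minimal sketch (illustrative names): detach_() modifies the tensor itself and returns self:

import torch

a = torch.tensor([1.0, 2.0], requires_grad=True)
y = a * 3                  # non-leaf tensor inside the graph
z = y.detach_()            # detaches y itself, in place, and returns self
print(z is y)              # True: same Python object, so id(z) == id(y)
print(y.requires_grad)     # False: y is no longer attached to the graph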

Below is a hand-drawn computation graph (the x3 branch is not used for now; please ignore the parts related to x3):
[Figure 1: hand-drawn computation graph]

1. The simplest case: x4 = x2
Explanation: This needs little explanation: plain Python assignment just binds the name x4 to the same tensor object as x2 (see the short sketch below, followed by the full example).
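A minimal sketch of this (the names here are illustrative):

import torch

x2 = torch.randn(3, requires_grad=True) * 2   # any tensor inside a computation graph
x4 = x2                    # plain assignment: x4 is just another name for x2
print(x4 is x2)            # True: same object, same id(), same storage

The full example: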

import torch
torch.manual_seed(seed=20200910)
x0 = torch.randn(3, 5, requires_grad=True)
bias = torch.randn(3, 5, requires_grad=True)
x3 = torch.randn(3, 5, requires_grad=False)
x4 = torch.randn(3, 5, requires_grad=False)
x1 = 3 * x0 + 8
x2 = 4 * x1 ** 2 + 5 * x1 + 7

# Options for constructing x3 (not used in these examples):
# x3 = x2.clone()
# x3 = x3.copy_(x2)
# x3 = x2.detach()
# x3 = x2.detach_()
# x3 = x2

# Options for constructing x4; the active choice in this example:
# x4 = x2.clone()
# x4 = x4.copy_(x2)
# x4 = x2.detach()
# x4 = x2.detach_()
x4 = x2

# Downstream computation and backward pass:
x5 = 9 * x4 ** 3 + 7 * x4 ** 2 + 3 * x4 + 5 + bias
loss = torch.mean(x5)
print("loss =",loss)
loss.backward()
print("Gradient info:")
print(x0.grad)

Console output:

loss = tensor(9.6161e+08, grad_fn=<MeanBackward0>)
Gradient info:
tensor([[5.4391e+07, 1.7363e+07, 1.3316e+08, 3.6766e+05, 2.4903e+07],
        [5.1519e+08, 1.5781e+08, 3.6020e+05, 1.7468e+08, 1.2224e+08],
        [4.6100e+07, 9.8575e+07, 6.9595e+05, 2.4009e+07, 1.6252e+07]])

2. Using clone(): x4 = x2.clone()
Explanation: Returns a copy of the original tensor without breaking the computation graph, so backpropagation can still compute gradients through it.

import torch
torch.manual_seed(seed=20200910)
x0 = torch.randn(3, 5, requires_grad=True)
bias = torch.randn(3, 5, requires_grad=True)
x3 = torch.randn(3, 5, requires_grad=False)
x4 = torch.randn(3, 5, requires_grad=False)
x1 = 3 * x0 + 8
x2 = 4 * x1 ** 2 + 5 * x1 + 7

# Options for constructing x3 (not used in these examples):
# x3 = x2.clone()
# x3 = x3.copy_(x2)
# x3 = x2.detach()
# x3 = x2.detach_()
# x3 = x2

# Options for constructing x4; the active choice in this example:
x4 = x2.clone()
# x4 = x4.copy_(x2)
# x4 = x2.detach()
# x4 = x2.detach_()
# x4 = x2

# Downstream computation and backward pass:
x5 = 9 * x4 ** 3 + 7 * x4 ** 2 + 3 * x4 + 5 + bias
loss = torch.mean(x5)
print("loss =",loss)
loss.backward()
print("Gradient info:")
print(x0.grad)

Console output:

loss = tensor(9.6161e+08, grad_fn=<MeanBackward0>)
Gradient info:
tensor([[5.4391e+07, 1.7363e+07, 1.3316e+08, 3.6766e+05, 2.4903e+07],
        [5.1519e+08, 1.5781e+08, 3.6020e+05, 1.7468e+08, 1.2224e+08],
        [4.6100e+07, 9.8575e+07, 6.9595e+05, 2.4009e+07, 1.6252e+07]])

3. Using copy_(): x4 = x4.copy_(x2), or simply x4.copy_(x2)
Explanation: Copies the data of x2 into x4 and records the copy in the computation graph, so that during backpropagation, once the gradient with respect to x4 has been computed, backpropagation continues on through to x2.

import torch
torch.manual_seed(seed=20200910)
x0 = torch.randn(3, 5, requires_grad=True)
bias = torch.randn(3, 5, requires_grad=True)
x3 = torch.randn(3, 5, requires_grad=False)
x4 = torch.randn(3, 5, requires_grad=False)
x1 = 3 * x0 + 8
x2 = 4 * x1 ** 2 + 5 * x1 + 7

# Options for constructing x3 (not used in these examples):
# x3 = x2.clone()
# x3 = x3.copy_(x2)
# x3 = x2.detach()
# x3 = x2.detach_()
# x3 = x2

# Options for constructing x4; the active choice in this example:
# x4 = x2.clone()
# x4 = x4.copy_(x2)
# x4.copy_(x3)
x4.copy_(x2)
# x4 = x2.detach()
# x4 = x2.detach_()
# x4 = x2

# Downstream computation and backward pass:
x5 = 9 * x4 ** 3 + 7 * x4 ** 2 + 3 * x4 + 5 + bias
loss = torch.mean(x5)
print("loss =",loss)
loss.backward()
print("Gradient info:")
print(x0.grad)

Console output:

loss = tensor(9.6161e+08, grad_fn=<MeanBackward0>)
Gradient info:
tensor([[5.4391e+07, 1.7363e+07, 1.3316e+08, 3.6766e+05, 2.4903e+07],
        [5.1519e+08, 1.5781e+08, 3.6020e+05, 1.7468e+08, 1.2224e+08],
        [4.6100e+07, 9.8575e+07, 6.9595e+05, 2.4009e+07, 1.6252e+07]])

Note: If x4.copy_(x2) is changed to x4.copy_(x3), backpropagation cannot reach x0, because x3 was created with requires_grad=False and has no connection to x0; the console then prints None for x0.grad:

import torch
torch.manual_seed(seed=20200910)
x0 = torch.randn(3, 5, requires_grad=True)
bias = torch.randn(3, 5, requires_grad=True)
x3 = torch.randn(3, 5, requires_grad=False)
x4 = torch.randn(3, 5, requires_grad=False)
x1 = 3 * x0 + 8
x2 = 4 * x1 ** 2 + 5 * x1 + 7

# Options for constructing x3 (not used in these examples):
# x3 = x2.clone()
# x3 = x3.copy_(x2)
# x3 = x2.detach()
# x3 = x2.detach_()
# x3 = x2

# Options for constructing x4; the active choice in this example:
# x4 = x2.clone()
# x4 = x4.copy_(x2)
x4.copy_(x3)
# x4.copy_(x2)
# x4 = x2.detach()
# x4 = x2.detach_()
# x4 = x2

# Downstream computation and backward pass:
x5 = 9 * x4 ** 3 + 7 * x4 ** 2 + 3 * x4 + 5 + bias
loss = torch.mean(x5)
print("loss =",loss)
loss.backward()
print("Gradient info:")
print(x0.grad)

Console output:

loss = tensor(16.7390, grad_fn=<MeanBackward0>)
Gradient info:
None

4. Using detach(): x4 = x2.detach()
Explanation: Returns a new tensor that shares memory with the original tensor, so changes to one are visible in the other. The returned tensor is cut off from the computation graph, i.e. it is no longer associated with the graph, so no gradients can be backpropagated through it. (In the example below, loss still has a grad_fn because bias requires gradients, but x0.grad stays None because the only path from x0 to the loss runs through the detached x4.)

import torch
torch.manual_seed(seed=20200910)
x0 = torch.randn(3, 5, requires_grad=True)
bias = torch.randn(3, 5, requires_grad=True)
x3 = torch.randn(3, 5, requires_grad=False)
x4 = torch.randn(3, 5, requires_grad=False)
x1 = 3 * x0 + 8
x2 = 4 * x1 ** 2 + 5 * x1 + 7

# Options for constructing x3 (not used in these examples):
# x3 = x2.clone()
# x3 = x3.copy_(x2)
# x3 = x2.detach()
# x3 = x2.detach_()
# x3 = x2

# Options for constructing x4; the active choice in this example:
# x4 = x2.clone()
# x4 = x4.copy_(x2)
# x4.copy_(x3)
# x4.copy_(x2)
x4 = x2.detach()
print('x4.requires_grad:',x4.requires_grad)
# x4 = x2.detach_()
# x4 = x2

# Downstream computation and backward pass:
x5 = 9 * x4 ** 3 + 7 * x4 ** 2 + 3 * x4 + 5 + bias
loss = torch.mean(x5)
print("loss =",loss)
loss.backward()
print("Gradient info:")
print(x0.grad)

Console output:

x4.requires_grad: False
loss = tensor(9.6161e+08, grad_fn=<MeanBackward0>)
Gradient info:
None

5. Using detach_(): x4 = x2.detach_()
Explanation: The in-place version of detach(), with essentially the same behavior.

import torch
torch.manual_seed(seed=20200910)
x0 = torch.randn(3, 5, requires_grad=True)
bias = torch.randn(3, 5, requires_grad=True)
x3 = torch.randn(3, 5, requires_grad=False)
x4 = torch.randn(3, 5, requires_grad=False)
x1 = 3 * x0 + 8
x2 = 4 * x1 ** 2 + 5 * x1 + 7

# Options for constructing x3 (not used in these examples):
# x3 = x2.clone()
# x3 = x3.copy_(x2)
# x3 = x2.detach()
# x3 = x2.detach_()
# x3 = x2

# Options for constructing x4; the active choice in this example:
# x4 = x2.clone()
# x4 = x4.copy_(x2)
# x4.copy_(x3)
# x4.copy_(x2)
# x4 = x2.detach()
# print('x4.requires_grad:',x4.requires_grad)
x4 = x2.detach_()
print('x4.requires_grad:',x4.requires_grad)
# x4 = x2

# Downstream computation and backward pass:
x5 = 9 * x4 ** 3 + 7 * x4 ** 2 + 3 * x4 + 5 + bias
loss = torch.mean(x5)
print("loss =",loss)
loss.backward()
print("Gradient info:")
print(x0.grad)

Console output:

x4.requires_grad: False
loss = tensor(9.6161e+08, grad_fn=<MeanBackward0>)
Gradient info:
None
