Error log: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

Full error message:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2, 256, 20, 20]], which is output 0 of struct torch::autograd::CopySlices, is at version 3; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Cause: I wanted to build my own attention-like module at the lateral connections of the FPN, to enhance the features coming in from the left (bottom-up) pathway.

Analysis: Of everything that came up while running the code, this error was the last and the hardest to resolve. All I could read from the message was that it was caused by my modifying the features (and how could you enhance a feature without modifying it?). What the error actually means is that an in-place operation changed a tensor that autograd had saved for the backward pass, so its version counter no longer matched the saved one; the CopySlices in the message points at a slice assignment, most likely the enhancement loop writing the enhanced values back into the original feature tensor. In the end I simply deep-copied the data into an independent tensor, modified that copy, and returned it instead. (I am not sure whether this has any effect on how the parameters are learned.)
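For reference, here is a minimal, self-contained sketch (not the original FPN code; the tensor shapes and the sigmoid stand-in are assumptions of mine) that reproduces the same class of error: an op whose backward needs its saved output, followed by slice-by-slice in-place writes into that output. It also shows where the torch.autograd.set_detect_anomaly(True) hint from the error message goes.

import torch

torch.autograd.set_detect_anomaly(True)  # the hint from the error message: report the forward op at fault

x = torch.randn(2, 4, 5, 5, requires_grad=True)
feat = torch.sigmoid(x)            # sigmoid saves its output for the backward pass
for i in range(feat.shape[0]):     # writing enhanced values back into the same tensor,
    feat[i] = feat[i] * 0.5        # slice by slice, is an in-place op (CopySlices) that
                                   # bumps the tensor's version counter

feat.sum().backward()              # RuntimeError: one of the variables needed for gradient
                                   # computation has been modified by an inplace operation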

Code for the fix:

def enhance_feature(x):
    """
    Feature enhancement at the FPN lateral connection.
    Args:
        x: the feature tensor to enhance
    Returns: the enhanced feature tensor
    """
    en_feature = x.clone().detach()  # deep copy of x that shares neither storage nor gradient history
    saliencyMaps = process_feature(en_feature)  # compute the attention (saliency) maps
    for i, saliencyMap in enumerate(saliencyMaps):
        en_feature[i] = x[i] * saliencyMap  # write the enhanced slice into the copy, not into x

    return en_feature  # return the enhanced features
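About the worry in the analysis: with the clone().detach() version, gradients still reach x through the x[i] factor in the product, but the attention-map branch is cut off from x because process_feature only ever sees detached data. If full gradient flow is wanted, an out-of-place variant that never writes into any existing tensor avoids the error as well. The sketch below is only an illustration: the name enhance_feature_no_copy is hypothetical, and it assumes the author's process_feature helper is differentiable and returns one saliency map per sample in the batch.

import torch

def enhance_feature_no_copy(x):
    """Out-of-place sketch: build a new tensor instead of writing into an
    existing one, so nothing autograd has saved is ever modified.
    Assumes process_feature returns one saliency map per sample in the batch."""
    saliencyMaps = process_feature(x)  # maps computed from the original (non-detached) features
    enhanced = [x[i] * saliencyMap for i, saliencyMap in enumerate(saliencyMaps)]
    return torch.stack(enhanced, dim=0)  # stack allocates a fresh tensor; no in-place writes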
