Explaining `layers` in `def _make_residual` of ResNet in PyTorch

   def _make_residual(self, block, planes, num_blocks, stride=1):
        # If the stride changes the spatial size, or the channel count differs,
        # a 1x1 convolution projects the identity branch so it can be added to
        # the block's output.
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=True),
            )

        layers = []
        # The first block is the transition layer: it maps self.inplanes input
        # channels to planes * block.expansion output channels (with the
        # downsample branch matching the identity path).
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        # `layers` is a plain Python list. The loop below builds the rest of
        # what _make_residual returns: it starts at 1 because layers[0] was
        # appended above, so the remaining num_blocks - 1 blocks keep the same
        # channel count and need no downsample.
        for _ in range(1, num_blocks):
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)
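To see the function in context, here is a minimal runnable sketch. The `Bottleneck` class below follows torchvision's convention (1x1 reduce, 3x3, 1x1 expand, `expansion = 4`) and is an assumption for illustration; the original post does not show the block definition, only `_make_residual`.

```python
import torch
import torch.nn as nn


class Bottleneck(nn.Module):
    """Hypothetical minimal bottleneck block (torchvision-style sketch)."""
    expansion = 4  # output channels = planes * expansion

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super().__init__()
        # The three convolutions that show up in the visualization:
        # Conv2d[conv1], Conv2d[conv2], Conv2d[conv3].
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * self.expansion,
                               kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        if self.downsample is not None:
            # Project the identity branch to the expanded channel count.
            identity = self.downsample(x)
        return self.relu(out + identity)


class Net(nn.Module):
    """Toy wrapper so _make_residual has the self.inplanes state it relies on."""

    def __init__(self):
        super().__init__()
        self.inplanes = 64
        # Stage of 3 bottleneck blocks: 64 -> 256 channels.
        self.layer1 = self._make_residual(Bottleneck, 64, num_blocks=3)

    def _make_residual(self, block, planes, num_blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=True),
            )
        # First block carries the channel transition (and any stride).
        layers = [block(self.inplanes, planes, stride, downsample)]
        self.inplanes = planes * block.expansion
        # Remaining blocks keep 256 -> 256 and need no downsample.
        for _ in range(1, num_blocks):
            layers.append(block(self.inplanes, planes))
        return nn.Sequential(*layers)


net = Net()
x = torch.randn(1, 64, 56, 56)
y = net.layer1(x)
print(y.shape)  # channels expand from 64 to 64 * 4 = 256
```

Running this confirms the behavior described above: only `layers[0]` holds a `downsample` branch, while the later blocks receive inputs that already have `planes * expansion` channels.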

Therefore, when the network is visualized, each Bottleneck block shows three convolution layers, as in the figure below: Conv2d[conv1], Conv2d[conv2], Conv2d[conv3].

[Figure 1: network visualization showing the three Conv2d layers inside a Bottleneck block]
