nn.AvgPool1d(kernel_size, stride, padding) vs. nn.AdaptiveAvgPool1d(N)

With nn.AvgPool1d you have to work out kernel_size, stride, and padding yourself to hit the output length you want, whereas nn.AdaptiveAvgPool1d feels more like a ready-made module designed by experts, conveniently packaged for muggles like me to use.


1.nn.AvgPool1d

None of the blog posts I found online explain L_out clearly; just read the official documentation.

[Figure 1: nn.AvgPool1d from the official PyTorch documentation. The output length is L_out = floor((L_in + 2 * padding - kernel_size) / stride + 1).]

N - batch size

C - number of channels

L - sequence length

Reference: Tensor data format (N, C, H, W) in deep learning (d_b_, CSDN blog)
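A minimal sketch checking the L_out formula from the official docs against an actual nn.AvgPool1d call (the concrete sizes here are just illustrative choices):

```python
import torch
import torch.nn as nn

# L_out = floor((L_in + 2 * padding - kernel_size) / stride + 1)
pool = nn.AvgPool1d(kernel_size=3, stride=2, padding=1)

x = torch.randn(4, 16, 10)           # (N, C, L_in) = (4, 16, 10)
y = pool(x)

l_out = (10 + 2 * 1 - 3) // 2 + 1    # = 5, matching the formula
print(y.shape)                       # torch.Size([4, 16, 5])
assert y.shape == (4, 16, l_out)
```

Note that you are responsible for picking kernel_size, stride, and padding so that this formula lands on the length you need.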

2.nn.AdaptiveAvgPool1d(N)

AdaptiveAvgPool1d(N) performs one-dimensional average pooling on an input tensor of shape (B, C, L) and produces an output of shape (B, C, N).
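The shape rule can be verified directly: whatever the input length L is, the output length is exactly N, with no kernel/stride arithmetic on the caller's side (the sizes below are arbitrary examples):

```python
import torch
import torch.nn as nn

x = torch.randn(2, 8, 17)                # (B, C, L) with L = 17
for n in (1, 4, 17):
    pool = nn.AdaptiveAvgPool1d(n)
    y = pool(x)
    assert y.shape == (2, 8, n)          # output length is always exactly n
```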

[Figure 2: nn.AdaptiveAvgPool1d from the official PyTorch documentation.]

Source code

class AdaptiveAvgPool1d(_AdaptiveAvgPoolNd):
    r"""Applies a 1D adaptive average pooling over an input signal composed of several input planes.

    The output size is :math:`L_{out}`, for any input size.
    The number of output features is equal to the number of input planes.

    Args:
        output_size: the target output size :math:`L_{out}`.

    Shape:
        - Input: :math:`(N, C, L_{in})` or :math:`(C, L_{in})`.
        - Output: :math:`(N, C, L_{out})` or :math:`(C, L_{out})`, where
          :math:`L_{out}=\text{output\_size}`.

    Examples:
        >>> # target output size of 5
        >>> m = nn.AdaptiveAvgPool1d(5)
        >>> input = torch.randn(1, 64, 8)
        >>> output = m(input)

    """

    output_size: _size_1_t

    def forward(self, input: Tensor) -> Tensor:
        return F.adaptive_avg_pool1d(input, self.output_size)
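As the source above shows, the module simply forwards to F.adaptive_avg_pool1d, which chooses the pooling windows from the input and output lengths. A sketch of the relationship, under the assumption that L_in is divisible by output_size: in that case adaptive pooling reduces to a plain AvgPool1d with kernel_size = stride = L_in // output_size:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 12)                        # L_in = 12

# AdaptiveAvgPool1d(4) on L_in = 12 averages over windows of 12 // 4 = 3
adaptive = nn.AdaptiveAvgPool1d(4)(x)
manual = nn.AvgPool1d(kernel_size=3, stride=3)(x)

assert adaptive.shape == (1, 3, 4)
assert torch.allclose(adaptive, manual)
```

When L_in is not divisible by output_size, the adaptive version instead uses variable-width, overlapping windows so the output still comes out at exactly output_size.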

Source-code analysis

torch.nn.AdaptiveAvgPool analysis — JerryLiu1998's blog (CSDN)

References

AvgPool1d — PyTorch 1.12 documentation

torch.nn.AdaptiveAvgPool1d(N) explained — wang xiang's blog (CSDN)

nn.AdaptiveAvgPool1d() — 饿了就干饭's blog (CSDN)
