Computing FLOPs in Deep Learning

Definition: FLOPs (floating point operations) is the number of floating-point operations a model performs. It measures the amount of computation, so it can be used to gauge the complexity of an algorithm or model. (Note the lowercase "s": FLOPS, in all caps, means floating-point operations per second, a hardware throughput metric.)

FLOPs counts the multiplications and additions executed across the whole network and serves as an indirect measure of an algorithm's cost. The following uses convolution as an example to walk through how FLOPs are computed.

Assumptions:
Input feature map size: $H_{in} \times W_{in} \times C_{in}$
Output feature map size: $H_{out} \times W_{out} \times C_{out}$
Kernel size: $K \times K \times C_{in} \times C_{out}$

1. Convolution FLOPs
Split the convolution into multiplications, additions, and bias terms:
Multiplications: $K \times K \times C_{in} \times H_{out} \times W_{out} \times C_{out}$
Additions: $(K \times K \times C_{in} - 1) \times H_{out} \times W_{out} \times C_{out}$
Bias: $H_{out} \times W_{out} \times C_{out}$

Taking the multiplication FLOPs as an example, the reasoning is:

  1. The cost of one convolution window, i.e. multiplying the kernel against the feature map once: one channel slice takes $K \times K$ multiplications, and the input has $C_{in}$ channels, so one window performs $K \times K \times C_{in}$ multiplications in total.
  2. The cost of one output channel. Padding and stride might seem to matter here, but the output size is already given in the assumptions, so we can compute directly from it. Every value in the output comes from one application of the kernel to the input, and each output channel holds $H_{out} \times W_{out}$ values, so the FLOPs are $K \times K \times C_{in} \times H_{out} \times W_{out}$.
  3. The output has $C_{out}$ channels in all, so a complete convolution costs $K \times K \times C_{in} \times H_{out} \times W_{out} \times C_{out}$ multiplications.

The addition FLOPs follow the same reasoning: each window computes $K \times K \times C_{in}$ products, and summing $n$ numbers requires $n - 1$ additions, so each window contributes $K \times K \times C_{in} - 1$ additions.

The FLOPs of one complete convolution (without bias) are
$FLOPs = K \times K \times C_{in} \times H_{out} \times W_{out} \times C_{out} + (K \times K \times C_{in} - 1) \times H_{out} \times W_{out} \times C_{out}$
$= (2K^{2} \times C_{in} - 1) \times H_{out} \times W_{out} \times C_{out}$

With bias:
$FLOPs = (2K^{2} \times C_{in} - 1) \times H_{out} \times W_{out} \times C_{out} + H_{out} \times W_{out} \times C_{out}$
$= 2K^{2} \times C_{in} \times H_{out} \times W_{out} \times C_{out}$
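
As a quick sanity check, here is a minimal Python sketch of the formulas above (the helper name conv2d_flops is my own, not from any library). It confirms that the count with bias collapses to $2K^{2} \times C_{in} \times H_{out} \times W_{out} \times C_{out}$:

```python
def conv2d_flops(k, c_in, h_out, w_out, c_out, bias=True):
    """FLOPs of one conv layer per the formulas above (batch size 1)."""
    mults = k * k * c_in * h_out * w_out * c_out
    adds = (k * k * c_in - 1) * h_out * w_out * c_out
    flops = mults + adds
    if bias:
        flops += h_out * w_out * c_out  # one extra addition per output element
    return flops

# Example: 3x3 kernel, 64 -> 128 channels, 56x56 output
print(conv2d_flops(3, 64, 56, 56, 128))  # 462422016
print(2 * 3**2 * 64 * 56 * 56 * 128)     # 462422016, the simplified closed form
```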

2. Grouped convolution FLOPs
Grouped convolution differs from standard convolution in that the input channels are split into $g$ groups, so each kernel has only $C_{in}/g$ channels.
The FLOPs with bias are:

$FLOPs = (2K^{2} \times C_{in}/g - 1) \times H_{out} \times W_{out} \times C_{out} + H_{out} \times W_{out} \times C_{out}$
$= 2K^{2} \times C_{in}/g \times H_{out} \times W_{out} \times C_{out}$
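
Under the same assumptions, the sketch below (again a hypothetical helper, mirroring conv2d_flops above) extends the count to grouped convolution; with $g$ groups the cost is $1/g$ of the standard convolution:

```python
def group_conv2d_flops(k, c_in, h_out, w_out, c_out, groups=1, bias=True):
    """FLOPs of a grouped conv layer: each kernel sees only C_in/groups channels."""
    c_in_per_group = c_in // groups
    mults = k * k * c_in_per_group * h_out * w_out * c_out
    adds = (k * k * c_in_per_group - 1) * h_out * w_out * c_out
    flops = mults + adds
    if bias:
        flops += h_out * w_out * c_out  # bias cost is unchanged by grouping
    return flops

# Same layer as before but with 4 groups: exactly 1/4 of the standard cost
print(group_conv2d_flops(3, 64, 56, 56, 128, groups=4))  # 115605504
print(2 * 3**2 * (64 // 4) * 56 * 56 * 128)              # 115605504
```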

3. Tools for computing FLOPs in practice
FLOPs counting tools for PyTorch:
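
Several third-party packages automate these counts. Below is a minimal sketch using thop (installable via pip install thop); note that thop actually reports MACs (multiply-accumulate operations), so doubling its result gives FLOPs in the sense derived above:

```python
import torch
import torch.nn as nn
from thop import profile  # pip install thop

# The same layer as in the worked example: 3x3 kernel, 64 -> 128 channels
model = nn.Conv2d(64, 128, kernel_size=3, padding=1)
x = torch.randn(1, 64, 56, 56)  # padding=1 keeps the output at 56x56

macs, params = profile(model, inputs=(x,))
# 1 MAC = 1 multiplication + 1 addition, roughly 2 FLOPs
print(f"MACs: {macs:.0f}, FLOPs: {2 * macs:.0f}, params: {params:.0f}")
```

fvcore's FlopCountAnalysis and ptflops are common alternatives; they likewise count multiply-adds by default, so check each tool's convention before comparing numbers.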
