Getting Started with One-Class Classification

Our team recently built an anomaly (defect) detection system using a One-Class approach, and it has worked well: there is Python training code and C++ deployment code. What remains is figuring out how to improve the training results.


One-Class here is a GAN-style defect detection scheme that needs only positive samples, plus at most a small number of negative samples, to detect defects.

The training samples are images of manufactured parts; the labels are images of the standard (defect-free) part.

Sample:

[Figure 1: a sample image of a manufactured part]

Label:

[Figure 2: the standard reference image]

A sample passes through a CNN (the encoder) to produce a feature map, which then passes through another CNN (the decoder) to produce a prediction; the prediction is compared against the label to compute the loss, from which the model parameters are learned.
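The encode → decode → loss step can be sketched as follows. This is a minimal illustration, not the actual project code: the tiny two-layer encoder/decoder, the L1 reconstruction loss, and the Adam hyperparameters are all assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of one training step: sample -> encoder -> feature map
# -> decoder -> reconstruction, compared against the "standard part" label.
# The tiny modules, L1 loss, and learning rate are illustrative assumptions.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 4, stride=2, padding=1, bias=False),
    nn.LeakyReLU(0.2, inplace=True),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1, bias=False),
    nn.Tanh(),
)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=2e-4
)
criterion = nn.L1Loss()

sample = torch.rand(1, 3, 64, 64)   # image of a manufactured part
label = torch.rand(1, 3, 64, 64)    # image of the standard (defect-free) part

feature_map = encoder(sample)
reconstruction = decoder(feature_map)
loss = criterion(reconstruction, label)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In the real network printed further below, the same encode/decode pattern is just stacked much deeper, with BatchNorm between layers.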

At inference time, no matter what orientation the input sample has, the network generates the standard part at the same orientation.

Suppose the input is this image:

[Figure 3: input sample image]

The network generates this image:

[Figure 4: generated standard-part image]

Taking the difference between the two yields:

[Figure 5: residual map highlighting the differing region]

so a defect is present.
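The "take the difference, decide defect" step can be sketched numerically. The per-pixel threshold and the defective-area ratio below are arbitrary assumptions, not values from the actual project.

```python
import numpy as np

# Sketch: defect decision from the residual between the input image and the
# generated standard image. The 0.2 pixel threshold and 1% area ratio are
# illustrative assumptions.
def is_defective(sample_img, generated_img, pixel_thresh=0.2, area_ratio=0.01):
    residual = np.abs(sample_img.astype(np.float32) - generated_img.astype(np.float32))
    defect_mask = residual.max(axis=-1) > pixel_thresh  # max over color channels
    return defect_mask.mean() > area_ratio, defect_mask

good = np.zeros((8, 8, 3), dtype=np.float32)   # stand-in for the generated image
bad = good.copy()
bad[2:5, 2:5] = 1.0                            # simulated defect patch
flag, mask = is_defective(bad, good)           # flag is True for this input
```

In practice the residual map is usually smoothed or morphologically filtered first, so single noisy pixels do not trigger a false defect.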

Here is a printout of the network we use:

(encoder): Encoder(
    (main): Sequential(
      (initial-conv-3-48): Conv2d(3, 48, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (initial-relu-48): LeakyReLU(negative_slope=0.2, inplace)
      (pyramid-48-72-conv): Conv2d(48, 72, kernel_size=(4, 4), stride=(1, 2), padding=(2, 1), bias=False)
      (pyramid-72-batchnorm): BatchNorm2d(72, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-72-relu): LeakyReLU(negative_slope=0.2, inplace)
      (pyramid-72-108-conv): Conv2d(72, 108, kernel_size=(4, 4), stride=(1, 2), padding=(2, 1), bias=False)
      (pyramid-108-batchnorm): BatchNorm2d(108, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-108-relu): LeakyReLU(negative_slope=0.2, inplace)
      (pyramid-108-162-conv): Conv2d(108, 162, kernel_size=(4, 4), stride=(1, 2), padding=(2, 1), bias=False)
      (pyramid-162-batchnorm): BatchNorm2d(162, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-162-relu): LeakyReLU(negative_slope=0.2, inplace)
      (pyramid-162-243-conv): Conv2d(162, 243, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (pyramid-243-batchnorm): BatchNorm2d(243, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-243-relu): LeakyReLU(negative_slope=0.2, inplace)
      (pyramid-243-364-conv): Conv2d(243, 364, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (pyramid-364-batchnorm): BatchNorm2d(364, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-364-relu): LeakyReLU(negative_slope=0.2, inplace)
      (pyramid-364-546-conv): Conv2d(364, 546, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (pyramid-546-batchnorm): BatchNorm2d(546, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-546-relu): LeakyReLU(negative_slope=0.2, inplace)
    )
  )
(decoder): Decoder(
    (main): Sequential(
      (initial-546-1440-convt): ConvTranspose2d(546, 1440, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (initial-1440-batchnorm): BatchNorm2d(1440, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (initial-1440-relu): ReLU(inplace)
      (pyramid-1440-960-convt): ConvTranspose2d(1440, 960, kernel_size=(3, 4), stride=(1, 2), padding=(1, 1), bias=False)
      (pyramid-960-batchnorm): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-960-relu): ReLU(inplace)
      (pyramid-960-640-convt): ConvTranspose2d(960, 640, kernel_size=(3, 4), stride=(1, 2), padding=(1, 1), bias=False)
      (pyramid-640-batchnorm): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-640-relu): ReLU(inplace)
      (pyramid-640-426-convt): ConvTranspose2d(640, 426, kernel_size=(3, 4), stride=(1, 2), padding=(1, 1), bias=False)
      (pyramid-426-batchnorm): BatchNorm2d(426, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-426-relu): ReLU(inplace)
      (pyramid-426-284-convt): ConvTranspose2d(426, 284, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (pyramid-284-batchnorm): BatchNorm2d(284, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-284-relu): ReLU(inplace)
      (pyramid-284-189-convt): ConvTranspose2d(284, 189, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (pyramid-189-batchnorm): BatchNorm2d(189, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (pyramid-189-relu): ReLU(inplace)
      (final-189-3-convt): ConvTranspose2d(189, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (final-3-tanh): Tanh()
    )
  )
)
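The printed encoder can be rebuilt layer by layer to see how the mixed strides shape the feature map: the stride-(1, 2) layers with padding (2, 1) actually grow the height by one pixel while halving the width. The 3×256×512 input resolution below is an assumption, since the post does not state the real one.

```python
import torch
import torch.nn as nn

# Rebuild the printed encoder (channels 3->48->72->108->162->243->364->546)
# to inspect the feature-map shape. The 3x256x512 input size is an assumption.
def block(c_in, c_out, stride, padding):
    return [
        nn.Conv2d(c_in, c_out, 4, stride=stride, padding=padding, bias=False),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.2, inplace=True),
    ]

encoder = nn.Sequential(
    nn.Conv2d(3, 48, 4, stride=2, padding=1, bias=False),
    nn.LeakyReLU(0.2, inplace=True),
    *block(48, 72, (1, 2), (2, 1)),    # height +1, width halved
    *block(72, 108, (1, 2), (2, 1)),
    *block(108, 162, (1, 2), (2, 1)),
    *block(162, 243, (2, 2), (1, 1)),  # both dimensions roughly halved
    *block(243, 364, (2, 2), (1, 1)),
    *block(364, 546, (2, 2), (1, 1)),
)

with torch.no_grad():
    out = encoder(torch.rand(1, 3, 256, 512))
# For a 256x512 input this yields a 546-channel feature map of size 16x4.
```

This asymmetry suggests the network was designed for inputs that are wider than they are tall; the decoder's stride-(1, 2) transposed convolutions undo the width halving on the way back up.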
