The feature-extraction part uses the MATLAB time-frequency transform toolbox; the fault-diagnosis part uses the recently proposed MLP-Mixer classifier, implemented in PyTorch 1.6.
Traditional bearing fault diagnosis applies feature-extraction methods such as the HHT envelope spectrum, the FFT spectrum, or the wavelet energy spectrum directly to the one-dimensional bearing signal, so the transformed features remain one-dimensional. This post instead uses the wavelet time-frequency transform to convert the 1-D bearing signal into 2-D (3-channel true-color) time-frequency images, which are fed to the network at 60*60*3 for fault classification; test-set accuracy reaches 99.5%.
1. Data Preparation
The Case Western Reserve University bearing fault dataset is used (48K drive-end data, 0 HP load), covering 10 fault classes (normal operation treated as a special fault class). After slicing, each sample contains 864 points (reportedly because this spans two fault periods), with 200 samples per class, giving 2000 samples in total. These are split 7:2:1 into training, validation, and test sets.
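As a minimal sketch of this slicing and splitting step (the original preprocessing was done in MATLAB and is not shown; the random signal, the non-overlapping windows, and the function name here are my own assumptions):

```python
# Sketch: cut one fault class into 200 samples of 864 points, then split 7:2:1.
# The random array stands in for one channel of the CWRU 48K drive-end data.
import numpy as np

def make_samples(signal, sample_len=864, num_samples=200):
    """Cut a 1-D signal into non-overlapping windows of sample_len points."""
    return np.stack([signal[i * sample_len:(i + 1) * sample_len]
                     for i in range(num_samples)])

signal = np.random.randn(200 * 864)      # stand-in for one fault class
samples = make_samples(signal)
print(samples.shape)                     # (200, 864)

# 7:2:1 split within the class
n = len(samples)
train, val, test = np.split(samples, [int(0.7 * n), int(0.9 * n)])
print(len(train), len(val), len(test))   # 140 40 20
```

Doing the split per class keeps the 10 classes balanced across the three sets.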
2. Wavelet Time-Frequency Images
This post uses wavelet time-frequency images as the bearing-signal features (honestly, I don't know how to judge the quality of such an image; I adopted it after seeing it used in other papers), and then builds a CNN on top of them. The network still follows the LeNet structure, but with different parameters.
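Since the wavelet transform itself was done with the MATLAB toolbox and is not shown, here is a numpy-only Morlet scalogram sketch; the wavelet choice, the scale range, and the normalization are illustrative assumptions, not the post's actual settings:

```python
# Sketch of a continuous wavelet transform producing a time-frequency matrix.
# The Morlet wavelet and 60 scales are assumptions for illustration only.
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet sampled at times t for a given scale."""
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x**2) / np.sqrt(scale)

def cwt(signal, scales):
    """Convolve the signal with scaled wavelets; one row per scale."""
    n = len(signal)
    t = np.arange(-n // 2, n // 2)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        out[i] = np.convolve(signal, morlet(t, s), mode='same')
    return np.abs(out)  # magnitude scalogram

sig = np.sin(2 * np.pi * 0.05 * np.arange(864))  # stand-in for one 864-point sample
tf = cwt(sig, scales=np.arange(1, 61))           # 60 scales -> 60 rows
print(tf.shape)                                  # (60, 864)
```

In practice the magnitude matrix is then rendered with a colormap and resized to the 60*60*3 true-color image that the classifier consumes.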
3. MLP-Mixer Fault Diagnosis Classification
Reference code: lucidrains/mlp-mixer-pytorch; a Chinese walkthrough is available at https://zhuanlan.zhihu.com/p/372692759
import torch
from torch import nn
from functools import partial
from einops.layers.torch import Rearrange, Reduce
from torchsummary import summary


class PreNormResidual(nn.Module):
    def __init__(self, dim, fn):
        super().__init__()
        self.fn = fn
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        return self.fn(self.norm(x)) + x


def FeedForward(dim, expansion_factor = 4, dropout = 0., dense = nn.Linear):
    return nn.Sequential(
        dense(dim, dim * expansion_factor),
        nn.GELU(),
        nn.Dropout(dropout),
        dense(dim * expansion_factor, dim),
        nn.Dropout(dropout)
    )


def MLPMixer(*, image_size, patch_size, dim, depth, num_classes, expansion_factor = 4, dropout = 0.):
    assert (image_size % patch_size) == 0, 'image must be divisible by patch size'
    num_patches = (image_size // patch_size) ** 2
    chan_first, chan_last = partial(nn.Conv1d, kernel_size = 1), nn.Linear
    return nn.Sequential(
        # 1. Split the image into patches.
        Rearrange('b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1 = patch_size, p2 = patch_size),
        # With image_size 256 and patch_size 32, h = w = 256/32 = 8.
        # An input of (b, 3, 256, 256) is viewed as (b, 3, 8x32, 8x32) and
        # rearranged to (b, 8x8, 32x32x3) = (b, 64, 3072): each image becomes a
        # sequence of 64 patches, each encoded as a 3072-dim vector.
        # 2. Project every patch with a shared fully connected layer to obtain tokens.
        nn.Linear((patch_size ** 2) * 3, dim),
        # Each 3072-dim patch vector is large, so it is first reduced to dim;
        # with dim = 512 this yields (b, 64, 512).
        # 3. Pass through N Mixer layers to mix token and channel information.
        *[nn.Sequential(
            PreNormResidual(dim, FeedForward(num_patches, expansion_factor, dropout, chan_first)),
            PreNormResidual(dim, FeedForward(dim, expansion_factor, dropout, chan_last))
        ) for _ in range(depth)],
        nn.LayerNorm(dim),
        Reduce('b n c -> b c', 'mean'),  # average over the n (patch) dimension
        # 4. Final fully connected layer for class prediction.
        nn.Linear(dim, num_classes)
    )


if __name__ == '__main__':
    model = MLPMixer(
        image_size = 256,
        patch_size = 32,
        dim = 512,
        depth = 2,
        num_classes = 10
    )
    img = torch.randn(1, 3, 256, 256)
    pred = model(img)  # (1, 10)
    print(pred.size())
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = model.to(device)
    summary(model, input_size=(3, 256, 256))
4. Results
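The training script itself is only available through the link below; the following is a minimal, self-contained sketch of one possible training loop, with a tiny linear classifier standing in for the MLP-Mixer and random tensors standing in for the 60*60*3 time-frequency images:

```python
# Hedged training-loop sketch: the stand-in model, batch size, optimizer, and
# learning rate are assumptions, not the author's actual settings.
import torch
from torch import nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 60 * 60, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake batch standing in for eight 60x60x3 time-frequency images
x = torch.randn(8, 3, 60, 60)
y = torch.randint(0, 10, (8,))

for _ in range(5):  # a few optimization steps on one batch
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(loss.item())
```

The real loop would iterate over the 7:2:1 splits from section 1 and track validation accuracy per epoch.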
Full code: https://mianbaoduo.com/o/bread/YZmblZZu