Applying CNNs and Residual Networks to Data Analysis (Code Only)

When neural networks are used for data analysis, the model is usually a plain fully connected network. That can reach a reasonable level of accuracy, but the results are far weaker than what CNNs achieve in image processing. Adding CNN-style building blocks to the model can push the accuracy further.

Modifying the network model

Below is the residual block used in the network (note that the shortcut is concatenated with the conv branch rather than added to it):

import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, input_c, output_c):
        super(Block, self).__init__()
        mid_c = input_c // 2
        # 1x1 conv halves the channel count
        self.conv_1x1 = nn.Conv2d(in_channels=input_c, out_channels=mid_c, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid_c)
        # 1x3 conv; padding=1 pads both dimensions, so the height grows by 2
        self.convh_1x3 = nn.Conv2d(in_channels=mid_c, out_channels=mid_c, kernel_size=(1, 3), padding=1)
        self.bn2 = nn.BatchNorm2d(mid_c)
        # unpadded 3x1 conv shrinks the height back, restoring the input's spatial size
        self.convv_3x1 = nn.Conv2d(in_channels=mid_c, out_channels=mid_c, kernel_size=(3, 1))
        self.bn3 = nn.BatchNorm2d(mid_c)
        # 1x1 conv fuses the concatenated branches into output_c channels
        self.conv_last = nn.Conv2d(in_channels=2 * mid_c, out_channels=output_c, kernel_size=1)
        self.bn4 = nn.BatchNorm2d(output_c)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv_1x1(x)
        x1 = self.bn1(x)
        x1 = self.relu(x1)

        x1 = self.convh_1x3(x1)
        x1 = self.bn2(x1)
        x1 = self.relu(x1)

        x1 = self.convv_3x1(x1)
        x1 = self.bn3(x1)
        x1 = self.relu(x1)

        # shortcut: concatenate the conv branch with the 1x1 projection along the channel dim
        x_cat = torch.cat((x1, x), dim=1)

        x_out = self.conv_last(x_cat)
        x_out = self.bn4(x_out)
        x_out = self.relu(x_out)
        return x_out
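
A quick shape check confirms that the padded 1x3 convolution and the unpadded 3x1 convolution cancel out spatially, so the two branches line up for the concatenation. A minimal smoke test, assuming the Block class above is in scope:

block = Block(20, 16)
x = torch.randn(8, 20, 1, 1)  # mirrors the tabular input format used later in the post
y = block(x)
print(y.shape)  # torch.Size([8, 16, 1, 1]) -- spatial size preserved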

The residual block is just one example of a CNN building block; following the same idea, you can try other structures (GoogLeNet, DenseNet, etc.), as sketched below.
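
For illustration, here is a minimal DenseNet-style block that could be swapped in under the same (input_c, output_c) interface. This is only a sketch of the dense-connectivity idea, not the paper's implementation; the growth_rate, n_layers, and all-1x1-conv choices are my assumptions (1x1 kernels keep it valid on the 1x1 spatial inputs used below):

class DenseBlock(nn.Module):
    # Each layer consumes the concatenation of the input and all earlier outputs.
    def __init__(self, input_c, output_c, growth_rate=16, n_layers=3):
        super(DenseBlock, self).__init__()
        self.layers = nn.ModuleList()
        channels = input_c
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth_rate, kernel_size=1),
                nn.BatchNorm2d(growth_rate),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate
        # transition conv maps the accumulated channels down to output_c
        self.transition = nn.Conv2d(channels, output_c, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return self.transition(torch.cat(features, dim=1))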

The full model's structure is as follows:

class Model_2(nn.Module):
    def __init__(self):
        super(Model_2, self).__init__()
        self.block1 = Block(20, 16)
        self.block2 = Block(16, 32)
        self.block3 = Block(32, 32)
        # self.block4 = Block(32, 32)
        self.block5 = Block(32, 64)
        # self.block6 = Block(64, 64)
        self.block7 = Block(64, 64)
        self.block8 = Block(64, 128)
        self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.line1 = nn.Linear(128, 128)
        self.line2 = nn.Linear(128, 64)
        self.line3 = nn.Linear(64, 64)
        self.line4 = nn.Linear(64, 7)
        self.line5 = nn.Linear(7, 1)

        self._initialize_weights()
        self.dropout = nn.Dropout(p=0.5)
        self.relu = nn.ReLU(inplace=True)

    def _initialize_weights(self):
        # Assumed implementation: one standard scheme (Kaiming for convs,
        # small normal for linear layers); any reasonable init works here.
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                if m.bias is not None:
                    nn.init.zeros_(m.bias)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.ones_(m.weight)
                nn.init.zeros_(m.bias)
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.zeros_(m.bias)

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        x = self.block3(x)
        # x = self.block4(x)
        x = self.block5(x)
        # x = self.block6(x)
        x = self.block7(x)
        x = self.block8(x)
        x = self.avg_pool(x)
        x = x.view(x.size(0), -1)  # flatten to (batch, 128)

        x = self.line1(x)
        x = self.dropout(x)
        x = self.relu(x)

        x = self.line2(x)
        x = self.dropout(x)
        x = self.relu(x)

        x = self.line3(x)
        x = self.dropout(x)
        x = self.relu(x)

        x = self.line4(x)
        x = self.dropout(x)
        x = self.relu(x)

        x = self.line5(x)
        return x

In the first layer, Block(20, 16), the 20 is the size of the input data (the number of features, used here as channels); adjust it to match your own data.

Of course, the input also has to be converted from 2-D data (batch, N) into the CNN input format (batch, N, H, W): the N features become channels, and the spatial dimensions are H = W = 1.

So we add two dimensions to the input so that it meets the convolution's requirements:

x = _data[:, :-1, None, None].to(device)  # drop the last column, add singleton H and W dims
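
Putting the pieces together, here is a minimal end-to-end sketch. The `_data` tensor, the assumption that its last column is the regression target, and the MSE loss/Adam setup are all illustrative; the post does not show the training code:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Model_2().to(device)

_data = torch.randn(32, 21)                # hypothetical batch: 20 features + 1 target column
x = _data[:, :-1, None, None].to(device)   # (32, 20, 1, 1)
y = _data[:, -1:].to(device)               # (32, 1)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

pred = model(x)             # (32, 1)
loss = criterion(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()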

Summary

Those are all the changes made to the network. They can effectively improve the accuracy of data analysis and have worked well in real projects. If you spot problems, feel free to point them out and suggest improvements in the comments.

Wishing everyone success in everything you set out to do!
