Over the past decade, rapid advances in digital technology have driven the development of low-cost, visual, non-contact computer vision techniques. Progress in image recognition and machine learning (ML), in particular classifiers that sort images into healthy and diseased categories, has opened new avenues for identifying pests and diseases, and substantial applied research has been carried out in agricultural plant protection. In 2015, after segmenting rice plant images with k-means clustering, Singh et al. trained an SVM classifier to separate healthy and diseased rice leaves into two classes, achieving 82% accuracy. In 2016, Mohan et al. used support vector machine (SVM) and k-nearest neighbor (k-NN) classifiers to identify three rice diseases, reaching 93.33% accuracy with k-NN and 91.1% with SVM. In 2018, Chouhan et al. introduced a radial-basis-function neural network (BRBFNN) for identifying fungal plant diseases, with an average classification accuracy of 86.21%. In 2019, Kumari et al. applied artificial neural networks (ANN) to images of cotton and tomato diseases, achieving an average accuracy of 92.5%. Deep learning (DL), a branch of machine learning and currently its most popular form, models the neural structure of the human brain; one of its representative algorithms, the convolutional neural network (CNN), has become the mainstream method for image recognition and classification. CNNs perform well on both large- and small-scale problems and offer clear advantages for identifying plant pests and diseases. In 2015, Kawasaki et al. used a CNN-based method to identify cucumber leaf diseases, reaching 94.9% accuracy. In 2018, Ferentinos et al. trained several deep learning architectures to recognize 58 plant-disease combinations across 25 plant species, the best of which achieved 99.53% accuracy. In 2019, Priyadharshini et al. proposed a CNN-based deep architecture, a modified LeNet, to identify three maize diseases plus a healthy class, achieving 97.89% accuracy. Most studies in the literature collected their images under laboratory conditions, which still differ considerably from real field scenes, but the results demonstrate that CNN-based algorithms can identify plant pests and diseases with high accuracy. CNN applications in agricultural plant protection have matured to the point that recognition software such as 我知盘中餐, 爱植保, and 识农 has been commercialized, yet the forest pest and disease field remains blank: no dedicated recognition software exists.
Camellia oleifera (oil-tea camellia) is one of the world's four major woody oil crops and a woody edible-oil tree species unique to China; camellia oil is widely regarded as one of the highest-quality edible oils. China's oil-tea plantations now cover 68 million mu (about 4.5 million hectares) and produce 627,000 tons of camellia oil, roughly 80% of the global supply, with a total industry output value of 116 billion yuan, making oil-tea one of the forestry industry's advantageous resources. Vigorously developing the oil-tea industry has far-reaching significance for improving the ecological environment, raising forest farmers' incomes, supporting targeted poverty alleviation, easing pressure on arable land, reducing dependence on edible-oil imports, and safeguarding grain and oil security. In recent years, as planting area and scale have grown, oil-tea pests and diseases have spread ever more widely, causing heavy economic losses for growers. Surveys record more than 160 oil-tea pests and diseases, over 20 diseases and over 130 insect pests; severe outbreaks markedly reduce both the quality and the yield of oil-tea. Because most growers have limited schooling and little knowledge of oil-tea pests and diseases, identification still traditionally relies on experts inspecting plants in the field or taking samples back to the laboratory. This approach is labor-intensive, costly, and hard to scale, and because qualified experts are scarce, growers often apply chemical pesticides blindly instead of using green, environmentally friendly, pollution-free control techniques, which undermines control effectiveness, pollutes the environment, and damages the biodiversity and ecological balance of oil-tea forests. Correct identification is the prerequisite and foundation of control; providing growers with a simple, fast, low-cost, and efficient means of identifying oil-tea pests and diseases, one that can also guide precise control, therefore has great practical significance.
Building on an analysis of existing image recognition algorithms, this paper designs a new lightweight deep mobile neural network architecture and applies it, for the first time in forest pest and disease control, to the automatic recognition of oil-tea pest and disease images. This work lays the groundwork for oil-tea recognition software that can guide growers in timely, precise control, support the vigorous growth of the oil-tea industry, prevent pest and disease outbreaks, and contribute to rural revitalization.
The MS-DNet architecture proposed in this paper is constructed as follows. It borrows the DenseNet design and retains the composition and structure of its transition layers, each consisting of batch normalization (BN), a ReLU activation, a 1×1 convolution, and a 2×2 average pooling layer. The standard convolutions in each dense block are replaced by depthwise separable convolutions (DSC), which compress the model and use its parameters efficiently. Consider an input feature map of size $D_f \times D_f$ with $N$ channels, convolved by a kernel of size $D_k \times D_k$ producing $M$ output channels. A standard convolution costs $D_f^2 N M D_k^2$ operations, while a DSC costs $D_f^2 N (D_k^2 + M)$; the ratio is $\frac{D_k^2 + M}{M D_k^2} = \frac{1}{M} + \frac{1}{D_k^2}$, so when the number of output channels $M$ is large and the DSC uses a 3×3 kernel, the cost is roughly $1/9$ that of the standard convolution. To exploit the interdependencies between channels and improve performance, a squeeze-and-excitation (SE) module is merged into each dense block: by learning the importance of every channel, it strengthens useful features while suppressing unwanted noise. MS-DNet can thus maximally reuse inter-channel dependencies and dynamically recalibrate features across channels.
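To make the arithmetic concrete, the sketch below first checks the cost ratio numerically and then shows, in Keras, minimal versions of the two building blocks described above: a dense-block layer whose standard convolution is replaced by a depthwise separable convolution, and an SE module. This is an illustrative sketch under assumed sizes (the feature-map dimensions, growth rate, and reduction factor are example values), not the exact MS-DNet implementation:
# Numeric check of the DSC cost ratio (illustrative sizes, not MS-DNet's actual dimensions)
Df, N, M, Dk = 56, 64, 128, 3
standard_cost = Df**2 * N * M * Dk**2
dsc_cost = Df**2 * N * (Dk**2 + M)
print(dsc_cost / standard_cost)   # = 1/M + 1/Dk**2 ≈ 0.119, i.e. about 1/9 for a 3x3 kernel

from tensorflow.keras import layers

def dsc_dense_layer(x, growth_rate):
    # DenseNet connectivity with the standard conv replaced by a depthwise separable conv
    y = layers.BatchNormalization()(x)
    y = layers.Activation('relu')(y)
    y = layers.SeparableConv2D(growth_rate, 3, padding='same', use_bias=False)(y)
    return layers.Concatenate()([x, y])   # concatenate new features onto the block input

def se_module(x, reduction=4):
    # Squeeze-and-excitation: learn one weight per channel, then rescale the feature map
    c = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)             # squeeze: global per-channel statistics
    s = layers.Dense(c // reduction, activation='relu')(s)
    s = layers.Dense(c, activation='sigmoid')(s)       # excitation: per-channel weights in (0, 1)
    s = layers.Reshape((1, 1, c))(s)
    return layers.Multiply()([x, s])                   # recalibrate the channels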
To validate MS-DNet, experiments used the internationally adopted PlantVillage dataset of 54,306 leaf images together with 1,000 plant disease images photographed in the field, from which 16 disease classes were selected for analysis. For comparison, six influential lightweight CNNs, MobileNet-V1, MobileNet-V2, NASNetMobile, EfficientNet-B0, DenseNet, and PeleeNet, served as baselines. The results show that MS-DNet identifies multiple plant disease types effectively and performs on par with or better than these state-of-the-art lightweight mobile networks. An automatic recognition system built on MS-DNet combines small model size, modest image acquisition requirements, high accuracy, and fast execution, making it well suited to complex field environments, and the model's recognition ability can keep improving as more image data are collected. Such a system would break the bottleneck of relying on a small number of experts to identify plant pests and diseases, with far-reaching practical significance for pest and disease control by forest farmers.
A total of 980 oil-tea pest and disease images were photographed in the field: 68 of diseases and 912 of insect pests, covering, as confirmed by expert identification, 8 diseases and 38 insect pests. Folders named and ordered by pest or disease were created to store the images.
Image preprocessing, using oil-tea soft rot (油茶软腐病) as the example: create a training folder (e.g., X:\油茶\病害\1油茶软腐病\train) and a test folder (e.g., X:\油茶\病害\1油茶软腐病\test). With Photoshop, convert all soft rot images to RGB mode, resize them uniformly to 320×320 pixels, and save them as JPG files named 1, 2, … into the training folder. Randomly copy 4 images from the training folder into the test folder. Use the annotation tool LabelImg to mark the lesion area in every training image and save the resulting XML files into the training folder (Figure 11 shows an annotation example). Repeat these steps to preprocess all oil-tea pest and disease images; a scripted version of the resizing and splitting is sketched below.
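For larger batches of images, the resizing and train/test split described above can be scripted rather than done by hand in Photoshop. The sketch below assumes the folder layout given in the text; the raw-photo source folder is hypothetical, and the LabelImg annotation step itself remains manual:
# Minimal preprocessing sketch, assuming the folder layout described above.
# The 'raw' source folder is hypothetical; the train/test paths follow the text.
import os, random, shutil
from PIL import Image

src_dir   = r'X:\油茶\病害\1油茶软腐病\raw'     # hypothetical folder of raw field photos
train_dir = r'X:\油茶\病害\1油茶软腐病\train'
test_dir  = r'X:\油茶\病害\1油茶软腐病\test'
os.makedirs(train_dir, exist_ok=True)
os.makedirs(test_dir, exist_ok=True)

for i, name in enumerate(sorted(os.listdir(src_dir)), start=1):
    img = Image.open(os.path.join(src_dir, name)).convert('RGB')  # force RGB mode
    img = img.resize((320, 320))                                  # uniform 320x320 pixels
    img.save(os.path.join(train_dir, '%d.jpg' % i), 'JPEG')       # sequential JPG file names

# copy 4 randomly chosen training images into the test folder
for name in random.sample(os.listdir(train_dir), 4):
    shutil.copy(os.path.join(train_dir, name), test_dir)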
Following the MS-DNet model design, recognition code for oil-tea pest and disease images was written in Python to train on the images in the training folders and to recognize, and report results for, the images in the test folders.
The code is as follows:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import glob
import math
import time
import cv2
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K   # needed by focal_loss below
from tensorflow.keras import layers, optimizers, regularizers
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.layers import (Input, Dense, Dropout, BatchNormalization, Conv2D,
                                     MaxPooling2D, AveragePooling2D, GlobalAveragePooling2D,
                                     concatenate, Activation, ZeroPadding2D)
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing import image
from matplotlib import pyplot as plt
from skimage import io, transform
#from sklearn import svm
#from sklearn.ensemble import RandomForestClassifier
now = time.strftime("%Y-%m-%d_%H-%M-%S", time.localtime())  # timestamp for log directories
os.chdir("C:\\Users\\45947\\PycharmProjects\\pythonProject")  # project working directory
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # use the first GPU
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand
def focal_loss(gamma=2.):  # multi-class focal loss, without the alpha balancing term
    def focal_loss_fixed(y_true, y_pred):
        # pt_1: predicted probability at the positions of the true class
        pt_1 = tf.where(tf.equal(y_true, 1), y_pred, tf.ones_like(y_pred))
        return -K.sum(K.pow(1. - pt_1, gamma) * K.log(pt_1))
    return focal_loss_fixed
def Conv2d_BN(x, nb_filter, kernel_size, strides=(1, 1), padding='same', name=None):
    if name is not None:
        bn_name = name + '_bn'
        conv_name = name + '_conv'
    else:
        bn_name = None
        conv_name = None
    x = Conv2D(nb_filter, kernel_size, padding=padding, strides=strides, activation='relu', name=conv_name)(x)
    x = BatchNormalization(axis=3, name=bn_name)(x)
    return x
def Conv_Block(inpt, nb_filter, kernel_size, strides=(1, 1), with_conv_shortcut=False):
    x = Conv2d_BN(inpt, nb_filter=nb_filter[0], kernel_size=(1, 1), strides=strides, padding='same')
    x = Conv2d_BN(x, nb_filter=nb_filter[1], kernel_size=(3, 3), padding='same')
    x = Conv2d_BN(x, nb_filter=nb_filter[2], kernel_size=(1, 1), padding='same')
    if with_conv_shortcut:
        shortcut = Conv2d_BN(inpt, nb_filter=nb_filter[2], strides=strides, kernel_size=kernel_size)
        x = layers.add([x, shortcut])  # residual shortcut connection
        return x
    else:
        x = layers.add([x, inpt])
        return x
# Dataset configuration
batch_size = 64
train_dir = "F:\\油茶\\病害\\1油茶软腐病\\train"  # training set
validation_dir = "F:\\油茶\\病害\\1油茶软腐病\\test"  # test set
img_size = (224, 224)  # input image size
epochs = 60
#MODEL_INIT = './obj_reco/tst_model_final1.h5'
MODEL_PATH = './obj_reco/tst_model_final.h5'
board_name1 = './obj_reco/stage1/' + now + '/'
#board_name2 = './obj_reco/stage2/' + now + '/'
#board_name3 = './obj_reco/stage2/' + now + '/'
nb_train_samples = len(glob.glob(train_dir + '/*/*.*'))  # number of training samples
nb_validation_samples = len(glob.glob(validation_dir + '/*/*.*'))  # number of validation samples
# Create the pre-trained model
IMG_SHAPE = (224, 224, 3)
# Define the model
classes = sorted([o for o in os.listdir(train_dir)])  # class names from the sub-folder names
#--------- MobileNet + SE transfer learning --------------------------------------------
from tensorflow.keras.applications.resnet50 import preprocess_input  # Caffe-style RGB->BGR, mean subtraction
class SeBlock(keras.layers.Layer):
    def __init__(self, reduction=4, **kwargs):
        super(SeBlock, self).__init__(**kwargs)
        self.reduction = reduction
    def build(self, input_shape):  # no weights of its own; the sub-layers build themselves
        pass
    def call(self, inputs):
        x = keras.layers.GlobalAveragePooling2D()(inputs)
        x = keras.layers.Dense(int(x.shape[-1]) // self.reduction, use_bias=False, activation=keras.activations.relu)(x)
        x = keras.layers.Dense(int(inputs.shape[-1]), use_bias=False, activation=keras.activations.hard_sigmoid)(x)
        x = keras.layers.Reshape((1, 1, int(inputs.shape[-1])))(x)  # align shape for broadcasting
        return keras.layers.Multiply()([inputs, x])  # weight each channel
def pretrained_path_to_tensor(img_path):
    # load an RGB image as a PIL.Image.Image
    img = image.load_img(img_path, target_size=(224, 224))
    # convert to a 3D tensor with shape (224, 224, 3)
    x = image.img_to_array(img)
    # add a batch axis: (1, 224, 224, 3)
    x = np.expand_dims(x, axis=0)
    # convert RGB -> BGR, subtract the ImageNet mean pixel, and return the 4D tensor
    return preprocess_input(x)
#def get_ResNet():
#base_model = keras.applications.mobilenet.MobileNet(input_shape=IMG_SHAPE, include_top=False,
#                                                    weights='F:/06_项目资料/09_实体店图片/00_备份/KerasWeights/mobilenet_1_0_224_tf_no_top.h5')
base_model = keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=False,
                                            weights='C:/Users/45947/PycharmProjects/pythonProject/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_224_no_top.h5')
for layer in base_model.layers:  # freeze the pre-trained backbone
    layer.trainable = False
# Add a feature-refinement convolution and global average pooling
x = base_model.output
x = Conv2d_BN(x, nb_filter=1024, kernel_size=(5, 5), strides=(1, 1), padding='same')
#x = SeBlock()(x)
x = GlobalAveragePooling2D()(x)
# Add fully connected layers
x = Dense(1024, activation='relu', kernel_regularizer=regularizers.l1(0.0001))(x)
x = BatchNormalization()(x)
x = Dropout(0.4)(x)
x = Dense(512, activation='relu', kernel_regularizer=regularizers.l1(0.0001))(x)
x = BatchNormalization()(x)
x = Dropout(0.4)(x)
# Softmax output layer for classification, one unit per class
predictions = Dense(len(classes), activation='softmax', kernel_regularizer=regularizers.l2(0.01))(x)
# Combine the pre-trained backbone with the new layers
model = Model(inputs=base_model.input, outputs=predictions)
#model.load_weights('./obj_reco/tst_model_final.h5')
# Compile the model
model.compile(loss=[focal_loss(gamma=2)], optimizer=optimizers.Adadelta(), metrics=['accuracy'])
'''
#--------- MobileNetV2 transfer learning (alternative) ---------------------------------
# Create the pre-trained model
IMG_SHAPE = (224, 224, 3)
base_model = keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=False,
                                            weights='./mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_224_no_top.h5')
for layer in base_model.layers:
    layer.trainable = False
# Add a global average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# Add a fully connected layer
x = Dense(512, activation='relu', kernel_regularizer=regularizers.l1(0.0001))(x)
x = BatchNormalization()(x)
x = Dropout(0.4)(x)
# Softmax output layer for classification
predictions = Dense(len(classes), activation='softmax', kernel_regularizer=regularizers.l2(0.01))(x)
# Combine the pre-trained backbone with the new layers
model = Model(inputs=base_model.input, outputs=predictions)
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer=optimizers.Adadelta(), metrics=['accuracy'])
#--------- EfficientNet transfer learning (alternative) --------------------------------
import efficientnet.keras as efn
# Create the pre-trained model
IMG_SHAPE = (224, 224, 3)
model_input = Input(shape=(224, 224, 3))
base_model = efn.EfficientNetB0(input_shape=IMG_SHAPE, input_tensor=model_input, include_top=False,
                                weights='F:/06_项目资料/09_实体店图片/00_备份/KerasWeights/efficientnet-b0_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5')
# Freeze all base_model layers so the bottleneck features are extracted unchanged
for layer in base_model.layers:
    layer.trainable = False
# Add a global average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# Softmax output layer
predictions = Dense(len(classes), activation='softmax')(x)
# Combine the pre-trained backbone with the new layers
model = Model(inputs=base_model.input, outputs=predictions)
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
'''
# Data generators; augmentation for training (mean is applied only with featurewise_center=True)
train_datagen = ImageDataGenerator(featurewise_center=True, rotation_range=30., shear_range=0.2,
                                   zoom_range=0.2, horizontal_flip=True)
train_datagen.mean = np.array([103.939, 116.779, 123.68], dtype=np.float32).reshape((1, 1, 3))  # subtract the ImageNet BGR mean
train_data = train_datagen.flow_from_directory(train_dir, target_size=img_size, classes=classes)
validation_datagen = ImageDataGenerator(featurewise_center=True)
validation_datagen.mean = np.array([103.939, 116.779, 123.68], dtype=np.float32).reshape((1, 1, 3))
validation_data = validation_datagen.flow_from_directory(validation_dir, target_size=img_size, classes=classes)
model_checkpoint1 = ModelCheckpoint(filepath=MODEL_PATH, save_best_only=True, monitor='val_accuracy', mode='max')
board1 = TensorBoard(log_dir=board_name1,
                     histogram_freq=0,
                     write_graph=True,
                     write_images=True)
callback_list1 = [model_checkpoint1, board1]
# Stage 1: train only the newly added layers
model.fit_generator(train_data, steps_per_epoch=math.ceil(nb_train_samples / float(batch_size)),
                    epochs=epochs,
                    validation_steps=math.ceil(nb_validation_samples / float(batch_size)),
                    validation_data=validation_data,
                    callbacks=callback_list1, verbose=2)
#--------------- Stage 2: fine-tune the whole network
model_checkpoint2 = ModelCheckpoint(filepath=MODEL_PATH, monitor='val_accuracy')
board2 = TensorBoard(log_dir=board_name1,
                     histogram_freq=0,
                     write_graph=True,
                     write_images=True)
callback_list2 = [model_checkpoint2, board2]
model.load_weights(MODEL_PATH)
for layer in model.layers:  # unfreeze all layers
    layer.trainable = True
model.compile(optimizer=optimizers.SGD(learning_rate=0.0001), loss=[focal_loss(gamma=2)],
              metrics=['accuracy'])  # loss='categorical_crossentropy'
model.fit_generator(train_data, steps_per_epoch=math.ceil(nb_train_samples / float(batch_size)), epochs=epochs,
                    validation_data=validation_data, validation_steps=math.ceil(nb_validation_samples / float(batch_size)),
                    callbacks=callback_list2, verbose=2)
def get_class_weight(d):
    # Count images per class sub-folder, then weight rare classes more heavily (log scaling)
    white_list_formats = {'png', 'jpg', 'jpeg', 'bmp'}
    class_number = dict()
    dirs = sorted([o for o in os.listdir(d) if os.path.isdir(os.path.join(d, o))])
    k = 0
    for class_name in dirs:
        class_number[k] = 0
        iglob_iter = glob.iglob(os.path.join(d, class_name, '*.*'))
        for i in iglob_iter:
            _, ext = os.path.splitext(i)
            if ext[1:] in white_list_formats:
                class_number[k] += 1
        k += 1
    total = np.sum(list(class_number.values()))
    max_samples = np.max(list(class_number.values()))
    mu = 1. / (total / float(max_samples))
    keys = class_number.keys()
    class_weight = dict()
    for key in keys:
        score = math.log(mu * total / float(class_number[key]))
        class_weight[key] = score if score > 1. else 1.
    return class_weight
class_weight = get_class_weight(train_dir)  # weight each class by its share of the dataset
early_stopping = EarlyStopping(verbose=1, patience=30, monitor='val_loss')  # stop if val_loss has not improved for 30 epochs
model_checkpoint = ModelCheckpoint(filepath=MODEL_PATH, verbose=1, save_best_only=True, monitor='val_loss')
callbacks = [early_stopping, model_checkpoint]
model.fit_generator(train_data, steps_per_epoch=math.ceil(nb_train_samples / float(batch_size)), epochs=epochs,
                    validation_data=validation_data, validation_steps=math.ceil(nb_validation_samples / float(batch_size)),
                    callbacks=callbacks, class_weight=class_weight)
print('Training is finished!')
#=== Load the trained model, predict, and show the results
#MODEL_INIT = './obj_reco/tst_model_final.h5'
MODEL_PATH = 'C:/Users/45947/PycharmProjects/pythonProject/obj_reco/tst_model_final1.h5'
model.load_weights(MODEL_PATH)
#=== Input pictures ====================================================================
corr_num = 0
prob_list = []
for ii in range(1, 10):
    imgname = str(ii)  # test images are named 1.jpg, 2.jpg, ...
    # np.fromfile + cv2.imdecode reads paths containing non-ASCII characters on Windows
    img_bgr = cv2.imdecode(np.fromfile(r'F:/油茶/病害/1油茶软腐病/test/' + imgname + '.jpg',
                                       dtype=np.uint8), -1)
    img_bgr = cv2.resize(img_bgr, (224, 224))
    sp = img_bgr.shape
    sz1 = sp[0]  # image height (rows)
    sz2 = sp[1]  # image width (columns)
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    sizes = np.array([[0, 0, sz2, sz1]])  # a single box covering the whole image
    for (x, y, width, height) in sizes:
        img = img_bgr.reshape((1, 224, 224, 3))
        prd_res = model.predict(img)  # class probabilities
        result = np.argmax(prd_res, axis=1)  # predicted class index
        prd_clss = classes[int(result[0])]
        pred_prob = prd_res[0, result]  # probability of the predicted class
        cv2.rectangle(img_bgr, (x, y), (x + width, y + height), (0, 255, 0), 2)
        font = cv2.FONT_HERSHEY_SIMPLEX
        cv2.putText(img_bgr, str(pred_prob[0]), (x, y + height // 2), font, 0.65, (0, 255, 0), 2)
        print("Predict_category = %s" % prd_clss)
        res_prob = np.insert(prd_res, 0, result + 1)
        prob_list.append(str(res_prob))  # append this result to the txt log
        file = open('pred_prob.txt', 'w')  # rewrite the full log each iteration
        file.write('\n'.join(prob_list))
        file.close()
        if int(result[0]) + 1 == ii:  # the file name ii encodes the true class number
            corr_num = corr_num + 1
        print("correct num=")
        print(corr_num)
    img_1 = img_bgr[:, :, [2, 1, 0]]  # BGR -> RGB for display
    # img_1 = transform.resize(img_1, (350, 350))
    io.imshow(img_1)
    io.show()
    # io.imsave('img_copy.jpg', img_1, dpi=(300.0, 300.0))
The results show that MS-DNet achieves an average recognition accuracy of 96% on oil-tea pest and disease images. Using the method above, training sets for the 160-plus oil-tea pests and diseases can be built up and refined step by step, and the recognition ability of the MS-DNet model can keep improving as more and more images are collected.
This paper examined today's popular mobile CNNs in depth. Because classic deep CNNs carry enormous numbers of parameters, they are too large to deploy on portable devices; weighing computing resources, memory efficiency, and the performance of current mobile networks, we therefore designed MS-DNet, a new lightweight deep neural network architecture with a small model size and fast execution that identifies plant pests and diseases accurately and outperforms existing state-of-the-art mobile networks. The automatic recognition program built on MS-DNet identifies oil-tea pest and disease images with high accuracy and is well suited for developing an automatic oil-tea recognition system. Within domestic forest pest and disease control, this is the first application of computer image analysis to automatically identify forest pest and disease types, a major breakthrough for artificial intelligence (AI) technology in forest protection that can greatly raise the efficiency and quality of control work and push it toward intelligent, information-based practice.