Originally published on the WeChat public account 「3D视觉工坊」: Training Mask R-CNN on Your Own Dataset
Preface
I've recently become fascinated with Mask R-CNN, partly because my work calls for it, so I dug into its source code and trained it on my own data.
This post draws on: https://blog.csdn.net/disiwei1012/article/details/79928679#commentsedit
Ah well, being an engineering student, detecting workpieces is what I get to do; nothing fancier, haha.
Based on the open-source Mask R-CNN project:
https://github.com/matterport/Mask_RCNN
Image annotation uses the open-source labelme: https://github.com/wkentaro/labelme
Training setup:
Win10 + GTX 1060 + CUDA 9.1 + cuDNN 7 + tensorflow-gpu 1.6.0 + Keras 2.1.6; 140 images, 3 classes, about 1 hour of training
For how to use labelme, see:
https://blog.csdn.net/shwan_ma/article/details/77823281
For the Mask R-CNN and Faster R-CNN algorithms, see:
https://blog.csdn.net/linolzhang/article/details/71774168
https://blog.csdn.net/lk123400/article/details/54343550
Here are the four folders I created; let me go through them one by one.
1. pic
The training images, 700 in total.
2. json
The files generated by annotating the training images with labelme.
3. labelme_json
The data produced from the .json files. Each file is converted with labelme_json_to_dataset <name>.json, which presumes labelme is correctly installed and activated. Doing this file by file for many images is tedious, so here is a tool that converts every .json file in a directory in one go: https://download.csdn.net/download/qq_29462849/10540381
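If you prefer a few lines of Python over the download, the batch conversion can also be scripted directly. This is a minimal sketch, assuming labelme is installed so that labelme_json_to_dataset is on the PATH; the folder name train_data/json is illustrative, not from the original project.

import os
import subprocess

json_dir = "train_data/json"  # illustrative; point this at your .json folder

for name in os.listdir(json_dir):
    if name.endswith(".json"):
        # Same effect as manually running: labelme_json_to_dataset <name>.json
        subprocess.run(["labelme_json_to_dataset", os.path.join(json_dir, name)],
                       check=True)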
4. cv2_mask
The mask label label.png generated by labelme is stored as a 16-bit image, while OpenCV reads 8-bit by default, so the 16-bit files must be converted to 8-bit. This can be done with a small C++ program; for the code see this post: http://blog.csdn.net/l297969586/article/details/79154150
The result looks completely black, but don't be alarmed; that's normal.
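If you would rather skip the C++ step, here is a sketch of the same 16-bit to 8-bit conversion in Python with PIL and NumPy, following the folder layout above. Since the stored values are tiny instance indices (1, 2, 3, ...), casting to uint8 loses nothing; those tiny values are also why the image looks black.

import os
import numpy as np
from PIL import Image

labelme_json_dir = "train_data/labelme_json"  # contains <name>_json/ folders
out_dir = "train_data/cv2_mask"
os.makedirs(out_dir, exist_ok=True)

for sub in os.listdir(labelme_json_dir):
    label_path = os.path.join(labelme_json_dir, sub, "label.png")
    if os.path.exists(label_path):
        label16 = np.array(Image.open(label_path))  # 16-bit instance indices
        label8 = label16.astype(np.uint8)           # safe: indices are < 256
        name = sub[:-5] if sub.endswith("_json") else sub  # strip "_json"
        Image.fromarray(label8).save(os.path.join(out_dir, name + ".png"))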
Source code
Running this code requires pycocotools, which is notoriously painful to install on Windows: some machines take it effortlessly, others resist even after a system reinstall. For installing pycocotools on Windows, see:
https://blog.csdn.net/chixia1785/article/details/80040172
https://blog.csdn.net/gxiaoyaya/article/details/78363391
The code published on GitHub is notebook (.ipynb) based; I converted it directly to .py files. As a first test, the model pretrained on the COCO dataset can run detection on a live webcam feed:
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt
import cv2
import time

# Root directory of the project
ROOT_DIR = os.path.abspath("../")

# Import Mask RCNN
sys.path.append(ROOT_DIR)  # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize

# Import COCO config
sys.path.append(os.path.join(ROOT_DIR, "samples/coco/"))  # To find local version
import coco

# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")

# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(MODEL_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)
    print("cuiwei***********************")

# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")


class InferenceConfig(coco.CocoConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1


config = InferenceConfig()
config.display()

# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)

# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)

# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
               'bus', 'train', 'truck', 'boat', 'traffic light',
               'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
               'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
               'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
               'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
               'kite', 'baseball bat', 'baseball glove', 'skateboard',
               'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
               'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
               'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
               'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
               'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
               'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
               'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
               'teddy bear', 'hair drier', 'toothbrush']

# Load a random image from the images folder
#file_names = next(os.walk(IMAGE_DIR))[2]
#image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))

cap = cv2.VideoCapture(0)
while(1):
    # get a frame
    ret, frame = cap.read()
    # show a frame
    start = time.clock()
    results = model.detect([frame], verbose=1)
    r = results[0]
    #cv2.imshow("capture", frame)
    visualize.display_instances(frame, r['rois'], r['masks'], r['class_ids'],
                                class_names, r['scores'])
    end = time.clock()
    print(end - start)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

#image = cv2.imread("C:\\Users\\18301\\Desktop\\Mask_RCNN-master\\images\\9.jpg")
## Run detection
#results = model.detect([image], verbose=1)
## Visualize results
#r = results[0]
#visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
#                            class_names, r['scores'])
Pretrained Mask R-CNN weights can be downloaded from:
https://github.com/matterport/Mask_RCNN/releases; after downloading, just configure the path. The training code follows:
# -*- coding: utf-8 -*-
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf
from mrcnn.config import Config
#import utils
from mrcnn import model as modellib, utils
from mrcnn import visualize
import yaml
from mrcnn.model import log
from PIL import Image

#os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Root directory of the project
ROOT_DIR = os.getcwd()
#ROOT_DIR = os.path.abspath("../")

# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")

iter_num = 0

# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)


class ShapesConfig(Config):
    """Configuration for training on the toy shapes dataset.
    Derives from the base Config class and overrides values specific
    to the toy shapes dataset.
    """
    # Give the configuration a recognizable name
    NAME = "shapes"

    # Train on 1 GPU and 2 images per GPU. Batch size is 2 (GPUs * images/GPU).
    GPU_COUNT = 1
    IMAGES_PER_GPU = 2

    # Number of classes (including background)
    NUM_CLASSES = 1 + 3  # background + 3 shapes

    # Use small images for faster training. Set the limits of the small side
    # the large side, and that determines the image shape.
    IMAGE_MIN_DIM = 320
    IMAGE_MAX_DIM = 384

    # Use smaller anchors because our image and objects are small
    RPN_ANCHOR_SCALES = (8 * 6, 16 * 6, 32 * 6, 64 * 6, 128 * 6)  # anchor side in pixels

    # Reduce training ROIs per image because the images are small and have
    # few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
    TRAIN_ROIS_PER_IMAGE = 100

    # Use a small epoch since the data is simple
    STEPS_PER_EPOCH = 100

    # use small validation steps since the epoch is small
    VALIDATION_STEPS = 50


config = ShapesConfig()
config.display()


class DrugDataset(utils.Dataset):
    # Get the number of instances (objects) in the image
    def get_obj_index(self, image):
        n = np.max(image)
        return n

    # Parse the yaml file generated by labelme to get the instance label
    # corresponding to each layer of the mask
    def from_yaml_get_class(self, image_id):
        info = self.image_info[image_id]
        with open(info['yaml_path']) as f:
            temp = yaml.load(f.read())
            labels = temp['label_names']
            del labels[0]
        return labels

    # Rewrite draw_mask
    def draw_mask(self, num_obj, mask, image, image_id):
        info = self.image_info[image_id]
        for index in range(num_obj):
            for i in range(info['width']):
                for j in range(info['height']):
                    at_pixel = image.getpixel((i, j))
                    if at_pixel == index + 1:
                        mask[j, i, index] = 1
        return mask

    # Rewrite load_shapes to hold our own classes (add as many as needed)
    # and add path, mask_path and yaml_path to self.image_info.
    # Expected layout, e.g.:
    #   dataset_root_path = "train_data/"
    #   img_floder = dataset_root_path + "pic"
    #   mask_floder = dataset_root_path + "cv2_mask"
    def load_shapes(self, count, img_floder, mask_floder, imglist, dataset_root_path):
        """Generate the requested number of synthetic images.
        count: number of images to generate.
        height, width: the size of the generated images.
        """
        # Add classes; more objects can be added this way
        self.add_class("shapes", 1, "tank")
        self.add_class("shapes", 2, "triangle")
        self.add_class("shapes", 3, "white")
        for i in range(count):
            # Get the image width and height
            filestr = imglist[i].split(".")[0]
            mask_path = mask_floder + "/" + filestr + ".png"
            yaml_path = dataset_root_path + "labelme_json/" + filestr + "_json/info.yaml"
            print(dataset_root_path + "labelme_json/" + filestr + "_json/img.png")
            cv_img = cv2.imread(dataset_root_path + "labelme_json/" + filestr + "_json/img.png")
            self.add_image("shapes", image_id=i, path=img_floder + "/" + imglist[i],
                           width=cv_img.shape[1], height=cv_img.shape[0],
                           mask_path=mask_path, yaml_path=yaml_path)

    # Rewrite load_mask
    def load_mask(self, image_id):
        """Generate instance masks for shapes of the given image ID."""
        global iter_num
        print("image_id", image_id)
        info = self.image_info[image_id]
        count = 1  # number of object
        img = Image.open(info['mask_path'])
        num_obj = self.get_obj_index(img)
        mask = np.zeros([info['height'], info['width'], num_obj], dtype=np.uint8)
        mask = self.draw_mask(num_obj, mask, img, image_id)
        occlusion = np.logical_not(mask[:, :, -1]).astype(np.uint8)
        for i in range(count - 2, -1, -1):
            mask[:, :, i] = mask[:, :, i] * occlusion
            occlusion = np.logical_and(occlusion, np.logical_not(mask[:, :, i]))
        labels = []
        labels = self.from_yaml_get_class(image_id)
        labels_form = []
        for i in range(len(labels)):
            if labels[i].find("tank") != -1:
                labels_form.append("tank")
            elif labels[i].find("triangle") != -1:
                labels_form.append("triangle")
            elif labels[i].find("white") != -1:
                labels_form.append("white")
        class_ids = np.array([self.class_names.index(s) for s in labels_form])
        return mask, class_ids.astype(np.int32)


def get_ax(rows=1, cols=1, size=8):
    """Return a Matplotlib Axes array to be used in
    all visualizations in the notebook. Provide a
    central point to control graph sizes.
    Change the default size attribute to control the size
    of rendered images
    """
    _, ax = plt.subplots(rows, cols, figsize=(size * cols, size * rows))
    return ax


# Basic settings
dataset_root_path = "train_data/"
img_floder = dataset_root_path + "pic"
mask_floder = dataset_root_path + "cv2_mask"
#yaml_floder = dataset_root_path
imglist = os.listdir(img_floder)
count = len(imglist)

# Prepare the train and val datasets
dataset_train = DrugDataset()
dataset_train.load_shapes(count, img_floder, mask_floder, imglist, dataset_root_path)
dataset_train.prepare()

dataset_val = DrugDataset()
dataset_val.load_shapes(7, img_floder, mask_floder, imglist, dataset_root_path)
dataset_val.prepare()

# Load and display random samples
#image_ids = np.random.choice(dataset_train.image_ids, 4)
#for image_id in image_ids:
#    image = dataset_train.load_image(image_id)
#    mask, class_ids = dataset_train.load_mask(image_id)
#    visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)

# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
                          model_dir=MODEL_DIR)

# Which weights to start with?
init_with = "coco"  # imagenet, coco, or last

if init_with == "imagenet":
    model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_with == "coco":
    # Load weights trained on MS COCO, but skip layers that
    # are different due to the different number of classes
    # See README for instructions to download the COCO weights
    model.load_weights(COCO_MODEL_PATH, by_name=True,
                       exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                                "mrcnn_bbox", "mrcnn_mask"])
elif init_with == "last":
    # Load the last model you trained and continue training
    model.load_weights(model.find_last()[1], by_name=True)

# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=20,
            layers='heads')

# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE / 10,
            epochs=40,
            layers="all")
Training parameters can be adjusted in config.py to match your needs; the official wiki also gives tuning advice: https://github.com/matterport/Mask_RCNN/wiki
The main options are as follows (a consolidated sketch follows the list):
BACKBONE = "resnet50"  # the backbone for transfer learning, resnet101 or resnet50; on a weaker machine, resnet50 gives a smaller network and faster training
model.train(..., layers='heads', ...)  # Train head branches (least memory)
model.train(..., layers='3+', ...)  # Train resnet stage 3 and up
model.train(..., layers='4+', ...)  # Train resnet stage 4 and up
model.train(..., layers='all', ...)  # Train all layers (most memory); choose how deep to train according to your needs
IMAGE_MIN_DIM = 800
IMAGE_MAX_DIM = 1024  # training image size; IMAGE_MAX_DIM is the one that ultimately matters, so reduce it if your machine is weak
GPU_COUNT = 1
IMAGES_PER_GPU = 2  # GPU settings; if video memory is short, change 2 to 1 (although a batch size of 1 is not great for convergence)
TRAIN_ROIS_PER_IMAGE = 200  # set according to the realities of your dataset
MAX_GT_INSTANCES = 100  # the maximum number of objects that can be detected in one image
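To keep these overrides in one place, here is a minimal sketch of a custom Config subclass; the class name MyTrainConfig and the values are illustrative, not the author's settings.

from mrcnn.config import Config

class MyTrainConfig(Config):
    """Illustrative overrides; tune to your own GPU and dataset."""
    NAME = "my_parts"            # hypothetical experiment name
    BACKBONE = "resnet50"        # smaller and faster than resnet101
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1           # drop to 1 when video memory is tight
    IMAGE_MIN_DIM = 800
    IMAGE_MAX_DIM = 1024
    TRAIN_ROIS_PER_IMAGE = 200
    MAX_GT_INSTANCES = 100
    NUM_CLASSES = 1 + 3          # background + your classes

config = MyTrainConfig()
config.display()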
Build your dataset in the layout above and configure the paths, and training can begin. One Windows-specific pitfall: training may hang at epoch 1 indefinitely. The cause is that older Keras releases do not support multithreading properly on Windows; Keras 2.1.6 is recommended and tested to work here.
Trained models are saved under the logs folder in .h5 format; once training is done, load them directly for testing:
# -*- coding: utf-8 -*-
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt
import cv2
import time
from mrcnn.config import Config
from datetime import datetime

# Root directory of the project
ROOT_DIR = os.getcwd()

# Import Mask RCNN
sys.path.append(ROOT_DIR)  # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize

# Import COCO config
sys.path.append(os.path.join(ROOT_DIR, "samples/coco/"))  # To find local version
from samples.coco import coco

# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")

# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(MODEL_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)
    print("cuiwei***********************")

# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")


class ShapesConfig(Config):
    """Configuration for training on the toy shapes dataset.
    Derives from the base Config class and overrides values specific
    to the toy shapes dataset.
    """
    # Give the configuration a recognizable name
    NAME = "shapes"

    # Train on 1 GPU and 1 image per GPU.
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

    # Number of classes (including background)
    NUM_CLASSES = 1 + 3  # background + 3 shapes

    # Use small images for faster training. Set the limits of the small side
    # the large side, and that determines the image shape.
    IMAGE_MIN_DIM = 320
    IMAGE_MAX_DIM = 384

    # Use smaller anchors because our image and objects are small
    RPN_ANCHOR_SCALES = (8 * 6, 16 * 6, 32 * 6, 64 * 6, 128 * 6)  # anchor side in pixels

    # Reduce training ROIs per image because the images are small and have
    # few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
    TRAIN_ROIS_PER_IMAGE = 100

    # Use a small epoch since the data is simple
    STEPS_PER_EPOCH = 100

    # use small validation steps since the epoch is small
    VALIDATION_STEPS = 50


#import train_tongue
#class InferenceConfig(coco.CocoConfig):
class InferenceConfig(ShapesConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1


config = InferenceConfig()

# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)

# Load the trained weights (point this at your own .h5 under logs/)
model.load_weights(COCO_MODEL_PATH, by_name=True)

# Class names. Index of the class in the list is its ID.
class_names = ['BG', 'tank', 'triangle', 'white']

# Load a random image from the images folder
file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))

a = datetime.now()
# Run detection
results = model.detect([image], verbose=1)
b = datetime.now()
# Visualize results
print("time:", (b - a).seconds)
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            class_names, r['scores'])
Of course, with so little training data the results are not especially good; industrial images are hard to come by.
So how do you output the box coordinates and the locations of the segmented pixels? Everything is in visualize.py, specifically the display_instances function.
The final output:
In it, masks records whether each pixel inside a box is True or False, traversed over the rows and columns of the box; a sketch of reading these out follows.
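For reference, here is a minimal sketch of pulling those values straight out of the results dictionary r returned by model.detect; the field layout below is the same one display_instances consumes.

import numpy as np

# r = results[0] from model.detect([image], verbose=1)
for i in range(r['rois'].shape[0]):
    y1, x1, y2, x2 = r['rois'][i]   # box corners, in (y, x) order
    class_id = r['class_ids'][i]
    score = r['scores'][i]
    print(class_names[class_id], score, (x1, y1), (x2, y2))

    # Boolean mask for this instance, same height and width as the image
    mask = r['masks'][:, :, i]
    ys, xs = np.where(mask)         # coordinates of all segmented pixels
    print("segmented pixels:", len(ys))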
Finally, the source code of this project can be downloaded from:
https://download.csdn.net/download/qq_29462849/10540423,
where train_test is the training code and test_model is the test code; configure the paths and both run directly.
Postscript
This article was written by our special guest Oliver Cui, who is about to start as a deep learning algorithm engineer at Hikvision. Some good news: starting today he will join me in the study circle 「3D视觉技术」 to discuss 3D vision topics with everyone and help look after the community. If you run into deep learning problems, you can ask him at any time and he will answer promptly. We will also hold occasional offline events, and everyone is welcome to take part.
If anything above infringes a copyright, please contact the author and the article will be removed.