1. Test error when running the following command
python2 tools/test_net.py --cfg experiments/e2e_faster_rcnn_resnet-50-FPN_pascal2007.yaml TEST.WEIGHTS /home/learner/github/detectron/experiments/output/train/voc_2007_train/generalized_rcnn/model_final.pkl NUM_GPUS 1
The error is as follows:
INFO test_engine.py: 320: Wrote detections to: /home/learner/github/detectron/test/voc_2007_test/generalized_rcnn/detections.pkl
INFO test_engine.py: 162: Total inference time: 241.493s
INFO task_evaluation.py: 76: Evaluating detections
Traceback (most recent call last):
  File "tools/test_net.py", line 116, in <module>
    check_expected_results=True,
  File "/home/learner/github/detectron/detectron/core/test_engine.py", line 128, in run_inference
    all_results = result_getter()
  File "/home/learner/github/detectron/detectron/core/test_engine.py", line 108, in result_getter
    multi_gpu=multi_gpu_testing
  File "/home/learner/github/detectron/detectron/core/test_engine.py", line 164, in test_net_on_dataset
    dataset, all_boxes, all_segms, all_keyps, output_dir
  File "/home/learner/github/detectron/detectron/datasets/task_evaluation.py", line 60, in evaluate_all
    dataset, all_boxes, output_dir, use_matlab=use_matlab
  File "/home/learner/github/detectron/detectron/datasets/task_evaluation.py", line 93, in evaluate_boxes
    dataset, all_boxes, output_dir, use_matlab=use_matlab
  File "/home/learner/github/detectron/detectron/datasets/voc_dataset_evaluator.py", line 46, in evaluate_boxes
    filenames = _write_voc_results_files(json_dataset, all_boxes, salt)
  File "/home/learner/github/detectron/detectron/datasets/voc_dataset_evaluator.py", line 69, in _write_voc_results_files
    assert index == image_index[i]
AssertionError
The gist of the error: the image names listed in the annotation index file test.txt do not match the order of the image entries in test.json. Run the two scripts below and replace the original files with their output; the two files then agree, with the image numbers sorted in ascending order.
The config dump printed by the run is attached:
Found Detectron ops lib: /usr/local/lib/libcaffe2_detectron_ops_gpu.so INFO test_net.py: 98: Called with args: INFO test_net.py: 99: Namespace(cfg_file='experiments/e2e_faster_rcnn_resnet-50-FPN_pascal2007.yaml', multi_gpu_testing=False, opts=['TEST.WEIGHTS', '/home/learner/github/detectron/experiments/output/train/voc_2007_train/generalized_rcnn/model_final.pkl', 'NUM_GPUS', '1'], range=None, vis=False, wait=True) INFO test_net.py: 105: Testing with config: INFO test_net.py: 106: {'BBOX_XFORM_CLIP': 4.135166556742356, 'CLUSTER': {'ON_CLUSTER': False}, 'DATA_LOADER': {'BLOBS_QUEUE_CAPACITY': 8, 'MINIBATCH_QUEUE_SIZE': 64, 'NUM_THREADS': 4}, 'DEDUP_BOXES': 0.0625, 'DOWNLOAD_CACHE': '/tmp/detectron-download-cache', 'EPS': 1e-14, 'EXPECTED_RESULTS': [], 'EXPECTED_RESULTS_ATOL': 0.005, 'EXPECTED_RESULTS_EMAIL': '', 'EXPECTED_RESULTS_RTOL': 0.1, 'EXPECTED_RESULTS_SIGMA_TOL': 4, 'FAST_RCNN': {'CONV_HEAD_DIM': 256, 'MLP_HEAD_DIM': 1024, 'NUM_STACKED_CONVS': 4, 'ROI_BOX_HEAD': 'fast_rcnn_heads.add_roi_2mlp_head', 'ROI_XFORM_METHOD': 'RoIAlign', 'ROI_XFORM_RESOLUTION': 7, 'ROI_XFORM_SAMPLING_RATIO': 2}, 'FPN': {'COARSEST_STRIDE': 32, 'DIM': 256, 'EXTRA_CONV_LEVELS': False, 'FPN_ON': True, 'MULTILEVEL_ROIS': True, 'MULTILEVEL_RPN': True, 'ROI_CANONICAL_LEVEL': 4, 'ROI_CANONICAL_SCALE': 224, 'ROI_MAX_LEVEL': 5, 'ROI_MIN_LEVEL': 2, 'RPN_ANCHOR_START_SIZE': 32, 'RPN_ASPECT_RATIOS': (0.5, 1, 2), 'RPN_MAX_LEVEL': 6, 'RPN_MIN_LEVEL': 2, 'USE_GN': False, 'ZERO_INIT_LATERAL': False}, 'GROUP_NORM': {'DIM_PER_GP': -1, 'EPSILON': 1e-05, 'NUM_GROUPS': 32}, 'KRCNN': {'CONV_HEAD_DIM': 256, 'CONV_HEAD_KERNEL': 3, 'CONV_INIT': 'GaussianFill', 'DECONV_DIM': 256, 'DECONV_KERNEL': 4, 'DILATION': 1, 'HEATMAP_SIZE': -1, 'INFERENCE_MIN_SIZE': 0, 'KEYPOINT_CONFIDENCE': 'bbox', 'LOSS_WEIGHT': 1.0, 'MIN_KEYPOINT_COUNT_FOR_VALID_MINIBATCH': 20, 'NMS_OKS': False, 'NORMALIZE_BY_VISIBLE_KEYPOINTS': True, 'NUM_KEYPOINTS': -1, 'NUM_STACKED_CONVS': 8, 'ROI_KEYPOINTS_HEAD': '', 'ROI_XFORM_METHOD': 'RoIAlign', 'ROI_XFORM_RESOLUTION': 7, 'ROI_XFORM_SAMPLING_RATIO': 0, 'UP_SCALE': -1, 'USE_DECONV': False, 'USE_DECONV_OUTPUT': False}, 'MATLAB': 'matlab', 'MEMONGER': True, 'MEMONGER_SHARE_ACTIVATIONS': False, 'MODEL': {'BBOX_REG_WEIGHTS': (10.0, 10.0, 5.0, 5.0), 'CLS_AGNOSTIC_BBOX_REG': False, 'CONV_BODY': 'FPN.add_fpn_ResNet50_conv5_body', 'EXECUTION_TYPE': 'dag', 'FASTER_RCNN': True, 'KEYPOINTS_ON': False, 'MASK_ON': False, 'NUM_CLASSES': 4, 'RPN_ONLY': False, 'TYPE': 'generalized_rcnn'}, 'MRCNN': {'CLS_SPECIFIC_MASK': True, 'CONV_INIT': 'GaussianFill', 'DILATION': 2, 'DIM_REDUCED': 256, 'RESOLUTION': 14, 'ROI_MASK_HEAD': '', 'ROI_XFORM_METHOD': 'RoIAlign', 'ROI_XFORM_RESOLUTION': 7, 'ROI_XFORM_SAMPLING_RATIO': 0, 'THRESH_BINARIZE': 0.5, 'UPSAMPLE_RATIO': 1, 'USE_FC_OUTPUT': False, 'WEIGHT_LOSS_MASK': 1.0}, 'NUM_GPUS': 1, 'OUTPUT_DIR': '.', 'PIXEL_MEANS': array([[[102.9801, 115.9465, 122.7717]]]), 'RESNETS': {'NUM_GROUPS': 1, 'RES5_DILATION': 1, 'SHORTCUT_FUNC': 'basic_bn_shortcut', 'STEM_FUNC': 'basic_bn_stem', 'STRIDE_1X1': True, 'TRANS_FUNC': 'bottleneck_transformation', 'WIDTH_PER_GROUP': 64}, 'RETINANET': {'ANCHOR_SCALE': 4, 'ASPECT_RATIOS': (0.5, 1.0, 2.0), 'BBOX_REG_BETA': 0.11, 'BBOX_REG_WEIGHT': 1.0, 'CLASS_SPECIFIC_BBOX': False, 'INFERENCE_TH': 0.05, 'LOSS_ALPHA': 0.25, 'LOSS_GAMMA': 2.0, 'NEGATIVE_OVERLAP': 0.4, 'NUM_CONVS': 4, 'POSITIVE_OVERLAP': 0.5, 'PRE_NMS_TOP_N': 1000, 'PRIOR_PROB': 0.01, 'RETINANET_ON': False, 'SCALES_PER_OCTAVE': 3, 'SHARE_CLS_BBOX_TOWER': False, 'SOFTMAX': False}, 'RFCN': {'PS_GRID_SIZE': 3}, 
'RNG_SEED': 3, 'ROOT_DIR': '/home/learner/github/detectron', 'RPN': {'ASPECT_RATIOS': (0.5, 1, 2), 'RPN_ON': True, 'SIZES': (64, 128, 256, 512), 'STRIDE': 16}, 'SOLVER': {'BASE_LR': 0.0025, 'GAMMA': 0.1, 'LOG_LR_CHANGE_THRESHOLD': 1.1, 'LRS': [], 'LR_POLICY': 'steps_with_decay', 'MAX_ITER': 60000, 'MOMENTUM': 0.9, 'SCALE_MOMENTUM': True, 'SCALE_MOMENTUM_THRESHOLD': 1.1, 'STEPS': [0, 30000, 40000], 'STEP_SIZE': 30000, 'WARM_UP_FACTOR': 0.3333333333333333, 'WARM_UP_ITERS': 500, 'WARM_UP_METHOD': u'linear', 'WEIGHT_DECAY': 0.0001, 'WEIGHT_DECAY_GN': 0.0}, 'TEST': {'BBOX_AUG': {'AREA_TH_HI': 32400, 'AREA_TH_LO': 2500, 'ASPECT_RATIOS': (), 'ASPECT_RATIO_H_FLIP': False, 'COORD_HEUR': 'UNION', 'ENABLED': False, 'H_FLIP': False, 'MAX_SIZE': 4000, 'SCALES': (), 'SCALE_H_FLIP': False, 'SCALE_SIZE_DEP': False, 'SCORE_HEUR': 'UNION'}, 'BBOX_REG': True, 'BBOX_VOTE': {'ENABLED': False, 'SCORING_METHOD': 'ID', 'SCORING_METHOD_BETA': 1.0, 'VOTE_TH': 0.8}, 'COMPETITION_MODE': True, 'DATASETS': ('voc_2007_test',), 'DETECTIONS_PER_IM': 100, 'FORCE_JSON_DATASET_EVAL': False, 'KPS_AUG': {'AREA_TH': 32400, 'ASPECT_RATIOS': (), 'ASPECT_RATIO_H_FLIP': False, 'ENABLED': False, 'HEUR': 'HM_AVG', 'H_FLIP': False, 'MAX_SIZE': 4000, 'SCALES': (), 'SCALE_H_FLIP': False, 'SCALE_SIZE_DEP': False}, 'MASK_AUG': {'AREA_TH': 32400, 'ASPECT_RATIOS': (), 'ASPECT_RATIO_H_FLIP': False, 'ENABLED': False, 'HEUR': 'SOFT_AVG', 'H_FLIP': False, 'MAX_SIZE': 4000, 'SCALES': (), 'SCALE_H_FLIP': False, 'SCALE_SIZE_DEP': False}, 'MAX_SIZE': 833, 'NMS': 0.5, 'PRECOMPUTED_PROPOSALS': False, 'PROPOSAL_FILES': (), 'PROPOSAL_LIMIT': 2000, 'RPN_MIN_SIZE': 0, 'RPN_NMS_THRESH': 0.7, 'RPN_POST_NMS_TOP_N': 1000, 'RPN_PRE_NMS_TOP_N': 1000, 'SCALE': 500, 'SCORE_THRESH': 0.05, 'SOFT_NMS': {'ENABLED': False, 'METHOD': 'linear', 'SIGMA': 0.5}, 'WEIGHTS': '/home/learner/github/detectron/experiments/output/train/voc_2007_train/generalized_rcnn/model_final.pkl'}, 'TRAIN': {'ASPECT_GROUPING': True, 'AUTO_RESUME': True, 'BATCH_SIZE_PER_IM': 512, 'BBOX_THRESH': 0.5, 'BG_THRESH_HI': 0.5, 'BG_THRESH_LO': 0.0, 'COPY_WEIGHTS': False, 'CROWD_FILTER_THRESH': 0.7, 'DATASETS': ('voc_2007_train',), 'FG_FRACTION': 0.25, 'FG_THRESH': 0.5, 'FREEZE_AT': 2, 'FREEZE_CONV_BODY': False, 'GT_MIN_AREA': -1, 'IMS_PER_BATCH': 2, 'MAX_SIZE': 833, 'PROPOSAL_FILES': (), 'RPN_BATCH_SIZE_PER_IM': 256, 'RPN_FG_FRACTION': 0.5, 'RPN_MIN_SIZE': 0, 'RPN_NEGATIVE_OVERLAP': 0.3, 'RPN_NMS_THRESH': 0.7, 'RPN_POSITIVE_OVERLAP': 0.7, 'RPN_POST_NMS_TOP_N': 2000, 'RPN_PRE_NMS_TOP_N': 2000, 'RPN_STRADDLE_THRESH': 0, 'SCALES': (500,), 'SNAPSHOT_ITERS': 20000, 'USE_FLIPPED': True, 'WEIGHTS': '/home/learner/github/detectron/pretrained_model/R-50.pkl'}, 'USE_NCCL': False, 'VIS': False, 'VIS_TH': 0.9} loading annotations into memory... Done (t=0.02s) creating index... index created! loading annotations into memory... Done (t=0.02s) creating index... index created! WARNING cnn.py: 25: [====DEPRECATE WARNING====]: you are creating an object from CNNModelHelper class which will be deprecated soon. Please use ModelHelper object with brew module. For more information, please refer to caffe2.ai and python/brew.py, python/brew_test.py for more information. INFO net.py: 60: Loading weights from: /home/learner/github/detectron/experiments/output/train/voc_2007_train/generalized_rcnn/model_final.pkl
Generating test.json (xmltojson.py):
# -*- coding: utf-8 -*-
"""
Created on Tue Aug 28 15:01:03 2018
@author: Administrator
"""
#!/usr/bin/python
# -*- coding:utf-8 -*-
# @Author: hbchen
# @Time: 2018-01-29
# @Description: convert VOC-style XML annotations to COCO-style JSON

import os, sys, json, xmltodict
from xml.etree.ElementTree import ElementTree, Element
from collections import OrderedDict

XML_PATH = "/home/learner/datasets/VOCdevkit2007/VOC2007/Annotations/test"
JSON_PATH = "./test.json"

json_obj = {}
images = []
annotations = []
categories = []
categories_list = []
annotation_id = 1

def read_xml(in_path):
    '''Read and parse an XML file'''
    tree = ElementTree()
    tree.parse(in_path)
    return tree

def if_match(node, kv_map):
    '''Check whether a node carries all of the given attribute/value pairs
    node: the node to test
    kv_map: map of attribute names to expected values'''
    for key in kv_map:
        if node.get(key) != kv_map.get(key):
            return False
    return True

def get_node_by_keyvalue(nodelist, kv_map):
    '''Return the nodes whose attributes match kv_map
    nodelist: list of candidate nodes
    kv_map: map of attribute names to expected values'''
    result_nodes = []
    for node in nodelist:
        if if_match(node, kv_map):
            result_nodes.append(node)
    return result_nodes

def find_nodes(tree, path):
    '''Find all nodes matching a path
    tree: XML tree
    path: node path'''
    return tree.findall(path)

print("-----------------Start------------------")

# Sort the annotation files numerically; the names look like Cow_<number>.xml
xml_names = []
for xml in os.listdir(XML_PATH):
    xml = xml.replace('Cow_', '')
    xml_names.append(xml)
xml_names.sort(key=lambda x: int(x[:-4]))

new_xml_names = []
for i in xml_names:
    new_xml_names.append('Cow_' + i)

for xml in new_xml_names:
    tree = read_xml(XML_PATH + "/" + xml)
    object_nodes = get_node_by_keyvalue(find_nodes(tree, "object"), {})
    if len(object_nodes) == 0:
        print(xml, "no object")
        continue

    # Build the "images" entry for this file
    image = OrderedDict()
    file_name = os.path.splitext(xml)[0]  # file name without extension
    para1 = file_name + ".jpg"
    height_nodes = get_node_by_keyvalue(find_nodes(tree, "size/height"), {})
    para2 = int(height_nodes[0].text)
    width_nodes = get_node_by_keyvalue(find_nodes(tree, "size/width"), {})
    para3 = int(width_nodes[0].text)
    fname = file_name[4:]  # strip the 'Cow_' prefix; the rest is the numeric image id
    para4 = int(fname)
    for f, i in [("file_name", para1), ("height", para2), ("width", para3), ("id", para4)]:
        image.setdefault(f, i)
    images.append(image)

    # Build the "annotations" entries for every object in this image
    name_nodes = get_node_by_keyvalue(find_nodes(tree, "object/name"), {})
    xmin_nodes = get_node_by_keyvalue(find_nodes(tree, "object/bndbox/xmin"), {})
    ymin_nodes = get_node_by_keyvalue(find_nodes(tree, "object/bndbox/ymin"), {})
    xmax_nodes = get_node_by_keyvalue(find_nodes(tree, "object/bndbox/xmax"), {})
    ymax_nodes = get_node_by_keyvalue(find_nodes(tree, "object/bndbox/ymax"), {})
    for index, node in enumerate(object_nodes):
        annotation = {}
        segmentation = []
        bbox = []
        seg_coordinate = []  # polygon corners of the box: (xmin,ymin) (xmin,ymax) (xmax,ymax) (xmax,ymin)
        seg_coordinate.append(int(xmin_nodes[index].text))
        seg_coordinate.append(int(ymin_nodes[index].text))
        seg_coordinate.append(int(xmin_nodes[index].text))
        seg_coordinate.append(int(ymax_nodes[index].text))
        seg_coordinate.append(int(xmax_nodes[index].text))
        seg_coordinate.append(int(ymax_nodes[index].text))
        seg_coordinate.append(int(xmax_nodes[index].text))
        seg_coordinate.append(int(ymin_nodes[index].text))
        segmentation.append(seg_coordinate)

        width = int(xmax_nodes[index].text) - int(xmin_nodes[index].text)
        height = int(ymax_nodes[index].text) - int(ymin_nodes[index].text)
        area = width * height
        bbox.append(int(xmin_nodes[index].text))
        bbox.append(int(ymin_nodes[index].text))
        bbox.append(width)
        bbox.append(height)

        annotation["segmentation"] = segmentation
        annotation["area"] = area
        annotation["iscrowd"] = 0
        fname = file_name[4:]
        annotation["image_id"] = int(fname)
        annotation["bbox"] = bbox
        cate = name_nodes[index].text
        if cate == 'head':
            category_id = 1
        elif cate == 'eye':
            category_id = 2
        elif cate == 'nose':
            category_id = 3
        annotation["category_id"] = category_id
        annotation["id"] = annotation_id
        annotation_id += 1
        annotation["ignore"] = 0
        annotations.append(annotation)

        # Register the category the first time it is seen
        if category_id not in categories_list:
            categories_list.append(category_id)
            categorie = {}
            categorie["supercategory"] = "none"
            categorie["id"] = category_id
            categorie["name"] = name_nodes[index].text
            categories.append(categorie)

json_obj["images"] = images
json_obj["type"] = "instances"
json_obj["annotations"] = annotations
json_obj["categories"] = categories

f = open(JSON_PATH, "w")
# json.dump(json_obj, f)
json_str = json.dumps(json_obj)
f.write(json_str)
f.close()
print("------------------End-------------------")
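After running the script, it is worth confirming that the image ids in the generated test.json really are in ascending order, which is what the fix aims for. A minimal sanity-check sketch (assumes ./test.json was just written by the script above):

import json

with open('./test.json') as f:
    coco = json.load(f)

ids = [img['id'] for img in coco['images']]
print(ids == sorted(ids))   # True means the image entries are numbered from small to large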
Generating test.txt (test.py):
import os, sys

XML_PATH = "/home/learner/datasets/VOCdevkit2007/VOC2007/Annotations/test"
final_path = "./test.txt"

# Sort the annotation files numerically; the names look like Cow_<number>.xml
xml_names = []
for xml in os.listdir(XML_PATH):
    xml = xml.replace('Cow_', '')
    xml_names.append(xml)
xml_names.sort(key=lambda x: int(x[:-4]))

new_xml_names = []
for i in xml_names:
    new_xml_names.append('Cow_' + i)

# Write one image name per line, in ascending numeric order
f = open(final_path, "w")
for xml in new_xml_names:
    file_name = os.path.splitext(xml)[0]
    f.write(file_name)
    f.write('\n')
f.close()
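The assertion that failed (assert index == image_index[i] in voc_dataset_evaluator.py) compares the i-th image of the JSON dataset with the i-th name read from test.txt, so a direct way to confirm the regenerated files agree is to compare them position by position. A minimal check sketch (not part of Detectron; assumes both regenerated files are in the current directory):

import json
import os

with open('./test.txt') as f:
    txt_names = [line.strip() for line in f if line.strip()]

with open('./test.json') as f:
    json_names = [os.path.splitext(img['file_name'])[0] for img in json.load(f)['images']]

print(txt_names == json_names)   # should print True once both files list the images in the same order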
2. Inference assigns the wrong class labels, e.g. a box that should be labelled head is shown as car:
The command is as follows:
python2 tools/infer_simple.py --cfg experiments/e2e_faster_rcnn_resnet-50-FPN_pascal2007.yaml --output-dir experiments/test_out/ --wts experiments/output/train/voc_2007_train/generalized_rcnn/model_final.pkl test_demo_cow
Fix: edit the class information in dummy_datasets.py:
def get_coco_dataset():
    """A dummy VOC dataset"""
    ds = AttrDict()
    classes = ['__background__', 'eye', 'nose', 'head']
    ds.classes = {i: name for i, name in enumerate(classes)}
    return ds
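infer_simple.py takes its label names from this function via dummy_datasets.get_coco_dataset(), so the mapping above is what ends up printed on the rendered boxes. A quick way to double-check the edit, as a minimal sketch:

from detectron.datasets import dummy_datasets

ds = dummy_datasets.get_coco_dataset()
print(ds.classes)       # expected: {0: '__background__', 1: 'eye', 2: 'nose', 3: 'head'}
print(ds.classes[3])    # expected: 'head' -- the name drawn for detections of class id 3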
3. Training code
Training initially throws a few errors; the fix is to point the initial weights (TRAIN.WEIGHTS in the yaml) at the pretrained R-50.pkl, as shown in the config dump above. See http://www.yueye.org/2018/train-object-detection-model-using-detectron.html.
4. Switching the model
Run the following command:
python2 tools/train_net.py --cfg experiments/e2e_faster_rcnn_X-101-64x4d-FPN_1x.yaml OUTPUT_DIR experiments/outputx
The error is as follows:
INFO net.py: 254: End of model: generalized_rcnn
json_stats: {"accuracy_cls": 0.228516, "eta": "17 days, 2:30:10", "iter": 0, "loss": 2.118356, "loss_bbox": 0.047415, "loss_cls": 1.357394, "loss_rpn_bbox_fpn2": 0.000000, "loss_rpn_bbox_fpn3": 0.000000, "loss_rpn_bbox_fpn4": 0.017527, "loss_rpn_bbox_fpn5": 0.000000, "loss_rpn_bbox_fpn6": 0.002472, "loss_rpn_cls_fpn2": 0.510349, "loss_rpn_cls_fpn3": 0.129855, "loss_rpn_cls_fpn4": 0.039958, "loss_rpn_cls_fpn5": 0.005469, "loss_rpn_cls_fpn6": 0.007917, "lr": 0.003333, "mb_qsize": 64, "mem": 6441, "time": 8.210058}
json_stats: {"accuracy_cls": 0.961914, "eta": "2 days, 17:47:12", "iter": 20, "loss": 1.014767, "loss_bbox": 0.064824, "loss_cls": 0.274080, "loss_rpn_bbox_fpn2": 0.011057, "loss_rpn_bbox_fpn3": 0.005120, "loss_rpn_bbox_fpn4": 0.004911, "loss_rpn_bbox_fpn5": 0.000987, "loss_rpn_bbox_fpn6": 0.002837, "loss_rpn_cls_fpn2": 0.424010, "loss_rpn_cls_fpn3": 0.099319, "loss_rpn_cls_fpn4": 0.031872, "loss_rpn_cls_fpn5": 0.007901, "loss_rpn_cls_fpn6": 0.009875, "lr": 0.003600, "mb_qsize": 64, "mem": 6453, "time": 1.315882}
json_stats: {"accuracy_cls": 0.941406, "eta": "1 day, 22:37:55", "iter": 40, "loss": 0.620170, "loss_bbox": 0.130436, "loss_cls": 0.284250, "loss_rpn_bbox_fpn2": 0.025175, "loss_rpn_bbox_fpn3": 0.006801, "loss_rpn_bbox_fpn4": 0.000663, "loss_rpn_bbox_fpn5": 0.001616, "loss_rpn_bbox_fpn6": 0.000000, "loss_rpn_cls_fpn2": 0.087775, "loss_rpn_cls_fpn3": 0.043141, "loss_rpn_cls_fpn4": 0.019907, "loss_rpn_cls_fpn5": 0.008191, "loss_rpn_cls_fpn6": 0.002730, "lr": 0.003867, "mb_qsize": 64, "mem": 6469, "time": 0.932848}
json_stats: {"accuracy_cls": 0.947266, "eta": "1 day, 23:10:17", "iter": 60, "loss": 0.508122, "loss_bbox": 0.112828, "loss_cls": 0.233173, "loss_rpn_bbox_fpn2": 0.003497, "loss_rpn_bbox_fpn3": 0.003379, "loss_rpn_bbox_fpn4": 0.002439, "loss_rpn_bbox_fpn5": 0.000000, "loss_rpn_bbox_fpn6": 0.002276, "loss_rpn_cls_fpn2": 0.051088, "loss_rpn_cls_fpn3": 0.028643, "loss_rpn_cls_fpn4": 0.017298, "loss_rpn_cls_fpn5": 0.006002, "loss_rpn_cls_fpn6": 0.006945, "lr": 0.004133, "mb_qsize": 64, "mem": 6469, "time": 0.943747}
/home/learner/github/detectron/detectron/utils/boxes.py:175: RuntimeWarning: overflow encountered in multiply
  pred_ctr_x = dx * widths[:, np.newaxis] + ctr_x[:, np.newaxis]
/home/learner/github/detectron/detectron/utils/boxes.py:176: RuntimeWarning: overflow encountered in multiply
  pred_ctr_y = dy * heights[:, np.newaxis] + ctr_y[:, np.newaxis]
/usr/local/lib/python2.7/dist-packages/numpy/lib/function_base.py:3250: RuntimeWarning: Invalid value encountered in median
  r = func(a, **kwargs)
json_stats: {"accuracy_cls": 0.974609, "eta": "1 day, 23:20:54", "iter": 80, "loss": NaN, "loss_bbox": NaN, "loss_cls": NaN, "loss_rpn_bbox_fpn2": NaN, "loss_rpn_bbox_fpn3": NaN, "loss_rpn_bbox_fpn4": NaN, "loss_rpn_bbox_fpn5": NaN, "loss_rpn_bbox_fpn6": NaN, "loss_rpn_cls_fpn2": NaN, "loss_rpn_cls_fpn3": NaN, "loss_rpn_cls_fpn4": NaN, "loss_rpn_cls_fpn5": NaN, "loss_rpn_cls_fpn6": 0.004436, "lr": 0.004400, "mb_qsize": 64, "mem": 6473, "time": 0.947391}
CRITICAL train.py: 98: Loss is NaN
INFO loader.py: 126: Stopping enqueue thread
INFO loader.py: 113: Stopping mini-batch loading thread
INFO loader.py: 113: Stopping mini-batch loading thread
INFO loader.py: 113: Stopping mini-batch loading thread
INFO loader.py: 113: Stopping mini-batch loading thread
Traceback (most recent call last):
  File "tools/train_net.py", line 132, in <module>
    main()
  File "tools/train_net.py", line 114, in main
    checkpoints = detectron.utils.train.train_model()
  File "/home/learner/github/detectron/detectron/utils/train.py", line 86, in train_model
    handle_critical_error(model, 'Loss is NaN')
  File "/home/learner/github/detectron/detectron/utils/train.py", line 100, in handle_critical_error
    raise Exception(msg)
Exception: Loss is NaN
Fix: in e2e_faster_rcnn_X-101-64x4d-FPN_1x.yaml, lower BASE_LR from 0.01 to 0.001. The overflow warnings in boxes.py followed by NaN losses around iteration 80 are the usual symptom of a learning rate that is too high for this setup, and reducing it lets training continue.
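For context, the stock 1x schedules in Detectron's model zoo are written for multi-GPU training, so single-GPU runs usually scale the learning rate down following the linear scaling rule; the 8-GPU figure below is an assumption about the stock config, not something shown in this log. A rough sketch of the arithmetic:

# Linear scaling rule: the learning rate scales with the effective batch size (number of GPUs here).
stock_lr = 0.01      # BASE_LR in the stock e2e_faster_rcnn_X-101-64x4d-FPN_1x.yaml
stock_gpus = 8       # assumption: the stock schedule targets 8 GPUs
my_gpus = 1
scaled_lr = stock_lr * my_gpus / float(stock_gpus)
print(scaled_lr)     # 0.00125 -- the same order of magnitude as the 0.001 used above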
Reference blogs:
Inference:
https://blog.csdn.net/blateyang/article/details/79815490
https://blog.csdn.net/Blateyang/article/details/80655802
https://github.com/facebookresearch/Detectron/issues/485
https://github.com/royhuang9/Detectron/blob/master/README.md
Other reference blogs:
http://www.yueye.org/2018/train-object-detection-model-using-detectron.html
os.path
https://blog.csdn.net/T1243_3/article/details/80170006
Source-code walkthrough:
https://blog.csdn.net/zziahgf/article/details/79652946
Model zoo (various models):
https://github.com/royhuang9/Detectron/blob/master/MODEL_ZOO.md