BUG: mxnet.base.MXNetError: Error in operator conv_1_conv2d

mxnet.base.MXNetError: Error in operator conv_1_conv2d: [11:05:25] src/operator/nn/convolution.cc:152: Check failed: dshp.ndim() == 4U (1 vs. 4) : Input data should be 4D in batch-num_filter-y-x
 

1. This error usually means something is wrong with the input: its dimensions do not match what the network expects.

2. Check whether the input path is empty.

3. Check what is actually being passed in; feeding None will also raise this error.

4. Check img.shape: an image that is too large or too small also seems to trigger the error (similar to an issue in insightface). For shape problems, preprocess the image first, as in the code below:

im_scale is the computed scaling factor. The choice of the scales parameter matters a great deal: when resizing, prefer shrinking the image over enlarging it, because upscaling too aggressively blurs the input and the face detector can no longer recognize faces. So scales can be set relatively small; [512, 990] also works well.
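The scaling rule just described can be sketched as a standalone helper before looking at the full method. compute_im_scale is a hypothetical name, and the default [512, 990] corresponds to the smaller scales setting suggested above:

```python
import numpy as np

def compute_im_scale(im_shape, target_size=512, max_size=990):
    """Scale factor that brings the shorter image side to target_size,
    while capping the longer side at max_size."""
    im_size_min = np.min(im_shape[0:2])
    im_size_max = np.max(im_shape[0:2])
    im_scale = float(target_size) / float(im_size_min)
    # prevent the bigger axis from exceeding max_size
    if np.round(im_scale * im_size_max) > max_size:
        im_scale = float(max_size) / float(im_size_max)
    return im_scale

# shorter side 720 -> scale down by 512/720; longer side stays under the cap
print(compute_im_scale((720, 1280, 3)))
# very wide image: the max_size cap kicks in instead
print(compute_im_scale((500, 2000, 3)))
```

Note that with these defaults most camera images are shrunk, never enlarged, which is exactly the behaviour argued for above.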

def get_input(self, face_img):  # changed by jin
    # added by jin: rescale so the detector sees a reasonably sized image
    scales = [1024, 1980]
    im_shape = face_img.shape
    target_size = scales[0]
    max_size = scales[1]
    im_size_min = np.min(im_shape[0:2])
    im_size_max = np.max(im_shape[0:2])
    # scale the shorter side to target_size ...
    im_scale = float(target_size) / float(im_size_min)
    # ... but prevent the bigger axis from exceeding max_size
    if np.round(im_scale * im_size_max) > max_size:
        im_scale = float(max_size) / float(im_size_max)
    print('im_scale', im_scale)

    scales = [im_scale]
    flip = False
    if self.antimask == 1:
        # detection threshold of 0.5 may need tuning for your data
        ret = self.detector.detect(face_img, threshold=0.5, scales=scales, do_flip=flip)
    else:
        ret = self.detector.detect_face(face_img, det_type=self.args.det)
    if ret is None:
        return None, None
    bbox, points = ret
    if bbox.shape[0] == 0:
        return None, None
    bbox = bbox[0, 0:4]                      # keep only the first detected face
    points = points[0, :].reshape((2, 5)).T  # 5 facial landmarks as (5, 2)
    print(face_img.shape)
    nimg = face_preprocess.preprocess(face_img, bbox, points, image_size='112,112')
    print(nimg.shape)
    nimg = cv2.cvtColor(nimg, cv2.COLOR_BGR2RGB)
    aligned = np.transpose(nimg, (2, 0, 1))  # HWC -> CHW
    return aligned, bbox
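Even after this preprocessing, feeding the returned aligned image straight into a convolutional model can still reproduce the original error: aligned has shape (3, 112, 112), i.e. 3D CHW, while the convolution demands 4D batch-num_filter-y-x input. A minimal sketch of a guard that adds the batch axis and catches the None case from point 3; to_nchw_batch is a hypothetical helper using plain numpy:

```python
import numpy as np

def to_nchw_batch(img_chw):
    """Turn a single CHW image into the 4D NCHW batch that an
    mxnet Convolution expects; reject None and odd shapes early."""
    if img_chw is None:
        raise ValueError("got None instead of an image; check the input path")
    arr = np.asarray(img_chw)
    if arr.ndim == 3:                 # (C, H, W) -> (1, C, H, W)
        arr = np.expand_dims(arr, axis=0)
    if arr.ndim != 4:
        raise ValueError("expected 3D or 4D input, got shape %s" % (arr.shape,))
    return arr

aligned = np.zeros((3, 112, 112), dtype=np.float32)  # like get_input()'s output
batch = to_nchw_batch(aligned)
print(batch.shape)  # (1, 3, 112, 112)
```

Validating the array like this before the forward pass turns the opaque "Check failed: dshp.ndim() == 4U" into an immediate, readable Python error.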

 
