Camera Face Detection

A new Camera feature: Face Detection
1. The Camera app, in packages/apps/Camera/src/com/android/camera/Camera.java

startFaceDetection()

stopFaceDetection()


initializeFirstTime() calls startFaceDetection() to begin preparing for face detection.

startFaceDetection() first checks whether getMaxNumDetectedFaces() is greater than 0. If it is not, the hardware does not support face detection and the function returns immediately; otherwise it executes:

mCameraDevice.setFaceDetectionListener(this);
mCameraDevice.startFaceDetection();

This registers this as the face detection listener (note the argument: the mFaceListener inside mCameraDevice is actually the Camera class of this app package), which is used for the callback, and then calls mCameraDevice.startFaceDetection().

There is another key function in this file:

onFaceDetection()

It is invoked from mCameraDevice and performs the final display work.

Here mCameraDevice is the framework-level Camera class.
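
Putting the pieces above together, a minimal sketch of the app-side flow might look like the following. This is only an illustration, not the actual AOSP source; mParameters and mFaceView (the FaceView class introduced below) are assumed fields.

    // Inside the app's Camera activity (sketch only, field names assumed).
    private void startFaceDetection() {
        // getMaxNumDetectedFaces() == 0 means the hardware cannot do face detection.
        if (mParameters.getMaxNumDetectedFaces() > 0) {
            mCameraDevice.setFaceDetectionListener(this);  // callbacks land in this class
            mCameraDevice.startFaceDetection();
        }
    }

    // Called back by the framework Camera class whenever faces are reported.
    @Override
    public void onFaceDetection(android.hardware.Camera.Face[] faces,
                                android.hardware.Camera camera) {
        mFaceView.setFaces(faces);  // hand the result to FaceView for drawing
    }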


There is also a FaceView class, briefly introduced here; it is located at
packages/apps/Camera/src/com/android/camera/ui/FaceView.java

public class FaceView extends View implements FocusIndicator {
    ....
}

FaceView makes use of the following classes:
frameworks/base/media/java/android/media/FaceDetector.java
frameworks/base/graphics/java/android/graphics/Matrix.java
frameworks/base/graphics/java/android/graphics/RectF.java

frameworks/base/graphics/java/android/graphics/drawable/Drawable.java

These classes mainly handle the view-side work after faces have been detected.
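
To illustrate how they fit together: a FaceView can map the driver's face coordinates (which range from -1000 to 1000) into view coordinates with a Matrix and draw each face rectangle with a Drawable. The sketch below is only an approximation of the idea; field names such as mFaces and mFaceIndicator are assumptions, and rotation/front-camera mirroring handled by the real FaceView is omitted.

    // Sketch only: mFaces (Camera.Face[]) and mFaceIndicator (a Drawable) are assumed fields.
    public void setFaces(Camera.Face[] faces) {
        mFaces = faces;
        invalidate();  // schedule a redraw
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (mFaces == null || mFaces.length == 0) return;
        Matrix matrix = new Matrix();
        // The driver reports faces in a [-1000, 1000] coordinate space;
        // scale and shift it into the view's pixel space.
        matrix.postScale(getWidth() / 2000f, getHeight() / 2000f);
        matrix.postTranslate(getWidth() / 2f, getHeight() / 2f);
        RectF rect = new RectF();
        for (Camera.Face face : mFaces) {
            rect.set(face.rect);     // face.rect is a Rect in driver coordinates
            matrix.mapRect(rect);    // now in view coordinates
            mFaceIndicator.setBounds(Math.round(rect.left), Math.round(rect.top),
                    Math.round(rect.right), Math.round(rect.bottom));
            mFaceIndicator.draw(canvas);
        }
        super.onDraw(canvas);
    }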


2. Continuing with the mCameraDevice mentioned above

位于frameworks/base/core/java/android/hardware/Camera.java

Relevant member definitions:
    private FaceDetectionListener mFaceListener;
    private boolean mFaceDetectionRunning = false;
    private static final int CAMERA_FACE_DETECTION_HW = 0;
    private static final int CAMERA_FACE_DETECTION_SW = 1;

  
    public interface FaceDetectionListener {
        void onFaceDetection(Face[] faces, Camera camera); // calls back into the app-side Camera class
    }

    public final void setFaceDetectionListener(FaceDetectionListener listener) {
        mFaceListener = listener; // register the listener
    }
    public final void startFaceDetection() {
        if (mFaceDetectionRunning) {
            throw new RuntimeException("Face detection is already running");
        }
        _startFaceDetection(CAMERA_FACE_DETECTION_HW); // this is where face detection actually starts
        mFaceDetectionRunning = true;
    }
    public final void stopFaceDetection() {
        _stopFaceDetection(); // stop face detection
        mFaceDetectionRunning = false;
    }


    private native final void _startFaceDetection(int type);

    private native final void _stopFaceDetection();

    _startFaceDetection() and _stopFaceDetection() are JNI methods, i.e. they go through native calls. _startFaceDetection() ends up in android_hardware_Camera_startFaceDetection(), which calls:

    status_t rc = camera->sendCommand(CAMERA_CMD_START_FACE_DETECTION, type, 0);

    This reaches the HAL's sendCommand() method with the CAMERA_CMD_START_FACE_DETECTION command. The HAL must handle this message in order to start face detection. Here is how TI does it:

In the camera HAL:

status_t CameraHal::sendCommand(int32_t cmd, int32_t arg1, int32_t arg2)
{

       ……

       switch(cmd)
            {
            case CAMERA_CMD_START_SMOOTH_ZOOM:
                ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_SMOOTH_ZOOM, arg1);
                break;
            case CAMERA_CMD_STOP_SMOOTH_ZOOM:
                ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_STOP_SMOOTH_ZOOM);
                break;
            case CAMERA_CMD_START_FACE_DETECTION:
                ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_FD);
                break;
            case CAMERA_CMD_STOP_FACE_DETECTION:
                ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_STOP_FD);
                break;
            default:
                break;
            };

}

On receiving the CAMERA_CMD_START_FACE_DETECTION command, CameraHal forwards it to mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_FD),

which is then dispatched as follows:

 case CameraAdapter::CAMERA_START_FD:
             ret = startFaceDetection();
             break;

This again calls a startFaceDetection() function. The base CameraAdapter's startFaceDetection() does not do much; it mainly performs some initialization and setup and must be implemented by the vendor. The implementation in OMXFD.cpp can serve as a reference:
hardware/ti/omap4xxx/camera/OMXCameraAdapter/OMXFD.cpp  //This file contains functionality for handling face detection.
OMXCameraAdapter::setParametersFD()
OMXCameraAdapter::startFaceDetection()
OMXCameraAdapter::stopFaceDetection()
OMXCameraAdapter::pauseFaceDetection()
OMXCameraAdapter::setFaceDetection()
OMXCameraAdapter::detectFaces()
OMXCameraAdapter::encodeFaceCoordinates()

hardware/ti/omap4xxx/camera/OMXCameraAdapter/OMXAlgo.cpp //This file contains functionality for handling algorithm configurations.
OMXCameraAdapter::setAlgoPriority()

hardware/ti/omap4xxx/domx/omx_core/inc/OMX_Core.h    //depend on hardware/ti/omap4xxx/domx modules.
OMX_SetConfig()

hardware/ti/omap4xxx/domx/omx_core/inc/OMX_TI_IVCommon.h

    typedef struct OMX_CONFIG_EXTRADATATYPE {
        OMX_U32 nSize;
        OMX_VERSIONTYPE nVersion;
        OMX_U32 nPortIndex;
        OMX_EXT_EXTRADATATYPE eExtraDataType;
        OMX_TI_CAMERAVIEWTYPE eCameraView;
        OMX_BOOL bEnable;
    } OMX_CONFIG_EXTRADATATYPE;

    typedef struct OMX_CONFIG_OBJDETECTIONTYPE {
        OMX_U32 nSize;
        OMX_VERSIONTYPE nVersion;
        OMX_U32 nPortIndex;
        OMX_BOOL bEnable;
        OMX_BOOL bFrameLimited;
        OMX_U32 nFrameLimit;
        OMX_U32 nMaxNbrObjects;
        OMX_S32 nLeft;
        OMX_S32 nTop;
        OMX_U32 nWidth;
        OMX_U32 nHeight;
        OMX_OBJDETECTQUALITY eObjDetectQuality;
        OMX_U32 nPriority;
        OMX_U32 nDeviceOrientation;
    } OMX_CONFIG_OBJDETECTIONTYPE;


In notifyEvent():

 case CameraHalEvent::EVENT_FACE:
                    faceEvtData = evt->mEventData->faceEvent;
                    if ( ( NULL != mCameraHal ) &&
                         ( NULL != mNotifyCb) &&
                         ( mCameraHal->msgTypeEnabled(CAMERA_MSG_PREVIEW_METADATA) ) )
                        {
                        // WA for an issue inside CameraService
                        camera_memory_t *tmpBuffer = mRequestMemory(-1, 1, 1, NULL);
                        mDataCb(CAMERA_MSG_PREVIEW_METADATA,
                                tmpBuffer,
                                0,
                                faceEvtData->getFaceResult(),
                                mCallbackCookie);
                        faceEvtData.clear();
                        if ( NULL != tmpBuffer ) {
                            tmpBuffer->release(tmpBuffer);
                        }
                        }
                    break;

As soon as face detection data is available, faceEvtData->getFaceResult() is passed back up to the upper-layer listener with the CAMERA_MSG_PREVIEW_METADATA message.

notifyEvent() is reached from AppCallbackNotifier's NotificationThread, whose notificationThread() loop contains:

if(mEventQ.hasMsg()) {
        ///Received an event from one of the event providers
        CAMHAL_LOGDA("Notification Thread received an event from event provider (CameraAdapter)");
        notifyEvent();
     }

This thread is started during AppCallbackNotifier initialization to listen for events. If a face detection event arrives, the corresponding result is passed back to the upper-layer listener for display. This path is separate from the preview path; the face metadata is delivered as its own independent piece of data.

In the framework-level Camera class, the following function handles this message:

    public void handleMessage(Message msg) {
        switch(msg.what) {
        ......
        case CAMERA_MSG_PREVIEW_METADATA:
            if (mFaceListener != null) {
                mFaceListener.onFaceDetection((Face[])msg.obj, mCamera);
            }
            return;
        }
    }

This is how the face detection frames show up on the preview screen.

The Face class and the new face detection parameters, also defined in the framework Camera class, are:

    public static class Face{
        public Face() {
        }
        public Rect rect;
        public int score;
        public int id = -1;
        public Point leftEye = null;
        public Point rightEye = null;
        public Point mouth = null;
    }
    public class Parameters {        //new camera parameters.
        private static final String KEY_MAX_NUM_DETECTED_FACES_HW = "max-num-detected-faces-hw";
        private static final String KEY_MAX_NUM_DETECTED_FACES_SW = "max-num-detected-faces-sw";
    
        public int getMaxNumDetectedFaces() {
            return getInt(KEY_MAX_NUM_DETECTED_FACES_HW, 0);
        }    
    }
}
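
For completeness, this is roughly how an app-side listener might read the Face fields above. It is illustrative only; the log tag and the null checks are examples, and score ranges from 1 (low) to 100 (high confidence).

    // Illustrative listener implementation; uses android.util.Log.
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        for (Camera.Face face : faces) {
            Log.d("FaceDetection", "id=" + face.id + " score=" + face.score + " rect=" + face.rect);
            // leftEye/rightEye/mouth stay null when the hardware does not report landmarks.
            if (face.leftEye != null && face.rightEye != null) {
                Log.d("FaceDetection", "eyes at " + face.leftEye + " / " + face.rightEye);
            }
        }
    }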

3. Summary

The Camera app and the Android API are already in place; our work is mainly in the camera HAL. We need to understand all of the face detection related methods in order to implement it there.

