DAY 1:
Here's the REQUEST: count how many people are staring at one screen by detecting their faces.
1. There is an existing interface, Camera.startFaceDetection(), available in Android since API 14. I can't control the detection rate or the maximum number of faces that can be detected (the max on my device is 10), but the results are satisfactory because it's sensitive and accurate.
import java.util.ArrayList;
import java.util.List;

import android.graphics.Rect;
import android.hardware.Camera;
import android.util.Log;

private boolean faceDetectionRunning;

public int startFaceDetection() {
    if (faceDetectionRunning) {
        return 0;
    }
    // Check whether face detection is supported, using Camera.Parameters.
    if (mCamera.getParameters().getMaxNumDetectedFaces() <= 0) {
        Log.e(TAG, "Face detection not supported");
        return -1;
    }
    Log.d(TAG, "Maximum detectable faces=" + mCamera.getParameters().getMaxNumDetectedFaces());
    MyFaceDetectionListener fDListener = new MyFaceDetectionListener();
    mCamera.setFaceDetectionListener(fDListener);
    mCamera.startFaceDetection();
    faceDetectionRunning = true;
    return 1;
}

public int stopFaceDetection() {
    if (faceDetectionRunning) {
        mCamera.stopFaceDetection();
        faceDetectionRunning = false;
        return 1;
    }
    return 0;
}

private class MyFaceDetectionListener implements Camera.FaceDetectionListener {
    @Override
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        if (faces.length == 0) {
            Log.i(TAG, "No faces detected");
        } else {
            Log.i(TAG, "Faces detected = " + faces.length);
            List<Rect> faceRects = new ArrayList<Rect>();
            for (Camera.Face face : faces) {
                // face.rect is in the camera driver's coordinate space,
                // (-1000,-1000) to (1000,1000), not in view pixels.
                faceRects.add(new Rect(face.rect.left, face.rect.top,
                        face.rect.right, face.rect.bottom));
            }
            // TODO: draw the rects on a view/surface/canvas
        }
    }
}
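Those rects can't be drawn as-is: per the Camera.Face docs they live in the driver's (-1000..1000) space. A minimal mapping sketch, assuming the caller supplies the overlay view size, whether the camera mirrors the image (front camera), and the display orientation:

import android.graphics.Matrix;
import android.graphics.Rect;
import android.graphics.RectF;

// Map a face rect from driver space (-1000..1000 on both axes) into
// pixel coordinates of the preview view, following the Camera.Face docs.
private Rect mapFaceRectToView(Rect driverRect, int viewWidth, int viewHeight,
                               boolean mirror, int displayOrientation) {
    Matrix matrix = new Matrix();
    matrix.setScale(mirror ? -1 : 1, 1);      // front camera is mirrored
    matrix.postRotate(displayOrientation);    // match the preview rotation
    matrix.postScale(viewWidth / 2000f, viewHeight / 2000f);
    matrix.postTranslate(viewWidth / 2f, viewHeight / 2f);

    RectF mapped = new RectF(driverRect);
    matrix.mapRect(mapped);
    Rect out = new Rect();
    mapped.round(out);
    return out;
}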
2. There is another class that can be used for face detection, android.media.FaceDetector, which accepts a bitmap and a FaceDetector.Face array, so I can use the photo capture interval as the detection rate. NOTE: the bitmap the detector receives MUST be converted to RGB_565, or no faces will be detected at all.
The results are NOT as good as the first approach!
For now I set the interval between taking pictures to 500 ms, create a bitmap from the byte data once a picture has been taken, and then detect faces in this bitmap. It seems to make the device overheat.
mPictureCallback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        mFrameBmp = createRGB565BitmapFromBytes(data);
        Log.d(TAG, "bitmap info, width=" + mFrameBmp.getWidth()
                + ", height=" + mFrameBmp.getHeight());
        clearDetectedFaces();
        // findFaces() fills mFaces and returns the number of faces found.
        Log.d(TAG, "detectedFaces=" + mFaceDetector.findFaces(mFrameBmp, mFaces));
        // takePicture() stops the preview; restart it so the next capture works.
        camera.startPreview();
    }
};
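For reference, the 500 ms capture loop can be driven with a Handler. This is only a sketch (the Runnable and constant names are mine; mCamera and mPictureCallback are as above):

import android.os.Handler;

private static final long CAPTURE_INTERVAL_MS = 500;
private final Handler mHandler = new Handler();

private final Runnable mCaptureLoop = new Runnable() {
    @Override
    public void run() {
        // onPictureTaken() restarts the preview, so the camera is ready
        // again by the time the next capture fires.
        mCamera.takePicture(null, null, mPictureCallback);
        mHandler.postDelayed(this, CAPTURE_INTERVAL_MS);
    }
};

Start it with mHandler.post(mCaptureLoop) and stop it with mHandler.removeCallbacks(mCaptureLoop).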
public static Bitmap createRGB565BitmapFromBytes(byte[] data) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inMutable = true;
    // android.media.FaceDetector only finds faces in RGB_565 bitmaps.
    options.inPreferredConfig = Bitmap.Config.RGB_565;
    return BitmapFactory.decodeByteArray(data, 0, data.length, options);
}
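For completeness, here is one way the missing pieces (mFaceDetector, mFaces, clearDetectedFaces()) might look. This is a sketch assuming a fixed cap of 10 faces; note that android.media.FaceDetector is constructed for one fixed image size and requires an even width:

import java.util.Arrays;
import android.media.FaceDetector;

private static final int MAX_FACES = 10; // assumed cap for this sketch
private FaceDetector mFaceDetector;
private FaceDetector.Face[] mFaces = new FaceDetector.Face[MAX_FACES];

private void initFaceDetector(int width, int height) {
    // The detector is fixed to the given bitmap size; the width must be even.
    mFaceDetector = new FaceDetector(width, height, MAX_FACES);
}

private void clearDetectedFaces() {
    // findFaces() only fills slots for faces it finds, so clear stale entries.
    Arrays.fill(mFaces, null);
}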
DAY 2:
The results of the second approach are unacceptable. I also found that com.google.android.gms.vision.face.FaceDetector seems to be available, but most of the devices I target do not have Google services.
So I'm going to check how Android controls the maximum number of faces that can be detected.
Camera.startFaceDetection() invokes native_startFaceDetection(), which calls android_hardware_Camera_startFaceDetection() in frameworks/base/core/jni/android_hardware_Camera.cpp:
static void android_hardware_Camera_startFaceDetection(JNIEnv *env, jobject thiz,
        jint type)
{
    ALOGV("startFaceDetection");
    JNICameraContext* context;
    sp<Camera> camera = get_native_camera(env, thiz, &context);
    if (camera == 0) return;

    status_t rc = camera->sendCommand(CAMERA_CMD_START_FACE_DETECTION, type, 0);
    if (rc == BAD_VALUE) {
        char msg[64];
        snprintf(msg, sizeof(msg), "invalid face detection type=%d", type);
        jniThrowException(env, "java/lang/IllegalArgumentException", msg);
    } else if (rc != NO_ERROR) {
        jniThrowRuntimeException(env, "start face detection failed");
    }
}
The working line in the block above seems to be "status_t rc = camera->sendCommand(CAMERA_CMD_START_FACE_DETECTION, type, 0);", but I still haven't found where Android actually implements the command.
DAY 3:
There are two samples on GitHub. One is android-vision from googlesamples, which depends on Google services, so I can't make it run on my box. The other is an official sample from OpenCV, which requires the NDK to build.
Let's check out how the code works.
The OpenCV library is loaded in the onResume() callback. If loading succeeds, the onManagerConnected() callback fires; there the sample instantiates the face detector(s) by loading the corresponding cascade file(s) and then starts the camera preview. Each frame then arrives through the preview's onCameraFrame() callback, and the detector built earlier finds the faces in it, if any exist.
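Put together, the flow looks roughly like this in Java. This is only a sketch of the pattern, assuming an OpenCV 2.4.x SDK; the activity name, layout ids, and mCascadePath (the cascade XML copied to local storage) are mine, not the sample's:

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.objdetect.CascadeClassifier;

public class FaceCountActivity extends Activity
        implements CameraBridgeViewBase.CvCameraViewListener2 {

    private static final String TAG = "FaceCount";
    private CameraBridgeViewBase mCameraView;   // a JavaCameraView in the layout
    private CascadeClassifier mFaceDetector;
    private String mCascadePath;                // cascade XML copied to local storage

    private final BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            if (status == LoaderCallbackInterface.SUCCESS) {
                // Library loaded: build the detector from the cascade file,
                // then start the camera preview.
                mFaceDetector = new CascadeClassifier(mCascadePath);
                mCameraView.setCvCameraViewListener(FaceCountActivity.this);
                mCameraView.enableView();
            } else {
                super.onManagerConnected(status);
            }
        }
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_face_count);  // assumed layout
        mCameraView = (CameraBridgeViewBase) findViewById(R.id.camera_view);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Load the CV library; on success this triggers onManagerConnected().
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_9, this, mLoaderCallback);
    }

    @Override
    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        // Detect faces on the grayscale version of each preview frame.
        MatOfRect faces = new MatOfRect();
        mFaceDetector.detectMultiScale(inputFrame.gray(), faces);
        Log.i(TAG, "faces=" + faces.toArray().length);
        return inputFrame.rgba();
    }

    @Override
    public void onCameraViewStarted(int width, int height) { }

    @Override
    public void onCameraViewStopped() { }
}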
The official C++ sample code to detect both faces and eyes is:
//-- Detect faces
face_cascade.detectMultiScale( frame_gray, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE, Size(30, 30) );

for( size_t i = 0; i < faces.size(); i++ )
{
    Point center( faces[i].x + faces[i].width*0.5, faces[i].y + faces[i].height*0.5 );
    ellipse( frame, center, Size( faces[i].width*0.5, faces[i].height*0.5 ), 0, 0, 360, Scalar( 255, 0, 255 ), 4, 8, 0 );

    Mat faceROI = frame_gray( faces[i] );
    std::vector<Rect> eyes;

    //-- In each face, detect eyes
    eyes_cascade.detectMultiScale( faceROI, eyes, 1.1, 2, 0 |CV_HAAR_SCALE_IMAGE, Size(30, 30) );

    for( size_t j = 0; j < eyes.size(); j++ )
    {
        Point center( faces[i].x + eyes[j].x + eyes[j].width*0.5, faces[i].y + eyes[j].y + eyes[j].height*0.5 );
        int radius = cvRound( (eyes[j].width + eyes[j].height)*0.25 );
        circle( frame, center, radius, Scalar( 255, 0, 0 ), 4, 8, 0 );
    }
}
The parameter frame_gray above is a grayscale image of one of the camera preview frames.
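With the Java bindings, the same grayscale input can be produced per frame roughly like this (a sketch; frameRgba is my name for the RGBA preview frame):

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Convert an RGBA preview frame to the grayscale image the cascade expects;
// equalizeHist() mirrors the preprocessing in the official C++ sample.
Mat frameGray = new Mat();
Imgproc.cvtColor(frameRgba, frameGray, Imgproc.COLOR_RGBA2GRAY);
Imgproc.equalizeHist(frameGray, frameGray);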