Original article: Augmented Reality in Android with Google's Face API
Author: Joey deVilla
Translator: kmyhy
If you've used Snapchat's "Lenses" feature, you've used a combination of augmented reality and face detection.
Augmented reality, or AR, is an impressive-sounding name for a simple concept: overlaying computer-generated imagery on top of real-world imagery. Face detection, on the other hand, is effortless for humans but still a relatively new technology for computers, especially on mobile devices.
Writing apps that feature AR and face detection used to require serious programming chops, but Google's Mobile Vision suite and its Face API make things much simpler.
In this tutorial, you'll build FaceSpotter, an app in the spirit of Snapchat Lenses that draws cartoon features over faces in the camera feed.
Along the way, you'll learn how to set up the Face API in an Android app, detect and track faces through the camera, identify facial landmarks, use classifications to tell whether eyes are open and whether a face is smiling, and draw graphics over the faces you're tracking.
Note: This tutorial assumes you're familiar with Android development in Java. If you're new to Android or to Android Studio, please read our Android tutorials first.
Google's Face API performs face detection, which locates faces in photographs, along with their position (where they are in the picture) and orientation (which way they're facing, relative to the camera). It can detect landmarks (facial features) and perform classifications on them, determining whether eyes are open or closed and whether a face is smiling. The Face API can also detect and follow faces in moving images; this is called face tracking.
Note that the Face API is limited to detecting human faces. Sorry, cat fanciers…
The Face API doesn't do face recognition, which uniquely identifies a given face. It can't detect a face in an image and then tell you who it is, the way Facebook does.
Once you've detected a face and its landmarks in an image, you can augment that image with your own reality! Apps like Pokémon GO and Snapchat use the user's camera plus augmented reality to create entertaining effects, and so can you!
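Before diving into the live-camera pipeline this app uses, it may help to see the Face API at its simplest: one-shot detection on a still image. Here's a minimal sketch (the detectFaces helper and its use of a preloaded Bitmap are my own illustration, not part of the starter project):
import android.content.Context;
import android.graphics.Bitmap;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;

// Detect faces in a single still image.
static SparseArray<Face> detectFaces(Context context, Bitmap bitmap) {
    FaceDetector detector = new FaceDetector.Builder(context)
            .setTrackingEnabled(false)  // one still image, so no tracking needed
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .build();
    Frame frame = new Frame.Builder().setBitmap(bitmap).build();
    SparseArray<Face> faces = detector.detect(frame);
    detector.release();  // free the underlying native detector
    return faces;
}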
Download the FaceSpotter starter project here and open it in Android Studio. Build and run the app, and it will ask for permission to use the camera.
Click ALLOW, then point the camera at someone's face.
The button at the lower left of the app switches between the front and back cameras.
The project comes preloaded with everything you need to dive right into face detection and tracking. Let's take a look at what's inside.
Open the project's build.gradle (Module: app):
At the end of the dependencies section, you'll see:
compile 'com.google.android.gms:play-services-vision:10.2.0'
compile 'com.android.support:design:25.2.0'
The first line imports the Android Vision API, which supports not only face detection, but barcode detection and text recognition as well.
The second imports the Android Design Support Library, which provides the Snackbar widget used to inform the user that the app needs camera access.
FaceSpotter declares its use of the camera and requests the user's permission in AndroidManifest.xml:
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />
The starter project comes with a few pre-defined classes: FaceActivity (the app's only activity), FaceTracker (follows each individual face), FaceGraphic (draws graphics over a tracked face), FaceData (a model holding the data for one face), and supporting classes such as GraphicOverlay and EyePhysics.
Let's take a quick look at how they're used.
FaceActivity defines the app's only activity, and in addition to handling touch events, it also requests camera permission at runtime (to support Android 6.0 and above). FaceActivity also creates two objects that FaceSpotter depends on: a CameraSource and a FaceDetector.
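For reference, the heart of that runtime permission request boils down to something like the sketch below (the request code is arbitrary and the names are illustrative; the starter's actual code also shows a Snackbar explaining why it needs the camera):
// Uses android.Manifest and android.support.v4.app.ActivityCompat.
private static final int RC_HANDLE_CAMERA_PERM = 2;  // arbitrary request code

// Ask the user for camera access at runtime (required on Android 6.0+).
private void requestCameraPermission() {
    final String[] permissions = new String[]{Manifest.permission.CAMERA};
    ActivityCompat.requestPermissions(this, permissions, RC_HANDLE_CAMERA_PERM);
}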
Open FaceActivity.java and find the createCameraSource method:
private void createCameraSource() {
    Context context = getApplicationContext();

    // 1
    FaceDetector detector = createFaceDetector(context);

    // 2
    int facing = CameraSource.CAMERA_FACING_FRONT;
    if (!mIsFrontFacing) {
        facing = CameraSource.CAMERA_FACING_BACK;
    }

    // 3
    mCameraSource = new CameraSource.Builder(context, detector)
            .setFacing(facing)
            .setRequestedPreviewSize(320, 240)
            .setRequestedFps(60.0f)
            .setAutoFocusEnabled(true)
            .build();
}
Taking the numbered comments in turn:
1. Create the FaceDetector (you'll look at createFaceDetector next).
2. Pick which camera to use, based on the mIsFrontFacing flag.
3. Use the results of the first two steps to create a camera source with the Builder pattern. The builder methods are: setFacing, which specifies which camera the source uses; setRequestedPreviewSize, which requests the preview resolution (lower resolutions are processed faster); setRequestedFps, which requests the camera's frame rate; and setAutoFocusEnabled, which turns on autofocus for sharper images and better detection of faces farther away.
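Creating the CameraSource doesn't start the camera; the starter's preview view takes care of that when the activity starts. In case you're curious, starting and releasing a camera source typically looks like this sketch (assuming you have a SurfaceHolder from a SurfaceView, with error handling kept to a minimum):
try {
    mCameraSource.start(surfaceHolder);  // starts the camera and feeds frames to the detector
} catch (IOException e) {
    Log.e(TAG, "Unable to start camera source.", e);
    mCameraSource.release();
    mCameraSource = null;
}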
Next, take a look at the createFaceDetector method:
@NonNull
private FaceDetector createFaceDetector(final Context context) {
    // 1
    FaceDetector detector = new FaceDetector.Builder(context)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .setTrackingEnabled(true)
            .setMode(FaceDetector.FAST_MODE)
            .setProminentFaceOnly(mIsFrontFacing)
            .setMinFaceSize(mIsFrontFacing ? 0.35f : 0.15f)
            .build();

    // 2
    MultiProcessor.Factory<Face> factory = new MultiProcessor.Factory<Face>() {
        @Override
        public Tracker<Face> create(Face face) {
            return new FaceTracker(mGraphicOverlay, context, mIsFrontFacing);
        }
    };

    // 3
    Detector.Processor<Face> processor = new MultiProcessor.Builder<>(factory).build();
    detector.setProcessor(processor);

    // 4
    if (!detector.isOperational()) {
        Log.w(TAG, "Face detector dependencies are not yet available.");

        // Check the device's storage. If there's little available storage, the native
        // face detection library will not be downloaded, and the app won't work,
        // so notify the user.
        IntentFilter lowStorageFilter = new IntentFilter(Intent.ACTION_DEVICE_STORAGE_LOW);
        boolean hasLowStorage = registerReceiver(null, lowStorageFilter) != null;

        if (hasLowStorage) {
            Log.w(TAG, getString(R.string.low_storage_error));
            DialogInterface.OnClickListener listener = new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int id) {
                    finish();
                }
            };
            AlertDialog.Builder builder = new AlertDialog.Builder(this);
            builder.setTitle(R.string.app_name)
                    .setMessage(R.string.low_storage_error)
                    .setPositiveButton(R.string.disappointed_ok, listener)
                    .show();
        }
    }
    return detector;
}
Taking the numbered comments in turn:
1. Create a FaceDetector with the Builder pattern, setting the following properties: detect all landmarks; perform all classifications (eyes open/closed, smiling); enable face tracking; use FAST_MODE, which trades some accuracy for speed; track only the most prominent face when using the front camera; and set the minimum detectable face size, expressed as a proportion of the image's width.
2. Create a factory class that spawns a new FaceTracker instance for each detected face.
3. Build a MultiProcessor from that factory and attach it to the detector, so that every face the detector reports is routed to its own FaceTracker.
4. Check that the detector is operational. The first time it's used, the native face detection library is downloaded to the device; if the device is low on storage the download will fail and the app won't work, so notify the user.
Now that the background's out of the way, let's try detecting some faces!
First, you'll add a view for drawing the face detection data.
Open FaceGraphic.java. You'll notice that the mFace instance variable is declared with the volatile keyword. mFace holds face data sent from a FaceTracker, and it may be written to by many threads. Marking it volatile guarantees that every read returns the most recently written value. This matters because the face data changes very frequently.
Delete the draw() method from FaceGraphic and add the following methods:
// 1
void update(Face face) {
    mFace = face;
    postInvalidate(); // Trigger a redraw of the graphic (i.e. cause draw() to be called).
}

@Override
public void draw(Canvas canvas) {
    // 2
    // Confirm that the face and its features are still visible
    // before drawing any graphics over it.
    Face face = mFace;
    if (face == null) {
        return;
    }

    // 3
    float centerX = translateX(face.getPosition().x + face.getWidth() / 2.0f);
    float centerY = translateY(face.getPosition().y + face.getHeight() / 2.0f);
    float offsetX = scaleX(face.getWidth() / 2.0f);
    float offsetY = scaleY(face.getHeight() / 2.0f);

    // 4
    // Draw a box around the face.
    float left = centerX - offsetX;
    float right = centerX + offsetX;
    float top = centerY - offsetY;
    float bottom = centerY + offsetY;

    // 5
    canvas.drawRect(left, top, right, bottom, mHintOutlinePaint);

    // 6
    // Draw the face's id.
    canvas.drawText(String.format("id: %d", face.getId()), centerX, centerY, mHintTextPaint);
}
Taking the numbered comments in turn:
1. update() stores the latest face data in mFace, then calls postInvalidate(), which triggers a redraw.
2. Copy mFace into a local variable and bail out of draw() if it's null; the face and its features may no longer be visible.
3. Translate the face's position and dimensions from the camera's coordinate system to the view's, computing the face's center along with its half-width and half-height.
4. Use those values to compute the corners of a box around the face.
5. Draw the box.
6. Draw the face's tracking id at the center of the box.
In FaceActivity, the face detector sends the face data it detects in the camera's data stream to its associated multiprocessor. For each face it receives, the multiprocessor spawns a new FaceTracker instance.
Add the following methods to FaceTracker.java, after the constructor:
// 1
@Override
public void onNewItem(int id, Face face) {
    mFaceGraphic = new FaceGraphic(mOverlay, mContext, mIsFrontFacing);
}

// 2
@Override
public void onUpdate(FaceDetector.Detections<Face> detectionResults, Face face) {
    mOverlay.add(mFaceGraphic);
    mFaceGraphic.update(face);
}

// 3
@Override
public void onMissing(FaceDetector.Detections<Face> detectionResults) {
    mOverlay.remove(mFaceGraphic);
}

@Override
public void onDone() {
    mOverlay.remove(mFaceGraphic);
}
Taking the numbered comments in turn:
1. onNewItem() is called when a new face is detected; it creates a new FaceGraphic for that face.
2. onUpdate() is called when a tracked face changes; it adds the FaceGraphic to the overlay and passes the latest face data to it.
3. onMissing() and onDone() are called when a tracked face is lost temporarily or for good; both remove the FaceGraphic from the overlay.
Run the app. It will now draw a box around each face it detects, along with that face's id number:
The Face API can also identify facial landmarks.
Next, you'll modify the app so that it identifies the following features of any face it's tracking: the left and right eyes, the base of the nose, and the left, right, and bottom points of the mouth. (The API can also report cheek and ear landmarks, but this app doesn't use them.)
This information will be kept in a FaceData object, rather than in the Face object.
When it comes to facial landmarks, left and right refer to the subject's left and right. Viewed through the front camera, the subject's right eye will be on the right side of the screen, but through the back camera it will be on the left.
Open FaceTracker.java and modify the onUpdate() method as shown below. The call to update() will raise a compiler error because you haven't yet changed the app to use the FaceData model; you'll fix that in a moment.
@Override
public void onUpdate(FaceDetector.Detections<Face> detectionResults, Face face) {
    mOverlay.add(mFaceGraphic);

    // Get face dimensions.
    mFaceData.setPosition(face.getPosition());
    mFaceData.setWidth(face.getWidth());
    mFaceData.setHeight(face.getHeight());

    // Get the positions of facial landmarks.
    // (The cheek and ear landmarks aren't used by FaceGraphic in this tutorial,
    // so they aren't stored here.)
    updatePreviousLandmarkPositions(face);
    mFaceData.setLeftEyePosition(getLandmarkPosition(face, Landmark.LEFT_EYE));
    mFaceData.setRightEyePosition(getLandmarkPosition(face, Landmark.RIGHT_EYE));
    mFaceData.setNoseBasePosition(getLandmarkPosition(face, Landmark.NOSE_BASE));
    mFaceData.setMouthLeftPosition(getLandmarkPosition(face, Landmark.LEFT_MOUTH));
    mFaceData.setMouthBottomPosition(getLandmarkPosition(face, Landmark.BOTTOM_MOUTH));
    mFaceData.setMouthRightPosition(getLandmarkPosition(face, Landmark.RIGHT_MOUTH));

    mFaceGraphic.update(mFaceData);
}
Note that you're now passing a FaceData instance to FaceGraphic's update method, instead of the Face object that onUpdate receives.
This lets you control exactly which face information gets passed to FaceTracker, and it lets you use a couple of computational tricks: when a face moves too quickly for the detector, you can infer the current positions of its landmarks from their most recent known positions. You'll use mPreviousLandmarkPositions along with the getLandmarkPosition and updatePreviousLandmarkPositions methods for this purpose.
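The starter project already provides these helpers, so there's nothing to write here. To give you an idea of the trick, a getLandmarkPosition implementation along these lines would return the last known position when the detector comes up empty (assuming mPreviousLandmarkPositions is a SparseArray<PointF> keyed by landmark type; this sketch is not necessarily the starter's exact code):
// Return the given landmark's position, falling back to the most recently
// recorded position if the detector didn't report that landmark this frame.
private PointF getLandmarkPosition(Face face, int landmarkId) {
    for (Landmark landmark : face.getLandmarks()) {
        if (landmark.getType() == landmarkId) {
            return landmark.getPosition();
        }
    }
    return mPreviousLandmarkPositions.get(landmarkId);  // may be null
}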
Next, open FaceGraphic.java.
First, since FaceGraphic now receives FaceData objects from FaceTracker instead of Face objects, change this declaration:
private volatile Face mFace;
to:
private volatile FaceData mFaceData;
Then modify the update() method to take a FaceData argument:
void update(FaceData faceData) {
    mFaceData = faceData;
    postInvalidate(); // Trigger a redraw of the graphic (i.e. cause draw() to be called).
}
Finally, modify the draw() method so that it draws dots and text labels over the tracked face to mark its landmarks:
@Override
public void draw(Canvas canvas) {
    final float DOT_RADIUS = 3.0f;
    final float TEXT_OFFSET_Y = -30.0f;

    // Confirm that the face and its features are still visible before drawing any graphics over it.
    if (mFaceData == null) {
        return;
    }

    // 1
    PointF detectPosition = mFaceData.getPosition();
    PointF detectLeftEyePosition = mFaceData.getLeftEyePosition();
    PointF detectRightEyePosition = mFaceData.getRightEyePosition();
    PointF detectNoseBasePosition = mFaceData.getNoseBasePosition();
    PointF detectMouthLeftPosition = mFaceData.getMouthLeftPosition();
    PointF detectMouthBottomPosition = mFaceData.getMouthBottomPosition();
    PointF detectMouthRightPosition = mFaceData.getMouthRightPosition();
    if ((detectPosition == null) ||
        (detectLeftEyePosition == null) ||
        (detectRightEyePosition == null) ||
        (detectNoseBasePosition == null) ||
        (detectMouthLeftPosition == null) ||
        (detectMouthBottomPosition == null) ||
        (detectMouthRightPosition == null)) {
        return;
    }

    // 2
    float leftEyeX = translateX(detectLeftEyePosition.x);
    float leftEyeY = translateY(detectLeftEyePosition.y);
    canvas.drawCircle(leftEyeX, leftEyeY, DOT_RADIUS, mHintOutlinePaint);
    canvas.drawText("left eye", leftEyeX, leftEyeY + TEXT_OFFSET_Y, mHintTextPaint);

    float rightEyeX = translateX(detectRightEyePosition.x);
    float rightEyeY = translateY(detectRightEyePosition.y);
    canvas.drawCircle(rightEyeX, rightEyeY, DOT_RADIUS, mHintOutlinePaint);
    canvas.drawText("right eye", rightEyeX, rightEyeY + TEXT_OFFSET_Y, mHintTextPaint);

    float noseBaseX = translateX(detectNoseBasePosition.x);
    float noseBaseY = translateY(detectNoseBasePosition.y);
    canvas.drawCircle(noseBaseX, noseBaseY, DOT_RADIUS, mHintOutlinePaint);
    canvas.drawText("nose base", noseBaseX, noseBaseY + TEXT_OFFSET_Y, mHintTextPaint);

    float mouthLeftX = translateX(detectMouthLeftPosition.x);
    float mouthLeftY = translateY(detectMouthLeftPosition.y);
    canvas.drawCircle(mouthLeftX, mouthLeftY, DOT_RADIUS, mHintOutlinePaint);
    canvas.drawText("mouth left", mouthLeftX, mouthLeftY + TEXT_OFFSET_Y, mHintTextPaint);

    float mouthRightX = translateX(detectMouthRightPosition.x);
    float mouthRightY = translateY(detectMouthRightPosition.y);
    canvas.drawCircle(mouthRightX, mouthRightY, DOT_RADIUS, mHintOutlinePaint);
    canvas.drawText("mouth right", mouthRightX, mouthRightY + TEXT_OFFSET_Y, mHintTextPaint);

    float mouthBottomX = translateX(detectMouthBottomPosition.x);
    float mouthBottomY = translateY(detectMouthBottomPosition.y);
    canvas.drawCircle(mouthBottomX, mouthBottomY, DOT_RADIUS, mHintOutlinePaint);
    canvas.drawText("mouth bottom", mouthBottomX, mouthBottomY + TEXT_OFFSET_Y, mHintTextPaint);
}
Note the following:
1. The method grabs local copies of the face's position and each landmark position, then bails out if any of them is null. A face can move out of view (or move too quickly for the detector) between updates, and drawing with missing data would be pointless at best.
2. Each landmark position is translated from the camera's coordinate system to the view's with translateX and translateY, then marked with a dot and a text label.
Run the app. You should see something like this:
With multiple faces, it looks like this:
Now that you can identify facial landmarks, it's time to start drawing cartoon features! But first, let's look at classifications.
The Face class provides these classification-related methods: getIsLeftEyeOpenProbability(), getIsRightEyeOpenProbability(), and getIsSmilingProbability().
Each returns a float between 0 (very unlikely) and 1 (certain). You can use these results to decide whether each eye is open and whether the face is smiling, and then pass that information on to FaceGraphic.
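For example, a bare-bones check might look like this (the 0.5f threshold is arbitrary; the code you'll write below uses 0.4 for eyes and 0.8 for smiles, and also handles the case where a probability couldn't be computed):
// Treat probabilities above a chosen threshold as "true".
boolean leftEyeOpen = face.getIsLeftEyeOpenProbability() > 0.5f;
boolean rightEyeOpen = face.getIsRightEyeOpenProbability() > 0.5f;
boolean smiling = face.getIsSmilingProbability() > 0.5f;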
Now modify FaceTracker to support classifications. First, add two new instance variables to FaceTracker to keep track of each eye's previous state. As with landmarks, the face detector can fail to compute an eye's state when the subject moves quickly, so having a previous state to fall back on comes in handy:
private boolean mPreviousIsLeftEyeOpen = true;
private boolean mPreviousIsRightEyeOpen = true;
Then change onUpdate() as follows:
@Override
public void onUpdate(FaceDetector.Detections<Face> detectionResults, Face face) {
    mOverlay.add(mFaceGraphic);
    updatePreviousLandmarkPositions(face);

    // Get face dimensions.
    mFaceData.setPosition(face.getPosition());
    mFaceData.setWidth(face.getWidth());
    mFaceData.setHeight(face.getHeight());

    // Get the positions of facial landmarks.
    // (As before, the cheek and ear landmarks aren't used by FaceGraphic,
    // so they aren't stored here.)
    mFaceData.setLeftEyePosition(getLandmarkPosition(face, Landmark.LEFT_EYE));
    mFaceData.setRightEyePosition(getLandmarkPosition(face, Landmark.RIGHT_EYE));
    mFaceData.setNoseBasePosition(getLandmarkPosition(face, Landmark.NOSE_BASE));
    mFaceData.setMouthLeftPosition(getLandmarkPosition(face, Landmark.LEFT_MOUTH));
    mFaceData.setMouthBottomPosition(getLandmarkPosition(face, Landmark.BOTTOM_MOUTH));
    mFaceData.setMouthRightPosition(getLandmarkPosition(face, Landmark.RIGHT_MOUTH));

    // 1
    final float EYE_CLOSED_THRESHOLD = 0.4f;
    float leftOpenScore = face.getIsLeftEyeOpenProbability();
    if (leftOpenScore == Face.UNCOMPUTED_PROBABILITY) {
        mFaceData.setLeftEyeOpen(mPreviousIsLeftEyeOpen);
    } else {
        mFaceData.setLeftEyeOpen(leftOpenScore > EYE_CLOSED_THRESHOLD);
        mPreviousIsLeftEyeOpen = mFaceData.isLeftEyeOpen();
    }
    float rightOpenScore = face.getIsRightEyeOpenProbability();
    if (rightOpenScore == Face.UNCOMPUTED_PROBABILITY) {
        mFaceData.setRightEyeOpen(mPreviousIsRightEyeOpen);
    } else {
        mFaceData.setRightEyeOpen(rightOpenScore > EYE_CLOSED_THRESHOLD);
        mPreviousIsRightEyeOpen = mFaceData.isRightEyeOpen();
    }

    // 2
    // Determine if the person is smiling.
    final float SMILING_THRESHOLD = 0.8f;
    mFaceData.setSmiling(face.getIsSmilingProbability() > SMILING_THRESHOLD);

    mFaceGraphic.update(mFaceData);
}
There are two changes to note:
1. Eye state: when the detector can't compute the probability that an eye is open, the probability methods return Face.UNCOMPUTED_PROBABILITY, and the code falls back to that eye's previous state. Otherwise, it compares the probability to EYE_CLOSED_THRESHOLD and remembers the result for next time.
2. Smiling: the face counts as smiling when getIsSmilingProbability() exceeds SMILING_THRESHOLD.
Now that you have facial landmarks and classifications, you can draw cartoon features over each tracked face.
Modify FaceGraphic's draw method as follows:
@Override
public void draw(Canvas canvas) {
    // Confirm that the face and its features are still visible
    // before drawing any graphics over it.
    if (mFaceData == null) {
        return;
    }

    PointF detectPosition = mFaceData.getPosition();
    PointF detectLeftEyePosition = mFaceData.getLeftEyePosition();
    PointF detectRightEyePosition = mFaceData.getRightEyePosition();
    PointF detectNoseBasePosition = mFaceData.getNoseBasePosition();
    PointF detectMouthLeftPosition = mFaceData.getMouthLeftPosition();
    PointF detectMouthBottomPosition = mFaceData.getMouthBottomPosition();
    PointF detectMouthRightPosition = mFaceData.getMouthRightPosition();
    if ((detectPosition == null) ||
        (detectLeftEyePosition == null) ||
        (detectRightEyePosition == null) ||
        (detectNoseBasePosition == null) ||
        (detectMouthLeftPosition == null) ||
        (detectMouthBottomPosition == null) ||
        (detectMouthRightPosition == null)) {
        return;
    }

    // Face position and dimensions
    PointF position = new PointF(translateX(detectPosition.x),
                                 translateY(detectPosition.y));
    float width = scaleX(mFaceData.getWidth());
    float height = scaleY(mFaceData.getHeight());

    // Eye coordinates
    PointF leftEyePosition = new PointF(translateX(detectLeftEyePosition.x),
                                        translateY(detectLeftEyePosition.y));
    PointF rightEyePosition = new PointF(translateX(detectRightEyePosition.x),
                                         translateY(detectRightEyePosition.y));

    // Eye state
    boolean leftEyeOpen = mFaceData.isLeftEyeOpen();
    boolean rightEyeOpen = mFaceData.isRightEyeOpen();

    // Nose coordinates
    PointF noseBasePosition = new PointF(translateX(detectNoseBasePosition.x),
                                         translateY(detectNoseBasePosition.y));

    // Mouth coordinates
    PointF mouthLeftPosition = new PointF(translateX(detectMouthLeftPosition.x),
                                          translateY(detectMouthLeftPosition.y));
    PointF mouthRightPosition = new PointF(translateX(detectMouthRightPosition.x),
                                           translateY(detectMouthRightPosition.y));
    PointF mouthBottomPosition = new PointF(translateX(detectMouthBottomPosition.x),
                                            translateY(detectMouthBottomPosition.y));

    // Smile state
    boolean smiling = mFaceData.isSmiling();

    // Calculate the distance between the eyes using Pythagoras' formula,
    // and use that distance to set the size of the eyes and irises.
    final float EYE_RADIUS_PROPORTION = 0.45f;
    final float IRIS_RADIUS_PROPORTION = EYE_RADIUS_PROPORTION / 2.0f;
    float distance = (float) Math.sqrt(
        (rightEyePosition.x - leftEyePosition.x) * (rightEyePosition.x - leftEyePosition.x) +
        (rightEyePosition.y - leftEyePosition.y) * (rightEyePosition.y - leftEyePosition.y));
    float eyeRadius = EYE_RADIUS_PROPORTION * distance;
    float irisRadius = IRIS_RADIUS_PROPORTION * distance;

    // Draw the eyes.
    drawEye(canvas, leftEyePosition, eyeRadius, leftEyePosition, irisRadius, leftEyeOpen, smiling);
    drawEye(canvas, rightEyePosition, eyeRadius, rightEyePosition, irisRadius, rightEyeOpen, smiling);

    // Draw the nose.
    drawNose(canvas, noseBasePosition, leftEyePosition, rightEyePosition, width);

    // Draw the mustache.
    drawMustache(canvas, noseBasePosition, mouthLeftPosition, mouthRightPosition);
}
And here are the methods that draw the eyes, nose, and mustache:
private void drawEye(Canvas canvas,
                     PointF eyePosition, float eyeRadius,
                     PointF irisPosition, float irisRadius,
                     boolean eyeOpen, boolean smiling) {
    if (eyeOpen) {
        canvas.drawCircle(eyePosition.x, eyePosition.y, eyeRadius, mEyeWhitePaint);
        if (smiling) {
            mHappyStarGraphic.setBounds(
                (int)(irisPosition.x - irisRadius),
                (int)(irisPosition.y - irisRadius),
                (int)(irisPosition.x + irisRadius),
                (int)(irisPosition.y + irisRadius));
            mHappyStarGraphic.draw(canvas);
        } else {
            canvas.drawCircle(irisPosition.x, irisPosition.y, irisRadius, mIrisPaint);
        }
    } else {
        canvas.drawCircle(eyePosition.x, eyePosition.y, eyeRadius, mEyelidPaint);
        float y = eyePosition.y;
        float start = eyePosition.x - eyeRadius;
        float end = eyePosition.x + eyeRadius;
        canvas.drawLine(start, y, end, y, mEyeOutlinePaint);
    }
    canvas.drawCircle(eyePosition.x, eyePosition.y, eyeRadius, mEyeOutlinePaint);
}

private void drawNose(Canvas canvas,
                      PointF noseBasePosition,
                      PointF leftEyePosition, PointF rightEyePosition,
                      float faceWidth) {
    final float NOSE_FACE_WIDTH_RATIO = (float)(1 / 5.0);
    float noseWidth = faceWidth * NOSE_FACE_WIDTH_RATIO;
    int left = (int)(noseBasePosition.x - (noseWidth / 2));
    int right = (int)(noseBasePosition.x + (noseWidth / 2));
    int top = (int)(leftEyePosition.y + rightEyePosition.y) / 2;
    int bottom = (int)noseBasePosition.y;
    mPigNoseGraphic.setBounds(left, top, right, bottom);
    mPigNoseGraphic.draw(canvas);
}

private void drawMustache(Canvas canvas,
                          PointF noseBasePosition,
                          PointF mouthLeftPosition, PointF mouthRightPosition) {
    int left = (int)mouthLeftPosition.x;
    int top = (int)noseBasePosition.y;
    int right = (int)mouthRightPosition.x;
    int bottom = (int)Math.min(mouthLeftPosition.y, mouthRightPosition.y);
    if (mIsFrontFacing) {
        mMustacheGraphic.setBounds(left, top, right, bottom);
    } else {
        mMustacheGraphic.setBounds(right, top, left, bottom);
    }
    mMustacheGraphic.draw(canvas);
}
Run the app and point the camera at a face. For a face with both eyes open and no smile, you'll see something like this:
Here's me winking my right eye (so it's drawn closed) while smiling (which puts a smiling star in each iris):
The app can draw cartoon features over several faces at once…
…and even over illustrations, as long as they're realistic enough:
It's looking a lot more like Snapchat now!
The Face API provides one more piece of data: Euler angles.
The name comes from (and is pronounced like) the mathematician Leonhard Euler, and the angles describe the orientation of a detected face. The API uses an x-y-z coordinate system:
and reports two Euler angles for each detected face: the Euler y angle, which measures how far the face is turned to the left or right, and the Euler z angle, which measures how far the head is tilted sideways.
Open FaceTracker.java and add these two lines to the onUpdate() method to capture the Euler angles:
// Get head angles.
mFaceData.setEulerY(face.getEulerY());
mFaceData.setEulerZ(face.getEulerZ());
You'll use the Euler z angle to modify FaceGraphic so that it draws a hat on the face's head whenever the head is tilted more than 20 degrees to either side.
Open FaceGraphic.java and add this code to the end of the draw method:
// Head tilt
float eulerY = mFaceData.getEulerY();
float eulerZ = mFaceData.getEulerZ();

// Draw the hat only if the subject's head is tilted at a sufficiently jaunty angle.
final float HEAD_TILT_HAT_THRESHOLD = 20.0f;
if (Math.abs(eulerZ) > HEAD_TILT_HAT_THRESHOLD) {
    drawHat(canvas, position, width, height, noseBasePosition);
}
Then add the drawHat method:
private void drawHat(Canvas canvas, PointF facePosition, float faceWidth, float faceHeight, PointF noseBasePosition) {
    final float HAT_FACE_WIDTH_RATIO = (float)(1.0 / 4.0);
    final float HAT_FACE_HEIGHT_RATIO = (float)(1.0 / 6.0);
    final float HAT_CENTER_Y_OFFSET_FACTOR = (float)(1.0 / 8.0);

    float hatCenterY = facePosition.y + (faceHeight * HAT_CENTER_Y_OFFSET_FACTOR);
    float hatWidth = faceWidth * HAT_FACE_WIDTH_RATIO;
    float hatHeight = faceHeight * HAT_FACE_HEIGHT_RATIO;

    int left = (int)(noseBasePosition.x - (hatWidth / 2));
    int right = (int)(noseBasePosition.x + (hatWidth / 2));
    int top = (int)(hatCenterY - (hatHeight / 2));
    int bottom = (int)(hatCenterY + (hatHeight / 2));
    mHatGraphic.setBounds(left, top, right, bottom);
    mHatGraphic.draw(canvas);
}
Run the app. Now, when a head tilts past a certain angle, a jaunty hat appears:
Finally, let's use a simple physics engine to make the irises googly and bouncy. Only two small changes to FaceGraphic are needed. First, declare two new instance variables, giving each eye its own physics engine. Add them just below the Drawable instance variables:
// We want each iris to move independently, so each one gets its own physics engine.
private EyePhysics mLeftPhysics = new EyePhysics();
private EyePhysics mRightPhysics = new EyePhysics();
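The starter project ships with its own EyePhysics class, so there's nothing to write here. Purely as an illustration of what such a "physics engine" involves, the sketch below applies per-frame gravity and damping to an iris and bounces it off the rim of the eye (the class, constants, and logic are my own invention, not the starter's implementation):
import android.graphics.PointF;

// A made-up, minimal eye-physics simulation for illustration only.
public class SimpleEyePhysics {
    private static final float GRAVITY = 3.0f;   // downward acceleration per frame
    private static final float DAMPING = 0.95f;  // velocity retained each frame
    private static final float BOUNCE = 0.7f;    // velocity retained after hitting the rim

    private PointF irisPosition;
    private float velocityX = 0.0f;
    private float velocityY = 0.0f;

    public PointF nextIrisPosition(PointF eyePosition, float eyeRadius, float irisRadius) {
        if (irisPosition == null) {
            irisPosition = new PointF(eyePosition.x, eyePosition.y);
        }

        // Apply gravity and damping, then move the iris.
        velocityY += GRAVITY;
        velocityX *= DAMPING;
        velocityY *= DAMPING;
        irisPosition.x += velocityX;
        irisPosition.y += velocityY;

        // Keep the iris inside the eye: clamp it to the rim and bounce.
        float maxDistance = eyeRadius - irisRadius;
        float dx = irisPosition.x - eyePosition.x;
        float dy = irisPosition.y - eyePosition.y;
        float distance = (float) Math.hypot(dx, dy);
        if (distance > maxDistance && distance > 0.0f) {
            irisPosition.x = eyePosition.x + dx * maxDistance / distance;
            irisPosition.y = eyePosition.y + dy * maxDistance / distance;
            velocityX = -velocityX * BOUNCE;
            velocityY = -velocityY * BOUNCE;
        }
        return irisPosition;
    }
}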
The second change goes in FaceGraphic's draw method. Up to now, you've been drawing each iris at the same position as its eye.
Now change the "Draw the eyes" section of draw so that it uses the physics engines to calculate each iris's position:
// Draw the eyes.
PointF leftIrisPosition = mLeftPhysics.nextIrisPosition(leftEyePosition, eyeRadius, irisRadius);
drawEye(canvas, leftEyePosition, eyeRadius, leftIrisPosition, irisRadius, leftEyeOpen, smiling);
PointF rightIrisPosition = mRightPhysics.nextIrisPosition(rightEyePosition, eyeRadius, irisRadius);
drawEye(canvas, rightEyePosition, eyeRadius, rightIrisPosition, irisRadius, rightEyeOpen, smiling);
Run the app. Now everyone has googly (Google-y?) eyes!
You can download the completed project here.
You haven't exactly gone from augmented reality and face detection newbie to expert, but you now know how to use both in your Android apps!
Now that you've gone through a few iterations of the app, from its initial version to the finished one, this diagram of FaceSpotter's objects and their relationships should be easy to follow:
From here, a good next step is to explore Google's Mobile Vision site, and especially the Face API section.
Reading other people's code is a great way to learn, and Google's android-vision GitHub repository is a treasure trove of ideas and code.
If you have any questions or comments, feel free to join the discussion below!