Android Camera Framework Study (3): startPreview Flow Analysis
Note: this post is a set of Android 5.1 study notes; the analysis follows the software startup flow.
If you have read the previous two posts, you know that before preview starts we have already created the client-side Java and native Camera objects, created the matching server-side object (Camera2Client) in the mediaserver process, and fetched the default parameters from the HAL3 layer. Now the upper-layer app updates those default parameters and sets the preview surface, preparing for the startPreview call that follows.
### I. A first look at the Camera app
To make the analysis easier to follow, here is part of the app code. The example is the stock Android PDK test app, located at
pdk/apps/TestingCamera/. Everything we need to care about for now is in the setUpCamera()
function below.
#### 1. Camera app – setUpCamera()
//Source path: pdk/apps/TestingCamera/src/com/android/testingcamera/TestingCamera.java
void setUpCamera() {
if (mCameraId == NO_CAMERA_ID) return;
log("Setting up camera " + mCameraId);
logIndent(1);
if (mState < CAMERA_OPEN) {
log("Opening camera " + mCameraId);
try {
//This is the first preparation step, already analyzed in the earlier posts.
mCamera = Camera.open(mCameraId);
} catch (RuntimeException e) {
logE("Exception opening camera: " + e.getMessage());
resetCamera();
mCameraSpinner.setSelection(0);
logIndent(-1);
return;
}
mState = CAMERA_OPEN;
}
//Register the error callback, invoked from Camera.java.
mCamera.setErrorCallback(this);
//Set the display orientation.
setCameraDisplayOrientation();
//Fetch the default parameters from the lower layers; more on this below.
mParams = mCamera.getParameters();
// Set up preview size selection
log("Configuring camera");
logIndent(1);
//The calls below update the app-side camera parameters; the new parameters are then sent down to HAL3.
updatePreviewSizes(mParams);
updatePreviewFrameRate(mCameraId);
updatePreviewFormats(mParams);
updateAfModes(mParams);
updateFlashModes(mParams);
updateSnapshotSizes(mParams);
updateCamcorderProfile(mCameraId);
updateVideoRecordSize(mCameraId);
updateVideoFrameRate(mCameraId);
updateColorEffects(mParams);
//Some code is omitted here: the app uses the parameters obtained from the lower layers to update
//some of its UI widgets. Not relevant to this analysis; see the source if interested.
// Update parameters based on above updates
mCamera.setParameters(mParams);
if (mPreviewHolder != null) {
log("Setting preview display");
try {
//This can be regarded as the place where the preview buffer target is set; analyzed further below.
mCamera.setPreviewDisplay(mPreviewHolder);
} catch(IOException e) {
Log.e(TAG, "Unable to set up preview!");
}
}
logIndent(-1);
enableOpenOnlyControls(true);
resizePreview();
if (mPreviewToggle.isChecked()) {
log("Starting preview" );
//Parameters are ready; start the preview.
mCamera.startPreview();
mState = CAMERA_PREVIEW;
} else {
mState = CAMERA_OPEN;
enablePreviewOnlyControls(false);
}
}
That completes the camera setup: open the camera, configure its parameters, set the preview display, then start the preview.
#### 2. Setting the preview target – setPreviewDisplay(SurfaceHolder holder)
The holder argument above is a SurfaceHolder, an abstract interface through which we can control the surface's size and format and edit its pixels (on the native side the surface is managed through SurfaceControl; the two are closely tied together). Below is the official Google documentation.
Abstract interface to someone holding a display surface. Allows you to
control the surface size and format, edit the pixels in the surface, and
monitor changes to the surface. This interface is typically available
through the {@link SurfaceView} class.
When using this interface from a thread other than the one running
its {@link SurfaceView}, you will want to carefully read the
methods
{@link #lockCanvas} and {@link Callback#surfaceCreated Callback.surfaceCreated()}.
This method sets the preview buffer target. As the code below shows, it calls the SurfaceHolder's getSurface() method to obtain the Surface object. Surfaces are managed by the graphics subsystem; for now we do not need to care how the Surface is created, as that drags in far too much.
public final void setPreviewDisplay(SurfaceHolder holder) throws IOException {
if (holder != null) {
setPreviewSurface(holder.getSurface());
} else {
setPreviewSurface((Surface)null);
}
}
But at this point in setUpCamera(), mPreviewHolder is still null, so no preview target is actually set; the preview environment is not ready, and startPreview should therefore fail. Or does it?
if (mPreviewHolder != null) {
log("Setting preview display");
try {
//This can be regarded as the place where the preview buffer target is set; analyzed further below.
mCamera.setPreviewDisplay(mPreviewHolder);
} catch(IOException e) {
Log.e(TAG, "Unable to set up preview!");
}
Of course it is not allowed to fail. In setUpCamera(), right below the setPreviewDisplay() call there is a call to resizePreview();
its implementation is shown next, and the comment makes its purpose clear.
void resizePreview() {
// Reset preview layout parameters, to trigger layout pass
// This will eventually call layoutPreview below
Resources res = getResources();
mPreviewView.setLayoutParams(
new LinearLayout.LayoutParams(LayoutParams.MATCH_PARENT, 0,
mCallbacksEnabled ?
res.getInteger(R.integer.preview_with_callback_weight):
res.getInteger(R.integer.preview_only_weight) ));
}
The function simply triggers a new layout pass. Why do we need one? Let's look at what happens during layout that matters to us. First, consider the following SurfaceHolder callback.
/**
* This is called immediately after any structural changes (format or
* size) have been made to the surface. You should at this point update
* the imagery in the surface. This method is always called at least
* once, after {@link #surfaceCreated}.
*
* @param holder The SurfaceHolder whose surface has changed.
* @param format The new PixelFormat of the surface.
* @param width The new width of the surface.
* @param height The new height of the surface.
*/
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height);
As the comment says, this callback is invoked right after the surface's format or size changes. Resetting the preview layout parameters above is therefore clearly intended to trigger a surfaceChanged() call. Here is how the camera app implements it.
public void surfaceChanged(SurfaceHolder holder,
int format,
int width,
int height) {
//mPreviewView.getHolder() returns a SurfaceHolder object. Don't worry for now about how the holder is obtained; what matters is that we now have the preview target.
if (holder == mPreviewView.getHolder()) {
if (mState >= CAMERA_OPEN) {
final int previewWidth =
mPreviewSizes.get(mPreviewSize).width;
final int previewHeight =
mPreviewSizes.get(mPreviewSize).height;
if ( Math.abs((float)previewWidth / previewHeight -
(float)width/height) > 0.01f) {
Handler h = new Handler();
h.post(new Runnable() {
@Override
//Post a Runnable to a Handler to update the layout parameters and refresh the UI
public void run() {
layoutPreview();
}
});
}
}
//mPreviewHolder is still null here the first time through
if (mPreviewHolder != null) {
return;
}
log("Surface holder available: " + width + " x " + height);
//Save the SurfaceHolder obtained from the preview SurfaceView
mPreviewHolder = holder;
try {
if (mCamera != null) {
//Now actually set the preview buffer producer
mCamera.setPreviewDisplay(holder);
}
} catch (IOException e) {
logE("Unable to set up preview!");
}
} else if (holder == mCallbackView.getHolder()) {
mCallbackHolder = holder;
}
}
So this particular app does not set the preview target directly in setUpCamera() (other camera apps may well do so); it sets it in the surfaceChanged() callback. The details of the code above are not important; just note that this is where the preview target gets set, and that a Runnable is posted to refresh the UI. At this point the preview surface has been handed down to the native layer. Java and native code cannot call each other directly; the virtual machine's JNI layer sits in between (a sketch of that bridge follows the snippet below). On the Java side, SurfaceView holds the Surface roughly like this:
public class SurfaceView extends View {
static private final String TAG = "SurfaceView";
static private final boolean DEBUG = false;
//.......
final Surface mSurface = new Surface(); // Current surface in use
final Surface mNewSurface = new Surface(); // New surface we are switching to
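Before jumping to the native side, here is a minimal sketch of how the Java native method setPreviewSurface is bridged to the C++ function shown in the next section. It only illustrates the standard JNI registration pattern; the exact method table and helper names in android_hardware_Camera.cpp may differ, so treat the names below as illustrative.
```cpp
#include <jni.h>

// The real implementation lives in android_hardware_Camera.cpp; its full body is shown
// in section II.1 below. Stubbed here so the sketch is self-contained.
static void android_hardware_Camera_setPreviewSurface(JNIEnv* env, jobject thiz,
                                                      jobject jSurface) {
    (void)env; (void)thiz; (void)jSurface;   // ...see the next section...
}

// Maps the Java native method (name + JNI signature) to the C++ entry point.
static const JNINativeMethod gCameraMethods[] = {
    { "setPreviewSurface", "(Landroid/view/Surface;)V",
      (void*)android_hardware_Camera_setPreviewSurface },
};

// Called while the framework's JNI library is loaded; after this, the Java call
// mCamera.setPreviewSurface(...) lands in the C++ function above.
static int registerCameraMethods(JNIEnv* env) {
    jclass clazz = env->FindClass("android/hardware/Camera");
    if (clazz == NULL) return -1;
    return env->RegisterNatives(clazz, gCameraMethods,
                                sizeof(gCameraMethods) / sizeof(gCameraMethods[0]));
}
```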
### II. Native: setting the preview surface producer proxy
#### 1. android_hardware_Camera_setPreviewSurface()
static void android_hardware_Camera_setPreviewSurface(JNIEnv *env, jobject thiz, jobject jSurface)
{
ALOGV("setPreviewSurface");
//Get the client-side native Camera object
sp<Camera> camera = get_native_camera(env, thiz, NULL);
if (camera == 0) return;
sp<IGraphicBufferProducer> gbp;
sp<Surface> surface;
if (jSurface) {
surface = android_view_Surface_getSurface(env, jSurface);
if (surface != NULL) {
gbp = surface->getIGraphicBufferProducer();
}
}
if (camera->setPreviewTarget(gbp) != NO_ERROR) {
jniThrowException(env, "java/io/IOException", "setPreviewTexture failed");
}
}
#### 2. Passing the preview buffer producer into CameraService
status_t Camera::setPreviewTarget(const sp<IGraphicBufferProducer>& bufferProducer)
{
ALOGV("setPreviewTarget(%p)", bufferProducer.get());
sp <ICamera> c = mCamera;
if (c == 0) return NO_INIT;
ALOGD_IF(bufferProducer == 0, "app passed NULL surface");
return c->setPreviewTarget(bufferProducer);
}
Here mCamera is the ICamera interface described in the earlier posts, backed across Binder by the Camera2Client object in CameraService. Through this standard interface the buffer producer is handed over to CameraService; see the code below.
status_t Camera2Client::setPreviewTarget(
const sp<IGraphicBufferProducer>& bufferProducer) {
//Some error-checking code is omitted here
sp<IBinder> binder;
sp<ANativeWindow> window;
if (bufferProducer != 0) { //obviously not NULL here
binder = bufferProducer->asBinder();
// Using controlledByApp flag to ensure that the buffer queue remains in
// async mode for the old camera API, where many applications depend
// on that behavior.
//In the previous post we saw that after obtaining the producer proxy, a SurfaceControl object was
//created to manage the surface. Here a local Surface object is created directly instead.
window = new Surface(bufferProducer, /*controlledByApp*/ true);
}
//binder is the producer proxy; window is the Surface just created. Keep reading.
return setPreviewWindowL(binder, window);
}
This function wraps the preview producer proxy in a local Surface object, then passes both the Surface and the producer's binder on to the next function.
status_t Camera2Client::setPreviewWindowL(const sp<IBinder>& binder,
sp<ANativeWindow> window) {
ATRACE_CALL();
status_t res;
//This is the first time a producer is set, so mPreviewSurface == NULL and the test below is false
if (binder == mPreviewSurface) {
ALOGV("%s: Camera %d: New window is same as old window",
__FUNCTION__, mCameraId);
return NO_ERROR;
}
Parameters::State state;
{
SharedParameters::Lock l(mParameters);
//After the Camera2Client object is created and the device and parameters are initialized,
//state is set to STOPPED; see Parameters::initialize() in Parameters.cpp for details.
state = l.mParameters.state;
}
switch (state) {
case Parameters::DISCONNECTED:
case Parameters::RECORD:
case Parameters::STILL_CAPTURE:
case Parameters::VIDEO_SNAPSHOT:
ALOGE("%s: Camera %d: Cannot set preview display while in state %s",
__FUNCTION__, mCameraId,
Parameters::getStateName(state));
return INVALID_OPERATION;
case Parameters::STOPPED: //we are in the STOPPED state here
case Parameters::WAITING_FOR_PREVIEW_WINDOW:
// OK
break;
case Parameters::PREVIEW:
//some unrelated code omitted
break;
}
mPreviewSurface = binder;//save the current producer proxy
//Hand the preview Surface to the preview stream-processing object
res = mStreamingProcessor->setPreviewWindow(window);
//Code unrelated to the current analysis is omitted here; it will be shown when we get to it.
return OK;
}
mStreamingProcessor is the object that processes the preview and recording streams. The function branches on the state machine; since the state starts out as STOPPED here, it simply saves the producer proxy and passes the preview Surface to the mStreamingProcessor stream processor.
//frameworks/av/services/camera/libcameraservice/api1/client2/StreamingProcessor.cpp
status_t StreamingProcessor::setPreviewWindow(sp<ANativeWindow> window) {
ATRACE_CALL();
status_t res;
//If a valid preview stream already exists, delete it first
res = deletePreviewStream();
if (res != OK) return res;
Mutex::Autolock m(mMutex);
mPreviewWindow = window;//save the preview Surface object
return OK;
}
At this point the preview producer proxy has been handed to the preview stream processor, which will keep dequeuing and enqueuing buffers on it; this continues below under startPreview. A rough sketch of that producer-side loop follows.
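The sketch below shows what such a producer-side loop looks like against an ANativeWindow. It is not the actual Camera3OutputStream code, only the pattern it drives, and it assumes the window passed in is already valid.
```cpp
#include <system/window.h>     // ANativeWindow, native_window_* helpers
#include <system/graphics.h>   // HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED
#include <hardware/gralloc.h>  // GRALLOC_USAGE_HW_CAMERA_WRITE
#include <utils/Errors.h>      // status_t

using namespace android;

// One-time producer setup, roughly what happens when the preview stream is configured.
static status_t connectPreviewWindow(ANativeWindow* window, int width, int height) {
    native_window_api_connect(window, NATIVE_WINDOW_API_CAMERA);
    native_window_set_usage(window, GRALLOC_USAGE_HW_CAMERA_WRITE);
    native_window_set_buffers_dimensions(window, width, height);
    return native_window_set_buffers_format(window,
            HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED);
}

// Per-frame producer cycle: dequeue an empty buffer, have it filled, queue it back.
static status_t produceOneFrame(ANativeWindow* window) {
    ANativeWindowBuffer* buffer = NULL;
    int fenceFd = -1;
    int err = window->dequeueBuffer(window, &buffer, &fenceFd);  // borrow a gralloc buffer
    if (err != 0) return err;
    // ... the HAL writes a camera frame into `buffer` here ...
    return window->queueBuffer(window, buffer, fenceFd);         // hand it to the consumer
}
```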
### III. Starting the preview (startPreview)
#### 1. The Java startPreview() API
First, here is the Java framework documentation for startPreview().
/**
* Starts capturing and drawing preview frames to the screen.
* Preview will not actually start until a surface is supplied
* with {@link #setPreviewDisplay(SurfaceHolder)} or
* {@link #setPreviewTexture(SurfaceTexture)}.
*
* If {@link #setPreviewCallback(Camera.PreviewCallback)},
* {@link #setOneShotPreviewCallback(Camera.PreviewCallback)}, or
* {@link #setPreviewCallbackWithBuffer(Camera.PreviewCallback)} were
* called, {@link Camera.PreviewCallback#onPreviewFrame(byte[], Camera)}
* will be called when preview data becomes available.
*/
public native final void startPreview();
The Java framework startPreview() above is declared native, so calling startPreview from the app drops straight into JNI. The important part is the official comment, which boils down to two points: preview does not actually start until a surface has been supplied via setPreviewDisplay() or setPreviewTexture(), and if a preview callback has been registered, onPreviewFrame() will be invoked once preview data becomes available.
#### 2. The native JNI startPreview() entry
static void android_hardware_Camera_startPreview(JNIEnv *env, jobject thiz)
{
ALOGV("startPreview");
//First get the app-side native Camera object introduced in the earlier posts.
//It holds the ICamera proxy, which talks to the Camera2Client object inside CameraService.
sp<Camera> camera = get_native_camera(env, thiz, NULL);
if (camera == 0) return;
//Call into the native Camera object
if (camera->startPreview() != NO_ERROR) {
jniThrowRuntimeException(env, "startPreview failed");
return;
}
}
// start preview mode
status_t Camera::startPreview()
{
ALOGV("startPreview");
sp <ICamera> c = mCamera;
if (c == 0) return NO_INIT;
return c->startPreview();
}
The code above obtains the native Camera object and calls through its ICamera proxy mCamera; the Binder transaction lands in the startPreview() of CameraService's Camera2Client. I will not paste the ICamera code that handles the Binder message and will jump straight to startPreview in Camera2Client.cpp; for reference, the skipped proxy side looks roughly like the sketch below.
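The snippet below is a rough paraphrase, written from memory, of what the skipped BpCamera::startPreview() in frameworks/av/camera/ICamera.cpp looks like; the transaction enum and error handling in the real file may differ slightly, and it is not meant to compile on its own. It is shown only to make the Binder hop concrete.
```cpp
// Rough paraphrase of the Binder proxy side, not a verbatim source excerpt.
status_t BpCamera::startPreview()
{
    Parcel data, reply;
    // Tag the transaction with the ICamera interface token, then call across Binder into
    // the BnCamera side, which Camera2Client implements inside CameraService.
    data.writeInterfaceToken(ICamera::getInterfaceDescriptor());
    remote()->transact(START_PREVIEW, data, &reply);
    // The remote side writes its status_t into the reply parcel.
    return reply.readInt32();
}
```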
status_t Camera2Client::startPreview() {
ATRACE_CALL();
ALOGV("%s: E", __FUNCTION__);
Mutex::Autolock icl(mBinderSerializationLock);
status_t res;
if ( (res = checkPid(__FUNCTION__) ) != OK) return res;
SharedParameters::Lock l(mParameters);
return startPreviewL(l.mParameters, false);
}
startPreviewL is also called from other places, which is why it is wrapped here, with restart = false passed in. Since this is the first time the camera is opened, restart must be false. The code below is the heart of startPreview.
status_t Camera2Client::startPreviewL(Parameters &params, bool restart) {
ATRACE_CALL();
status_t res;
ALOGV("%s: state == %d, restart = %d", __FUNCTION__, params.state, restart);
//If this is not the first call to startPreviewL and restart == false, return immediately;
//this simply guards against spurious calls.
if ( (params.state == Parameters::PREVIEW ||
params.state == Parameters::RECORD ||
params.state == Parameters::VIDEO_SNAPSHOT)
&& !restart) {
// Succeed attempt to re-enter a streaming state
ALOGI("%s: Camera %d: Preview already active, ignoring restart",
__FUNCTION__, mCameraId);
return OK;
}
// Camera state machine: every state beyond PREVIEW (RECORD, STILL_CAPTURE, VIDEO_SNAPSHOT)
// requires preview to already be running. If we are in one of those states and this is not an
// explicit restart, starting preview here is an error.
// enum State {
// DISCONNECTED,
// STOPPED,
// WAITING_FOR_PREVIEW_WINDOW,
// PREVIEW,
// RECORD,
// STILL_CAPTURE,
// VIDEO_SNAPSHOT
// } state;
if (params.state > Parameters::PREVIEW && !restart) {
ALOGE("%s: Can't start preview in state %s",
__FUNCTION__,
Parameters::getStateName(params.state));
return INVALID_OPERATION;
}
//Check whether the stream processor already has a valid preview window; we set it above.
if (!mStreamingProcessor->haveValidPreviewWindow()) {
params.state = Parameters::WAITING_FOR_PREVIEW_WINDOW;
return OK;
}
//Set the state machine to STOPPED
params.state = Parameters::STOPPED;
//Step 1: get the current preview stream id; detailed below.
int lastPreviewStreamId = mStreamingProcessor->getPreviewStreamId();
//Step 2: update (or create) the preview stream from the parameters.
res = mStreamingProcessor->updatePreviewStream(params);
if (res != OK) {
ALOGE("%s: Camera %d: Unable to update preview stream: %s (%d)",
__FUNCTION__, mCameraId, strerror(-res), res);
return res;
}
//If the new preview stream id differs from the previous one, set previewStreamChanged to true.
bool previewStreamChanged = mStreamingProcessor->getPreviewStreamId() != lastPreviewStreamId;
// We could wait to create the JPEG output stream until first actual use
// (first takePicture call). However, this would substantially increase the
// first capture latency on HAL3 devices, and potentially on some HAL2
// devices. So create it unconditionally at preview start. As a drawback,
// this increases gralloc memory consumption for applications that don't
// ever take a picture.
// TODO: Find a better compromise, though this likely would involve HAL
// changes.
//To reduce first-capture latency, the JPEG stream is created up front; details when we analyze takePicture.
int lastJpegStreamId = mJpegProcessor->getStreamId();
res = updateProcessorStream(mJpegProcessor, params);
if (res != OK) {
ALOGE("%s: Camera %d: Can't pre-configure still image "
"stream: %s (%d)",
__FUNCTION__, mCameraId, strerror(-res), res);
return res;
}
bool jpegStreamChanged = mJpegProcessor->getStreamId() != lastJpegStreamId;
//A vector of int32_t recording the IDs of the output streams.
Vector<int32_t> outputStreams;
//--------------------------------------------------------------------------
//A block that handles the callback parameters and updates the callback-stream processor is omitted
//here for brevity; it will be analyzed together with the callback stream later.
//--------------------------------------------------------------------------
//If ZSL mode is on, we are not recording, and no recording stream has been created, the ZSL stream
//is updated. We skip that as well and focus only on the normal preview path.
if (params.zslMode && !params.recordingHint &&
getRecordingStreamId() == NO_STREAM) {
res = updateProcessorStream(mZslProcessor, params);
if (res != OK) {
ALOGE("%s: Camera %d: Unable to update ZSL stream: %s (%d)",
__FUNCTION__, mCameraId, strerror(-res), res);
return res;
}
if (jpegStreamChanged) {
ALOGV("%s: Camera %d: Clear ZSL buffer queue when Jpeg size is changed",
__FUNCTION__, mCameraId);
mZslProcessor->clearZslQueue();
}
outputStreams.push(getZslStreamId());
} else {
mZslProcessor->deleteStream();
}
//Important: push the preview stream id so it can be looked up later.
outputStreams.push(getPreviewStreamId());
if (!params.recordingHint) {//not in recording mode
if (!restart) {//first startPreview
//Step 3: update the preview request; very important, detailed below.
res = mStreamingProcessor->updatePreviewRequest(params);
if (res != OK) {
ALOGE("%s: Camera %d: Can't set up preview request: "
"%s (%d)", __FUNCTION__, mCameraId,
strerror(-res), res);
return res;
}
}
//Step 4: start the preview stream; detailed below. Note that the outputStreams vector is passed
//into startStream() so that the corresponding requests can be sent down to the HAL.
res = mStreamingProcessor->startStream(StreamingProcessor::PREVIEW,
outputStreams);
} else {
if (!restart) {
//Update the recording request from the parameters; covered properly in the recording analysis.
res = mStreamingProcessor->updateRecordingRequest(params);
if (res != OK) {
ALOGE("%s: Camera %d: Can't set up preview request with "
"record hint: %s (%d)", __FUNCTION__, mCameraId,
strerror(-res), res);
return res;
}
}
res = mStreamingProcessor->startStream(StreamingProcessor::RECORD,
outputStreams);
}
if (res != OK) {
ALOGE("%s: Camera %d: Unable to start streaming preview: %s (%d)",
__FUNCTION__, mCameraId, strerror(-res), res);
return res;
}
//Set the camera state machine to Parameters::PREVIEW.
params.state = Parameters::PREVIEW;
return OK;
}
The function performs the four steps marked above; each is covered below.
##### 1) Step 1: getting the preview stream ID
int StreamingProcessor::getPreviewStreamId() const {
Mutex::Autolock m(mMutex);
return mPreviewStreamId;
}
//--------------------------------------
//A few members worth introducing here:
// Preview-related members
Vector<int32_t> mActiveStreamIds;
int32_t mPreviewRequestId;
int mPreviewStreamId;
CameraMetadata mPreviewRequest;
sp<ANativeWindow> mPreviewWindow;
int32_t mRecordingRequestId;
int mRecordingStreamId;
CameraMetadata mRecordingRequest;
int mRecordingFrameCount;
sp<ANativeWindow> mRecordingWindow;
//------------------------------------
getPreviewStreamId() simply returns the member variable. mPreviewStreamId is assigned by Camera3Device's createStream() interface when the preview stream is created; createStream is covered below. A few very important constants are introduced first; the sketch after them shows how they are used.
static const int32_t kPreviewRequestIdStart = 10000000;
static const int32_t kPreviewRequestIdEnd = 20000000;
static const int32_t kRecordingRequestIdStart = 20000000;
static const int32_t kRecordingRequestIdEnd = 30000000;
static const int32_t kCaptureRequestIdStart = 30000000;
static const int32_t kCaptureRequestIdEnd = 40000000;
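These ranges give each kind of request its own non-overlapping ID space, so code that later sees a request ID (for example in a capture result) can tell which kind of request produced it. A minimal sketch of that classification follows; the helper and enum names are mine, not from the source.
```cpp
#include <stdint.h>

// Same ranges as the constants above, restated so the sketch is self-contained.
static const int32_t kPreviewRequestIdStart   = 10000000;
static const int32_t kPreviewRequestIdEnd     = 20000000;
static const int32_t kRecordingRequestIdStart = 20000000;
static const int32_t kRecordingRequestIdEnd   = 30000000;
static const int32_t kCaptureRequestIdStart   = 30000000;
static const int32_t kCaptureRequestIdEnd     = 40000000;

enum RequestOrigin { ORIGIN_PREVIEW, ORIGIN_RECORDING, ORIGIN_CAPTURE, ORIGIN_UNKNOWN };

// Classify a request ID by the range it falls into.
static RequestOrigin classifyRequestId(int32_t id) {
    if (id >= kPreviewRequestIdStart   && id < kPreviewRequestIdEnd)   return ORIGIN_PREVIEW;
    if (id >= kRecordingRequestIdStart && id < kRecordingRequestIdEnd) return ORIGIN_RECORDING;
    if (id >= kCaptureRequestIdStart   && id < kCaptureRequestIdEnd)   return ORIGIN_CAPTURE;
    return ORIGIN_UNKNOWN;
}
```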
##### 2) Step 2: updatePreviewStream – updating/creating the preview stream
status_t StreamingProcessor::updatePreviewStream(const Parameters &params) {
ATRACE_CALL();
Mutex::Autolock m(mMutex);
ALOGD("%s, params.previewWidth = %d, params.previewHeight = %d", __FUNCTION__, params.previewWidth, params.previewHeight);
status_t res;
sp<CameraDeviceBase> device = mDevice.promote();
if (device == 0) {
ALOGE("%s: Camera %d: Device does not exist", __FUNCTION__, mId);
return INVALID_OPERATION;
}
//This is the first startPreview, so mPreviewStreamId == NO_STREAM and this branch is skipped
if (mPreviewStreamId != NO_STREAM) {
// Check if stream parameters have to change
uint32_t currentWidth, currentHeight;
//Query the stream info (width and height here)
res = device->getStreamInfo(mPreviewStreamId,
&currentWidth, &currentHeight, 0);
if (res != OK) {
ALOGE("%s: Camera %d: Error querying preview stream info: "
"%s (%d)", __FUNCTION__, mId, strerror(-res), res);
return res;
}
//If the width/height requested by the app differ from those of the current stream,
//the current stream must be deleted and recreated.
if (currentWidth != (uint32_t)params.previewWidth ||
currentHeight != (uint32_t)params.previewHeight) {
ALOGV("%s: Camera %d: Preview size switch: %d x %d -> %d x %d",
__FUNCTION__, mId, currentWidth, currentHeight,
params.previewWidth, params.previewHeight);
res = device->waitUntilDrained();
if (res != OK) {
ALOGE("%s: Camera %d: Error waiting for preview to drain: "
"%s (%d)", __FUNCTION__, mId, strerror(-res), res);
return res;
}
//Delete the current stream
res = device->deleteStream(mPreviewStreamId);
if (res != OK) {
return res;
}
//Set mPreviewStreamId to NO_STREAM so a new stream gets created below.
mPreviewStreamId = NO_STREAM;
}
}
if (mPreviewStreamId == NO_STREAM) {
//Call Camera3Device's createStream() directly. Note that the stream format created here is CAMERA2_HAL_PIXEL_FORMAT_OPAQUE.
res = device->createStream(mPreviewWindow,
params.previewWidth, params.previewHeight,
CAMERA2_HAL_PIXEL_FORMAT_OPAQUE, &mPreviewStreamId);
}
//Update the stream transform (the display orientation), later used to tell SurfaceFlinger how to rotate.
res = device->setStreamTransform(mPreviewStreamId,
params.previewTransform);
if (res != OK) {
ALOGE("%s: Camera %d: Unable to set preview stream transform: "
"%s (%d)", __FUNCTION__, mId, strerror(-res), res);
return res;
}
return OK;
}
As the name suggests, the function updates the preview stream: it deletes and recreates the stream if the preview size changed, creates it on first use, and sets its transform.
Code snippet 1 – Camera3Device::createStream()
status_t Camera3Device::createStream(sp<ANativeWindow> consumer,
uint32_t width, uint32_t height, int format, int *id) {
//------------------
sp<Camera3OutputStream> newStream;
//format == HAL_PIXEL_FORMAT_BLOB is only hit when creating the JPEG stream.
if (format == HAL_PIXEL_FORMAT_BLOB) {
//omitted here
} else {
//mNextStreamId numbers the streams created so far; it is initialized to 0 when Camera3Device is
//initialized, so the preview stream below gets id 0. consumer is the local Surface object discussed
//earlier. By now you can probably guess that the dequeue/enqueue buffer operations happen inside
//the stream object; check the source and you will find that is indeed the case.
newStream = new Camera3OutputStream(mNextStreamId, consumer,
width, height, format);
}
newStream->setStatusTracker(mStatusTracker);//attach the status tracker; not our concern for now
//mOutputStreams is Camera3Device's registry of output streams.
res = mOutputStreams.add(mNextStreamId, newStream);
if (res < 0) {
SET_ERR_L("Can't add new stream to set: %s (%d)", strerror(-res), res);
return res;
}
*id = mNextStreamId++;
mNeedConfig = true;
//------------------------------
}
From res = mOutputStreams.add(mNextStreamId, newStream); and *id = mNextStreamId++; we can infer that all stream IDs come from a single, unique numbering. Every stream object created is stored in mOutputStreams under its stream ID and managed there. The table below lists the common stream types; they are all Camera3OutputStream objects (a small sketch of this registry scheme follows the table).
No. | stream_id | stream_object |
---|---|---|
1 | mPreviewStreamId | Camera3OutputStream(…,format) |
2 | mRecordingStreamId | Camera3OutputStream(…,format) |
3 | mCaptureStreamId | Camera3OutputStream(…,format) |
4 | mZslStreamId | Camera3OutputStream(…,format); note: for the ZSL stream Google defined a dedicated class, Camera3ZslStream, which inherits from Camera3OutputStream. The only difference is that a BufferQueue is created along with the stream, in preparation for later ZSL capture. |
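Here is a minimal sketch (not the actual Camera3Device code) of the unified stream-ID scheme described above: a single counter hands out IDs and every stream is stored in a KeyedVector under its ID so it can be found again later. The class and member names are illustrative only.
```cpp
#include <utils/Errors.h>
#include <utils/KeyedVector.h>
#include <utils/RefBase.h>
#include <utils/StrongPointer.h>

using namespace android;

struct StreamSketch : public RefBase {
    explicit StreamSketch(int i) : id(i) {}
    int id;
};

class StreamRegistrySketch {
public:
    StreamRegistrySketch() : mNextStreamId(0) {}

    // Mirrors createStream(): allocate the next ID and store the stream under it.
    int createStream() {
        sp<StreamSketch> stream = new StreamSketch(mNextStreamId);
        mStreams.add(mNextStreamId, stream);
        return mNextStreamId++;           // e.g. preview gets 0, JPEG 1, recording 2, ...
    }

    // Mirrors the lookup done later in createCaptureRequest(): stream ID -> stream object.
    sp<StreamSketch> find(int streamId) const {
        ssize_t idx = mStreams.indexOfKey(streamId);
        if (idx < 0) return NULL;         // unknown stream ID
        return mStreams.valueAt(idx);
    }

private:
    int mNextStreamId;
    KeyedVector<int, sp<StreamSketch> > mStreams;
};
```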
Since CameraService serves only one client at a time, the Camera2Client object contains six thread objects, each with its own role. The flow is fairly involved; for now it is enough to know what each thread is responsible for, without digging deeper.
The object that deserves deeper study is Camera3Device. It registers itself with the factory class CameraDeviceFactory.cpp, and the threads above obtain their device object through that factory. Camera3Device maintains the stream registry mOutputStreams; every output stream is stored there under its ID, the streams are then packed into requests on the request queue mRequestQueue, and the RequestThread sends the per-frame requests to the HAL.
##### 3) Step 3: updatePreviewRequest – updating the preview request from the parameters
status_t StreamingProcessor::updatePreviewRequest(const Parameters &params) {
ATRACE_CALL();
status_t res;
sp<CameraDeviceBase> device = mDevice.promote();
//------- error-checking code omitted
Mutex::Autolock m(mMutex);
if (mPreviewRequest.entryCount() == 0) {
sp<Camera2Client> client = mClient.promote();
//A lot of code is omitted here: if the HAL API version is at least CAMERA_DEVICE_API_VERSION_3_0,
//a default CAMERA3 preview metadata template is created, otherwise a CAMERA2 one; either way the
//defaults come from the HAL and are simply updated below.
// Use CAMERA3_TEMPLATE_ZERO_SHUTTER_LAG for ZSL streaming case.
if (client->getCameraDeviceVersion() >= CAMERA_DEVICE_API_VERSION_3_0) {
}
}
//Update the preview request metadata with the parameters sent down by the app
res = params.updateRequest(&mPreviewRequest);
//------- some error checking omitted; the next call is very important
res = mPreviewRequest.update(ANDROID_REQUEST_ID,
&mPreviewRequestId, 1);
//------- error-checking code omitted
return OK;
}
This function builds and updates the preview request metadata. Concretely it does two things: it fills mPreviewRequest from the current parameters via params.updateRequest(&mPreviewRequest), and it writes the preview request ID into the request with mPreviewRequest.update(ANDROID_REQUEST_ID, &mPreviewRequestId, 1).
##### 4) Step 4: starting the preview stream
status_t StreamingProcessor::startStream(StreamType type,
const Vector<int32_t> &outputStreams) {
ATRACE_CALL();
status_t res;
if (type == NONE) return INVALID_OPERATION;
sp<CameraDeviceBase> device = mDevice.promote();
ALOGV("%s: Camera %d: type = %d", __FUNCTION__, mId, type);
Mutex::Autolock m(mMutex);
// If a recording stream is being started up and no recording
// stream is active yet, free up any outstanding buffers left
// from the previous recording session. There should never be
// any, so if there are, warn about it.
//If a recording stream is being started and none was active, release any leftover recording buffers.
bool isRecordingStreamIdle = !isStreamActive(mActiveStreamIds, mRecordingStreamId);
bool startRecordingStream = isStreamActive(outputStreams, mRecordingStreamId);
if (startRecordingStream && isRecordingStreamIdle) {
releaseAllRecordingFramesLocked();
}
//Pick the request metadata according to the type. We passed in PREVIEW, so this is the
//preview metadata we just updated.
CameraMetadata &request = (type == PREVIEW) ?
mPreviewRequest : mRecordingRequest;
//This is also very important: since only preview is being started, outputStreams holds only
//the preview stream id.
res = request.update(
ANDROID_REQUEST_OUTPUT_STREAMS,
outputStreams);
res = request.sort();
//Create the streaming request from the preview metadata; see the explanation below.
res = device->setStreamingRequest(request);
mActiveRequest = type;//the active request type is now PREVIEW
mPaused = false;
mActiveStreamIds = outputStreams;//only the preview stream is active
return OK;
}
This is only the front half of starting the stream; the real work happens in Camera3Device. The function does three things: it releases stale recording buffers if a recording stream is being started while none is active, it writes the output stream IDs into the request metadata (ANDROID_REQUEST_OUTPUT_STREAMS) and sorts it, and it submits the request to the device via setStreamingRequest(), recording the active request type and active stream IDs.
Code snippet 2 – submitting the streaming request (which creates the request objects underneath)
status_t Camera3Device::setStreamingRequest(const CameraMetadata &request,
int64_t* /*lastFrameNumber*/) {
ATRACE_CALL();
List<const CameraMetadata> requests;
requests.push_back(request);
return setStreamingRequestList(requests, /*lastFrameNumber*/NULL);
}
status_t Camera3Device::setStreamingRequestList(const List<const CameraMetadata> &requests,
int64_t *lastFrameNumber) {
ATRACE_CALL();
//The second argument tells the request thread this is a repeating (streaming) request.
return submitRequestsHelper(requests, /*repeating*/true, lastFrameNumber);
}
status_t Camera3Device::submitRequestsHelper(
const List<const CameraMetadata> &requests, bool repeating,
/*out*/
int64_t *lastFrameNumber) {
ATRACE_CALL();
Mutex::Autolock il(mInterfaceLock);
Mutex::Autolock l(mLock);
status_t res = checkStatusOkToCaptureLocked();
if (res != OK) {
// error logged by previous call
return res;
}
//requestList is a List (think of it as an array); here it holds only one element.
RequestList requestList;
//The key call is the next function; its code is in code snippet 3 below.
res = convertMetadataListToRequestListLocked(requests, /*out*/&requestList);
if (res != OK) {
// error logged by previous call
return res;
}
//Preview needs a repeating request
if (repeating) {
res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
} else {
res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
}
if (res == OK) {
waitUntilStateThenRelock(/*active*/true, kActiveTimeout);
if (res != OK) {
SET_ERR_L("Can't transition to active in %f seconds!",
kActiveTimeout/1e9);
}
ALOGV("Camera %d: Capture request %" PRId32 " enqueued", mId,
(*(requestList.begin()))->mResultExtras.requestId);
} else {
CLOGE("Cannot queue request. Impossible.");
return BAD_VALUE;
}
return res;
}
Code snippet 3 – building the request list from the metadata
status_t Camera3Device::convertMetadataListToRequestListLocked(
const List<const CameraMetadata> &metadataList, RequestList *requestList) {
//-------
int32_t burstId = 0;
for (List<const CameraMetadata>::const_iterator it = metadataList.begin();
it != metadataList.end(); ++it) {
//The metadata list passed in contains only the preview entry here; when recording there is also a
//recording entry, and for video snapshot a JPEG entry as well.
//A CaptureRequest object is created from the data in each metadata packet.
//The interesting part is in the function below; it is worth jumping there for a look.
sp<CaptureRequest> newRequest = setUpRequestLocked(*it);
if (newRequest == 0) {
CLOGE("Can't create capture request");
return BAD_VALUE;
}
// Setup burst Id and request Id
//Not yet sure what this is used for; burst capture, perhaps?
newRequest->mResultExtras.burstId = burstId++;
//Look up the request ID: every request carries an ANDROID_REQUEST_ID entry in its metadata
//(for preview it is mPreviewRequestId, written in updatePreviewRequest() above).
if (it->exists(ANDROID_REQUEST_ID)) {
if (it->find(ANDROID_REQUEST_ID).count == 0) {
CLOGE("RequestID entry exists; but must not be empty in metadata");
return BAD_VALUE;
}
//Copy the request ID from the metadata into the new request's result extras.
newRequest->mResultExtras.requestId = it->find(ANDROID_REQUEST_ID).data.i32[0];
} else {
CLOGE("RequestID does not exist in metadata");
return BAD_VALUE;
}
//Append the request to the request list
requestList->push_back(newRequest);
ALOGV("%s: requestId = %" PRId32, __FUNCTION__, newRequest->mResultExtras.requestId);
}
return OK;
}
After the function runs, the newly created request object has only the following members initialized.
Member | Notes |
---|---|
mSettings | For now this only holds (a reference to) the preview metadata |
mOutputStreams | Vector of output stream objects; currently holds only the preview stream |
mResultExtras.burstId | My current understanding is that this is for burst capture; not certain |
mResultExtras.requestId | The ID of the current request, taken from ANDROID_REQUEST_ID in the metadata |
Code snippet 4 – where the request object is actually created
sp<Camera3Device::CaptureRequest> Camera3Device::setUpRequestLocked(
const CameraMetadata &request) {
status_t res;
if (mStatus == STATUS_UNCONFIGURED || mNeedConfig) {
//Configure all stream objects; when the request is filled below we check whether configuration has finished.
res = configureStreamsLocked();
//------ error checking omitted
}
//Go straight on down.
sp<CaptureRequest> newRequest = createCaptureRequest(request);
return newRequest;
}
//-----------------------------------------------------
sp<Camera3Device::CaptureRequest> Camera3Device::createCaptureRequest(
const CameraMetadata &request) {
ATRACE_CALL();
status_t res;
//A request object is new'ed here first, and its members are filled in one by one below.
//For reference, the CaptureRequest members are:
//class CaptureRequest : public LightRefBase<CaptureRequest> {
//  public:
//    CameraMetadata                      mSettings;
//    sp<camera3::Camera3Stream>          mInputStream;
//    Vector<sp<camera3::Camera3OutputStreamInterface> >
//                                        mOutputStreams;
//    CaptureResultExtras                 mResultExtras;
//};
sp<CaptureRequest> newRequest = new CaptureRequest;
//First, store the preview metadata into mSettings.
newRequest->mSettings = request;
camera_metadata_entry_t inputStreams =
newRequest->mSettings.find(ANDROID_REQUEST_INPUT_STREAMS);
if (inputStreams.count > 0) {
//Check whether there is an input stream that needs further HAL processing. Typically in ZSL
//capture a previously captured preview frame is fed back through the input buffer for the HAL
//to reprocess and encode.
}
//We already know the ANDROID_REQUEST_OUTPUT_STREAMS entry contains only the preview stream ID;
//the loop below uses it to find the preview stream object.
camera_metadata_entry_t streams =
newRequest->mSettings.find(ANDROID_REQUEST_OUTPUT_STREAMS);
if (streams.count == 0) {
CLOGE("Zero output streams specified!");
return NULL;
}
for (size_t i = 0; i < streams.count; i++) {
//Map the preview stream id to its actual index in mOutputStreams.
int idx = mOutputStreams.indexOfKey(streams.data.i32[i]);
if (idx == NAME_NOT_FOUND) {
CLOGE("Request references unknown stream %d",
streams.data.u8[i]);
return NULL;
}
//Fetch the stream object.
sp<Camera3OutputStreamInterface> stream =
mOutputStreams.editValueAt(idx);
// Lazy completion of stream configuration (allocation/registration)
// on first use. The check below tests whether the stream still needs configuring; ignore it for now.
if (stream->isConfiguring()) {
res = stream->finishConfiguration(mHal3Device);
if (res != OK) {
SET_ERR_L("Unable to finish configuring stream %d: %s (%d)",
stream->getId(), strerror(-res), res);
return NULL;
}
}
//Store the preview stream into the request's mOutputStreams list.
//With only the preview stream the loop runs once; for video or video-snapshot scenarios it runs
//several times, which shows that one request object can reference multiple streams.
newRequest->mOutputStreams.push(stream);
}
newRequest->mSettings.erase(ANDROID_REQUEST_OUTPUT_STREAMS);
return newRequest;//return the assembled request object
}
This function creates the request object, though only mSettings and mOutputStreams are initialized here; the remaining steps happen back in code snippet 3 above.
Member | Notes |
---|---|
mSettings | For now this only holds the preview metadata |
mOutputStreams | Vector of output stream objects; currently holds only the preview stream |
Code snippet 5 – queuing the new request into the staging list
status_t Camera3Device::RequestThread::setRepeatingRequests(
const RequestList &requests,
/*out*/
int64_t *lastFrameNumber) {
Mutex::Autolock l(mRequestLock);
if (lastFrameNumber != NULL) {
*lastFrameNumber = mRepeatingLastFrameNumber;
}
//The repeating request is first placed into the staging list mRepeatingRequests;
//clear the staging list first.
mRepeatingRequests.clear();
//Insert our requests into the list.
mRepeatingRequests.insert(mRepeatingRequests.begin(),
requests.begin(), requests.end());
//Now that a new request exists, the request thread has to be woken up;
//see the function implementation below.
unpauseForNewRequests();
mRepeatingLastFrameNumber = NO_IN_FLIGHT_REPEATING_FRAMES;
return OK;
}
Code snippet 6 – waking up the request thread
void Camera3Device::RequestThread::unpauseForNewRequests() {
// With work to do, mark thread as unpaused.
// If paused by request (setPaused), don't resume, to avoid
// extra signaling/waiting overhead to waitUntilPaused
//This signal wakes the request thread, which is blocked waiting on it.
mRequestSignal.signal();
Mutex::Autolock p(mPauseLock);
if (!mDoPause) {
ALOGV("%s: RequestThread: Going active", __FUNCTION__);
if (mPaused) {
sp<StatusTracker> statusTracker = mStatusTracker.promote();
if (statusTracker != 0) {
statusTracker->markComponentActive(mStatusId);
}
}
mPaused = false;
}
}
Code snippet 7 – how the thread blocks waiting for the next request
sp<Camera3Device::CaptureRequest>
Camera3Device::RequestThread::waitForNextRequest() {
status_t res;
sp<CaptureRequest> nextRequest;
// Optimized a bit for the simple steady-state case (single repeating
// request), to avoid putting that request in the queue temporarily.
Mutex::Autolock l(mRequestLock);
while (mRequestQueue.empty()) {
//The staging list is clearly no longer empty at this point,
if (!mRepeatingRequests.empty()) {
// Always atomically enqueue all requests in a repeating request
// list. Guarantees a complete in-sequence set of captures to
// application.
const RequestList &requests = mRepeatingRequests;
RequestList::const_iterator firstRequest =
requests.begin();
nextRequest = *firstRequest;
//so move the staged requests into the request queue mRequestQueue.
mRequestQueue.insert(mRequestQueue.end(),
++firstRequest,
requests.end());
// No need to wait any longer
mRepeatingLastFrameNumber = mFrameNumber + requests.size() - 1;
break;
}
//The request thread waits here; on its next loop iteration the requests we placed in the
//staging list above get moved into the request queue.
res = mRequestSignal.waitRelative(mRequestLock, kRequestTimeout);
if ((mRequestQueue.empty() && mRepeatingRequests.empty()) ||
exitPending()) {
Mutex::Autolock pl(mPauseLock);
if (mPaused == false) {
ALOGV("%s: RequestThread: Going idle", __FUNCTION__);
mPaused = true;
// Let the tracker know
sp<StatusTracker> statusTracker = mStatusTracker.promote();
if (statusTracker != 0) {
statusTracker->markComponentIdle(mStatusId, Fence::NO_FENCE);
}
}
// Stop waiting for now and let thread management happen
return NULL;
}
}
//No repeating request was picked up above, so take the next request from the queue.
if (nextRequest == NULL) {
// Don't have a repeating request already in hand, so queue
// must have an entry now.
RequestList::iterator firstRequest =
mRequestQueue.begin();
nextRequest = *firstRequest;
mRequestQueue.erase(firstRequest);
}
//------ some code we don't care about yet removed
//Three more members of the request are filled in below; mFrameNumber is the frame number
//we later see in the HAL's processCaptureRequest().
if (nextRequest != NULL) {
nextRequest->mResultExtras.frameNumber = mFrameNumber++;
nextRequest->mResultExtras.afTriggerId = mCurrentAfTriggerId;
nextRequest->mResultExtras.precaptureTriggerId = mCurrentPreCaptureTriggerId;
}
return nextRequest;
}
Code snippet 8 – the request-processing thread loop
bool Camera3Device::RequestThread::threadLoop() {
status_t res;
// Handle paused state.
if (waitIfPaused()) {
return true;
}
// Get work to do
//The thread keeps fetching request objects through the call below.
sp<CaptureRequest> nextRequest = waitForNextRequest();
if (nextRequest == NULL) {
return true;
}
// Create request to HAL
//Pack the CaptureRequest into the HAL's camera3_capture_request_t.
camera3_capture_request_t request = camera3_capture_request_t();
//Save the frame number.
request.frame_number = nextRequest->mResultExtras.frameNumber;
Vector<camera3_stream_buffer_t> outputBuffers;
// Get the request ID, if any
int requestId;
camera_metadata_entry_t requestIdEntry =
nextRequest->mSettings.find(ANDROID_REQUEST_ID);
if (requestIdEntry.count > 0) {
//Extract the request id; it is used below to signal threads waiting for this request to reach the HAL.
requestId = requestIdEntry.data.i32[0];
} else {
ALOGW("%s: Did not have android.request.id set in the request",
__FUNCTION__);
requestId = NAME_NOT_FOUND;
}
// Insert any queued triggers (before metadata is locked)
int32_t triggerCount;
//Insert any queued AF trigger entries; these trigger autofocus/precapture actions in the HAL.
res = insertTriggers(nextRequest);
if (res < 0) {
SET_ERR("RequestThread: Unable to insert triggers "
"(capture request %d, HAL device: %s (%d)",
request.frame_number, strerror(-res), res);
cleanUpFailedRequest(request, nextRequest, outputBuffers);
return false;
}
triggerCount = res;
//Whether AF or touch-AE triggers were mixed in previously or this time.
bool triggersMixedIn = (triggerCount > 0 || mPrevTriggers > 0);
// If the request is the same as last, or we had triggers last time
//If triggers were set without trigger IDs, dummy trigger IDs (value 1) are inserted below for
//ANDROID_CONTROL_AF_TRIGGER_ID and ANDROID_CONTROL_AE_PRECAPTURE_ID.
if (mPrevRequest != nextRequest || triggersMixedIn) {
/**
* HAL workaround:
* Insert a dummy trigger ID if a trigger is set but no trigger ID is
*/
res = addDummyTriggerIds(nextRequest);
if (res != OK) {
SET_ERR("RequestThread: Unable to insert dummy trigger IDs "
"(capture request %d, HAL device: %s (%d)",
request.frame_number, strerror(-res), res);
cleanUpFailedRequest(request, nextRequest, outputBuffers);
return false;
}
/**
* The request should be presorted so accesses in HAL
* are O(logn). Sidenote, sorting a sorted metadata is nop.
*/
//The metadata is not modified past this point, so it is sorted here.
nextRequest->mSettings.sort();
//Lock the metadata. (I once tried, inside the HAL, to wrap this metadata in a CameraMetadata
//object so I could update it through the class methods, and failed; the reason is that it is
//already locked here. Unlock it first if you need to do that.)
request.settings = nextRequest->mSettings.getAndLock();
mPrevRequest = nextRequest;
ALOGVV("%s: Request settings are NEW", __FUNCTION__);
//------
camera3_stream_buffer_t inputBuffer;
uint32_t totalNumBuffers = 0;
// Fill in buffers
//There is no input buffer at this point.
if (nextRequest->mInputStream != NULL) {
request.input_buffer = &inputBuffer;
res = nextRequest->mInputStream->getInputBuffer(&inputBuffer);
if (res != OK) {
// Can't get input buffer from gralloc queue - this could be due to
// disconnected queue or other producer misbehavior, so not a fatal
// error
ALOGE("RequestThread: Can't get input buffer, skipping request:"
" %s (%d)", strerror(-res), res);
Mutex::Autolock l(mRequestLock);
if (mListener != NULL) {
mListener->notifyError(
ICameraDeviceCallbacks::ERROR_CAMERA_REQUEST,
nextRequest->mResultExtras);
}
cleanUpFailedRequest(request, nextRequest, outputBuffers);
return true;
}
totalNumBuffers += 1;
} else {
request.input_buffer = NULL;
}
//outputBuffers is the local vector defined above; add one element for each output stream
//referenced by the request's mOutputStreams.
outputBuffers.insertAt(camera3_stream_buffer_t(), 0,
nextRequest->mOutputStreams.size());
request.output_buffers = outputBuffers.array();
//Here is the key part: getBuffer() is where the buffer is acquired; step into it and you will
//find it is a dequeueBuffer operation. The producer side is doing the work here, whether for the
//preview stream or the recording stream.
for (size_t i = 0; i < nextRequest->mOutputStreams.size(); i++) {
res = nextRequest->mOutputStreams.editItemAt(i)->
getBuffer(&outputBuffers.editItemAt(i));
if (res != OK) {
//If acquiring a buffer fails, the app is notified of the failure via callback.
}
request.num_output_buffers++;
}
totalNumBuffers += request.num_output_buffers;
// Log request in the in-flight queue
sp<Camera3Device> parent = mParent.promote();
if (parent == NULL) {
// Should not happen, and nowhere to send errors to, so just log it
CLOGE("RequestThread: Parent is gone");
cleanUpFailedRequest(request, nextRequest, outputBuffers);
return false;
}
//This is very important: the request is stored, keyed by frame_number, in Camera3Device's
//mInFlightMap. Later, when the HAL returns frame information, the request is looked up in
//mInFlightMap by frame number and the timestamp, shutter and other data are stored into its
//mResultExtras member so that other functions, especially processCaptureResult(), can find them.
res = parent->registerInFlight(request.frame_number,
totalNumBuffers, nextRequest->mResultExtras,
/*hasInput*/request.input_buffer != NULL);
ALOGVV("%s: registered in flight requestId = %" PRId32 ", frameNumber = %" PRId64
", burstId = %" PRId32 ".",
__FUNCTION__,
nextRequest->mResultExtras.requestId, nextRequest->mResultExtras.frameNumber,
nextRequest->mResultExtras.burstId);
if (res != OK) {
SET_ERR("RequestThread: Unable to register new in-flight request:"
" %s (%d)", strerror(-res), res);
cleanUpFailedRequest(request, nextRequest, outputBuffers);
return false;
}
// Inform waitUntilRequestProcessed thread of a new request ID
{
Mutex::Autolock al(mLatestRequestMutex);
//Record the current requestId and wake any waiting threads. After a stream change, some
//operations must wait for the HAL to finish processing the current request before they can
//proceed; this is what enables that.
mLatestRequestId = requestId;
mLatestRequestSignal.signal();
}
// Submit request and block until ready for next one
ATRACE_ASYNC_BEGIN("frame capture", request.frame_number);
ATRACE_BEGIN("camera3->process_capture_request");
//Send the camera3_capture_request_t request to the HAL.
res = mHal3Device->ops->process_capture_request(mHal3Device, &request);
ATRACE_END();
if (res != OK) {
//omitted
}
// Update the latest request sent to HAL
if (request.settings != NULL) { // Don't update them if they were unchanged
Mutex::Autolock al(mLatestRequestMutex);
//Save the metadata of the most recent request sent to the HAL.
camera_metadata_t* cloned = clone_camera_metadata(request.settings);
mLatestRequest.acquire(cloned);
}
//Unlock the metadata so it can be updated later.
if (request.settings != NULL) {
nextRequest->mSettings.unlock(request.settings);
}
// Remove any previously queued triggers (after unlock)
//Remove the AF/AE trigger tags from the metadata.
res = removeTriggers(mPrevRequest);
mPrevTriggers = triggerCount;
return true;
}
At this point the framework's request has been handed to the HAL, which configures the underlying pipeline according to the request's settings; the buffers are mapped into the address space of the process hosting cameraserver and will be filled by the downstream modules. In summary, threadLoop() does the following: it waits for the next CaptureRequest via waitForNextRequest(); packs it into a camera3_capture_request_t, inserting trigger IDs and locking the metadata; dequeues an output buffer from each output stream via getBuffer(); registers the request in mInFlightMap via registerInFlight(), keyed by frame number; and finally calls process_capture_request() into the HAL, then unlocks the metadata and removes the triggers. A rough sketch of the HAL-side entry point that receives this request follows.
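The stub below only walks the camera3_capture_request_t fields discussed above, assuming the standard camera3.h definitions; it is a sketch of the receiving side, not a real HAL implementation.
```cpp
#include <hardware/camera3.h>

// Stub HAL3 request handler, showing what arrives from RequestThread::threadLoop().
static int stub_process_capture_request(const camera3_device_t* /*dev*/,
                                        camera3_capture_request_t* request) {
    // The frame number assigned by RequestThread and registered via registerInFlight().
    uint32_t frameNumber = request->frame_number;

    // The locked preview metadata (mPreviewRequest) packed into request->settings.
    const camera_metadata_t* settings = request->settings;

    // One camera3_stream_buffer_t per output stream, dequeued via getBuffer() above.
    for (uint32_t i = 0; i < request->num_output_buffers; i++) {
        const camera3_stream_buffer_t* buf = &request->output_buffers[i];
        // A real HAL queues buf->buffer to its capture pipeline and later returns it
        // through process_capture_result() carrying the same frame number.
        (void)buf;
    }

    (void)frameNumber;
    (void)settings;
    return 0;
}
```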
### IV. Summary
By now it should be clear (it is to me, at least) that while the camera is running there are five kinds of stream-processing threads plus one dedicated request thread that sends requests to the HAL. The threads synchronize through signals, and it is easy to lose track of how the code runs. A step that is easy to overlook is parent->registerInFlight(), called just before the request is sent, which stores the current request in a map (think of it as an array).
In the later result path, that map is used to fill the corresponding request with each frame's shutter and timestamp information, which is then returned to the app. That's it for now; the next post will analyze the Camera recording flow.