In this section we look at how the camera preview loop works. The Android 8.0 system source code I use is shared via Baidu netdisk; you can download it from the earlier article "Android 8.0系统源码分析--开篇", which contains both the download link and the password.
Anyone who has written a camera app against the API2 (camera2) interfaces knows that preview is started by calling setRepeatingRequest on CameraCaptureSession. That method is implemented by CameraCaptureSessionImpl, located at frameworks\base\core\java\android\hardware\camera2\impl\CameraCaptureSessionImpl.java. The source of setRepeatingRequest is as follows:
@Override
public synchronized int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
Handler handler) throws CameraAccessException {
if (request == null) {
throw new IllegalArgumentException("request must not be null");
} else if (request.isReprocess()) {
throw new IllegalArgumentException("repeating reprocess requests are not supported");
}
checkNotClosed();
handler = checkHandler(handler, callback);
if (DEBUG) {
Log.v(TAG, mIdString + "setRepeatingRequest - request " + request + ", callback " +
callback + " handler" + " " + handler);
}
return addPendingSequence(mDeviceImpl.setRepeatingRequest(request,
createCaptureCallbackProxy(handler, callback), mDeviceHandler));
}
Note that the first parameter is a single CaptureRequest; later it is wrapped into a List, so that List obviously contains exactly one element, namely the request we passed down here. The method then calls mDeviceImpl.setRepeatingRequest; the mDeviceImpl member is a CameraDeviceImpl, which lives in the same directory as the current file. Its setRepeatingRequest implementation is as follows:
public int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
Handler handler) throws CameraAccessException {
List<CaptureRequest> requestList = new ArrayList<CaptureRequest>();
requestList.add(request);
return submitCaptureRequest(requestList, callback, handler, /*streaming*/true);
}
Here we can see that the Request passed down from the upper layer is wrapped into a List with a single element, and then submitCaptureRequest is called to continue processing. The source of submitCaptureRequest is as follows:
private int submitCaptureRequest(List<CaptureRequest> requestList, CaptureCallback callback,
Handler handler, boolean repeating) throws CameraAccessException {
// Need a valid handler, or current thread needs to have a looper, if
// callback is valid
handler = checkHandler(handler, callback);
// Make sure that there all requests have at least 1 surface; all surfaces are non-null
for (CaptureRequest request : requestList) {
if (request.getTargets().isEmpty()) {
throw new IllegalArgumentException(
"Each request must have at least one Surface target");
}
for (Surface surface : request.getTargets()) {
if (surface == null) {
throw new IllegalArgumentException("Null Surface targets are not allowed");
}
}
}
synchronized (mInterfaceLock) {
checkIfCameraClosedOrInError();
if (repeating) {
stopRepeating();
}
SubmitInfo requestInfo;
CaptureRequest[] requestArray = requestList.toArray(new CaptureRequest[requestList.size()]);
requestInfo = mRemoteDevice.submitRequestList(requestArray, repeating);
if (DEBUG) {
Log.v(TAG, "last frame number " + requestInfo.getLastFrameNumber());
}
if (callback != null) {
mCaptureCallbackMap.put(requestInfo.getRequestId(),
new CaptureCallbackHolder(
callback, requestList, handler, repeating, mNextSessionId - 1));
} else {
if (DEBUG) {
Log.d(TAG, "Listen for request " + requestInfo.getRequestId() + " is null");
}
}
if (repeating) {
if (mRepeatingRequestId != REQUEST_ID_NONE) {
checkEarlyTriggerSequenceComplete(mRepeatingRequestId,
requestInfo.getLastFrameNumber());
}
mRepeatingRequestId = requestInfo.getRequestId();
} else {
mRequestLastFrameNumbersList.add(
new RequestLastFrameNumbersHolder(requestList, requestInfo));
}
if (mIdle) {
mDeviceHandler.post(mCallOnActive);
}
mIdle = false;
return requestInfo.getRequestId();
}
}
This method first validates the handler and Surface parameters and throws an exception if anything is wrong; request.getTargets() returns exactly the Surface objects we added at the app layer. After the checks, the request list is converted to an array and submitted to the CameraServer process via mRemoteDevice.submitRequestList(requestArray, repeating). The second argument, repeating, indicates whether the request should be repeated, which is what preview means: true marks a preview request that must repeat, false marks a capture request that produces only a single frame and does not repeat. This flag is passed all the way down and decides, inside CameraServer, which queue the current Request is inserted into, as we will see shortly. As for mRemoteDevice, we analyzed it in detail in the earlier article "Android 8.0系统源码分析--openCamera启动过程源码分析": it is the proxy for the Binder object that the CameraServer process returns to the client after openCamera succeeds, and it corresponds to the CameraDeviceClient object inside CameraServer; the framework merely adds a thin wrapper around it. So mRemoteDevice.submitRequestList(requestArray, repeating) crosses the process boundary via Binder and lands in the CameraDeviceClient object.
Binder shows up everywhere, so it is worth studying carefully and building a solid foundation; only then can we understand how the lower layers really work.
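Before moving into the native side, it helps to see what the two cases look like from the app's point of view. The following is a minimal, hedged sketch of typical API2 usage; the class name and its fields (mDevice, mSession, mPreviewSurface, mJpegSurface, mCallback, mHandler) are illustrative assumptions, not code from this article:
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;

// Hypothetical app-side helper; all fields are assumed to be set up during session configuration.
final class PreviewAndCaptureSketch {
    private final CameraDevice mDevice;
    private final CameraCaptureSession mSession;
    private final Surface mPreviewSurface;
    private final Surface mJpegSurface;
    private final CameraCaptureSession.CaptureCallback mCallback;
    private final Handler mHandler;

    PreviewAndCaptureSketch(CameraDevice device, CameraCaptureSession session,
            Surface previewSurface, Surface jpegSurface,
            CameraCaptureSession.CaptureCallback callback, Handler handler) {
        mDevice = device;
        mSession = session;
        mPreviewSurface = previewSurface;
        mJpegSurface = jpegSurface;
        mCallback = callback;
        mHandler = handler;
    }

    // repeating == true path: the request ends up in mRepeatingRequests inside CameraServer.
    void startPreview() throws CameraAccessException {
        CaptureRequest.Builder builder =
                mDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        builder.addTarget(mPreviewSurface);
        mSession.setRepeatingRequest(builder.build(), mCallback, mHandler);
    }

    // repeating == false path: the request ends up in mRequestQueue inside CameraServer.
    void takePicture() throws CameraAccessException {
        CaptureRequest.Builder builder =
                mDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.addTarget(mJpegSurface);
        mSession.capture(builder.build(), mCallback, mHandler);
    }
}
The only difference between the two methods is setRepeatingRequest versus capture, and that difference is exactly the repeating/streaming flag that travels down through CameraDeviceImpl into CameraDeviceClient below.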
Now let's continue with the submitRequestList method of CameraDeviceClient, located at frameworks\av\services\camera\libcameraservice\api2\CameraDeviceClient.cpp. Its source is as follows:
binder::Status CameraDeviceClient::submitRequestList(
const std::vector<hardware::camera2::CaptureRequest>& requests,
bool streaming,
/*out*/
hardware::camera2::utils::SubmitInfo *submitInfo) {
ATRACE_CALL();
ALOGV("%s-start of function. Request list size %zu", __FUNCTION__, requests.size());
binder::Status res = binder::Status::ok();
status_t err;
if ( !(res = checkPidStatus(__FUNCTION__) ).isOk()) {
return res;
}
Mutex::Autolock icl(mBinderSerializationLock);
if (!mDevice.get()) {
return STATUS_ERROR(CameraService::ERROR_DISCONNECTED, "Camera device no longer alive");
}
if (requests.empty()) {
ALOGE("%s: Camera %s: Sent null request. Rejecting request.",
__FUNCTION__, mCameraIdStr.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT, "Empty request list");
}
List<const CameraMetadata> metadataRequestList;
std::list<const SurfaceMap> surfaceMapList;
submitInfo->mRequestId = mRequestIdCounter;
uint32_t loopCounter = 0;
for (auto&& request: requests) {
if (request.mIsReprocess) {
if (!mInputStream.configured) {
ALOGE("%s: Camera %s: no input stream is configured.", __FUNCTION__,
mCameraIdStr.string());
return STATUS_ERROR_FMT(CameraService::ERROR_ILLEGAL_ARGUMENT,
"No input configured for camera %s but request is for reprocessing",
mCameraIdStr.string());
} else if (streaming) {
ALOGE("%s: Camera %s: streaming reprocess requests not supported.", __FUNCTION__,
mCameraIdStr.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Repeating reprocess requests not supported");
}
}
CameraMetadata metadata(request.mMetadata);
if (metadata.isEmpty()) {
ALOGE("%s: Camera %s: Sent empty metadata packet. Rejecting request.",
__FUNCTION__, mCameraIdStr.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Request settings are empty");
} else if (request.mSurfaceList.isEmpty()) {
ALOGE("%s: Camera %s: Requests must have at least one surface target. "
"Rejecting request.", __FUNCTION__, mCameraIdStr.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Request has no output targets");
}
if (!enforceRequestPermissions(metadata)) {
// Callee logs
return STATUS_ERROR(CameraService::ERROR_PERMISSION_DENIED,
"Caller does not have permission to change restricted controls");
}
/**
* Write in the output stream IDs and map from stream ID to surface ID
* which we calculate from the capture request's list of surface target
*/
SurfaceMap surfaceMap;
Vector<int32_t> outputStreamIds;
for (sp<Surface> surface : request.mSurfaceList) {
if (surface == 0) continue;
sp<IGraphicBufferProducer> gbp = surface->getIGraphicBufferProducer();
int idx = mStreamMap.indexOfKey(IInterface::asBinder(gbp));
// Trying to submit request with surface that wasn't created
if (idx == NAME_NOT_FOUND) {
ALOGE("%s: Camera %s: Tried to submit a request with a surface that"
" we have not called createStream on",
__FUNCTION__, mCameraIdStr.string());
return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
"Request targets Surface that is not part of current capture session");
}
const StreamSurfaceId& streamSurfaceId = mStreamMap.valueAt(idx);
if (surfaceMap.find(streamSurfaceId.streamId()) == surfaceMap.end()) {
surfaceMap[streamSurfaceId.streamId()] = std::vector<size_t>();
outputStreamIds.push_back(streamSurfaceId.streamId());
}
surfaceMap[streamSurfaceId.streamId()].push_back(streamSurfaceId.surfaceId());
ALOGV("%s: Camera %s: Appending output stream %d surface %d to request",
__FUNCTION__, mCameraIdStr.string(), streamSurfaceId.streamId(),
streamSurfaceId.surfaceId());
}
metadata.update(ANDROID_REQUEST_OUTPUT_STREAMS, &outputStreamIds[0],
outputStreamIds.size());
if (request.mIsReprocess) {
metadata.update(ANDROID_REQUEST_INPUT_STREAMS, &mInputStream.id, 1);
}
metadata.update(ANDROID_REQUEST_ID, &(submitInfo->mRequestId), /*size*/1);
loopCounter++; // loopCounter starts from 1
ALOGV("%s: Camera %s: Creating request with ID %d (%d of %zu)",
__FUNCTION__, mCameraIdStr.string(), submitInfo->mRequestId,
loopCounter, requests.size());
metadataRequestList.push_back(metadata);
surfaceMapList.push_back(surfaceMap);
}
mRequestIdCounter++;
if (streaming) {
err = mDevice->setStreamingRequestList(metadataRequestList, surfaceMapList,
&(submitInfo->mLastFrameNumber));
if (err != OK) {
String8 msg = String8::format(
"Camera %s: Got error %s (%d) after trying to set streaming request",
mCameraIdStr.string(), strerror(-err), err);
ALOGE("%s: %s", __FUNCTION__, msg.string());
res = STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION,
msg.string());
} else {
Mutex::Autolock idLock(mStreamingRequestIdLock);
mStreamingRequestId = submitInfo->mRequestId;
}
} else {
err = mDevice->captureList(metadataRequestList, surfaceMapList,
&(submitInfo->mLastFrameNumber));
if (err != OK) {
String8 msg = String8::format(
"Camera %s: Got error %s (%d) after trying to submit capture request",
mCameraIdStr.string(), strerror(-err), err);
ALOGE("%s: %s", __FUNCTION__, msg.string());
res = STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION,
msg.string());
}
ALOGV("%s: requestId = %d ", __FUNCTION__, submitInfo->mRequestId);
}
ALOGV("%s: Camera %s: End of function", __FUNCTION__, mCameraIdStr.string());
return res;
}
As usual, the method starts with parameter checks. The mDevice member is assigned at construction time; it is created with new in the constructor of the parent class Camera2ClientBase. Since we are submitting requests, the requests parameter obviously must not be empty. The following for loop validates each element of the incoming requests and then fills the local variables metadataRequestList and surfaceMapList, which are passed on to mDevice for further processing. The dispatch condition is exactly the repeating flag we passed down from the framework: for preview it calls mDevice->setStreamingRequestList(metadataRequestList, surfaceMapList, &(submitInfo->mLastFrameNumber)), and for still capture it calls mDevice->captureList(metadataRequestList, surfaceMapList, &(submitInfo->mLastFrameNumber)).
Now we reach the core of this section: why a single setRepeatingRequest call, carrying a single Request, is enough to keep preview frames flowing up continuously. From here on we analyze how the request loop actually works.
The mDevice member is an sp<CameraDeviceBase> strong pointer whose concrete implementation is Camera3Device (frameworks\av\services\camera\libcameraservice\device3\Camera3Device.cpp). Its captureList, setStreamingRequest and setStreamingRequestList methods are shown below:
status_t Camera3Device::captureList(const List<const CameraMetadata> &requests,
const std::list<const SurfaceMap> &surfaceMaps,
int64_t *lastFrameNumber) {
ATRACE_CALL();
return submitRequestsHelper(requests, surfaceMaps, /*repeating*/false, lastFrameNumber);
}
status_t Camera3Device::setStreamingRequest(const CameraMetadata &request,
int64_t* /*lastFrameNumber*/) {
ATRACE_CALL();
List<const CameraMetadata> requests;
std::list<const SurfaceMap> surfaceMaps;
convertToRequestList(requests, surfaceMaps, request);
return setStreamingRequestList(requests, /*surfaceMap*/surfaceMaps,
/*lastFrameNumber*/NULL);
}
status_t Camera3Device::setStreamingRequestList(const List<const CameraMetadata> &requests,
const std::list<const SurfaceMap> &surfaceMaps,
int64_t *lastFrameNumber) {
ATRACE_CALL();
return submitRequestsHelper(requests, surfaceMaps, /*repeating*/true, lastFrameNumber);
}
Very simple: all of them just delegate to submitRequestsHelper, whose source is as follows:
status_t Camera3Device::submitRequestsHelper(
const List<const CameraMetadata> &requests,
const std::list<const SurfaceMap> &surfaceMaps,
bool repeating,
/*out*/
int64_t *lastFrameNumber) {
ATRACE_CALL();
Mutex::Autolock il(mInterfaceLock);
Mutex::Autolock l(mLock);
status_t res = checkStatusOkToCaptureLocked();
if (res != OK) {
// error logged by previous call
return res;
}
RequestList requestList;
res = convertMetadataListToRequestListLocked(requests, surfaceMaps,
repeating, /*out*/&requestList);
if (res != OK) {
// error logged by previous call
return res;
}
if (repeating) {
res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
} else {
res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
}
if (res == OK) {
waitUntilStateThenRelock(/*active*/true, kActiveTimeout);
if (res != OK) {
SET_ERR_L("Can't transition to active in %f seconds!",
kActiveTimeout/1e9);
}
ALOGV("Camera %s: Capture request %" PRId32 " enqueued", mId.string(),
(*(requestList.begin()))->mResultExtras.requestId);
} else {
CLOGE("Cannot queue request. Impossible.");
return BAD_VALUE;
}
return res;
}
Here convertMetadataListToRequestListLocked converts the metadata list into a RequestList; we won't dig into it, but feel free to read it yourself if you are interested. After the conversion, depending on repeating, either setRepeatingRequests or queueRequestList is called on the mRequestThread member. Let's first look at queueRequestList, the capture path; its source is as follows:
status_t Camera3Device::RequestThread::queueRequestList(
List<sp<CaptureRequest> > &requests,
/*out*/
int64_t *lastFrameNumber) {
Mutex::Autolock l(mRequestLock);
for (List<sp<CaptureRequest> >::iterator it = requests.begin(); it != requests.end();
++it) {
mRequestQueue.push_back(*it);
}
if (lastFrameNumber != NULL) {
*lastFrameNumber = mFrameNumber + mRequestQueue.size() - 1;
ALOGV("%s: requestId %d, mFrameNumber %" PRId32 ", lastFrameNumber %" PRId64 ".",
__FUNCTION__, (*(requests.begin()))->mResultExtras.requestId, mFrameNumber,
*lastFrameNumber);
}
unpauseForNewRequests();
return OK;
}
The logic here is very clear: the for loop pushes the incoming requests into the member variable mRequestQueue, so keep in mind that mRequestQueue stores capture (non-repeating) requests. Then the output parameter lastFrameNumber is computed as mFrameNumber + mRequestQueue.size() - 1. mFrameNumber is the current frame number, an integer that starts at 0 and is only incremented once the request loop is actually processing frames, as we will see shortly. For example, if mFrameNumber is currently 100 and two requests are queued, lastFrameNumber becomes 101.
Next, the setRepeatingRequests method; its source is as follows:
status_t Camera3Device::RequestThread::setRepeatingRequests(
const RequestList &requests,
/*out*/
int64_t *lastFrameNumber) {
Mutex::Autolock l(mRequestLock);
if (lastFrameNumber != NULL) {
*lastFrameNumber = mRepeatingLastFrameNumber;
}
mRepeatingRequests.clear();
mRepeatingRequests.insert(mRepeatingRequests.begin(),
requests.begin(), requests.end());
unpauseForNewRequests();
mRepeatingLastFrameNumber = hardware::camera2::ICameraDeviceUser::NO_IN_FLIGHT_REPEATING_FRAMES;
return OK;
}
Here mRepeatingLastFrameNumber is written to the output parameter lastFrameNumber, mRepeatingRequests is cleared, and the incoming requests are inserted into mRepeatingRequests. Comparing this with queueRequestList makes the roles of the two member variables clear: mRepeatingRequests stores preview (repeating) requests, while mRequestQueue stores capture requests; be sure to keep the two apart. What may look odd is that the method does nothing more than insert the requests into a list; where does the real work happen? A look at RequestThread will make this clear.
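To anticipate the scheduling logic we are about to read, here is a simplified, Java-style sketch of how the request thread chooses between the two lists. It only illustrates the idea with invented names; the authoritative implementation is the C++ waitForNextRequestLocked method shown later:
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Simplified illustration of the two-queue scheduling idea (not the real AOSP code).
final class RequestSchedulingSketch {
    // captureQueue  ~ mRequestQueue      (one-shot capture requests, removed once taken)
    // repeatingList ~ mRepeatingRequests (preview request, never removed)
    private final Deque<String> captureQueue = new ArrayDeque<>();
    private final List<String> repeatingList = new ArrayList<>();

    String pickNextRequest() {
        // Capture requests have priority: if one is queued, it is taken and consumed.
        if (!captureQueue.isEmpty()) {
            return captureQueue.pollFirst();
        }
        // Otherwise reuse the repeating (preview) request; it stays in the list,
        // so every loop iteration produces another preview frame.
        if (!repeatingList.isEmpty()) {
            return repeatingList.get(0);
        }
        // Nothing to do: the real thread blocks on a condition variable with a 50 ms timeout.
        return null;
    }
}
This is the whole trick behind "one setRepeatingRequest call, endless preview frames": the repeating request is never consumed, so the loop keeps picking it up again.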
RequestThread is the key player of this section. It is a thread, created when Camera3Device has been constructed and initializeCommonLocked is called to initialize it. The source of initializeCommonLocked is as follows:
status_t Camera3Device::initializeCommonLocked() {
/** Start up status tracker thread */
mStatusTracker = new StatusTracker(this);
status_t res = mStatusTracker->run(String8::format("C3Dev-%s-Status", mId.string()).string());
if (res != OK) {
SET_ERR_L("Unable to start status tracking thread: %s (%d)",
strerror(-res), res);
mInterface->close();
mStatusTracker.clear();
return res;
}
/** Register in-flight map to the status tracker */
mInFlightStatusId = mStatusTracker->addComponent();
/** Create buffer manager */
mBufferManager = new Camera3BufferManager();
mTagMonitor.initialize(mVendorTagId);
/** Start up request queue thread */
mRequestThread = new RequestThread(this, mStatusTracker, mInterface.get());
res = mRequestThread->run(String8::format("C3Dev-%s-ReqQueue", mId.string()).string());
if (res != OK) {
SET_ERR_L("Unable to start request queue thread: %s (%d)",
strerror(-res), res);
mInterface->close();
mRequestThread.clear();
return res;
}
mPreparerThread = new PreparerThread();
internalUpdateStatusLocked(STATUS_UNCONFIGURED);
mNextStreamId = 0;
mDummyStreamId = NO_STREAM;
mNeedConfig = true;
mPauseStateNotify = false;
// Measure the clock domain offset between camera and video/hw_composer
camera_metadata_entry timestampSource =
mDeviceInfo.find(ANDROID_SENSOR_INFO_TIMESTAMP_SOURCE);
if (timestampSource.count > 0 && timestampSource.data.u8[0] ==
ANDROID_SENSOR_INFO_TIMESTAMP_SOURCE_REALTIME) {
mTimestampOffset = getMonoToBoottimeOffset();
}
// Will the HAL be sending in early partial result metadata?
camera_metadata_entry partialResultsCount =
mDeviceInfo.find(ANDROID_REQUEST_PARTIAL_RESULT_COUNT);
if (partialResultsCount.count > 0) {
mNumPartialResults = partialResultsCount.data.i32[0];
mUsePartialResult = (mNumPartialResults > 1);
}
camera_metadata_entry configs =
mDeviceInfo.find(ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS);
for (uint32_t i = 0; i < configs.count; i += 4) {
if (configs.data.i32[i] == HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED &&
configs.data.i32[i + 3] ==
ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS_INPUT) {
mSupportedOpaqueInputSizes.add(Size(configs.data.i32[i + 1],
configs.data.i32[i + 2]));
}
}
return OK;
}
When mRequestThread->run(String8::format("C3Dev-%s-ReqQueue", mId.string()).string()) is called, the argument is the thread name, so you can run ps -T <pid> to list all threads of the process and you will see a thread whose name ends with "ReqQueue" -- that is the RequestThread. It is started as soon as Camera3Device is initialized, and it is the body of our preview loop.
RequestThread inherits from Android's Thread class, so its main function is threadLoop. Let's look at the implementation of RequestThread::threadLoop; the source is as follows:
bool Camera3Device::RequestThread::threadLoop() {
ATRACE_CALL();
status_t res;
// Handle paused state.
if (waitIfPaused()) {
return true;
}
// Wait for the next batch of requests.
waitForNextRequestBatch();
if (mNextRequests.size() == 0) {
return true;
}
// Get the latest request ID, if any
int latestRequestId;
camera_metadata_entry_t requestIdEntry = mNextRequests[mNextRequests.size() - 1].
captureRequest->mSettings.find(ANDROID_REQUEST_ID);
if (requestIdEntry.count > 0) {
latestRequestId = requestIdEntry.data.i32[0];
} else {
ALOGW("%s: Did not have android.request.id set in the request.", __FUNCTION__);
latestRequestId = NAME_NOT_FOUND;
}
// Prepare a batch of HAL requests and output buffers.
res = prepareHalRequests();
if (res == TIMED_OUT) {
// Not a fatal error if getting output buffers time out.
cleanUpFailedRequests(/*sendRequestError*/ true);
// Check if any stream is abandoned.
checkAndStopRepeatingRequest();
return true;
} else if (res != OK) {
cleanUpFailedRequests(/*sendRequestError*/ false);
return false;
}
// Inform waitUntilRequestProcessed thread of a new request ID
{
Mutex::Autolock al(mLatestRequestMutex);
mLatestRequestId = latestRequestId;
mLatestRequestSignal.signal();
}
// Submit a batch of requests to HAL.
// Use flush lock only when submitting multilple requests in a batch.
// TODO: The problem with flush lock is flush() will be blocked by process_capture_request()
// which may take a long time to finish so synchronizing flush() and
// process_capture_request() defeats the purpose of cancelling requests ASAP with flush().
// For now, only synchronize for high speed recording and we should figure something out for
// removing the synchronization.
bool useFlushLock = mNextRequests.size() > 1;
if (useFlushLock) {
mFlushLock.lock();
}
ALOGVV("%s: %d: submitting %zu requests in a batch.", __FUNCTION__, __LINE__,
mNextRequests.size());
bool submitRequestSuccess = false;
nsecs_t tRequestStart = systemTime(SYSTEM_TIME_MONOTONIC);
if (mInterface->supportBatchRequest()) {
submitRequestSuccess = sendRequestsBatch();
} else {
submitRequestSuccess = sendRequestsOneByOne();
}
nsecs_t tRequestEnd = systemTime(SYSTEM_TIME_MONOTONIC);
mRequestLatency.add(tRequestStart, tRequestEnd);
if (useFlushLock) {
mFlushLock.unlock();
}
// Unset as current request
{
Mutex::Autolock l(mRequestLock);
mNextRequests.clear();
}
return submitRequestSuccess;
}
waitIfPaused means: if the thread is in the paused state, do nothing and return true. That return value is exactly what decides whether threadLoop keeps looping, so in the paused state the thread simply loops again. This raises a question: right after initialization, before any preview request has been issued, wouldn't the thread spin here doing nothing and waste CPU time slices? Don't worry -- a look at the implementation of waitIfPaused shows that Google's engineers would never make such a basic mistake. Its source is as follows:
bool Camera3Device::RequestThread::waitIfPaused() {
status_t res;
Mutex::Autolock l(mPauseLock);
while (mDoPause) {
if (mPaused == false) {
mPaused = true;
ALOGV("%s: RequestThread: Paused", __FUNCTION__);
// Let the tracker know
sp<StatusTracker> statusTracker = mStatusTracker.promote();
if (statusTracker != 0) {
statusTracker->markComponentIdle(mStatusId, Fence::NO_FENCE);
}
}
res = mDoPauseSignal.waitRelative(mPauseLock, kRequestTimeout);
if (res == TIMED_OUT || exitPending()) {
return true;
}
}
// We don't set mPaused to false here, because waitForNextRequest needs
// to further manage the paused state in case of starvation.
return false;
}
In the situation described above, the thread executes mDoPauseSignal.waitRelative(mPauseLock, kRequestTimeout) and waits. kRequestTimeout is defined in the Camera3Device.h header as follows:
static const nsecs_t kRequestTimeout = 50e6; // 50 ms
So when there is no pending request, the thread waits 50 ms before checking again. Back in threadLoop, before going into details, let's outline the key steps: waitForNextRequestBatch prepares the next batch of requests; prepareHalRequests fills in the hal_request of each prepared request, completing it; finally, depending on whether mInterface->supportBatchRequest() reports batch support, sendRequestsBatch or sendRequestsOneByOne sends the prepared requests to the HAL process, i.e. CameraHalServer, for processing. The method returns submitRequestSuccess: if it is true the loop continues, if it is false something went wrong along the way and the RequestThread exits.
With this outline in mind, the big picture is clear: the whole camera preview loop is driven here, and everything revolves around the mNextRequests member. Let's now look closely at the three functions waitForNextRequestBatch, prepareHalRequests and sendRequestsBatch (assuming batch submission is supported).
The source of waitForNextRequestBatch is as follows:
void Camera3Device::RequestThread::waitForNextRequestBatch() {
// Optimized a bit for the simple steady-state case (single repeating
// request), to avoid putting that request in the queue temporarily.
Mutex::Autolock l(mRequestLock);
assert(mNextRequests.empty());
NextRequest nextRequest;
nextRequest.captureRequest = waitForNextRequestLocked();
if (nextRequest.captureRequest == nullptr) {
return;
}
nextRequest.halRequest = camera3_capture_request_t();
nextRequest.submitted = false;
mNextRequests.add(nextRequest);
// Wait for additional requests
const size_t batchSize = nextRequest.captureRequest->mBatchSize;
for (size_t i = 1; i < batchSize; i++) {
NextRequest additionalRequest;
additionalRequest.captureRequest = waitForNextRequestLocked();
if (additionalRequest.captureRequest == nullptr) {
break;
}
additionalRequest.halRequest = camera3_capture_request_t();
additionalRequest.submitted = false;
mNextRequests.add(additionalRequest);
}
if (mNextRequests.size() < batchSize) {
ALOGE("RequestThread: only get %zu out of %zu requests. Skipping requests.",
mNextRequests.size(), batchSize);
cleanUpFailedRequests(/*sendRequestError*/true);
}
return;
}
It first asserts that the member variable mNextRequests is empty. From this we can already infer that the vector is cleared after each frame is processed: its sole purpose is to hold the requests for the next frame, and once that frame is handled it is emptied and refilled for the next one. A local nextRequest is then created as the element to add to mNextRequests: its captureRequest field is filled by waitForNextRequestLocked; its halRequest field is constructed as an empty camera3_capture_request_t, so at this point it is just an empty shell; submitted indicates whether the request has been handed over for processing, so here it is false and only becomes true after the request has actually been submitted to the HAL process. batchSize is normally 1 (I verified this with logs); values greater than 1 are used when a constrained high-speed recording session submits a whole batch of requests together, which is what the comment in prepareHalRequests below refers to. Also note the for loop condition, for (size_t i = 1; i < batchSize; i++): i starts at 1 and batchSize is 1, so the loop body is never entered, which means mNextRequests ends up containing exactly one nextRequest and its size is 1.
Let's continue with waitForNextRequestLocked, which it calls; the source is as follows:
sp<CaptureRequest>
Camera3Device::RequestThread::waitForNextRequestLocked() {
status_t res;
sp<CaptureRequest> nextRequest;
while (mRequestQueue.empty()) {
if (!mRepeatingRequests.empty()) {
// Always atomically enqueue all requests in a repeating request
// list. Guarantees a complete in-sequence set of captures to
// application.
const RequestList &requests = mRepeatingRequests;
RequestList::const_iterator firstRequest =
requests.begin();
nextRequest = *firstRequest;
mRequestQueue.insert(mRequestQueue.end(),
++firstRequest,
requests.end());
// No need to wait any longer
mRepeatingLastFrameNumber = mFrameNumber + requests.size() - 1;
break;
}
res = mRequestSignal.waitRelative(mRequestLock, kRequestTimeout);
if ((mRequestQueue.empty() && mRepeatingRequests.empty()) ||
exitPending()) {
Mutex::Autolock pl(mPauseLock);
if (mPaused == false) {
ALOGV("%s: RequestThread: Going idle", __FUNCTION__);
mPaused = true;
// Let the tracker know
sp<StatusTracker> statusTracker = mStatusTracker.promote();
if (statusTracker != 0) {
statusTracker->markComponentIdle(mStatusId, Fence::NO_FENCE);
}
}
// Stop waiting for now and let thread management happen
return NULL;
}
}
if (nextRequest == NULL) {
// Don't have a repeating request already in hand, so queue
// must have an entry now.
RequestList::iterator firstRequest =
mRequestQueue.begin();
nextRequest = *firstRequest;
mRequestQueue.erase(firstRequest);
if (mRequestQueue.empty() && !nextRequest->mRepeating) {
sp<NotificationListener> listener = mListener.promote();
if (listener != NULL) {
listener->notifyRequestQueueEmpty();
}
}
}
// In case we've been unpaused by setPaused clearing mDoPause, need to
// update internal pause state (capture/setRepeatingRequest unpause
// directly).
Mutex::Autolock pl(mPauseLock);
if (mPaused) {
ALOGV("%s: RequestThread: Unpaused", __FUNCTION__);
sp<StatusTracker> statusTracker = mStatusTracker.promote();
if (statusTracker != 0) {
statusTracker->markComponentActive(mStatusId);
}
}
mPaused = false;
// Check if we've reconfigured since last time, and reset the preview
// request if so. Can't use 'NULL request == repeat' across configure calls.
if (mReconfigured) {
mPrevRequest.clear();
mReconfigured = false;
}
if (nextRequest != NULL) {
nextRequest->mResultExtras.frameNumber = mFrameNumber++;
nextRequest->mResultExtras.afTriggerId = mCurrentAfTriggerId;
nextRequest->mResultExtras.precaptureTriggerId = mCurrentPreCaptureTriggerId;
// Since RequestThread::clear() removes buffers from the input stream,
// get the right buffer here before unlocking mRequestLock
if (nextRequest->mInputStream != NULL) {
res = nextRequest->mInputStream->getInputBuffer(&nextRequest->mInputBuffer);
if (res != OK) {
// Can't get input buffer from gralloc queue - this could be due to
// disconnected queue or other producer misbehavior, so not a fatal
// error
ALOGE("%s: Can't get input buffer, skipping request:"
" %s (%d)", __FUNCTION__, strerror(-res), res);
sp<NotificationListener> listener = mListener.promote();
if (listener != NULL) {
listener->notifyError(
hardware::camera2::ICameraDeviceCallbacks::ERROR_CAMERA_REQUEST,
nextRequest->mResultExtras);
}
return NULL;
}
}
}
return nextRequest;
}
The while loop at the top shows directly that the method tries mRequestQueue first, which means capture requests have a higher priority than preview requests. For example, if both the capture queue and the repeating list contain a request when threadLoop fetches the next frame, the while condition fails because the capture queue is not empty, the if (nextRequest == NULL) branch then runs, and the request is popped from the capture queue. Now apply this to our preview scenario: the capture queue is empty and if (!mRepeatingRequests.empty()) holds, because setRepeatingRequest earlier inserted our wrapped request into the repeating list. The head of the repeating list is assigned to the local nextRequest, and the remaining elements (from ++firstRequest to the end) are inserted into mRequestQueue. The code comment explains why: a repeating request list may contain more than one request, and atomically enqueuing the rest guarantees the application a complete, in-sequence set of captures; with a single repeating request, as in our case, nothing is actually inserted. Finally mRepeatingLastFrameNumber is updated.
Since nextRequest is now non-null, the if (nextRequest != NULL) block at the end of the method runs, and its first statement assigns the frame number using mFrameNumber++ (assign first, then increment) -- this is where the frame-number increment mentioned earlier happens. Pay close attention to this frame number: between CameraServer and CameraHalServer, one frame number corresponds to exactly one result. In other words, for every request I send you, you must return me a result, and only then can I run the corresponding preview or capture post-processing. It is the core variable that pairs requests and results across the two processes. The remaining fields are then filled in and nextRequest is returned.
With the NextRequest prepared, we return to waitForNextRequestBatch, where the request for the next frame gets added to the mNextRequests member. Going back up to threadLoop, mNextRequests now has size 1, and the next step is prepareHalRequests -- the comment above the call states it plainly: "Prepare a batch of HAL requests and output buffers."
Let's look at the implementation of prepareHalRequests; the source is as follows:
status_t Camera3Device::RequestThread::prepareHalRequests() {
ATRACE_CALL();
for (size_t i = 0; i < mNextRequests.size(); i++) {
auto& nextRequest = mNextRequests.editItemAt(i);
sp<CaptureRequest> captureRequest = nextRequest.captureRequest;
camera3_capture_request_t* halRequest = &nextRequest.halRequest;
Vector<camera3_stream_buffer_t>* outputBuffers = &nextRequest.outputBuffers;
// Prepare a request to HAL
halRequest->frame_number = captureRequest->mResultExtras.frameNumber;
// Insert any queued triggers (before metadata is locked)
status_t res = insertTriggers(captureRequest);
if (res < 0) {
SET_ERR("RequestThread: Unable to insert triggers "
"(capture request %d, HAL device: %s (%d)",
halRequest->frame_number, strerror(-res), res);
return INVALID_OPERATION;
}
int triggerCount = res;
bool triggersMixedIn = (triggerCount > 0 || mPrevTriggers > 0);
mPrevTriggers = triggerCount;
// If the request is the same as last, or we had triggers last time
if (mPrevRequest != captureRequest || triggersMixedIn) {
/**
* HAL workaround:
* Insert a dummy trigger ID if a trigger is set but no trigger ID is
*/
res = addDummyTriggerIds(captureRequest);
if (res != OK) {
SET_ERR("RequestThread: Unable to insert dummy trigger IDs "
"(capture request %d, HAL device: %s (%d)",
halRequest->frame_number, strerror(-res), res);
return INVALID_OPERATION;
}
/**
* The request should be presorted so accesses in HAL
* are O(logn). Sidenote, sorting a sorted metadata is nop.
*/
captureRequest->mSettings.sort();
halRequest->settings = captureRequest->mSettings.getAndLock();
mPrevRequest = captureRequest;
ALOGVV("%s: Request settings are NEW", __FUNCTION__);
IF_ALOGV() {
camera_metadata_ro_entry_t e = camera_metadata_ro_entry_t();
find_camera_metadata_ro_entry(
halRequest->settings,
ANDROID_CONTROL_AF_TRIGGER,
&e
);
if (e.count > 0) {
ALOGV("%s: Request (frame num %d) had AF trigger 0x%x",
__FUNCTION__,
halRequest->frame_number,
e.data.u8[0]);
}
}
} else {
// leave request.settings NULL to indicate 'reuse latest given'
ALOGVV("%s: Request settings are REUSED",
__FUNCTION__);
}
uint32_t totalNumBuffers = 0;
// Fill in buffers
if (captureRequest->mInputStream != NULL) {
halRequest->input_buffer = &captureRequest->mInputBuffer;
totalNumBuffers += 1;
} else {
halRequest->input_buffer = NULL;
}
outputBuffers->insertAt(camera3_stream_buffer_t(), 0,
captureRequest->mOutputStreams.size());
halRequest->output_buffers = outputBuffers->array();
for (size_t j = 0; j < captureRequest->mOutputStreams.size(); j++) {
sp<Camera3OutputStreamInterface> outputStream = captureRequest->mOutputStreams.editItemAt(j);
// Prepare video buffers for high speed recording on the first video request.
if (mPrepareVideoStream && outputStream->isVideoStream()) {
// Only try to prepare video stream on the first video request.
mPrepareVideoStream = false;
res = outputStream->startPrepare(Camera3StreamInterface::ALLOCATE_PIPELINE_MAX);
while (res == NOT_ENOUGH_DATA) {
res = outputStream->prepareNextBuffer();
}
if (res != OK) {
ALOGW("%s: Preparing video buffers for high speed failed: %s (%d)",
__FUNCTION__, strerror(-res), res);
outputStream->cancelPrepare();
}
}
res = outputStream->getBuffer(&outputBuffers->editItemAt(j),
captureRequest->mOutputSurfaces[j]);
if (res != OK) {
// Can't get output buffer from gralloc queue - this could be due to
// abandoned queue or other consumer misbehavior, so not a fatal
// error
ALOGE("RequestThread: Can't get output buffer, skipping request:"
" %s (%d)", strerror(-res), res);
return TIMED_OUT;
}
halRequest->num_output_buffers++;
}
totalNumBuffers += halRequest->num_output_buffers;
// Log request in the in-flight queue
sp<Camera3Device> parent = mParent.promote();
if (parent == NULL) {
// Should not happen, and nowhere to send errors to, so just log it
CLOGE("RequestThread: Parent is gone");
return INVALID_OPERATION;
}
// If this request list is for constrained high speed recording (not
// preview), and the current request is not the last one in the batch,
// do not send callback to the app.
bool hasCallback = true;
if (mNextRequests[0].captureRequest->mBatchSize > 1 && i != mNextRequests.size()-1) {
hasCallback = false;
}
res = parent->registerInFlight(halRequest->frame_number,
totalNumBuffers, captureRequest->mResultExtras,
/*hasInput*/halRequest->input_buffer != NULL,
hasCallback);
ALOGVV("%s: registered in flight requestId = %" PRId32 ", frameNumber = %" PRId64
", burstId = %" PRId32 ".",
__FUNCTION__,
captureRequest->mResultExtras.requestId, captureRequest->mResultExtras.frameNumber,
captureRequest->mResultExtras.burstId);
if (res != OK) {
SET_ERR("RequestThread: Unable to register new in-flight request:"
" %s (%d)", strerror(-res), res);
return INVALID_OPERATION;
}
}
return OK;
}
Before analyzing this method, be clear about its most important job: preparing the output buffers. Everything the HAL does revolves around the output buffers, so after reading this method we must understand how those buffers are prepared and where they end up. The whole method is one for loop over the incoming requests, and the body fills in the fields of halRequest one by one, building up the HAL request step by step. The output buffers live in the outputBuffers member, and they are obtained by calling outputStream->getBuffer(&outputBuffers->editItemAt(j), captureRequest->mOutputSurfaces[j]). Here we assume outputStream is a Camera3OutputStream (there are other stream types, such as Camera3SharedOutputStream); getBuffer is implemented in the parent class Camera3Stream, located at frameworks\av\services\camera\libcameraservice\device3\Camera3Stream.cpp. Its getBuffer source is as follows:
status_t Camera3Stream::getBuffer(camera3_stream_buffer *buffer,
const std::vector<size_t>& surface_ids) {
ATRACE_CALL();
Mutex::Autolock l(mLock);
status_t res = OK;
// This function should be only called when the stream is configured already.
if (mState != STATE_CONFIGURED) {
ALOGE("%s: Stream %d: Can't get buffers if stream is not in CONFIGURED state %d",
__FUNCTION__, mId, mState);
return INVALID_OPERATION;
}
// Wait for new buffer returned back if we are running into the limit.
if (getHandoutOutputBufferCountLocked() == camera3_stream::max_buffers) {
ALOGV("%s: Already dequeued max output buffers (%d), wait for next returned one.",
__FUNCTION__, camera3_stream::max_buffers);
nsecs_t waitStart = systemTime(SYSTEM_TIME_MONOTONIC);
res = mOutputBufferReturnedSignal.waitRelative(mLock, kWaitForBufferDuration);
nsecs_t waitEnd = systemTime(SYSTEM_TIME_MONOTONIC);
mBufferLimitLatency.add(waitStart, waitEnd);
if (res != OK) {
if (res == TIMED_OUT) {
ALOGE("%s: wait for output buffer return timed out after %lldms (max_buffers %d)",
__FUNCTION__, kWaitForBufferDuration / 1000000LL,
camera3_stream::max_buffers);
}
return res;
}
}
res = getBufferLocked(buffer, surface_ids);
if (res == OK) {
fireBufferListenersLocked(*buffer, /*acquired*/true, /*output*/true);
if (buffer->buffer) {
mOutstandingBuffers.push_back(*buffer->buffer);
}
}
return res;
}
One note first: the fireBufferListenersLocked callback at the end is only relevant to the API1 architecture, because such listeners are only registered there and no longer exist in the API2 path, so we won't analyze the logic inside the final if (res == OK) block. Next is getBufferLocked, which is implemented by the subclass, Camera3OutputStream; the source is as follows:
status_t Camera3OutputStream::getBufferLocked(camera3_stream_buffer *buffer,
const std::vector<size_t>&) {
ATRACE_CALL();
ANativeWindowBuffer* anb;
int fenceFd = -1;
status_t res;
res = getBufferLockedCommon(&anb, &fenceFd);
if (res != OK) {
return res;
}
/**
* FenceFD now owned by HAL except in case of error,
* in which case we reassign it to acquire_fence
*/
handoutBufferLocked(*buffer, &(anb->handle), /*acquireFence*/fenceFd,
/*releaseFence*/-1, CAMERA3_BUFFER_STATUS_OK, /*output*/true);
return OK;
}
Its goal is to fill in the buffer pointed to by the first parameter; it simply calls getBufferLockedCommon for further processing. The source of getBufferLockedCommon is as follows:
status_t Camera3OutputStream::getBufferLockedCommon(ANativeWindowBuffer** anb, int* fenceFd) {
ATRACE_CALL();
status_t res;
if ((res = getBufferPreconditionCheckLocked()) != OK) {
return res;
}
bool gotBufferFromManager = false;
if (mUseBufferManager) {
sp<GraphicBuffer> gb;
res = mBufferManager->getBufferForStream(getId(), getStreamSetId(), &gb, fenceFd);
if (res == OK) {
// Attach this buffer to the bufferQueue: the buffer will be in dequeue state after a
// successful return.
*anb = gb.get();
res = mConsumer->attachBuffer(*anb);
if (res != OK) {
ALOGE("%s: Stream %d: Can't attach the output buffer to this surface: %s (%d)",
__FUNCTION__, mId, strerror(-res), res);
return res;
}
gotBufferFromManager = true;
ALOGV("Stream %d: Attached new buffer", getId());
} else if (res == ALREADY_EXISTS) {
// Have sufficient free buffers already attached, can just
// dequeue from buffer queue
ALOGV("Stream %d: Reusing attached buffer", getId());
gotBufferFromManager = false;
} else if (res != OK) {
ALOGE("%s: Stream %d: Can't get next output buffer from buffer manager: %s (%d)",
__FUNCTION__, mId, strerror(-res), res);
return res;
}
}
if (!gotBufferFromManager) {
/**
* Release the lock briefly to avoid deadlock for below scenario:
* Thread 1: StreamingProcessor::startStream -> Camera3Stream::isConfiguring().
* This thread acquired StreamingProcessor lock and try to lock Camera3Stream lock.
* Thread 2: Camera3Stream::returnBuffer->StreamingProcessor::onFrameAvailable().
* This thread acquired Camera3Stream lock and bufferQueue lock, and try to lock
* StreamingProcessor lock.
* Thread 3: Camera3Stream::getBuffer(). This thread acquired Camera3Stream lock
* and try to lock bufferQueue lock.
* Then there is circular locking dependency.
*/
sp<ANativeWindow> currentConsumer = mConsumer;
mLock.unlock();
nsecs_t dequeueStart = systemTime(SYSTEM_TIME_MONOTONIC);
res = currentConsumer->dequeueBuffer(currentConsumer.get(), anb, fenceFd);
nsecs_t dequeueEnd = systemTime(SYSTEM_TIME_MONOTONIC);
mDequeueBufferLatency.add(dequeueStart, dequeueEnd);
mLock.lock();
if (res != OK) {
ALOGE("%s: Stream %d: Can't dequeue next output buffer: %s (%d)",
__FUNCTION__, mId, strerror(-res), res);
// Only transition to STATE_ABANDONED from STATE_CONFIGURED. (If it is STATE_PREPARING,
// let prepareNextBuffer handle the error.)
if (res == NO_INIT && mState == STATE_CONFIGURED) {
mState = STATE_ABANDONED;
}
return res;
}
}
if (res == OK) {
std::vector<sp<GraphicBuffer>> removedBuffers;
res = mConsumer->getAndFlushRemovedBuffers(&removedBuffers);
if (res == OK) {
onBuffersRemovedLocked(removedBuffers);
if (mUseBufferManager && removedBuffers.size() > 0) {
mBufferManager->onBuffersRemoved(getId(), getStreamSetId(), removedBuffers.size());
}
}
}
return res;
}
This is where the buffer finally gets assigned. One point worth emphasizing: mUseBufferManager is always false here. I did not understand this at first -- when adding debug logs I expected it to be true -- but tracing it through shows it stays false. It is only ever set to true at line 477 of configureConsumerQueueLocked, the stream-configuration method, and only when the condition if (mBufferManager != 0 && mSetId > CAMERA3_STREAM_SET_ID_INVALID) holds. The first part holds: mBufferManager is set right after the stream object is constructed, so it is not null. mSetId is a member defined in the parent class Camera3Stream and assigned in its constructor; following that parameter all the way up, it originates in CameraDeviceImpl when configureStreamsChecked configures the streams and calls mRemoteDevice.createStream(outConfig): its value is the mSurfaceGroupId member of the outConfig argument, whose type is OutputConfiguration. And outConfig is built with new OutputConfiguration(Surface), the single-Surface constructor. OutputConfiguration lives at frameworks\base\core\java\android\hardware\camera2\params\OutputConfiguration.java, and that constructor reads:
public OutputConfiguration(@NonNull Surface surface) {
this(SURFACE_GROUP_ID_NONE, surface, ROTATION_0);
}
It simply delegates to another constructor, whose source is as follows:
@SystemApi
public OutputConfiguration(int surfaceGroupId, @NonNull Surface surface, int rotation) {
checkNotNull(surface, "Surface must not be null");
checkArgumentInRange(rotation, ROTATION_0, ROTATION_270, "Rotation constant");
mSurfaceGroupId = surfaceGroupId;
mSurfaceType = SURFACE_TYPE_UNKNOWN;
mSurfaces = new ArrayList<Surface>();
mSurfaces.add(surface);
mRotation = rotation;
mConfiguredSize = SurfaceUtils.getSurfaceSize(surface);
mConfiguredFormat = SurfaceUtils.getSurfaceFormat(surface);
mConfiguredDataspace = SurfaceUtils.getSurfaceDataspace(surface);
mConfiguredGenerationId = surface.getGenerationId();
mIsDeferredConfig = false;
mIsShared = false;
}
Here we can see that the first argument, SURFACE_GROUP_ID_NONE (value -1), is assigned to the mSurfaceGroupId member. That is why the second half of the condition if (mBufferManager != 0 && mSetId > CAMERA3_STREAM_SET_ID_INVALID) is false, and therefore the buffers here are not managed by Camera3BufferManager -- keep that clear.
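In other words, whether the Camera3BufferManager path is ever taken is decided by how the app builds its OutputConfiguration. The following is a hedged app-side sketch; cameraDevice, previewSurface, stateCallback and handler are assumed to exist, and the surface group id of 0 is just an example value:
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.params.OutputConfiguration;
import android.os.Handler;
import android.view.Surface;
import java.util.Arrays;

// Hypothetical snippet contrasting the two OutputConfiguration constructors.
final class OutputConfigurationSketch {
    static void createSession(CameraDevice cameraDevice, Surface previewSurface,
            CameraCaptureSession.StateCallback stateCallback, Handler handler)
            throws CameraAccessException {
        // Common case: the one-argument constructor uses SURFACE_GROUP_ID_NONE (-1), so the
        // native stream's set id stays invalid and mUseBufferManager remains false:
        //     OutputConfiguration plain = new OutputConfiguration(previewSurface);
        // Passing an explicit, non-negative surface group id is what would give the native
        // stream a valid set id -- one of the preconditions discussed above.
        OutputConfiguration grouped =
                new OutputConfiguration(/*surfaceGroupId*/ 0, previewSurface);
        // Sessions built from OutputConfiguration objects use this overload (API 24+).
        cameraDevice.createCaptureSessionByOutputConfigurations(
                Arrays.asList(grouped), stateCallback, handler);
    }
}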
Back on our main path, in getBufferLockedCommon of Camera3OutputStream: the if (mUseBufferManager) branch is skipped, gotBufferFromManager keeps its initial value false, so we enter the if (!gotBufferFromManager) branch and reach currentConsumer->dequeueBuffer(currentConsumer.get(), anb, fenceFd) -- at last! This is where every buffer the HAL layer works on is allocated: it is dequeued from the Surface configured during configureStream, using Android's native buffer management, i.e. the native buffer_handle_t. These buffers are all allocated by the gralloc driver and backed by shared memory, so the different processes can operate on them without any copying; they just release the buffer when they are done with it.
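For completeness, the application sits at the other end of this BufferQueue: the preview Surface usually comes from a SurfaceView or TextureView, or from an ImageReader when the app wants to access the pixel data itself. Below is a hedged sketch of the ImageReader endpoint; the size, format and listener body are assumptions:
import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;
import android.view.Surface;

// Hypothetical snippet showing the app-side consumer of the buffers the HAL fills in.
// A real app must keep a reference to the ImageReader alive for as long as it is used.
final class ImageReaderSketch {
    static Surface createPreviewTarget(Handler handler) {
        ImageReader reader = ImageReader.newInstance(1920, 1080,
                ImageFormat.YUV_420_888, /*maxImages*/ 4);
        reader.setOnImageAvailableListener(r -> {
            // Each Image wraps one of the dequeued gralloc buffers; closing it returns
            // the buffer to the queue so it can be reused for a later frame.
            try (Image image = r.acquireNextImage()) {
                if (image != null) {
                    // ... consume image.getPlanes() here ...
                }
            }
        }, handler);
        // This Surface is what the app adds to the session and to each CaptureRequest.
        return reader.getSurface();
    }
}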
With that covered, go back up to prepareHalRequests in Camera3Device::RequestThread. One more thing to note: at this point the prepared buffers only live in the NextRequest, while what the HAL receives is halRequest, so one more hand-over step is needed; it happens a bit later, and we will see it shortly.
Going back up to threadLoop, the third step is sendRequestsBatch, which sends the prepared requests to the HAL process for processing; its source is as follows:
bool Camera3Device::RequestThread::sendRequestsBatch() {
status_t res;
size_t batchSize = mNextRequests.size();
std::vector<camera3_capture_request_t*> requests(batchSize);
uint32_t numRequestProcessed = 0;
for (size_t i = 0; i < batchSize; i++) {
requests[i] = &mNextRequests.editItemAt(i).halRequest;
}
ATRACE_ASYNC_BEGIN("batch frame capture", mNextRequests[0].halRequest.frame_number);
res = mInterface->processBatchCaptureRequests(requests, &numRequestProcessed);
bool triggerRemoveFailed = false;
NextRequest& triggerFailedRequest = mNextRequests.editItemAt(0);
for (size_t i = 0; i < numRequestProcessed; i++) {
NextRequest& nextRequest = mNextRequests.editItemAt(i);
nextRequest.submitted = true;
// Update the latest request sent to HAL
if (nextRequest.halRequest.settings != NULL) { // Don't update if they were unchanged
Mutex::Autolock al(mLatestRequestMutex);
camera_metadata_t* cloned = clone_camera_metadata(nextRequest.halRequest.settings);
mLatestRequest.acquire(cloned);
sp<Camera3Device> parent = mParent.promote();
if (parent != NULL) {
parent->monitorMetadata(TagMonitor::REQUEST,
nextRequest.halRequest.frame_number,
0, mLatestRequest);
}
}
if (nextRequest.halRequest.settings != NULL) {
nextRequest.captureRequest->mSettings.unlock(nextRequest.halRequest.settings);
}
if (!triggerRemoveFailed) {
// Remove any previously queued triggers (after unlock)
status_t removeTriggerRes = removeTriggers(mPrevRequest);
if (removeTriggerRes != OK) {
triggerRemoveFailed = true;
triggerFailedRequest = nextRequest;
}
}
}
if (triggerRemoveFailed) {
SET_ERR("RequestThread: Unable to remove triggers "
"(capture request %d, HAL device: %s (%d)",
triggerFailedRequest.halRequest.frame_number, strerror(-res), res);
cleanUpFailedRequests(/*sendRequestError*/ false);
return false;
}
if (res != OK) {
// Should only get a failure here for malformed requests or device-level
// errors, so consider all errors fatal. Bad metadata failures should
// come through notify.
SET_ERR("RequestThread: Unable to submit capture request %d to HAL device: %s (%d)",
mNextRequests[numRequestProcessed].halRequest.frame_number,
strerror(-res), res);
cleanUpFailedRequests(/*sendRequestError*/ false);
return false;
}
return true;
}
All requests are now ready, so mInterface->processBatchCaptureRequests(requests, &numRequestProcessed) is called to process them; mInterface is of type HalInterface. This call is the dividing line: from this statement on, the current request is being processed, which is why nextRequest.submitted is then set to true. Let's look at the logic of processBatchCaptureRequests; the source is as follows:
status_t Camera3Device::HalInterface::processBatchCaptureRequests(
std::vector<camera3_capture_request_t*>& requests,/*out*/uint32_t* numRequestProcessed) {
ATRACE_NAME("CameraHal::processBatchCaptureRequests");
if (!valid()) return INVALID_OPERATION;
hardware::hidl_vec<device::V3_2::CaptureRequest> captureRequests;
size_t batchSize = requests.size();
captureRequests.resize(batchSize);
std::vector<native_handle_t*> handlesCreated;
for (size_t i = 0; i < batchSize; i++) {
wrapAsHidlRequest(requests[i], /*out*/&captureRequests[i], /*out*/&handlesCreated);
}
std::vector<device::V3_2::BufferCache> cachesToRemove;
{
std::lock_guard<std::mutex> lock(mBufferIdMapLock);
for (auto& pair : mFreedBuffers) {
// The stream might have been removed since onBufferFreed
if (mBufferIdMaps.find(pair.first) != mBufferIdMaps.end()) {
cachesToRemove.push_back({pair.first, pair.second});
}
}
mFreedBuffers.clear();
}
common::V1_0::Status status = common::V1_0::Status::INTERNAL_ERROR;
*numRequestProcessed = 0;
// Write metadata to FMQ.
for (size_t i = 0; i < batchSize; i++) {
camera3_capture_request_t* request = requests[i];
device::V3_2::CaptureRequest* captureRequest = &captureRequests[i];
if (request->settings != nullptr) {
size_t settingsSize = get_camera_metadata_size(request->settings);
if (mRequestMetadataQueue != nullptr && mRequestMetadataQueue->write(
reinterpret_cast<const uint8_t*>(request->settings), settingsSize)) {
captureRequest->settings.resize(0);
captureRequest->fmqSettingsSize = settingsSize;
} else {
if (mRequestMetadataQueue != nullptr) {
ALOGW("%s: couldn't utilize fmq, fallback to hwbinder", __FUNCTION__);
}
captureRequest->settings.setToExternal(
reinterpret_cast<uint8_t*>(const_cast<camera_metadata_t*>(request->settings)),
get_camera_metadata_size(request->settings));
captureRequest->fmqSettingsSize = 0u;
}
} else {
// A null request settings maps to a size-0 CameraMetadata
captureRequest->settings.resize(0);
captureRequest->fmqSettingsSize = 0u;
}
}
auto err = mHidlSession->processCaptureRequest(captureRequests, cachesToRemove,
[&status, &numRequestProcessed] (auto s, uint32_t n) {
status = s;
*numRequestProcessed = n;
});
if (!err.isOk()) {
ALOGE("%s: Transaction error: %s", __FUNCTION__, err.description().c_str());
return DEAD_OBJECT;
}
if (status == common::V1_0::Status::OK && *numRequestProcessed != batchSize) {
ALOGE("%s: processCaptureRequest returns OK but processed %d/%zu requests",
__FUNCTION__, *numRequestProcessed, batchSize);
status = common::V1_0::Status::INTERNAL_ERROR;
}
for (auto& handle : handlesCreated) {
native_handle_delete(handle);
}
return CameraProviderManager::mapToStatusT(status);
}
It first calls wrapAsHidlRequest to wrap each request once more; let's look at its implementation. The source is as follows:
void Camera3Device::HalInterface::wrapAsHidlRequest(camera3_capture_request_t* request,
/*out*/device::V3_2::CaptureRequest* captureRequest,
/*out*/std::vector<native_handle_t*>* handlesCreated) {
if (captureRequest == nullptr || handlesCreated == nullptr) {
ALOGE("%s: captureRequest (%p) and handlesCreated (%p) must not be null",
__FUNCTION__, captureRequest, handlesCreated);
return;
}
captureRequest->frameNumber = request->frame_number;
captureRequest->fmqSettingsSize = 0;
{
std::lock_guard<std::mutex> lock(mInflightLock);
if (request->input_buffer != nullptr) {
int32_t streamId = Camera3Stream::cast(request->input_buffer->stream)->getId();
buffer_handle_t buf = *(request->input_buffer->buffer);
auto pair = getBufferId(buf, streamId);
bool isNewBuffer = pair.first;
uint64_t bufferId = pair.second;
captureRequest->inputBuffer.streamId = streamId;
captureRequest->inputBuffer.bufferId = bufferId;
captureRequest->inputBuffer.buffer = (isNewBuffer) ? buf : nullptr;
captureRequest->inputBuffer.status = BufferStatus::OK;
native_handle_t *acquireFence = nullptr;
if (request->input_buffer->acquire_fence != -1) {
acquireFence = native_handle_create(1,0);
acquireFence->data[0] = request->input_buffer->acquire_fence;
handlesCreated->push_back(acquireFence);
}
captureRequest->inputBuffer.acquireFence = acquireFence;
captureRequest->inputBuffer.releaseFence = nullptr;
pushInflightBufferLocked(captureRequest->frameNumber, streamId,
request->input_buffer->buffer,
request->input_buffer->acquire_fence);
} else {
captureRequest->inputBuffer.streamId = -1;
captureRequest->inputBuffer.bufferId = BUFFER_ID_NO_BUFFER;
}
captureRequest->outputBuffers.resize(request->num_output_buffers);
for (size_t i = 0; i < request->num_output_buffers; i++) {
const camera3_stream_buffer_t *src = request->output_buffers + i;
StreamBuffer &dst = captureRequest->outputBuffers[i];
int32_t streamId = Camera3Stream::cast(src->stream)->getId();
buffer_handle_t buf = *(src->buffer);
auto pair = getBufferId(buf, streamId);
bool isNewBuffer = pair.first;
dst.streamId = streamId;
dst.bufferId = pair.second;
dst.buffer = isNewBuffer ? buf : nullptr;
dst.status = BufferStatus::OK;
native_handle_t *acquireFence = nullptr;
if (src->acquire_fence != -1) {
acquireFence = native_handle_create(1,0);
acquireFence->data[0] = src->acquire_fence;
handlesCreated->push_back(acquireFence);
}
dst.acquireFence = acquireFence;
dst.releaseFence = nullptr;
pushInflightBufferLocked(captureRequest->frameNumber, streamId,
src->buffer, src->acquire_fence);
}
}
}
Notice that captureRequest->frameNumber = request->frame_number comes first -- the frame number again, which tells you how important it is. Then the buffers are handled. For the output buffers, const camera3_stream_buffer_t *src = request->output_buffers + i picks up the buffer we dequeued from the Surface as analyzed above, and dst.buffer = isNewBuffer ? buf : nullptr assigns it to dst. From my own logging, only a handful of buffers are allocated and they are reused over and over, cycling for example through 1, 2, 3, 4, 5, 6 and then 1, 2, 3, 4, 5, 6 again. The getBufferId call is what makes that reuse cheap: it keeps a per-stream cache that maps each buffer handle to a bufferId, so the first time a handle is seen isNewBuffer is true and the handle itself is sent across, while on later reuses only the bufferId travels and dst.buffer stays nullptr, the HAL resolving it from its own cache; you can add logs here to study this yourself. Finally pushInflightBufferLocked stores the buffer in the mInflightBufferMap member, keyed by the frame number (captureRequest->frameNumber), streamId, src->buffer and src->acquire_fence, so that once the HAL has filled the buffer it can be retrieved directly from mInflightBufferMap.
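The bufferId cache that getBufferId maintains can be summarized with a small sketch. This is only an illustration of the idea with invented names and types; the real HalInterface code is C++ and keys its maps on buffer_handle_t:
import java.util.HashMap;
import java.util.Map;

// Simplified illustration of the per-stream bufferId cache used when talking to the HAL.
final class BufferIdCacheSketch {
    private long mNextBufferId = 1;
    // streamId -> (buffer handle -> bufferId)
    private final Map<Integer, Map<Long, Long>> mBufferIdMaps = new HashMap<>();

    /** Returns {isNewBuffer ? 1 : 0, bufferId} for a (streamId, bufferHandle) pair. */
    long[] getBufferId(int streamId, long bufferHandle) {
        Map<Long, Long> perStream =
                mBufferIdMaps.computeIfAbsent(streamId, k -> new HashMap<>());
        Long id = perStream.get(bufferHandle);
        if (id != null) {
            // Seen before: only the id crosses the HIDL boundary, the handle is not re-sent.
            return new long[] {0, id};
        }
        long newId = mNextBufferId++;
        perStream.put(bufferHandle, newId);
        // First use: both the id and the native handle are sent so the HAL can cache it.
        return new long[] {1, newId};
    }
}
This is also why dst.buffer being nullptr for a reused buffer is not a problem: the HAL already holds the handle under that bufferId in its own cache.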
Back in processBatchCaptureRequests: with every field of every request filled in, mHidlSession->processCaptureRequest finally sends the requests to the HAL process. This goes over HIDL, which at bottom is still the Binder framework. The third argument is a lambda expression, which on the HAL side is simply the hidl_cb callback. mHidlSession is the session object opened on the HAL; chip vendors such as Qualcomm and MediaTek each have their own implementations, usually in classes named along the lines of **CameraDevice3SessionImpl*.cpp -- if you have access to vendor source, it is well worth studying.
Climbing back up layer by layer to threadLoop: one frame's request has now been handled, mNextRequests.clear() empties the vector, the method returns true on success, and the loop continues with the next iteration.
By now it should be clear what the RequestThread behind camera preview really does: when the app issues a preview request, a single element is added to the mRepeatingRequests list, and every subsequent preview frame comes from repeatedly taking that same element and processing it; when a capture request arrives, the capture element is taken instead. This endless threadLoop is what makes preview continuous. Along the way, the important details are how the HAL-side buffers are acquired and handed over -- make sure those are clear.
That's it for this section; in later articles we will continue with detailed analysis of more of the Camera stack.