[Android P] Camera API1 to HAL3 Preview Flow (4) — Preview Data

Articles in this series

  • [Android P] Camera API1 to HAL3 Preview Flow (1) — Background Overview
  • [Android P] Camera API1 to HAL3 Preview Flow (2) — startPreview
  • [Android P] Camera API1 to HAL3 Preview Flow (3) — setPreviewCallbackFlag
  • [Android P] Camera API1 to HAL3 Preview Flow (4) — Preview Data

Overview

Once preview has been brought up, the camera enters the continuous-preview stage.

The Camera API2 architecture follows a one-Request-one-Result contract, so during preview, Requests must be submitted continuously in order to keep receiving preview data; apps still written against API1 get converted into this form inside the Framework.

The thread most closely tied to Requests is Camera3Device::RequestThread, which is responsible for continuously submitting preview Requests.

When a Result returns from the lower layers, it first arrives at Camera3Device, triggering processCaptureResult, which then notifies the various Processors (such as FrameProcessor and CallbackProcessor) to process the data further and pass it upward.

The flow analyzed here has two streams open, preview and callback. The app typically takes the callback stream's data for custom processing and then displays it, so in the sequence below the Result part focuses on the return path of the callback data (this is also closely tied to the jank issue described at the start of this series).

The main classes involved, with their source locations:

  1. Camera-JNI: frameworks/base/core/jni/android_hardware_Camera.cpp
  2. Camera2Client: frameworks/av/services/camera/libcameraservice/api1/Camera2Client.cpp
  3. FrameProcessor: frameworks/av/services/camera/libcameraservice/api1/client2/FrameProcessor.cpp
  4. CallbackProcessor: frameworks/av/services/camera/libcameraservice/api1/client2/CallbackProcessor.cpp
  5. Camera3Device: frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

Next, following the sequence diagram above, I'll explain the flow in more depth alongside the actual code.

Code Analysis

We can look at this in two parts:

  1. Downstream: control flow (Request);
  2. Upstream: data flow (Result).

Control Flow

Strictly speaking, the control flow is mainly driven by RequestThread, but setPreviewCallbackFlag contains logic that affects its cycle, so I will first lay out the two situations in which Camera2Client's setPreviewCallbackFlag gets called; with that in place, the analysis of RequestThread::threadLoop will make clear exactly what affects it.

The discussion below covers the part inside the red box of the diagram.

Camera2Client::setPreviewCallbackFlag

This function is generally called in two situations:

  1. When the app calls addCallbackBuffer to proactively hand down a buffer for holding callback data; in this case the flag passed in is 0x05.
  2. When a callback result has come back up and JNI runs copyAndPost: if the app-supplied buffers are exhausted once this frame has been posted, the function gets called with flag 0x00.

There is no need to read the code of either path; both end up calling startPreviewL, which is the part we need to focus on.
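For reference, here is what the two flag values decode to. This is a paraphrase of the preview-callback flag definitions in system/core/include/system/camera.h (the comments are mine): 0x05 is the "enable callbacks and copy each frame out to an app buffer" combination, while 0x00 disables the callback entirely.

enum {
    // Bit masks:
    CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK   = 0x01, // deliver preview frames to the client
    CAMERA_FRAME_CALLBACK_FLAG_ONE_SHOT_MASK = 0x02, // deliver only one frame, then stop
    CAMERA_FRAME_CALLBACK_FLAG_COPY_OUT_MASK = 0x04, // copy the frame into a client-side buffer

    // Typical combinations:
    CAMERA_FRAME_CALLBACK_FLAG_NOOP            = 0x00, // callbacks disabled
    CAMERA_FRAME_CALLBACK_FLAG_CAMCORDER       = 0x01,
    CAMERA_FRAME_CALLBACK_FLAG_CAMERA          = 0x05, // ENABLE | COPY_OUT: the addCallbackBuffer case
    CAMERA_FRAME_CALLBACK_FLAG_BARCODE_SCANNER = 0x07
};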

Camera2Client::startPreviewL

Let's analyze the 0x05 case:

  1. Lines 4~14 mainly perform state changes and parameter updates; both were covered in the previous two parts, so they are not repeated here;
  2. Line 19 is a key point: since the previewCallbackFlags passed in is 0x05, the value computed here is true (see the standalone sketch after the code below);
  3. Lines 23~37: because callbacksEnabled is true, this branch is taken and the CallbackProcessor instance's updateStream is called; what matters to us here is line 36, which adds the callback output stream to the stream list;
  4. Line 48 adds the preview output stream to the stream list, bringing its size to 2;
  5. Line 66: note that startStream is called with outputStreams among its arguments; inside that function the list is passed into the newly created CaptureRequest instance, which in turn affects how the next Request acquires HAL buffers.
status_t Camera2Client::startPreviewL(Parameters &params, bool restart) {
    // NOTE: N Lines are omitted here

    params.state = Parameters::STOPPED;
    int lastPreviewStreamId = mStreamingProcessor->getPreviewStreamId();

    res = mStreamingProcessor->updatePreviewStream(params);
    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to update preview stream: %s (%d)",
                __FUNCTION__, mCameraId, strerror(-res), res);
        return res;
    }

    bool previewStreamChanged = mStreamingProcessor->getPreviewStreamId() != lastPreviewStreamId;

    // NOTE: N Lines are omitted here

    Vector<int32_t> outputStreams;
    bool callbacksEnabled = (params.previewCallbackFlags &
            CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK) ||
            params.previewCallbackSurface;

    if (callbacksEnabled) {
        // Can't have recording stream hanging around when enabling callbacks,
        // since it exceeds the max stream count on some devices.
        if (mStreamingProcessor->getRecordingStreamId() != NO_STREAM) {
            // NOTE: N Lines are omitted here
        }

        res = mCallbackProcessor->updateStream(params);
        if (res != OK) {
            ALOGE("%s: Camera %d: Unable to update callback stream: %s (%d)",
                    __FUNCTION__, mCameraId, strerror(-res), res);
            return res;
        }
        outputStreams.push(getCallbackStreamId());
    } else if (previewStreamChanged && mCallbackProcessor->getStreamId() != NO_STREAM) {
        // NOTE: N Lines are omitted here
    }

    if (params.useZeroShutterLag() &&
            getRecordingStreamId() == NO_STREAM) {
        // NOTE: N Lines are omitted here
    } else {
        mZslProcessor->deleteStream();
    }

    outputStreams.push(getPreviewStreamId());

    if (params.isDeviceZslSupported) {
        // If device ZSL is supported, resume preview buffers that may be paused
        // during last takePicture().
        mDevice->dropStreamBuffers(false, getPreviewStreamId());
    }

    if (!params.recordingHint) {
        if (!restart) {
            res = mStreamingProcessor->updatePreviewRequest(params);
            if (res != OK) {
                ALOGE("%s: Camera %d: Can't set up preview request: "
                        "%s (%d)", __FUNCTION__, mCameraId,
                        strerror(-res), res);
                return res;
            }
        }
        res = mStreamingProcessor->startStream(StreamingProcessor::PREVIEW,
                outputStreams);
    } else {
        // NOTE: N Lines are omitted here
    }
    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to start streaming preview: %s (%d)",
                __FUNCTION__, mCameraId, strerror(-res), res);
        return res;
    }

    params.state = Parameters::PREVIEW;
    return OK;
}
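To make the callbacksEnabled computation concrete, here is a minimal standalone sketch (my own illustration, not framework code; it ignores the previewCallbackSurface term) showing the two flag values we care about:

#include <cstdint>
#include <cstdio>
#include <initializer_list>

// Mirrors CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK from system/camera.h.
constexpr uint32_t kEnableMask = 0x01;

int main() {
    for (uint32_t flags : {0x05u, 0x00u}) {
        // Same test as in startPreviewL above (minus previewCallbackSurface):
        bool callbacksEnabled = (flags & kEnableMask) != 0;
        printf("previewCallbackFlags=0x%02x -> callbacksEnabled=%s -> stream count=%d\n",
                (unsigned)flags, callbacksEnabled ? "true" : "false",
                callbacksEnabled ? 2 : 1);
    }
    return 0;
}

With 0x05 the stream list ends up holding both the callback and preview streams (size 2); with 0x00 only the preview stream is pushed (size 1), which is exactly the difference that matters later in prepareHalRequests.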

Camera3Device::RequestThread::threadLoop

Back to the main topic of the control flow: how RequestThread operates.

The RequestThread instance is actually created and started in Camera3Device::initializeCommonLocked, as part of the openCamera flow; take a look there if you're interested.

Once the thread is running, threadLoop is invoked in a loop. Its logic:

  1. Line 6: first check whether the loop needs to pause; if so, this pass of threadLoop is skipped. In our scenario no pause is needed;
  2. Line 11: wait until the next batch of requests to submit has been fully gathered; the details are covered below;
  3. Line 22: check whether the Session Params carried by this request match the previous one's. We focus on the matching case, where this returns false and the if branch is not taken;
  4. Line 27: prepare the buffers to be sent down to the HAL for this request (note that the buffers handed down by the app stay in JNI);
  5. Line 58: submit this request to the HAL.
bool Camera3Device::RequestThread::threadLoop() {
    ATRACE_CALL();
    status_t res;

    // Handle paused state.
    if (waitIfPaused()) {
        return true;
    }

    // Wait for the next batch of requests.
    waitForNextRequestBatch();
    if (mNextRequests.size() == 0) {
        return true;
    }

    // NOTE: N Lines are omitted here

    // 'mNextRequests' will at this point contain either a set of HFR batched requests
    //  or a single request from streaming or burst. In either case the first element
    //  should contain the latest camera settings that we need to check for any session
    //  parameter updates.
    if (updateSessionParameters(mNextRequests[0].captureRequest->mSettingsList.begin()->metadata)) {
        // NOTE: N Lines are omitted here
    }

    // Prepare a batch of HAL requests and output buffers.
    res = prepareHalRequests();
   
    // NOTE: N Lines are omitted here

    // Inform waitUntilRequestProcessed thread of a new request ID
    {
        Mutex::Autolock al(mLatestRequestMutex);

        mLatestRequestId = latestRequestId;
        mLatestRequestSignal.signal();
    }

    // Submit a batch of requests to HAL.
    // Use flush lock only when submitting multilple requests in a batch.
    // TODO: The problem with flush lock is flush() will be blocked by process_capture_request()
    // which may take a long time to finish so synchronizing flush() and
    // process_capture_request() defeats the purpose of cancelling requests ASAP with flush().
    // For now, only synchronize for high speed recording and we should figure something out for
    // removing the synchronization.
    bool useFlushLock = mNextRequests.size() > 1;

    if (useFlushLock) {
        mFlushLock.lock();
    }

    ALOGVV("%s: %d: submitting %zu requests in a batch.", __FUNCTION__, __LINE__,
            mNextRequests.size());

    bool submitRequestSuccess = false;
    nsecs_t tRequestStart = systemTime(SYSTEM_TIME_MONOTONIC);
    if (mInterface->supportBatchRequest()) {
        submitRequestSuccess = sendRequestsBatch();
    } else {
        submitRequestSuccess = sendRequestsOneByOne();
    }
    nsecs_t tRequestEnd = systemTime(SYSTEM_TIME_MONOTONIC);
    mRequestLatency.add(tRequestStart, tRequestEnd);

    if (useFlushLock) {
        mFlushLock.unlock();
    }

    // Unset as current request
    {
        Mutex::Autolock l(mRequestLock);
        mNextRequests.clear();
    }

    return submitRequestSuccess;
}

Camera3Device::RequestThread::waitForNextRequestBatch

The logic here is as follows:

  1. Line 10: first obtain the first nextRequest;
  2. Line 17: add this first nextRequest to the mNextRequests queue;
  3. Line 20: from the information carried by the first nextRequest, determine how many requests this batch contains. (To explain: during normal preview a batch carries only one request, while for 120 fps slow motion a batch must carry 4 requests so that preview data still comes up at 30 fps; see the sketch after the code below;)
  4. Lines 22~32: fetch the remaining requests of this batch one by one.
void Camera3Device::RequestThread::waitForNextRequestBatch() {
    ATRACE_CALL();
    // Optimized a bit for the simple steady-state case (single repeating
    // request), to avoid putting that request in the queue temporarily.
    Mutex::Autolock l(mRequestLock);

    assert(mNextRequests.empty());

    NextRequest nextRequest;
    nextRequest.captureRequest = waitForNextRequestLocked();
    if (nextRequest.captureRequest == nullptr) {
        return;
    }

    nextRequest.halRequest = camera3_capture_request_t();
    nextRequest.submitted = false;
    mNextRequests.add(nextRequest);

    // Wait for additional requests
    const size_t batchSize = nextRequest.captureRequest->mBatchSize;

    for (size_t i = 1; i < batchSize; i++) {
        NextRequest additionalRequest;
        additionalRequest.captureRequest = waitForNextRequestLocked();
        if (additionalRequest.captureRequest == nullptr) {
            break;
        }

        additionalRequest.halRequest = camera3_capture_request_t();
        additionalRequest.submitted = false;
        mNextRequests.add(additionalRequest);
    }

    if (mNextRequests.size() < batchSize) {
        ALOGE("RequestThread: only get %zu out of %zu requests. Skipping requests.",
                mNextRequests.size(), batchSize);
        cleanUpFailedRequests(/*sendRequestError*/true);
    }

    return;
}
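A quick sanity check of the slow-motion numbers mentioned above (my own illustration; the constant names are made up): the batch size is simply the ratio between the HAL capture rate and the rate at which results should surface for preview.

#include <cstdio>

int main() {
    const int kHighSpeedCaptureFps = 120; // constrained high-speed capture rate
    const int kPreviewFps          = 30;  // rate at which preview frames reach the app
    // One whole batch is consumed per preview frame, so:
    const int batchSize = kHighSpeedCaptureFps / kPreviewFps;
    printf("batchSize = %d requests per batch\n", batchSize); // prints 4
    return 0;
}

For ordinary preview the two rates coincide and the batch size is 1.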

Camera3Device::RequestThread::waitForNextRequestLocked

During preview, setRepeatingRequest adds the new CaptureRequest to mRepeatingRequests. In lines 7~23 here, mRepeatingRequests is non-empty, so this branch is taken: the first request in the list becomes the one to submit next, and the remaining ones are appended to mRequestQueue (a sketch of how mRepeatingRequests gets populated follows the code below).

sp<Camera3Device::CaptureRequest>
        Camera3Device::RequestThread::waitForNextRequestLocked() {
    status_t res;
    sp<CaptureRequest> nextRequest;

    while (mRequestQueue.empty()) {
        if (!mRepeatingRequests.empty()) {
            // Always atomically enqueue all requests in a repeating request
            // list. Guarantees a complete in-sequence set of captures to
            // application.
            const RequestList &requests = mRepeatingRequests;
            RequestList::const_iterator firstRequest =
                    requests.begin();
            nextRequest = *firstRequest;
            mRequestQueue.insert(mRequestQueue.end(),
                    ++firstRequest,
                    requests.end());
            // No need to wait any longer

            mRepeatingLastFrameNumber = mFrameNumber + requests.size() - 1;

            break;
        }

        // NOTE: N Lines are omitted here
    }

    // NOTE: N Lines are omitted here

    if (nextRequest != NULL) {
        nextRequest->mResultExtras.frameNumber = mFrameNumber++;
        nextRequest->mResultExtras.afTriggerId = mCurrentAfTriggerId;
        nextRequest->mResultExtras.precaptureTriggerId = mCurrentPreCaptureTriggerId;

        // NOTE: N Lines are omitted here
    }

    return nextRequest;
}
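For completeness, this is roughly how mRepeatingRequests gets filled in the first place. The sketch below paraphrases Camera3Device::RequestThread::setRepeatingRequests from memory of the AOSP source, so details may differ:

status_t Camera3Device::RequestThread::setRepeatingRequests(
        const RequestList &requests,
        /*out*/ int64_t *lastFrameNumber) {
    Mutex::Autolock l(mRequestLock);
    if (lastFrameNumber != NULL) {
        *lastFrameNumber = mRepeatingLastFrameNumber;
    }
    // Replace the repeating list atomically; waitForNextRequestLocked will
    // re-enqueue its contents every time mRequestQueue drains.
    mRepeatingRequests.clear();
    mRepeatingRequests.insert(mRepeatingRequests.begin(),
            requests.begin(), requests.end());

    unpauseForNewRequests();

    mRepeatingLastFrameNumber = hardware::camera2::ICameraDeviceUser::NO_IN_FLIGHT_REPEATING_FRAMES;
    return OK;
}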

Camera3Device::RequestThread::prepareHalRequests

Once the CaptureRequests are in hand, the requests to send down to the HAL are prepared from them. Only a few key points here:

  1. Line 14: insert any queued triggers (an AE trigger, for example);
  2. Line 51: pay special attention to captureRequest->mOutputStreams.size() here. The outputStreams count set up back in the setPreviewCallbackFlag path shows its effect at this point: normally it should be 2, one preview stream plus one callback stream, but if a request carrying only 1 happens to be picked up, one stream's worth of data will be missing (see the sketch after the code below for where mOutputStreams is populated);
  3. Line 56: acquire a buffer from each output stream and hand it to the HAL request (if the callback stream is missing, the result for this request will contain no callback data).
status_t Camera3Device::RequestThread::prepareHalRequests() {
    ATRACE_CALL();

    for (size_t i = 0; i < mNextRequests.size(); i++) {
        auto& nextRequest = mNextRequests.editItemAt(i);
        sp<CaptureRequest> captureRequest = nextRequest.captureRequest;
        camera3_capture_request_t* halRequest = &nextRequest.halRequest;
        Vector<camera3_stream_buffer_t>* outputBuffers = &nextRequest.outputBuffers;

        // Prepare a request to HAL
        halRequest->frame_number = captureRequest->mResultExtras.frameNumber;

        // Insert any queued triggers (before metadata is locked)
        status_t res = insertTriggers(captureRequest);
        if (res < 0) {
            SET_ERR("RequestThread: Unable to insert triggers "
                    "(capture request %d, HAL device: %s (%d)",
                    halRequest->frame_number, strerror(-res), res);
            return INVALID_OPERATION;
        }

        int triggerCount = res;
        bool triggersMixedIn = (triggerCount > 0 || mPrevTriggers > 0);
        mPrevTriggers = triggerCount;

        // If the request is the same as last, or we had triggers last time
        bool newRequest = mPrevRequest != captureRequest || triggersMixedIn;
        if (newRequest) {
            // NOTE: N Lines are omitted here
        } else {
            // leave request.settings NULL to indicate 'reuse latest given'
            ALOGVV("%s: Request settings are REUSED",
                   __FUNCTION__);
        }

        // NOTE: N Lines are omitted here

        outputBuffers->insertAt(camera3_stream_buffer_t(), 0,
                captureRequest->mOutputStreams.size());
        halRequest->output_buffers = outputBuffers->array();
        std::set<String8> requestedPhysicalCameras;

        sp<Camera3Device> parent = mParent.promote();
        if (parent == NULL) {
            // Should not happen, and nowhere to send errors to, so just log it
            CLOGE("RequestThread: Parent is gone");
            return INVALID_OPERATION;
        }
        nsecs_t waitDuration = kBaseGetBufferWait + parent->getExpectedInFlightDuration();

        for (size_t j = 0; j < captureRequest->mOutputStreams.size(); j++) {
            sp<Camera3OutputStreamInterface> outputStream = captureRequest->mOutputStreams.editItemAt(j);

            // NOTE: N Lines are omitted here

            res = outputStream->getBuffer(&outputBuffers->editItemAt(j),
                    waitDuration,
                    captureRequest->mOutputSurfaces[j]);
            if (res != OK) {
                // Can't get output buffer from gralloc queue - this could be due to
                // abandoned queue or other consumer misbehavior, so not a fatal
                // error
                ALOGE("RequestThread: Can't get output buffer, skipping request:"
                        " %s (%d)", strerror(-res), res);

                return TIMED_OUT;
            }

            String8 physicalCameraId = outputStream->getPhysicalCameraId();

            if (!physicalCameraId.isEmpty()) {
                // Physical stream isn't supported for input request.
                if (halRequest->input_buffer) {
                    CLOGE("Physical stream is not supported for input request");
                    return INVALID_OPERATION;
                }
                requestedPhysicalCameras.insert(physicalCameraId);
            }
            halRequest->num_output_buffers++;
        }
        totalNumBuffers += halRequest->num_output_buffers;

        // Log request in the in-flight queue
        // If this request list is for constrained high speed recording (not
        // preview), and the current request is not the last one in the batch,
        // do not send callback to the app.
        bool hasCallback = true;
        if (mNextRequests[0].captureRequest->mBatchSize > 1 && i != mNextRequests.size()-1) {
            hasCallback = false;
        }
        res = parent->registerInFlight(halRequest->frame_number,
                totalNumBuffers, captureRequest->mResultExtras,
                /*hasInput*/halRequest->input_buffer != NULL,
                hasCallback,
                calculateMaxExpectedDuration(halRequest->settings),
                requestedPhysicalCameras);
        ALOGVV("%s: registered in flight requestId = %" PRId32 ", frameNumber = %" PRId64
               ", burstId = %" PRId32 ".",
                __FUNCTION__,
                captureRequest->mResultExtras.requestId, captureRequest->mResultExtras.frameNumber,
                captureRequest->mResultExtras.burstId);
        if (res != OK) {
            SET_ERR("RequestThread: Unable to register new in-flight request:"
                    " %s (%d)", strerror(-res), res);
            return INVALID_OPERATION;
        }
    }

    return OK;
}
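Where does captureRequest->mOutputStreams come from? In the API1 path, StreamingProcessor::startStream writes the stream-ID list (the outputStreams we saw in startPreviewL) into the request metadata under ANDROID_REQUEST_OUTPUT_STREAMS, and Camera3Device::createCaptureRequest later resolves those IDs into stream objects. The following is a simplified sketch of that resolution, paraphrased from memory with error handling and surfaceMap handling trimmed, so details may differ:

// Simplified: resolve the stream IDs carried in the request metadata
// into the output stream objects held by mOutputStreams.
sp<Camera3Device::CaptureRequest> Camera3Device::createCaptureRequest(
        const PhysicalCameraSettingsList &request, const SurfaceMap &surfaceMap) {
    sp<CaptureRequest> newRequest = new CaptureRequest;
    newRequest->mSettingsList = request;

    camera_metadata_entry_t streams =
            newRequest->mSettingsList.begin()->metadata.find(
                    ANDROID_REQUEST_OUTPUT_STREAMS);
    for (size_t i = 0; i < streams.count; i++) {
        // With both preview and callback streams requested, streams.count == 2.
        int idx = mOutputStreams.indexOfKey(streams.data.i32[i]);
        if (idx == NAME_NOT_FOUND) {
            CLOGE("Request references unknown stream %d", streams.data.i32[i]);
            return NULL;
        }
        newRequest->mOutputStreams.push(mOutputStreams.editValueAt(idx));
    }
    newRequest->mSettingsList.begin()->metadata.erase(ANDROID_REQUEST_OUTPUT_STREAMS);

    return newRequest;
}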

Camera3Device::RequestThread::sendRequestsBatch

The main thing here is line 12, the call to Camera3Device::HalInterface::processBatchCaptureRequests, which packs the data into the HIDL-defined format and hands it off to the HAL layer. The logic after that is not our focus, except note the removeTriggers call at line 45, which is the counterpart of insertTriggers in prepareHalRequests earlier.

bool Camera3Device::RequestThread::sendRequestsBatch() {
    ATRACE_CALL();
    status_t res;
    size_t batchSize = mNextRequests.size();
    std::vector<camera3_capture_request_t*> requests(batchSize);
    uint32_t numRequestProcessed = 0;
    for (size_t i = 0; i < batchSize; i++) {
        requests[i] = &mNextRequests.editItemAt(i).halRequest;
        ATRACE_ASYNC_BEGIN("frame capture", mNextRequests[i].halRequest.frame_number);
    }

    res = mInterface->processBatchCaptureRequests(requests, &numRequestProcessed);

    bool triggerRemoveFailed = false;
    NextRequest& triggerFailedRequest = mNextRequests.editItemAt(0);
    for (size_t i = 0; i < numRequestProcessed; i++) {
        NextRequest& nextRequest = mNextRequests.editItemAt(i);
        nextRequest.submitted = true;


        // Update the latest request sent to HAL
        if (nextRequest.halRequest.settings != NULL) { // Don't update if they were unchanged
            Mutex::Autolock al(mLatestRequestMutex);

            camera_metadata_t* cloned = clone_camera_metadata(nextRequest.halRequest.settings);
            mLatestRequest.acquire(cloned);

            sp<Camera3Device> parent = mParent.promote();
            if (parent != NULL) {
                parent->monitorMetadata(TagMonitor::REQUEST,
                        nextRequest.halRequest.frame_number,
                        0, mLatestRequest);
            }
        }

        if (nextRequest.halRequest.settings != NULL) {
            nextRequest.captureRequest->mSettingsList.begin()->metadata.unlock(
                    nextRequest.halRequest.settings);
        }

        cleanupPhysicalSettings(nextRequest.captureRequest, &nextRequest.halRequest);

        if (!triggerRemoveFailed) {
            // Remove any previously queued triggers (after unlock)
            status_t removeTriggerRes = removeTriggers(mPrevRequest);
            if (removeTriggerRes != OK) {
                triggerRemoveFailed = true;
                triggerFailedRequest = nextRequest;
            }
        }
    }

    // NOTE: N Lines are omitted here
    return true;
}

Data Flow

The Result return flow is comparatively simple on the Framework side.

After the HAL finishes processing the data and packs it into a Result, it is sent back to the Framework over HIDL. Camera3Device::processCaptureResult receives this Result, and the data it carries is then delivered up to the app (or sent straight to Display) through a series of callbacks.

One thing to note here: on MTK platforms, the lower-layer AppStreamMgr calls processCaptureResult twice. The first call is the request.hasCallback == true case, which mainly interfaces with FrameProcessor and won't be examined in detail below; the focus is on the path that returns the callback buffer.

The discussion below covers the part inside the red box of the diagram.

Camera3Device::processCaptureResult

When the Result reaches the Framework:

  1. Lines 26~31: obtain the timestamp for this result;
  2. Line 40: return the buffers received this time to the upper layers.
void Camera3Device::processCaptureResult(const camera3_capture_result *result) {
    // NOTE: N Lines are omitted here

    bool isPartialResult = false;
    CameraMetadata collectedPartialResult;
    bool hasInputBufferInRequest = false;

    // Get shutter timestamp and resultExtras from list of in-flight requests,
    // where it was added by the shutter notification for this frame. If the
    // shutter timestamp isn't received yet, append the output buffers to the
    // in-flight request and they will be returned when the shutter timestamp
    // arrives. Update the in-flight status and remove the in-flight entry if
    // all result data and shutter timestamp have been received.
    nsecs_t shutterTimestamp = 0;

    {
        Mutex::Autolock l(mInFlightLock);
        // NOTE: N Lines are omitted here

        shutterTimestamp = request.shutterTimestamp;
        hasInputBufferInRequest = request.hasInputBuffer;

        // Did we get the (final) result metadata for this capture?
        // NOTE: N Lines are omitted here

        camera_metadata_ro_entry_t entry;
        res = find_camera_metadata_ro_entry(result->result,
                ANDROID_SENSOR_TIMESTAMP, &entry);
        if (res == OK && entry.count == 1) {
            request.sensorTimestamp = entry.data.i64[0];
        }

        // If shutter event isn't received yet, append the output buffers to
        // the in-flight request. Otherwise, return the output buffers to
        // streams.
        if (shutterTimestamp == 0) {
            request.pendingOutputBuffers.appendArray(result->output_buffers,
                result->num_output_buffers);
        } else {
            returnOutputBuffers(result->output_buffers,
                result->num_output_buffers, shutterTimestamp);
        }

        if (result->result != NULL && !isPartialResult) {
            for (uint32_t i = 0; i < result->num_physcam_metadata; i++) {
                CameraMetadata physicalMetadata;
                physicalMetadata.append(result->physcam_metadata[i]);
                request.physicalMetadatas.push_back({String16(result->physcam_ids[i]),
                        physicalMetadata});
            }
            // NOTE: N Lines are omitted here
        }

        removeInFlightRequestIfReadyLocked(idx);
    } // scope for mInFlightLock

    // NOTE: N Lines are omitted here
}

Camera3Device::returnOutputBuffers

This function returns all the buffers to their respective output streams.

Note line 7, which calls Camera3Stream's returnBuffer method.

The detailed logic won't be walked through; here is the call chain in brief.

returnBuffer leads into the Camera3OutputStream instance's returnBufferLocked, then into the base class Camera3IOStreamBase's returnAnyBufferLocked, and then back into Camera3OutputStream's returnBufferCheckedLocked implementation, which finally calls queueBufferToConsumer.

queueBufferToConsumer is the crucial step: it invokes the consumer instance's queueBuffer logic. Roughly speaking this corresponds to Surface's queueBuffer, which further on (Binder seems to be involved here) reaches BufferQueueProducer's queueBuffer, and that is where onFrameAvailable gets triggered; for the callback stream specifically, it is the onFrameAvailable implemented by CallbackProcessor that fires (see the sketch after the code below for how that hookup is made).

void Camera3Device::returnOutputBuffers(
        const camera3_stream_buffer_t *outputBuffers, size_t numBuffers,
        nsecs_t timestamp) {
    for (size_t i = 0; i < numBuffers; i++)
    {
        Camera3Stream *stream = Camera3Stream::cast(outputBuffers[i].stream);
        status_t res = stream->returnBuffer(outputBuffers[i], timestamp);
        // Note: stream may be deallocated at this point, if this buffer was
        // the last reference to it.
        if (res != OK) {
            ALOGE("Can't return buffer to its stream: %s (%d)",
                strerror(-res), res);
        }
    }
}
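As background for why the callback stream's onFrameAvailable lands in CallbackProcessor: when the callback stream is created, CallbackProcessor registers itself as the frame-available listener of a CpuConsumer, so the producer-side queueBuffer reached via queueBufferToConsumer wakes it up. A simplified sketch, paraphrased from memory of CallbackProcessor::updateStream (details may differ):

// Inside CallbackProcessor::updateStream (simplified):
if (mCallbackConsumer == 0) {
    // Create a CPU-side buffer queue endpoint for callback frames.
    sp<IGraphicBufferProducer> producer;
    sp<IGraphicBufferConsumer> consumer;
    BufferQueue::createBufferQueue(&producer, &consumer);
    mCallbackConsumer = new CpuConsumer(consumer, kCallbackHeapCount);
    // 'this' implements onFrameAvailable; queueing a buffer on the
    // producer side fires it.
    mCallbackConsumer->setFrameAvailableListener(this);
    mCallbackConsumer->setName(String8("Camera2-CallbackConsumer"));
    // The Surface wrapping the producer end is what the callback
    // Camera3OutputStream queues buffers into.
    mCallbackWindow = new Surface(producer);
}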

CallbackProcessor::onFrameAvailable

Line 5 sends the signal, ending the wait inside threadLoop.

void CallbackProcessor::onFrameAvailable(const BufferItem& /*item*/) {
    Mutex::Autolock l(mInputMutex);
    if (!mCallbackAvailable) {
        mCallbackAvailable = true;
        mCallbackAvailableSignal.signal();
    }
}

CallbackProcessor::threadLoop

The logic here is as follows:

  1. Line 7: wait here; once onFrameAvailable has been called, execution continues;
  2. Line 19: in the normal case this path is taken, and the incoming frame is processed further.
bool CallbackProcessor::threadLoop() {
    status_t res;

    {
        Mutex::Autolock l(mInputMutex);
        while (!mCallbackAvailable) {
            res = mCallbackAvailableSignal.waitRelative(mInputMutex,
                    kWaitDuration);
            if (res == TIMED_OUT) return true;
        }
        mCallbackAvailable = false;
    }

    do {
        sp<Camera2Client> client = mClient.promote();
        if (client == 0) {
            res = discardNewCallback();
        } else {
            res = processNewCallback(client);
        }
    } while (res == OK);

    return true;
}

CallbackProcessor::processNewCallback

This function actually contains quite a bit of code, most of which I have omitted. The omitted part mainly copies the buffer data into the mBuffers of the callbackHeap declared at line 5; afterwards, line 19 calls dataCallback to send that buffer on to JNI.

The dataCallback here should correspond to the function implemented in Camera.cpp (when the camera is opened and Camera2Client::connect is called, the Camera instance is passed in as the client and ends up behind the mSharedCameraCallbacks member); it mainly calls JNI's postData function (see the sketch after the code below).

status_t CallbackProcessor::processNewCallback(sp<Camera2Client> &client) {
    ATRACE_CALL();
    status_t res;

    sp<Camera2Heap> callbackHeap;
    bool useFlexibleYuv = false;
    int32_t previewFormat = 0;
    size_t heapIdx;
    
    // NOTE: N Lines are omitted here

    // Call outside parameter lock to allow re-entrancy from notification
    {
        Camera2Client::SharedCameraCallbacks::Lock
            l(client->mSharedCameraCallbacks);
        if (l.mRemoteCallback != 0) {
            ALOGV("%s: Camera %d: Invoking client data callback",
                    __FUNCTION__, mId);
            l.mRemoteCallback->dataCallback(CAMERA_MSG_PREVIEW_FRAME,
                    callbackHeap->mBuffers[heapIdx], NULL);
        }
    }

    // Only increment free if we're still using the same heap
    mCallbackHeapFree++;

    ALOGV("%s: exit", __FUNCTION__);

    return OK;
}
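For reference, the receiving end of that dataCallback in Camera.cpp roughly looks like this (paraphrased from memory of frameworks/av/camera/Camera.cpp; details may differ). It simply forwards to the registered CameraListener, which in this flow is the JNICameraContext whose postData is shown next:

// Callback from the camera service when a preview frame is ready (simplified).
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener; // the JNICameraContext registered at open time
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr, metadata);
    }
}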

Camera-JNI::postData

The main point here is line 28, which calls copyAndPost to deliver the data upward.

void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
                                camera_frame_metadata_t *metadata)
{
    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (mCameraJObjectWeak == NULL) {
        ALOGW("callback on dead camera object");
        return;
    }

    int32_t dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA;

    // return data based on callback type
    switch (dataMsgType) {
        case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
            break;

        // For backward-compatibility purpose, if there is no callback
        // buffer for raw image, the callback returns null.
        case CAMERA_MSG_RAW_IMAGE:
            ALOGV("rawCallback");
            if (mRawImageCallbackBuffers.isEmpty()) {
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
            } else {
                copyAndPost(env, dataPtr, dataMsgType);
            }
            break;

        // There is no data.
        case 0:
            break;

        default:
            ALOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
            copyAndPost(env, dataPtr, dataMsgType);
            break;
    }

    // post frame metadata to Java
    if (metadata && (msgType & CAMERA_MSG_PREVIEW_METADATA)) {
        postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata);
    }
}

Camera-JNI::copyAndPost

This is the last step we need to look at:

  1. Lines 7~11: obtain the address of the buffer that was passed in;
  2. Line 18: since the app proactively handed down buffers, this branch is taken;
  3. Line 19: note here that the buffer the app handed down is fetched out as a jbyteArray pointer; the fetch shrinks mCallbackBuffers by 1, and if mCallbackBuffers is empty at that point, NULL is returned, which leads to the direct return at line 27 but does not produce a logic error (see the sketch after the code below);
  4. Line 21: the app generally hands down only one buffer at a time, so after the fetch isEmpty is true and this branch is taken;
  5. Line 23: mCallbackBuffers is now empty, and Google's logic here is to call setPreviewCallbackFlag(0x00) once (the trigger path was described earlier and is not repeated). This updates the CaptureRequest once, and in that request the outputStreams size is 1; if RequestThread happens to pick up this CaptureRequest when submitting, the subsequent callback buffers will be missing;
  6. Line 47: post the retrieved callback buffer up to the app.
void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    jbyteArray obj = NULL;

    // allocate Java byte array and copy data
    if (dataPtr != NULL) {
        ssize_t offset;
        size_t size;
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
        ALOGV("copyAndPost: off=%zd, size=%zu", offset, size);
        uint8_t *heapBase = (uint8_t*)heap->base();

        if (heapBase != NULL) {
            const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);

            if (msgType == CAMERA_MSG_RAW_IMAGE) {
                obj = getCallbackBuffer(env, &mRawImageCallbackBuffers, size);
            } else if (msgType == CAMERA_MSG_PREVIEW_FRAME && mManualBufferMode) {
                obj = getCallbackBuffer(env, &mCallbackBuffers, size);

                if (mCallbackBuffers.isEmpty()) {
                    ALOGV("Out of buffers, clearing callback!");
                    mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
                    mManualCameraCallbackSet = false;

                    if (obj == NULL) {
                        return;
                    }
                }
            } else {
                ALOGV("Allocating callback buffer");
                obj = env->NewByteArray(size);
            }

            if (obj == NULL) {
                ALOGE("Couldn't allocate byte array for JPEG data");
                env->ExceptionClear();
            } else {
                env->SetByteArrayRegion(obj, 0, size, data);
            }
        } else {
            ALOGE("image heap is NULL");
        }
    }

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
    if (obj) {
        env->DeleteLocalRef(obj);
    }
}
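The behavior described in point 3 above comes from getCallbackBuffer. A simplified sketch, paraphrased from memory of android_hardware_Camera.cpp (details may differ):

// Pops one app-supplied buffer off the queue; returns NULL if the queue is
// already empty or the popped buffer is too small (simplified).
jbyteArray JNICameraContext::getCallbackBuffer(
        JNIEnv* env, Vector<jbyteArray>* buffers, size_t bufferSize)
{
    jbyteArray obj = NULL;
    if (!buffers->isEmpty()) {
        jbyteArray globalBuffer = buffers->itemAt(0);
        buffers->removeAt(0); // this is where mCallbackBuffers shrinks by one

        obj = (jbyteArray)env->NewLocalRef(globalBuffer);
        env->DeleteGlobalRef(globalBuffer);

        if (obj != NULL) {
            jsize bufferLength = env->GetArrayLength(obj);
            if ((int)bufferLength < (int)bufferSize) {
                ALOGE("Callback buffer was too small! Expected %zu bytes, but got %d bytes!",
                        bufferSize, bufferLength);
                env->DeleteLocalRef(obj);
                return NULL;
            }
        }
    }
    return obj;
}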

Closing Remarks

At this point we have a general picture of the preview logic in the API1-to-HAL3 flow. A recap of what this series covered:

  1. Overview: starting from two jank issues caused by Framework logic that came up while handling third-party app jank, briefly introduced their causes and fixes, which led to this look into the API1-to-HAL3 preview flow;
  2. startPreview: covered how API1's most basic preview-start interface operates on top of HAL3;
  3. setPreviewCallbackFlag: covered the sequence triggered when setPreviewCallbackWithBuffer is called after startPreview, for a deeper understanding of the jank caused by the re-configure action;
  4. Request && Result: focused on the request-submission logic for the callback stream and the corresponding result-return logic, for a deeper understanding of the jank caused by missing callback buffers due to the timing of setPreviewCallbackFlag(0x00).

Since time for studying this flow was fairly tight, I have skipped many details in the walkthrough, but that should not get in the way of understanding the overall flow. If anything is unclear, feel free to point it out; I may not have everything fully figured out, but I'm always happy to discuss.
