Android 8.0 System Source Code Analysis -- Camera processCaptureResult Result Return Path

     Taking in the camera stack from top to bottom, it is truly enormous: APP -> Framework -> CameraServer -> Camera HAL, and within the HAL process the pipeline, the nodes hooking in all the algorithms, and further down the ISP, 3A, and drivers. Understanding the whole thing is genuinely no simple matter. Still, with the goal of mastering it, we gnaw away at it bit by bit; every piece understood is one piece fewer, and our skill steadily grows with each step forward.

     In this article we look at how the HAL layer, after finishing one frame, returns the data through the HIDL-defined interface processCaptureResult. The Android 8.0 system source I use is shared through Baidu Cloud; you can download it from the opening article of this series, Android 8.0系统源码分析--开篇, where the download link and password are provided.

     The camera system also provides plenty of dump facilities: metadata dumps and buffer dumps, both of which are a great help when analyzing problems.

     1. On MTK systems, the kernel logic in imagesensor.c writes all camera sensor info into /proc/driver/camera_info, so we can simply cat /proc/driver/camera_info to view the detailed sensor information: it records the sensor type, such as IMX386_mipi_raw, the resolutions the sensor supports, and some other raw information;

     2. We can run adb shell dumpsys media.camera > camerainfo.txt to collect camera metadata; try it yourself, it is an easy way to obtain a great deal of camera metadata;

     3. The file android-8.0.0\system\media\camera\docs\docs.html defines all of the AOSP metadata; just double-click to open it in a browser and you can see every tag. Of course, each SOC vendor adds some of its own. A metadata tag has a dot-separated name, for example android.scaler.cropRegion, and a corresponding underscore-style constant, ANDROID_SCALER_CROP_REGION, which we can use inside the HAL process to query the current cropRegion value (a small lookup sketch follows this list).


     4. Google provides Camera API2 demos: android-Camera2Basic (basic preview and still capture) and android-Camera2Video (video recording).
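
     To illustrate item 3, here is a minimal sketch of looking that tag up on the native side. It assumes meta is a valid camera_metadata_t* you already hold (for example a result metadata buffer), and dumpCropRegion is a hypothetical helper name; find_camera_metadata_ro_entry and ANDROID_SCALER_CROP_REGION are the real camera_metadata APIs:

#define LOG_TAG "CropRegionDump"
#include <log/log.h>
#include <system/camera_metadata.h>

// Hypothetical helper: query the current crop region from a metadata buffer.
void dumpCropRegion(const camera_metadata_t *meta) {
    camera_metadata_ro_entry_t entry;
    int res = find_camera_metadata_ro_entry(meta,
            ANDROID_SCALER_CROP_REGION, &entry);
    if (res == 0 && entry.count == 4) {
        // android.scaler.cropRegion layout: (left, top, width, height)
        ALOGD("cropRegion: left=%d top=%d width=%d height=%d",
                entry.data.i32[0], entry.data.i32[1],
                entry.data.i32[2], entry.data.i32[3]);
    }
}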

     The HIDL interface definitions are ICameraDevice.hal, ICameraDeviceSession.hal, and ICameraDeviceCallback.hal, located under hardware\interfaces\camera\device\xx, where xx is the version number; the source I downloaded contains two versions, 1.0 and 3.2. Let's look at the 3.2 definition of ICameraDeviceCallback.hal; the source is:

package android.hardware.camera.device@3.2;

import android.hardware.camera.common@1.0::types;

/**
 *
 * Callback methods for the HAL to call into the framework.
 *
 * These methods are used to return metadata and image buffers for a completed
 * or failed captures, and to notify the framework of asynchronous events such
 * as errors.
 *
 * The framework must not call back into the HAL from within these callbacks,
 * and these calls must not block for extended periods.
 *
 */
interface ICameraDeviceCallback {

    /**
     * processCaptureResult:
     *
     * Send results from one or more completed or partially completed captures
     * to the framework.
     * processCaptureResult() may be invoked multiple times by the HAL in
     * response to a single capture request. This allows, for example, the
     * metadata and low-resolution buffers to be returned in one call, and
     * post-processed JPEG buffers in a later call, once it is available. Each
     * call must include the frame number of the request it is returning
     * metadata or buffers for. Only one call to processCaptureResult
     * may be made at a time by the HAL although the calls may come from
     * different threads in the HAL.
     *
     * A component (buffer or metadata) of the complete result may only be
     * included in one process_capture_result call. A buffer for each stream,
     * and the result metadata, must be returned by the HAL for each request in
     * one of the processCaptureResult calls, even in case of errors producing
     * some of the output. A call to processCaptureResult() with neither
     * output buffers or result metadata is not allowed.
     *
     * The order of returning metadata and buffers for a single result does not
     * matter, but buffers for a given stream must be returned in FIFO order. So
     * the buffer for request 5 for stream A must always be returned before the
     * buffer for request 6 for stream A. This also applies to the result
     * metadata; the metadata for request 5 must be returned before the metadata
     * for request 6.
     *
     * However, different streams are independent of each other, so it is
     * acceptable and expected that the buffer for request 5 for stream A may be
     * returned after the buffer for request 6 for stream B is. And it is
     * acceptable that the result metadata for request 6 for stream B is
     * returned before the buffer for request 5 for stream A is. If multiple
     * capture results are included in a single call, camera framework must
     * process results sequentially from lower index to higher index, as if
     * these results were sent to camera framework one by one, from lower index
     * to higher index.
     *
     * The HAL retains ownership of result structure, which only needs to be
     * valid to access during this call.
     *
     * The output buffers do not need to be filled yet; the framework must wait
     * on the stream buffer release sync fence before reading the buffer
     * data. Therefore, this method should be called by the HAL as soon as
     * possible, even if some or all of the output buffers are still in
     * being filled. The HAL must include valid release sync fences into each
     * output_buffers stream buffer entry, or -1 if that stream buffer is
     * already filled.
     *
     * If the result buffer cannot be constructed for a request, the HAL must
     * return an empty metadata buffer, but still provide the output buffers and
     * their sync fences. In addition, notify() must be called with an
     * ERROR_RESULT message.
     *
     * If an output buffer cannot be filled, its status field must be set to
     * STATUS_ERROR. In addition, notify() must be called with a ERROR_BUFFER
     * message.
     *
     * If the entire capture has failed, then this method still needs to be
     * called to return the output buffers to the framework. All the buffer
     * statuses must be STATUS_ERROR, and the result metadata must be an
     * empty buffer. In addition, notify() must be called with a ERROR_REQUEST
     * message. In this case, individual ERROR_RESULT/ERROR_BUFFER messages
     * must not be sent.
     *
     * Performance requirements:
     *
     * This is a non-blocking call. The framework must handle each CaptureResult
     * within 5ms.
     *
     * The pipeline latency (see S7 for definition) should be less than or equal to
     * 4 frame intervals, and must be less than or equal to 8 frame intervals.
     *
     */
    processCaptureResult(vec<CaptureResult> results);

    /**
     * notify:
     *
     * Asynchronous notification callback from the HAL, fired for various
     * reasons. Only for information independent of frame capture, or that
     * require specific timing. Multiple messages may be sent in one call; a
     * message with a higher index must be considered to have occurred after a
     * message with a lower index.
     *
     * Multiple threads may call notify() simultaneously.
     *
     * Buffers delivered to the framework must not be dispatched to the
     * application layer until a start of exposure timestamp (or input image's
     * start of exposure timestamp for a reprocess request) has been received
     * via a SHUTTER notify() call. It is highly recommended to dispatch this
     * call as early as possible.
     *
     * ------------------------------------------------------------------------
     * Performance requirements:
     *
     * This is a non-blocking call. The framework must handle each message in 5ms.
     */
    notify(vec<NotifyMsg> msgs);

};

     Seeing these long stretches of comments makes it clear that these interfaces are all very important; detailed comments are a good habit too. Reading Android's source really is a comfortable experience, and it is no wonder this code became the standard: the naming, the formatting, the comments, the packaging, the classification, everything about it feels right!!

     Now, on to the topic of this article. ICameraDeviceCallback is the callback interface defined in HIDL, and processCaptureResult is the method through which the HAL layer calls back into CameraServer. On the CameraServer side, the callback class is Camera3Device: during openCamera, the newly constructed Camera3Device is initialized, its initialize method connects to the HAL, and when the session is obtained it passes itself down as the callback class, so afterwards the HAL calls back into Camera3Device's processCaptureResult method.

     Let's first review Camera3Device::initialize. The file lives at frameworks\av\services\camera\libcameraservice\device3\Camera3Device.cpp, and the source of initialize is as follows:

status_t Camera3Device::initialize(sp<CameraProviderManager> manager) {
    ATRACE_CALL();
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);

    ALOGV("%s: Initializing HIDL device for camera %s", __FUNCTION__, mId.string());
    if (mStatus != STATUS_UNINITIALIZED) {
        CLOGE("Already initialized!");
        return INVALID_OPERATION;
    }
    if (manager == nullptr) return INVALID_OPERATION;

    sp<ICameraDeviceSession> session;
    ATRACE_BEGIN("CameraHal::openSession");
    status_t res = manager->openSession(mId.string(), this,
            /*out*/ &session);
    ATRACE_END();
    if (res != OK) {
        SET_ERR_L("Could not open camera session: %s (%d)", strerror(-res), res);
        return res;
    }

    res = manager->getCameraCharacteristics(mId.string(), &mDeviceInfo);
    if (res != OK) {
        SET_ERR_L("Could not retrive camera characteristics: %s (%d)", strerror(-res), res);
        session->close();
        return res;
    }

    std::shared_ptr<RequestMetadataQueue> queue;
    auto requestQueueRet = session->getCaptureRequestMetadataQueue(
        [&queue](const auto& descriptor) {
            queue = std::make_shared<RequestMetadataQueue>(descriptor);
            if (!queue->isValid() || queue->availableToWrite() <= 0) {
                ALOGE("HAL returns empty request metadata fmq, not use it");
                queue = nullptr;
                // don't use the queue onwards.
            }
        });
    if (!requestQueueRet.isOk()) {
        ALOGE("Transaction error when getting request metadata fmq: %s, not use it",
                requestQueueRet.description().c_str());
        return DEAD_OBJECT;
    }
    auto resultQueueRet = session->getCaptureResultMetadataQueue(
        [&queue = mResultMetadataQueue](const auto& descriptor) {
            queue = std::make_unique<ResultMetadataQueue>(descriptor);
            if (!queue->isValid() ||  queue->availableToWrite() <= 0) {
                ALOGE("HAL returns empty result metadata fmq, not use it");
                queue = nullptr;
                // Don't use the queue onwards.
            }
        });
    if (!resultQueueRet.isOk()) {
        ALOGE("Transaction error when getting result metadata queue from camera session: %s",
                resultQueueRet.description().c_str());
        return DEAD_OBJECT;
    }

    mInterface = std::make_unique<HalInterface>(session, queue);
    std::string providerType;
    mVendorTagId = manager->getProviderTagIdLocked(mId.string());

    return initializeCommonLocked();
}

     This method calls manager->openSession(mId.string(), this, /*out*/ &session) to open the session. manager is the method parameter of type sp<CameraProviderManager>; the second argument to openSession is the callback interface, passed as this, and the third is an output parameter: once the session is successfully created in the HAL process, it is assigned to that output parameter through the result callback. Let's look further into CameraProviderManager::openSession:

status_t CameraProviderManager::openSession(const std::string &id,
        const sp<hardware::camera::device::V3_2::ICameraDeviceCallback>& callback,
        /*out*/
        sp<hardware::camera::device::V3_2::ICameraDeviceSession> *session) {

    std::lock_guard<std::mutex> lock(mInterfaceMutex);

    auto deviceInfo = findDeviceInfoLocked(id,
            /*minVersion*/ {3,0}, /*maxVersion*/ {4,0});
    if (deviceInfo == nullptr) return NAME_NOT_FOUND;

    auto *deviceInfo3 = static_cast<ProviderInfo::DeviceInfo3*>(deviceInfo);

    Status status;
    hardware::Return<void> ret;
    ret = deviceInfo3->mInterface->open(callback, [&status, &session]
            (Status s, const sp<ICameraDeviceSession>& cameraSession) {
                status = s;
                if (status == Status::OK) {
                    *session = cameraSession;
                }
            });
    if (!ret.isOk()) {
        ALOGE("%s: Transaction error opening a session for camera device %s: %s",
                __FUNCTION__, id.c_str(), ret.description().c_str());
        return DEAD_OBJECT;
    }
    return mapToStatusT(status);
}

     Here deviceInfo3->mInterface->open crosses over HIDL into the CameraHalServer process.

     Good, back to Camera3Device::processCaptureResult; its source is:

// Only one processCaptureResult should be called at a time, so
// the locks won't block. The locks are present here simply to enforce this.
hardware::Return<void> Camera3Device::processCaptureResult(
        const hardware::hidl_vec<
                hardware::camera::device::V3_2::CaptureResult>& results) {

    if (mProcessCaptureResultLock.tryLock() != OK) {
        // This should never happen; it indicates a wrong client implementation
        // that doesn't follow the contract. But, we can be tolerant here.
        ALOGE("%s: callback overlapped! waiting 1s...",
                __FUNCTION__);
        if (mProcessCaptureResultLock.timedLock(1000000000 /* 1s */) != OK) {
            ALOGE("%s: cannot acquire lock in 1s, dropping results",
                    __FUNCTION__);
            // really don't know what to do, so bail out.
            return hardware::Void();
        }
    }
    for (const auto& result : results) {
        processOneCaptureResultLocked(result);
    }
    mProcessCaptureResultLock.unlock();
    return hardware::Void();
}

     As you can see, the code of this method is very concise, and there is a lesson in it: callbacks at important junctions should be exactly like this, simple and immediately understandable. When we implement features of our own, the logic at the boundary where one module meets another should be as plain as possible, so that colleagues on other modules can grasp the author's intent at a glance. The method just runs a for loop over the results and hands each one to processOneCaptureResultLocked for further processing; the source of processOneCaptureResultLocked:

void Camera3Device::processOneCaptureResultLocked(
        const hardware::camera::device::V3_2::CaptureResult& result) {
    camera3_capture_result r;
    status_t res;
    r.frame_number = result.frameNumber;

    hardware::camera::device::V3_2::CameraMetadata resultMetadata;
    if (result.fmqResultSize > 0) {
        resultMetadata.resize(result.fmqResultSize);
        if (mResultMetadataQueue == nullptr) {
            return; // logged in initialize()
        }
        if (!mResultMetadataQueue->read(resultMetadata.data(), result.fmqResultSize)) {
            ALOGE("%s: Frame %d: Cannot read camera metadata from fmq, size = %" PRIu64,
                    __FUNCTION__, result.frameNumber, result.fmqResultSize);
            return;
        }
    } else {
        resultMetadata.setToExternal(const_cast<uint8_t*>(result.result.data()),
                result.result.size());
    }

    if (resultMetadata.size() != 0) {
        r.result = reinterpret_cast<const camera_metadata_t*>(resultMetadata.data());
        size_t expected_metadata_size = resultMetadata.size();
        if ((res = validate_camera_metadata_structure(r.result, &expected_metadata_size)) != OK) {
            ALOGE("%s: Frame %d: Invalid camera metadata received by camera service from HAL: %s (%d)",
                    __FUNCTION__, result.frameNumber, strerror(-res), res);
            return;
        }
    } else {
        r.result = nullptr;
    }

    std::vector<camera3_stream_buffer_t> outputBuffers(result.outputBuffers.size());
    std::vector<buffer_handle_t> outputBufferHandles(result.outputBuffers.size());
    for (size_t i = 0; i < result.outputBuffers.size(); i++) {
        auto& bDst = outputBuffers[i];
        const StreamBuffer &bSrc = result.outputBuffers[i];

        ssize_t idx = mOutputStreams.indexOfKey(bSrc.streamId);
        if (idx == NAME_NOT_FOUND) {
            ALOGE("%s: Frame %d: Buffer %zu: Invalid output stream id %d",
                    __FUNCTION__, result.frameNumber, i, bSrc.streamId);
            return;
        }
        bDst.stream = mOutputStreams.valueAt(idx)->asHalStream();

        buffer_handle_t *buffer;
        res = mInterface->popInflightBuffer(result.frameNumber, bSrc.streamId, &buffer);
        if (res != OK) {
            ALOGE("%s: Frame %d: Buffer %zu: No in-flight buffer for stream %d",
                    __FUNCTION__, result.frameNumber, i, bSrc.streamId);
            return;
        }
        bDst.buffer = buffer;
        bDst.status = mapHidlBufferStatus(bSrc.status);
        bDst.acquire_fence = -1;
        if (bSrc.releaseFence == nullptr) {
            bDst.release_fence = -1;
        } else if (bSrc.releaseFence->numFds == 1) {
            bDst.release_fence = dup(bSrc.releaseFence->data[0]);
        } else {
            ALOGE("%s: Frame %d: Invalid release fence for buffer %zu, fd count is %d, not 1",
                    __FUNCTION__, result.frameNumber, i, bSrc.releaseFence->numFds);
            return;
        }
    }
    r.num_output_buffers = outputBuffers.size();
    r.output_buffers = outputBuffers.data();

    camera3_stream_buffer_t inputBuffer;
    if (result.inputBuffer.streamId == -1) {
        r.input_buffer = nullptr;
    } else {
        if (mInputStream->getId() != result.inputBuffer.streamId) {
            ALOGE("%s: Frame %d: Invalid input stream id %d", __FUNCTION__,
                    result.frameNumber, result.inputBuffer.streamId);
            return;
        }
        inputBuffer.stream = mInputStream->asHalStream();
        buffer_handle_t *buffer;
        res = mInterface->popInflightBuffer(result.frameNumber, result.inputBuffer.streamId,
                &buffer);
        if (res != OK) {
            ALOGE("%s: Frame %d: Input buffer: No in-flight buffer for stream %d",
                    __FUNCTION__, result.frameNumber, result.inputBuffer.streamId);
            return;
        }
        inputBuffer.buffer = buffer;
        inputBuffer.status = mapHidlBufferStatus(result.inputBuffer.status);
        inputBuffer.acquire_fence = -1;
        if (result.inputBuffer.releaseFence == nullptr) {
            inputBuffer.release_fence = -1;
        } else if (result.inputBuffer.releaseFence->numFds == 1) {
            inputBuffer.release_fence = dup(result.inputBuffer.releaseFence->data[0]);
        } else {
            ALOGE("%s: Frame %d: Invalid release fence for input buffer, fd count is %d, not 1",
                    __FUNCTION__, result.frameNumber, result.inputBuffer.releaseFence->numFds);
            return;
        }
        r.input_buffer = &inputBuffer;
    }

    r.partial_result = result.partialResult;

    processCaptureResult(&r);
}

     The first statement assigns the member frame_number. As we saw last time when discussing the RequestThread preview loop, this field is crucial: it is the token by which the CameraServer and CameraHalServer processes match a result to its request! Next, the metadata is read according to if (result.fmqResultSize > 0), and then come the output buffers: the for loop takes the StreamBuffers out of result.outputBuffers one by one and calls mInterface->popInflightBuffer(result.frameNumber, bSrc.streamId, &buffer) to fetch the buffer pointer the HAL has filled. This pointer is the data carrier we are ultimately after. As covered in the previous article, Android 8.0系统源码分析--Camera RequestThread预览循环源码分析, it was stored into the member mInflightBufferMap when the buffer was wrapped up before the request was sent to the HAL process; here the reverse step takes it back out (a condensed sketch of that bookkeeping follows). After the input buffer is handled the same way, the camera3_capture_result argument is fully assembled, and processCaptureResult is called to process the one-frame result. That is the overloaded method of the same name taking const camera3_capture_result *result; its source appears after the sketch.
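
     The class below is an illustrative stand-in, not the real HalInterface: the push/pop names and the void* handle type are mine, and locking plus acquire-fence handling are trimmed. What it preserves is the pairing of the submit path (push) with the result path (pop), keyed by (frameNumber, streamId) packed into a single 64-bit value:

#include <cstdint>
#include <unordered_map>

// Illustrative sketch of the HalInterface in-flight buffer bookkeeping.
// void* stands in for buffer_handle_t*; synchronization is omitted.
class InflightBufferMapSketch {
public:
    // Submit path: remember the buffer handed to the HAL for this request.
    void push(int32_t frameNumber, int32_t streamId, void *buffer) {
        mMap[makeKey(frameNumber, streamId)] = buffer;
    }

    // Result path (processOneCaptureResultLocked): take the buffer back out.
    // Returns nullptr if nothing with this (frame, stream) pair is in flight.
    void *pop(int32_t frameNumber, int32_t streamId) {
        auto it = mMap.find(makeKey(frameNumber, streamId));
        if (it == mMap.end()) return nullptr;
        void *buffer = it->second;
        mMap.erase(it);  // each buffer is popped exactly once
        return buffer;
    }

private:
    // (frameNumber, streamId) packed into one 64-bit key.
    static uint64_t makeKey(int32_t frameNumber, int32_t streamId) {
        return (static_cast<uint64_t>(frameNumber) << 32) |
                static_cast<uint32_t>(streamId);
    }

    std::unordered_map<uint64_t, void*> mMap;
};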

void Camera3Device::processCaptureResult(const camera3_capture_result *result) {
    ATRACE_CALL();

    status_t res;

    uint32_t frameNumber = result->frame_number;
    if (result->result == NULL && result->num_output_buffers == 0 &&
            result->input_buffer == NULL) {
        SET_ERR("No result data provided by HAL for frame %d",
                frameNumber);
        return;
    }

    if (!mUsePartialResult &&
            result->result != NULL &&
            result->partial_result != 1) {
        SET_ERR("Result is malformed for frame %d: partial_result %u must be 1"
                " if partial result is not supported",
                frameNumber, result->partial_result);
        return;
    }

    bool isPartialResult = false;
    CameraMetadata collectedPartialResult;
    CaptureResultExtras resultExtras;
    bool hasInputBufferInRequest = false;

    // Get shutter timestamp and resultExtras from list of in-flight requests,
    // where it was added by the shutter notification for this frame. If the
    // shutter timestamp isn't received yet, append the output buffers to the
    // in-flight request and they will be returned when the shutter timestamp
    // arrives. Update the in-flight status and remove the in-flight entry if
    // all result data and shutter timestamp have been received.
    nsecs_t shutterTimestamp = 0;

    {
        Mutex::Autolock l(mInFlightLock);
        ssize_t idx = mInFlightMap.indexOfKey(frameNumber);
        if (idx == NAME_NOT_FOUND) {
            SET_ERR("Unknown frame number for capture result: %d",
                    frameNumber);
            return;
        }
        InFlightRequest &request = mInFlightMap.editValueAt(idx);
        ALOGVV("%s: got InFlightRequest requestId = %" PRId32
                ", frameNumber = %" PRId64 ", burstId = %" PRId32
                ", partialResultCount = %d, hasCallback = %d",
                __FUNCTION__, request.resultExtras.requestId,
                request.resultExtras.frameNumber, request.resultExtras.burstId,
                result->partial_result, request.hasCallback);
        // Always update the partial count to the latest one if it's not 0
        // (buffers only). When framework aggregates adjacent partial results
        // into one, the latest partial count will be used.
        if (result->partial_result != 0)
            request.resultExtras.partialResultCount = result->partial_result;

        // Check if this result carries only partial metadata
        if (mUsePartialResult && result->result != NULL) {
            if (result->partial_result > mNumPartialResults || result->partial_result < 1) {
                SET_ERR("Result is malformed for frame %d: partial_result %u must be  in"
                        " the range of [1, %d] when metadata is included in the result",
                        frameNumber, result->partial_result, mNumPartialResults);
                return;
            }
            isPartialResult = (result->partial_result < mNumPartialResults);
            if (isPartialResult) {
                request.collectedPartialResult.append(result->result);
            }

            if (isPartialResult && request.hasCallback) {
                // Send partial capture result
                sendPartialCaptureResult(result->result, request.resultExtras,
                        frameNumber);
            }
        }

        shutterTimestamp = request.shutterTimestamp;
        hasInputBufferInRequest = request.hasInputBuffer;

        // Did we get the (final) result metadata for this capture?
        if (result->result != NULL && !isPartialResult) {
            if (request.haveResultMetadata) {
                SET_ERR("Called multiple times with metadata for frame %d",
                        frameNumber);
                return;
            }
            if (mUsePartialResult &&
                    !request.collectedPartialResult.isEmpty()) {
                collectedPartialResult.acquire(
                    request.collectedPartialResult);
            }
            request.haveResultMetadata = true;
        }

        uint32_t numBuffersReturned = result->num_output_buffers;
        if (result->input_buffer != NULL) {
            if (hasInputBufferInRequest) {
                numBuffersReturned += 1;
            } else {
                ALOGW("%s: Input buffer should be NULL if there is no input"
                        " buffer sent in the request",
                        __FUNCTION__);
            }
        }
        request.numBuffersLeft -= numBuffersReturned;
        if (request.numBuffersLeft < 0) {
            SET_ERR("Too many buffers returned for frame %d",
                    frameNumber);
            return;
        }

        camera_metadata_ro_entry_t entry;
        res = find_camera_metadata_ro_entry(result->result,
                ANDROID_SENSOR_TIMESTAMP, &entry);
        if (res == OK && entry.count == 1) {
            request.sensorTimestamp = entry.data.i64[0];
        }

        // If shutter event isn't received yet, append the output buffers to
        // the in-flight request. Otherwise, return the output buffers to
        // streams.
        if (shutterTimestamp == 0) {
            request.pendingOutputBuffers.appendArray(result->output_buffers,
                result->num_output_buffers);
        } else {
            returnOutputBuffers(result->output_buffers,
                result->num_output_buffers, shutterTimestamp);
        }

        if (result->result != NULL && !isPartialResult) {
            if (shutterTimestamp == 0) {
                request.pendingMetadata = result->result;
                request.collectedPartialResult = collectedPartialResult;
            } else if (request.hasCallback) {
                CameraMetadata metadata;
                metadata = result->result;
                sendCaptureResult(metadata, request.resultExtras,
                    collectedPartialResult, frameNumber,
                    hasInputBufferInRequest);
            }
        }

        removeInFlightRequestIfReadyLocked(idx);
    } // scope for mInFlightLock

    if (result->input_buffer != NULL) {
        if (hasInputBufferInRequest) {
            Camera3Stream *stream =
                Camera3Stream::cast(result->input_buffer->stream);
            res = stream->returnInputBuffer(*(result->input_buffer));
            // Note: stream may be deallocated at this point, if this buffer was the
            // last reference to it.
            if (res != OK) {
                ALOGE("%s: RequestThread: Can't return input buffer for frame %d to"
                      "  its stream:%s (%d)",  __FUNCTION__,
                      frameNumber, strerror(-res), res);
            }
        } else {
            ALOGW("%s: Input buffer should be NULL if there is no input"
                    " buffer sent in the request, skipping input buffer return.",
                    __FUNCTION__);
        }
    }
}

     The first step is again reading the frame number. isPartialResult refers to partial results: a HAL may deliver the metadata for one capture in several increments, up to ANDROID_REQUEST_PARTIAL_RESULT_COUNT pieces, for example the 3A state early and the complete metadata later, and each increment is one partial piece of the full result. As for shutterTimestamp, it is non-zero here because the value comes from the HAL's SHUTTER notify(): when the HAL reports start of exposure for this frame, Camera3Device::notifyShutter records the timestamp into the InFlightRequest, and that normally happens before the capture result arrives (a condensed paraphrase follows below). Since it is non-zero, if (shutterTimestamp == 0) evaluates to false, the else branch runs, and returnOutputBuffers is called to give the buffers back. Right after that, when result->result is non-null and the current callback is not a partial result (if (result->result != NULL && !isPartialResult)), sendCaptureResult is called to send the result back toward the APP.
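
     To see where shutterTimestamp comes from, here is a condensed paraphrase of Camera3Device::notifyShutter(), not a line-for-line copy (logging and error paths are trimmed, details may differ slightly). It records the start-of-exposure timestamp into the InFlightRequest and flushes anything that was parked while waiting for the shutter event:

void Camera3Device::notifyShutter(const camera3_shutter_msg_t &msg,
        NotificationListener *listener) {
    Mutex::Autolock l(mInFlightLock);
    ssize_t idx = mInFlightMap.indexOfKey(msg.frame_number);
    if (idx >= 0) {
        InFlightRequest &r = mInFlightMap.editValueAt(idx);
        if (listener != NULL) {
            // Surfaces to the app as onCaptureStarted()
            listener->notifyShutter(r.resultExtras, msg.timestamp);
        }
        r.shutterTimestamp = msg.timestamp;  // why shutterTimestamp != 0 above

        if (r.hasCallback) {
            // Flush metadata that arrived before the shutter event
            sendCaptureResult(r.pendingMetadata, r.resultExtras,
                    r.collectedPartialResult, msg.frame_number, r.hasInputBuffer);
        }
        // Flush buffers that arrived before the shutter event
        returnOutputBuffers(r.pendingOutputBuffers.array(),
                r.pendingOutputBuffers.size(), msg.timestamp);
        r.pendingOutputBuffers.clear();

        removeInFlightRequestIfReadyLocked(idx);
    }
}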

     Let's first look at the logic of returnOutputBuffers, and afterwards analyze how sendCaptureResult delivers the result back to the APP. The source of returnOutputBuffers:

void Camera3Device::returnOutputBuffers(
        const camera3_stream_buffer_t *outputBuffers, size_t numBuffers,
        nsecs_t timestamp) {
    for (size_t i = 0; i < numBuffers; i++)
    {
        Camera3Stream *stream = Camera3Stream::cast(outputBuffers[i].stream);
        status_t res = stream->returnBuffer(outputBuffers[i], timestamp);
        // Note: stream may be deallocated at this point, if this buffer was
        // the last reference to it.
        if (res != OK) {
            ALOGE("Can't return buffer to its stream: %s (%d)",
                strerror(-res), res);
        }
    }
}

     The returned buffers are handled one by one in a for loop. Every buffer belongs to some stream, and the stream's returnBuffer method is called directly; it is implemented in the Camera3Stream base class, whose file lives at frameworks\av\services\camera\libcameraservice\device3\Camera3Stream.cpp:

status_t Camera3Stream::returnBuffer(const camera3_stream_buffer &buffer,
        nsecs_t timestamp) {
    ATRACE_CALL();
    Mutex::Autolock l(mLock);

    // Check if this buffer is outstanding.
    if (!isOutstandingBuffer(buffer)) {
        ALOGE("%s: Stream %d: Returning an unknown buffer.", __FUNCTION__, mId);
        return BAD_VALUE;
    }

    /**
     * TODO: Check that the state is valid first.
     *
     * = HAL3.2 CONFIGURED only
     *
     * Do this for getBuffer as well.
     */
    status_t res = returnBufferLocked(buffer, timestamp);
    if (res == OK) {
        fireBufferListenersLocked(buffer, /*acquired*/false, /*output*/true);
    }

    // Even if returning the buffer failed, we still want to signal whoever is waiting for the
    // buffer to be returned.
    mOutputBufferReturnedSignal.signal();

    removeOutstandingBuffer(buffer);
    return res;
}

     This method still delegates to returnBufferLocked to continue the work. fireBufferListenersLocked was covered in the previous article: it only takes effect for API1, and API2 no longer uses that interface. returnBufferLocked is overridden by the subclasses, so let's look at the implementation in Camera3OutputStream:

status_t Camera3OutputStream::returnBufferLocked(
        const camera3_stream_buffer &buffer,
        nsecs_t timestamp) {
    ATRACE_CALL();

    status_t res = returnAnyBufferLocked(buffer, timestamp, /*output*/true);

    if (res != OK) {
        return res;
    }

    mLastTimestamp = timestamp;
    mFrameCount++;

    return OK;
}

     It goes one step further into returnAnyBufferLocked, which brings us back up to the parent class Camera3IOStreamBase:

status_t Camera3IOStreamBase::returnAnyBufferLocked(
        const camera3_stream_buffer &buffer,
        nsecs_t timestamp,
        bool output) {
    status_t res;

    // returnBuffer may be called from a raw pointer, not a sp<>, and we'll be
    // decrementing the internal refcount next. In case this is the last ref, we
    // might get destructed on the decStrong(), so keep an sp around until the
    // end of the call - otherwise have to sprinkle the decStrong on all exit
    // points.
    sp<Camera3IOStreamBase> keepAlive(this);
    decStrong(this);

    if ((res = returnBufferPreconditionCheckLocked()) != OK) {
        return res;
    }

    sp<Fence> releaseFence;
    res = returnBufferCheckedLocked(buffer, timestamp, output,
                                    &releaseFence);
    // Res may be an error, but we still want to decrement our owned count
    // to enable clean shutdown. So we'll just return the error but otherwise
    // carry on

    if (releaseFence != 0) {
        mCombinedFence = Fence::merge(mName, mCombinedFence, releaseFence);
    }

    if (output) {
        mHandoutOutputBufferCount--;
    }

    mHandoutTotalBufferCount--;
    if (mHandoutTotalBufferCount == 0 && mState != STATE_IN_CONFIG &&
            mState != STATE_IN_RECONFIG && mState != STATE_PREPARING) {
        /**
         * Avoid a spurious IDLE->ACTIVE->IDLE transition when using buffers
         * before/after register_stream_buffers during initial configuration
         * or re-configuration, or during prepare pre-allocation
         */
        ALOGV("%s: Stream %d: All buffers returned; now idle", __FUNCTION__,
                mId);
        sp<StatusTracker> statusTracker = mStatusTracker.promote();
        if (statusTracker != 0) {
            statusTracker->markComponentIdle(mStatusId, mCombinedFence);
        }
    }

    mBufferReturnedSignal.signal();

    if (output) {
        mLastTimestamp = timestamp;
    }

    return res;
}

     First returnBufferPreconditionCheckLocked performs the precondition checks: if the stream state is wrong or the count of handed-out buffers is zero, something is broken and an error code is returned right away. If the checks pass, returnBufferCheckedLocked continues the processing; its implementation is again pushed down into the subclass Camera3OutputStream:

status_t Camera3OutputStream::returnBufferCheckedLocked(
            const camera3_stream_buffer &buffer,
            nsecs_t timestamp,
            bool output,
            /*out*/
            sp *releaseFenceOut) {

    (void)output;
    ALOG_ASSERT(output, "Expected output to be true");

    status_t res;

    // Fence management - always honor release fence from HAL
    sp<Fence> releaseFence = new Fence(buffer.release_fence);
    int anwReleaseFence = releaseFence->dup();

    /**
     * Release the lock briefly to avoid deadlock with
     * StreamingProcessor::startStream -> Camera3Stream::isConfiguring (this
     * thread will go into StreamingProcessor::onFrameAvailable) during
     * queueBuffer
     */
    sp<ANativeWindow> currentConsumer = mConsumer;
    mLock.unlock();

    ANativeWindowBuffer *anwBuffer = container_of(buffer.buffer, ANativeWindowBuffer, handle);
    /**
     * Return buffer back to ANativeWindow
     */
    if (buffer.status == CAMERA3_BUFFER_STATUS_ERROR) {
        // Cancel buffer
        ALOGW("A frame is dropped for stream %d", mId);
        res = currentConsumer->cancelBuffer(currentConsumer.get(),
                anwBuffer,
                anwReleaseFence);
        if (res != OK) {
            ALOGE("%s: Stream %d: Error cancelling buffer to native window:"
                  " %s (%d)", __FUNCTION__, mId, strerror(-res), res);
        }

        notifyBufferReleased(anwBuffer);
        if (mUseBufferManager) {
            // Return this buffer back to buffer manager.
            mBufferReleasedListener->onBufferReleased();
        }
    } else {
        if (mTraceFirstBuffer && (stream_type == CAMERA3_STREAM_OUTPUT)) {
            {
                char traceLog[48];
                snprintf(traceLog, sizeof(traceLog), "Stream %d: first full buffer\n", mId);
                ATRACE_NAME(traceLog);
            }
            mTraceFirstBuffer = false;
        }

        /* Certain consumers (such as AudioSource or HardwareComposer) use
         * MONOTONIC time, causing time misalignment if camera timestamp is
         * in BOOTTIME. Do the conversion if necessary. */
        res = native_window_set_buffers_timestamp(mConsumer.get(),
                mUseMonoTimestamp ? timestamp - mTimestampOffset : timestamp);
        if (res != OK) {
            ALOGE("%s: Stream %d: Error setting timestamp: %s (%d)",
                  __FUNCTION__, mId, strerror(-res), res);
            return res;
        }

        res = queueBufferToConsumer(currentConsumer, anwBuffer, anwReleaseFence);
        if (res != OK) {
            ALOGE("%s: Stream %d: Error queueing buffer to native window: "
                  "%s (%d)", __FUNCTION__, mId, strerror(-res), res);
        }
    }
    mLock.lock();

    // Once a valid buffer has been returned to the queue, can no longer
    // dequeue all buffers for preallocation.
    if (buffer.status != CAMERA3_BUFFER_STATUS_ERROR) {
        mStreamUnpreparable = true;
    }

    if (res != OK) {
        close(anwReleaseFence);
    }

    *releaseFenceOut = releaseFence;

    return res;
}

     mConsumer is the sp<Surface> parameter passed into the constructor when this stream was created; it is the very source from which we dequeue our buffers! Now that the buffer has been filled after the HAL's ISP, 3A, and assorted algorithm processing, it needs to be queued back into its owning Surface, either for on-screen display or for saving a photo. The buffer is going home!!!! When the buffer status (buffer.status) is normal, the else branch runs and queueBufferToConsumer returns the buffer. Its source:

status_t Camera3OutputStream::queueBufferToConsumer(sp<ANativeWindow>& consumer,
            ANativeWindowBuffer* buffer, int anwReleaseFence) {
    return consumer->queueBuffer(consumer.get(), buffer, anwReleaseFence);
}

     Very simple: it directly calls the ANativeWindow object's queueBuffer method to hand the buffer back. ANativeWindow is the graphics interface defined for OpenGL; on Android it is realized by Surface and SurfaceFlinger, one producing buffers and the other consuming them.
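
     To make the producer/consumer split concrete, here is a minimal sketch of the classic dequeue, fill, queue cycle against the internal ANativeWindow function-pointer API, the same calls Camera3OutputStream uses above. produceOneBuffer is a hypothetical helper and error handling is omitted:

#include <system/window.h>

// Hypothetical producer-side helper: one trip around the BufferQueue.
// Camera3OutputStream splits this dance across getBufferLocked() (dequeue)
// and returnBufferCheckedLocked() (queue).
void produceOneBuffer(ANativeWindow *window) {
    ANativeWindowBuffer *anb = nullptr;
    int acquireFenceFd = -1;

    // Ask the BufferQueue for a free slot; the fence signals when the
    // previous user of this buffer is done with it.
    window->dequeueBuffer(window, &anb, &acquireFenceFd);

    // ... wait on acquireFenceFd, then fill anb->handle with pixel data ...

    // Hand the filled buffer to the consumer side (SurfaceFlinger for
    // display, or an ImageReader for app access).
    window->queueBuffer(window, anb, /*fenceFd*/ -1);
}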

     Good. Having finished the buffer-return logic, let's climb back up level by level to Camera3Device::processCaptureResult and continue with the implementation of sendCaptureResult:

void Camera3Device::sendCaptureResult(CameraMetadata &pendingMetadata,
        CaptureResultExtras &resultExtras,
        CameraMetadata &collectedPartialResult,
        uint32_t frameNumber,
        bool reprocess) {
    if (pendingMetadata.isEmpty())
        return;

    Mutex::Autolock l(mOutputLock);

    // TODO: need to track errors for tighter bounds on expected frame number
    if (reprocess) {
        if (frameNumber < mNextReprocessResultFrameNumber) {
            SET_ERR("Out-of-order reprocess capture result metadata submitted! "
                "(got frame number %d, expecting %d)",
                frameNumber, mNextReprocessResultFrameNumber);
            return;
        }
        mNextReprocessResultFrameNumber = frameNumber + 1;
    } else {
        if (frameNumber < mNextResultFrameNumber) {
            SET_ERR("Out-of-order capture result metadata submitted! "
                    "(got frame number %d, expecting %d)",
                    frameNumber, mNextResultFrameNumber);
            return;
        }
        mNextResultFrameNumber = frameNumber + 1;
    }

    CaptureResult captureResult;
    captureResult.mResultExtras = resultExtras;
    captureResult.mMetadata = pendingMetadata;

    // Append any previous partials to form a complete result
    if (mUsePartialResult && !collectedPartialResult.isEmpty()) {
        captureResult.mMetadata.append(collectedPartialResult);
    }

    captureResult.mMetadata.sort();

    // Check that there's a timestamp in the result metadata
    camera_metadata_entry timestamp = captureResult.mMetadata.find(ANDROID_SENSOR_TIMESTAMP);
    if (timestamp.count == 0) {
        SET_ERR("No timestamp provided by HAL for frame %d!",
                frameNumber);
        return;
    }

    mTagMonitor.monitorMetadata(TagMonitor::RESULT,
            frameNumber, timestamp.data.i64[0], captureResult.mMetadata);

    insertResultLocked(&captureResult, frameNumber);
}

     First the frame-number ordering is checked (out-of-order results are rejected), then a CaptureResult object is filled from the method arguments for delivering the result, and finally insertResultLocked inserts the populated CaptureResult into the result queue. The source of insertResultLocked:

void Camera3Device::insertResultLocked(CaptureResult *result,
        uint32_t frameNumber) {
    if (result == nullptr) return;

    camera_metadata_t *meta = const_cast<camera_metadata_t *>(
            result->mMetadata.getAndLock());
    set_camera_metadata_vendor_id(meta, mVendorTagId);
    result->mMetadata.unlock(meta);

    if (result->mMetadata.update(ANDROID_REQUEST_FRAME_COUNT,
            (int32_t*)&frameNumber, 1) != OK) {
        SET_ERR("Failed to set frame number %d in metadata", frameNumber);
        return;
    }

    if (result->mMetadata.update(ANDROID_REQUEST_ID, &result->mResultExtras.requestId, 1) != OK) {
        SET_ERR("Failed to set request ID in metadata for frame %d", frameNumber);
        return;
    }

    // Valid result, insert into queue
    List<CaptureResult>::iterator queuedResult =
            mResultQueue.insert(mResultQueue.end(), CaptureResult(*result));
    ALOGVV("%s: result requestId = %" PRId32 ", frameNumber = %" PRId64
           ", burstId = %" PRId32, __FUNCTION__,
           queuedResult->mResultExtras.requestId,
           queuedResult->mResultExtras.frameNumber,
           queuedResult->mResultExtras.burstId);

    mResultSignal.signal();
}

     This method first validates its argument, then inserts the incoming result at the tail of the mResultQueue result queue, and finally calls mResultSignal.signal() to tell the blocked thread that data has arrived. Whom does it notify? The FrameProcessorBase frame-processing thread, which was constructed and started as early as CameraDeviceClient's initializeImpl during the client's initialization. Let's review that. The source of CameraDeviceClient::initializeImpl:

template <typename TProviderPtr>
status_t CameraDeviceClient::initializeImpl(TProviderPtr providerPtr) {
    ATRACE_CALL();
    status_t res;

    res = Camera2ClientBase::initialize(providerPtr);
    if (res != OK) {
        return res;
    }

    String8 threadName;
    mFrameProcessor = new FrameProcessorBase(mDevice);
    threadName = String8::format("CDU-%s-FrameProc", mCameraIdStr.string());
    mFrameProcessor->run(threadName.string());

    mFrameProcessor->registerListener(FRAME_PROCESSOR_LISTENER_MIN_ID,
                                      FRAME_PROCESSOR_LISTENER_MAX_ID,
                                      /*listener*/this,
                                      /*sendPartials*/true);

    return OK;
}

     As you can see, a thread name is passed when mFrameProcessor->run is called, and with ps -T you can list all threads of the current process and find it. Finally mFrameProcessor->registerListener registers the callback interface. Consider its four parameters: FRAME_PROCESSOR_LISTENER_MIN_ID and FRAME_PROCESSOR_LISTENER_MAX_ID are defined in the CameraDeviceClient.h header as 0 and 0x7fffffffL respectively; they bound the range of request IDs this listener wants results for, and that range is plainly wide enough to cover anything we need. The third parameter is the callback object, passed as this, so once the FrameProcessorBase thread has processed a frame result it calls back into the CameraDeviceClient class, specifically its onResultAvailable method. The fourth parameter indicates whether partial results should be delivered; it is hard-coded to true, since the framework has to support all kinds of needs, partial-result callbacks included.
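
     For reference, each registered entry is stored as a small struct in FrameProcessorBase.h; condensed, it looks roughly like this (exact layout aside, these are the fields processListeners consults below):

struct RangeListener {
    int32_t minId;                  // lowest requestId this listener wants
    int32_t maxId;                  // upper bound (exclusive) of the range
    wp<FilteredListener> listener;  // weak ref; pruned when promote() fails
    bool sendPartials;              // deliver partial results or not
};
List<RangeListener> mRangeListeners;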

     Good, the initialization of FrameProcessorBase is covered; let's move on to its main loop, threadLoop, whose source is:

bool FrameProcessorBase::threadLoop() {
    status_t res;

    sp<CameraDeviceBase> device;
    {
        device = mDevice.promote();
        if (device == 0) return false;
    }

    res = device->waitForNextFrame(kWaitDuration);
    if (res == OK) {
        processNewFrames(device);
    } else if (res != TIMED_OUT) {
        ALOGE("FrameProcessorBase: Error waiting for new "
                "frames: %s (%d)", strerror(-res), res);
    }

    return true;
}

     This calls device->waitForNextFrame(kWaitDuration) to check whether the result queue has data. kWaitDuration is the wait interval, 10 milliseconds, defined in the FrameProcessorBase.h header:

static const nsecs_t kWaitDuration = 10000000; // 10 ms

     device here is the Camera3Device object; let's look at its implementation of waitForNextFrame:

status_t Camera3Device::waitForNextFrame(nsecs_t timeout) {
    status_t res;
    Mutex::Autolock l(mOutputLock);

    while (mResultQueue.empty()) {
        res = mResultSignal.waitRelative(mOutputLock, timeout);
        if (res == TIMED_OUT) {
            return res;
        } else if (res != OK) {
            ALOGW("%s: Camera %s: No frame in %" PRId64 " ns: %s (%d)",
                    __FUNCTION__, mId.string(), timeout, strerror(-res), res);
            return res;
        }
    }
    return OK;
}

     The logic is clear: if the result queue mResultQueue is not empty, it returns immediately, because there is data to process; if it is empty, it waits up to kWaitDuration (10 milliseconds). Whatever the return value here, it does not affect the FrameProcessorBase loop: even if some frame goes wrong, the frame-processing thread cannot exit because of it, and must keep looping normally. Once data arrives, processNewFrames is called to fetch and process it; its source:

void FrameProcessorBase::processNewFrames(const sp<CameraDeviceBase> &device) {
    status_t res;
    ATRACE_CALL();
    CaptureResult result;

    ALOGV("%s: Camera %s: Process new frames", __FUNCTION__, device->getId().string());

    while ( (res = device->getNextResult(&result)) == OK) {

        // TODO: instead of getting frame number from metadata, we should read
        // this from result.mResultExtras when CameraDeviceBase interface is fixed.
        camera_metadata_entry_t entry;

        entry = result.mMetadata.find(ANDROID_REQUEST_FRAME_COUNT);
        if (entry.count == 0) {
            ALOGE("%s: Camera %s: Error reading frame number",
                    __FUNCTION__, device->getId().string());
            break;
        }
        ATRACE_INT("cam2_frame", entry.data.i32[0]);

        if (!processSingleFrame(result, device)) {
            break;
        }

        if (!result.mMetadata.isEmpty()) {
            Mutex::Autolock al(mLastFrameMutex);
            mLastFrame.acquire(result.mMetadata);
        }
    }
    if (res != NOT_ENOUGH_DATA) {
        ALOGE("%s: Camera %s: Error getting next frame: %s (%d)",
                __FUNCTION__, device->getId().string(), strerror(-res), res);
        return;
    }

    return;
}
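
     Before going on, it helps to know what device->getNextResult does. Below is a condensed paraphrase of Camera3Device::getNextResult(), not line-for-line: it pops the oldest entry off mResultQueue and reports NOT_ENOUGH_DATA once the queue has drained, which is exactly the normal exit condition of the while loop above:

status_t Camera3Device::getNextResult(CaptureResult *frame) {
    Mutex::Autolock l(mOutputLock);

    if (mResultQueue.empty()) {
        return NOT_ENOUGH_DATA;  // normal loop exit, not an error
    }
    if (frame == NULL) {
        return BAD_VALUE;
    }

    CaptureResult &result = *(mResultQueue.begin());
    frame->mResultExtras = result.mResultExtras;
    frame->mMetadata.acquire(result.mMetadata);  // take ownership, no deep copy
    mResultQueue.erase(mResultQueue.begin());

    return OK;
}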

     The while condition simply keeps fetching as long as getNextResult succeeds; as the sketch above shows, it returns NOT_ENOUGH_DATA once the queue drains, which ends the loop normally. Each fetched result is placed into the local result reference and handed to processSingleFrame, whose source is:

bool FrameProcessorBase::processSingleFrame(CaptureResult &result,
                                            const sp<CameraDeviceBase> &device) {
    ALOGV("%s: Camera %s: Process single frame (is empty? %d)",
            __FUNCTION__, device->getId().string(), result.mMetadata.isEmpty());
    return processListeners(result, device) == OK;
}

     This method is trivial: it forwards directly to processListeners, whose source is:

status_t FrameProcessorBase::processListeners(const CaptureResult &result,
        const sp<CameraDeviceBase> &device) {
    ATRACE_CALL();

    camera_metadata_ro_entry_t entry;

    // Check if this result is partial.
    bool isPartialResult =
            result.mResultExtras.partialResultCount < mNumPartialResults;

    // TODO: instead of getting requestID from CameraMetadata, we should get it
    // from CaptureResultExtras. This will require changing Camera2Device.
    // Currently Camera2Device uses MetadataQueue to store results, which does not
    // include CaptureResultExtras.
    entry = result.mMetadata.find(ANDROID_REQUEST_ID);
    if (entry.count == 0) {
        ALOGE("%s: Camera %s: Error reading frame id", __FUNCTION__, device->getId().string());
        return BAD_VALUE;
    }
    int32_t requestId = entry.data.i32[0];

    List<sp<FilteredListener> > listeners;
    {
        Mutex::Autolock l(mInputMutex);

        List<RangeListener>::iterator item = mRangeListeners.begin();
        // Don't deliver partial results to listeners that don't want them
        while (item != mRangeListeners.end()) {
            if (requestId >= item->minId && requestId < item->maxId &&
                    (!isPartialResult || item->sendPartials)) {
                sp<FilteredListener> listener = item->listener.promote();
                if (listener == 0) {
                    item = mRangeListeners.erase(item);
                    continue;
                } else {
                    listeners.push_back(listener);
                }
            }
            item++;
        }
    }
    ALOGV("%s: Camera %s: Got %zu range listeners out of %zu", __FUNCTION__,
          device->getId().string(), listeners.size(), mRangeListeners.size());

    List<sp<FilteredListener> >::iterator item = listeners.begin();
    for (; item != listeners.end(); item++) {
        (*item)->onResultAvailable(result);
    }
    return OK;
}

     This method takes out every registered listener whose requestId range matches (skipping partial results for listeners that did not ask for them) and calls its onResultAvailable method. As discussed above, the listener is the CameraDeviceClient class, so let's continue with CameraDeviceClient::onResultAvailable:

void CameraDeviceClient::onResultAvailable(const CaptureResult& result) {
    ATRACE_CALL();
    ALOGV("%s", __FUNCTION__);

    // Thread-safe. No lock necessary.
    sp<hardware::camera2::ICameraDeviceCallbacks> remoteCb = mRemoteCallback;
    if (remoteCb != NULL) {
        remoteCb->onResultReceived(result.mMetadata, result.mResultExtras);
    }
}

     remoteCb here takes us back into the Camera Application process: it is the CameraDeviceCallbacks object, an inner class of CameraDeviceImpl. The source of CameraDeviceCallbacks:

public class CameraDeviceCallbacks extends ICameraDeviceCallbacks.Stub {

        @Override
        public IBinder asBinder() {
            return this;
        }

        @Override
        public void onDeviceError(final int errorCode, CaptureResultExtras resultExtras) {
            if (DEBUG) {
                Log.d(TAG, String.format(
                        "Device error received, code %d, frame number %d, request ID %d, subseq ID %d",
                        errorCode, resultExtras.getFrameNumber(), resultExtras.getRequestId(),
                        resultExtras.getSubsequenceId()));
            }

            synchronized (mInterfaceLock) {
                if (mRemoteDevice == null) {
                    return; // Camera already closed
                }

                switch (errorCode) {
                    case ERROR_CAMERA_DISCONNECTED:
                        CameraDeviceImpl.this.mDeviceHandler.post(mCallOnDisconnected);
                        break;
                    default:
                        Log.e(TAG, "Unknown error from camera device: " + errorCode);
                        // no break
                    case ERROR_CAMERA_DEVICE:
                    case ERROR_CAMERA_SERVICE:
                        mInError = true;
                        final int publicErrorCode = (errorCode == ERROR_CAMERA_DEVICE) ?
                                StateCallback.ERROR_CAMERA_DEVICE :
                                StateCallback.ERROR_CAMERA_SERVICE;
                        Runnable r = new Runnable() {
                            @Override
                            public void run() {
                                if (!CameraDeviceImpl.this.isClosed()) {
                                    mDeviceCallback.onError(CameraDeviceImpl.this, publicErrorCode);
                                }
                            }
                        };
                        CameraDeviceImpl.this.mDeviceHandler.post(r);
                        break;
                    case ERROR_CAMERA_REQUEST:
                    case ERROR_CAMERA_RESULT:
                    case ERROR_CAMERA_BUFFER:
                        onCaptureErrorLocked(errorCode, resultExtras);
                        break;
                }
            }
        }

        @Override
        public void onRepeatingRequestError(long lastFrameNumber) {
            if (DEBUG) {
                Log.d(TAG, "Repeating request error received. Last frame number is " +
                        lastFrameNumber);
            }

            synchronized (mInterfaceLock) {
                // Camera is already closed or no repeating request is present.
                if (mRemoteDevice == null || mRepeatingRequestId == REQUEST_ID_NONE) {
                    return; // Camera already closed
                }

                checkEarlyTriggerSequenceComplete(mRepeatingRequestId, lastFrameNumber);
                mRepeatingRequestId = REQUEST_ID_NONE;
            }
        }

        @Override
        public void onDeviceIdle() {
            if (DEBUG) {
                Log.d(TAG, "Camera now idle");
            }
            synchronized (mInterfaceLock) {
                if (mRemoteDevice == null) return; // Camera already closed

                if (!CameraDeviceImpl.this.mIdle) {
                    CameraDeviceImpl.this.mDeviceHandler.post(mCallOnIdle);
                }
                CameraDeviceImpl.this.mIdle = true;
            }
        }

        @Override
        public void onCaptureStarted(final CaptureResultExtras resultExtras, final long timestamp) {
            int requestId = resultExtras.getRequestId();
            final long frameNumber = resultExtras.getFrameNumber();

            if (DEBUG) {
                Log.d(TAG, "Capture started for id " + requestId + " frame number " + frameNumber);
            }
            final CaptureCallbackHolder holder;

            synchronized (mInterfaceLock) {
                if (mRemoteDevice == null) return; // Camera already closed

                // Get the callback for this frame ID, if there is one
                holder = CameraDeviceImpl.this.mCaptureCallbackMap.get(requestId);

                if (holder == null) {
                    return;
                }

                if (isClosed()) return;

                // Dispatch capture start notice
                holder.getHandler().post(
                        new Runnable() {
                            @Override
                            public void run() {
                                if (!CameraDeviceImpl.this.isClosed()) {
                                    final int subsequenceId = resultExtras.getSubsequenceId();
                                    final CaptureRequest request = holder.getRequest(subsequenceId);

                                    if (holder.hasBatchedOutputs()) {
                                        // Send derived onCaptureStarted for requests within the batch
                                        final Range<Integer> fpsRange =
                                                request.get(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE);
                                        for (int i = 0; i < holder.getRequestCount(); i++) {
                                            holder.getCallback().onCaptureStarted(
                                                    CameraDeviceImpl.this,
                                                    holder.getRequest(i),
                                                    timestamp - (subsequenceId - i) *
                                                            NANO_PER_SECOND / fpsRange.getUpper(),
                                                    frameNumber - (subsequenceId - i));
                                        }
                                    } else {
                                        holder.getCallback().onCaptureStarted(
                                                CameraDeviceImpl.this,
                                                holder.getRequest(resultExtras.getSubsequenceId()),
                                                timestamp, frameNumber);
                                    }
                                }
                            }
                        });

            }
        }

        @Override
        public void onResultReceived(CameraMetadataNative result,
                                     CaptureResultExtras resultExtras) throws RemoteException {

            int requestId = resultExtras.getRequestId();
            long frameNumber = resultExtras.getFrameNumber();

            if (DEBUG) {
                Log.v(TAG, "Received result frame " + frameNumber + " for id "
                        + requestId);
            }

            synchronized (mInterfaceLock) {
                if (mRemoteDevice == null) return; // Camera already closed

                // TODO: Handle CameraCharacteristics access from CaptureResult correctly.
                result.set(CameraCharacteristics.LENS_INFO_SHADING_MAP_SIZE,
                        getCharacteristics().get(CameraCharacteristics.LENS_INFO_SHADING_MAP_SIZE));

                final CaptureCallbackHolder holder =
                        CameraDeviceImpl.this.mCaptureCallbackMap.get(requestId);
                final CaptureRequest request = holder.getRequest(resultExtras.getSubsequenceId());

                boolean isPartialResult =
                        (resultExtras.getPartialResultCount() < mTotalPartialCount);
                boolean isReprocess = request.isReprocess();

                // Check if we have a callback for this
                if (holder == null) {
                    if (DEBUG) {
                        Log.d(TAG,
                                "holder is null, early return at frame "
                                        + frameNumber);
                    }

                    mFrameNumberTracker.updateTracker(frameNumber, /*result*/null, isPartialResult,
                            isReprocess);

                    return;
                }

                if (isClosed()) {
                    if (DEBUG) {
                        Log.d(TAG,
                                "camera is closed, early return at frame "
                                        + frameNumber);
                    }

                    mFrameNumberTracker.updateTracker(frameNumber, /*result*/null, isPartialResult,
                            isReprocess);
                    return;
                }


                Runnable resultDispatch = null;

                CaptureResult finalResult;
                // Make a copy of the native metadata before it gets moved to a CaptureResult
                // object.
                final CameraMetadataNative resultCopy;
                if (holder.hasBatchedOutputs()) {
                    resultCopy = new CameraMetadataNative(result);
                } else {
                    resultCopy = null;
                }

                // Either send a partial result or the final capture completed result
                if (isPartialResult) {
                    final CaptureResult resultAsCapture =
                            new CaptureResult(result, request, resultExtras);
                    // Partial result
                    resultDispatch = new Runnable() {
                        @Override
                        public void run() {
                            if (!CameraDeviceImpl.this.isClosed()) {
                                if (holder.hasBatchedOutputs()) {
                                    // Send derived onCaptureProgressed for requests within
                                    // the batch.
                                    for (int i = 0; i < holder.getRequestCount(); i++) {
                                        CameraMetadataNative resultLocal =
                                                new CameraMetadataNative(resultCopy);
                                        CaptureResult resultInBatch = new CaptureResult(
                                                resultLocal, holder.getRequest(i), resultExtras);

                                        holder.getCallback().onCaptureProgressed(
                                                CameraDeviceImpl.this,
                                                holder.getRequest(i),
                                                resultInBatch);
                                    }
                                } else {
                                    holder.getCallback().onCaptureProgressed(
                                            CameraDeviceImpl.this,
                                            request,
                                            resultAsCapture);
                                }
                            }
                        }
                    };
                    finalResult = resultAsCapture;
                } else {
                    List<CaptureResult> partialResults =
                            mFrameNumberTracker.popPartialResults(frameNumber);

                    final long sensorTimestamp =
                            result.get(CaptureResult.SENSOR_TIMESTAMP);
                    final Range<Integer> fpsRange =
                            request.get(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE);
                    final int subsequenceId = resultExtras.getSubsequenceId();
                    final TotalCaptureResult resultAsCapture = new TotalCaptureResult(result,
                            request, resultExtras, partialResults, holder.getSessionId());
                    // Final capture result
                    resultDispatch = new Runnable() {
                        @Override
                        public void run() {
                            if (!CameraDeviceImpl.this.isClosed()) {
                                if (holder.hasBatchedOutputs()) {
                                    // Send derived onCaptureCompleted for requests within
                                    // the batch.
                                    for (int i = 0; i < holder.getRequestCount(); i++) {
                                        resultCopy.set(CaptureResult.SENSOR_TIMESTAMP,
                                                sensorTimestamp - (subsequenceId - i) *
                                                        NANO_PER_SECOND / fpsRange.getUpper());
                                        CameraMetadataNative resultLocal =
                                                new CameraMetadataNative(resultCopy);
                                        TotalCaptureResult resultInBatch = new TotalCaptureResult(
                                                resultLocal, holder.getRequest(i), resultExtras,
                                                partialResults, holder.getSessionId());

                                        holder.getCallback().onCaptureCompleted(
                                                CameraDeviceImpl.this,
                                                holder.getRequest(i),
                                                resultInBatch);
                                    }
                                } else {
                                    holder.getCallback().onCaptureCompleted(
                                            CameraDeviceImpl.this,
                                            request,
                                            resultAsCapture);
                                }
                            }
                        }
                    };
                    finalResult = resultAsCapture;
                }

                holder.getHandler().post(resultDispatch);

                // Collect the partials for a total result; or mark the frame as totally completed
                mFrameNumberTracker.updateTracker(frameNumber, finalResult, isPartialResult,
                        isReprocess);

                // Fire onCaptureSequenceCompleted
                if (!isPartialResult) {
                    checkAndFireSequenceComplete();
                }
            }
        }

        @Override
        public void onPrepared(int streamId) {
            final OutputConfiguration output;
            final StateCallbackKK sessionCallback;

            if (DEBUG) {
                Log.v(TAG, "Stream " + streamId + " is prepared");
            }

            synchronized (mInterfaceLock) {
                output = mConfiguredOutputs.get(streamId);
                sessionCallback = mSessionStateCallback;
            }

            if (sessionCallback == null) return;

            if (output == null) {
                Log.w(TAG, "onPrepared invoked for unknown output Surface");
                return;
            }
            final List<Surface> surfaces = output.getSurfaces();
            for (Surface surface : surfaces) {
                sessionCallback.onSurfacePrepared(surface);
            }
        }

        @Override
        public void onRequestQueueEmpty() {
            final StateCallbackKK sessionCallback;

            if (DEBUG) {
                Log.v(TAG, "Request queue becomes empty");
            }

            synchronized (mInterfaceLock) {
                sessionCallback = mSessionStateCallback;
            }

            if (sessionCallback == null) return;

            sessionCallback.onRequestQueueEmpty();
        }

        /**
         * Called by onDeviceError for handling single-capture failures.
         */
        private void onCaptureErrorLocked(int errorCode, CaptureResultExtras resultExtras) {

            final int requestId = resultExtras.getRequestId();
            final int subsequenceId = resultExtras.getSubsequenceId();
            final long frameNumber = resultExtras.getFrameNumber();
            final CaptureCallbackHolder holder =
                    CameraDeviceImpl.this.mCaptureCallbackMap.get(requestId);

            final CaptureRequest request = holder.getRequest(subsequenceId);

            Runnable failureDispatch = null;
            if (errorCode == ERROR_CAMERA_BUFFER) {
                // Because 1 stream id could map to multiple surfaces, we need to specify both
                // streamId and surfaceId.
                List<Surface> surfaces =
                        mConfiguredOutputs.get(resultExtras.getErrorStreamId()).getSurfaces();
                for (Surface surface : surfaces) {
                    if (!request.containsTarget(surface)) {
                        continue;
                    }
                    if (DEBUG) {
                        Log.v(TAG, String.format("Lost output buffer reported for frame %d, target %s",
                                frameNumber, surface));
                    }
                    failureDispatch = new Runnable() {
                        @Override
                        public void run() {
                            if (!CameraDeviceImpl.this.isClosed()) {
                                holder.getCallback().onCaptureBufferLost(
                                        CameraDeviceImpl.this,
                                        request,
                                        surface,
                                        frameNumber);
                            }
                        }
                    };
                    // Dispatch the failure callback
                    holder.getHandler().post(failureDispatch);
                }
            } else {
                boolean mayHaveBuffers = (errorCode == ERROR_CAMERA_RESULT);

                // This is only approximate - exact handling needs the camera service and HAL to
                // disambiguate between request failures to due abort and due to real errors.  For
                // now, assume that if the session believes we're mid-abort, then the error is due
                // to abort.
                int reason = (mCurrentSession != null && mCurrentSession.isAborting()) ?
                        CaptureFailure.REASON_FLUSHED :
                        CaptureFailure.REASON_ERROR;

                final CaptureFailure failure = new CaptureFailure(
                        request,
                        reason,
                    /*dropped*/ mayHaveBuffers,
                        requestId,
                        frameNumber);

                failureDispatch = new Runnable() {
                    @Override
                    public void run() {
                        if (!CameraDeviceImpl.this.isClosed()) {
                            holder.getCallback().onCaptureFailed(
                                    CameraDeviceImpl.this,
                                    request,
                                    failure);
                        }
                    }
                };

                // Fire onCaptureSequenceCompleted if appropriate
                if (DEBUG) {
                    Log.v(TAG, String.format("got error frame %d", frameNumber));
                }
                mFrameNumberTracker.updateTracker(frameNumber, /*error*/true, request.isReprocess());
                checkAndFireSequenceComplete();

                // Dispatch the failure callback
                holder.getHandler().post(failureDispatch);
            }

        }

    }

     Look at the two parameters that come back: they are of types CameraMetadataNative and CaptureResultExtras, and there is no buffer data in them at all. So how do we get the preview or capture data? This is the API2 architectural change: unlike API1, the callback interface no longer carries buffer data directly. To get the preview or capture buffers we use ImageReader. There are plenty of tutorials online about fetching preview and capture buffers with ImageReader, so it is not covered here and you can look it up yourself; the key points are to pass the constructed ImageReader's Surface down through createCaptureSession, and to override the onImageAvailable method of the OnImageAvailableListener class. When returnBuffer, which we covered above, queues the filled buffer back into that Surface, the display system calls onImageAvailable, and we can acquire the buffer we want.

     That wraps it up. This article walked through the source of the whole journey the returned data takes inside CameraServer and finally up to the Camera Application. I hope that after reading it carefully you come away with a deeper understanding of the logic flow inside the CameraServer process.
