Virtual Camera, Part 10: Camera3 DeviceSession and the CaptureRequest Flow

Preface

The previous nine articles in this series laid out the overall logic of the Android Camera framework. From the API2 perspective, however, two pieces were still missing: the Session and the CaptureRequest flow. Without them, bugs that showed up in those two areas during the virtual-camera port could not be fixed.
This article therefore walks through both in detail; along the way, we will also see why the Camera API2 interface is more efficient than API1.

The Session model is how the API2 framework operates, so we enter the Session code directly from the makeClient() method. If you are not sure where makeClient() comes from, see Part 8 of this series, 《虚拟摄像头之八:从 Camera api2 角度看摄像头框架》, which covers it in detail.

A quick recap:
The user app calls CameraManager::openCamera(), which goes through the camera proxy and a binder call into CameraService::connectDevice(). That method calls connectHelper() ==> makeClient() — this is how the makeClient() mentioned above gets invoked.

Creating the CameraDeviceSession

@frameworks/av/services/camera/libcameraservice/CameraService.cpp
In makeClient(), a camera client is created according to the device's HAL version. For API2 it creates a CameraDeviceClient object, which manages still capture and recording through sessions.

switch(deviceVersion) {
          case CAMERA_DEVICE_API_VERSION_1_0:
            if (effectiveApiLevel == API_1) {  // Camera1 API route
                sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
                *client = new CameraClient(cameraService, tmp, packageName, cameraIdToInt(cameraId),
                        facing, clientPid, clientUid, getpid(), legacyMode);
            } else { // Camera2 API route
                ALOGW("Camera using old HAL version: %d", deviceVersion);
                return STATUS_ERROR_FMT(ERROR_DEPRECATED_HAL,
                        "Camera device \"%s\" HAL version %d does not support camera2 API",
                        cameraId.string(), deviceVersion);
            }
            break;
          case CAMERA_DEVICE_API_VERSION_3_0:
          case CAMERA_DEVICE_API_VERSION_3_1:
          case CAMERA_DEVICE_API_VERSION_3_2:
          case CAMERA_DEVICE_API_VERSION_3_3:
          case CAMERA_DEVICE_API_VERSION_3_4:
            if (effectiveApiLevel == API_1) { // Camera1 API route
                sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
                *client = new Camera2Client(cameraService, tmp, packageName, cameraIdToInt(cameraId),
                        facing, clientPid, clientUid, servicePid, legacyMode);
            } else { // Camera2 API route
                sp<hardware::camera2::ICameraDeviceCallbacks> tmp =
                        static_cast<hardware::camera2::ICameraDeviceCallbacks*>(cameraCb.get());
                *client = new CameraDeviceClient(cameraService, tmp, packageName, cameraId,
                        facing, clientPid, clientUid, servicePid);
            }
            break;
          default:
            // Should not be reachable
            ALOGE("Unknown camera device HAL version: %d", deviceVersion);
            return STATUS_ERROR_FMT(ERROR_INVALID_OPERATION,
                    "Camera device \"%s\" has unknown HAL version %d",
                    cameraId.string(), deviceVersion);
        }

The call chain for creating a CameraDeviceClient is:

CameraDeviceClient::CameraDeviceClient
  ==> Camera2ClientBase
    ==> new Camera3Device(cameraId)

After the Camera3Device is created, its initialize() function is called immediately. The relevant code:
Source: @frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp

//> Initialization entry point
status_t CameraDeviceClient::initialize(sp<CameraProviderManager> manager) {
    return initializeImpl(manager);
}

template<typename TProviderPtr>
status_t CameraDeviceClient::initializeImpl(TProviderPtr providerPtr) {
    ATRACE_CALL();
    status_t res;

    res = Camera2ClientBase::initialize(providerPtr);
    if (res != OK) {
        return res;
    }

    String8 threadName;
    mFrameProcessor = new FrameProcessorBase(mDevice);
    threadName = String8::format("CDU-%s-FrameProc", mCameraIdStr.string());
    mFrameProcessor->run(threadName.string());

    mFrameProcessor->registerListener(FRAME_PROCESSOR_LISTENER_MIN_ID,
                                      FRAME_PROCESSOR_LISTENER_MAX_ID,
                                      /*listener*/this,
                                      /*sendPartials*/true);

    return OK;
}

new FrameProcessorBase(mDevice) builds a thread object, and run() starts it. This thread is responsible for capturing result frames and invoking the registered listeners. Its source:

bool FrameProcessorBase::threadLoop() {
    status_t res;

    sp<CameraDeviceBase> device;
    {
        device = mDevice.promote();
        if (device == 0) return false;
    }

    res = device->waitForNextFrame(kWaitDuration);
    if (res == OK) {
        processNewFrames(device);
    } else if (res != TIMED_OUT) {
        ALOGE("FrameProcessorBase: Error waiting for new "
                "frames: %s (%d)", strerror(-res), res);
    }

    return true;
}

void FrameProcessorBase::processNewFrames(const sp<CameraDeviceBase> &device) {
    status_t res;
    ATRACE_CALL();
    CaptureResult result;

    ALOGV("%s: Camera %s: Process new frames", __FUNCTION__, device->getId().string());

    while ( (res = device->getNextResult(&result)) == OK) {

        // TODO: instead of getting frame number from metadata, we should read
        // this from result.mResultExtras when CameraDeviceBase interface is fixed.
        camera_metadata_entry_t entry;

        entry = result.mMetadata.find(ANDROID_REQUEST_FRAME_COUNT);
        if (entry.count == 0) {
            ALOGE("%s: Camera %s: Error reading frame number",
                    __FUNCTION__, device->getId().string());
            break;
        }
        ATRACE_INT("cam2_frame", entry.data.i32[0]);

        if (!processSingleFrame(result, device)) {
            break;
        }

        if (!result.mMetadata.isEmpty()) {
            Mutex::Autolock al(mLastFrameMutex);
            mLastFrame.acquire(result.mMetadata);
        }
    }
    if (res != NOT_ENOUGH_DATA) {
        ALOGE("%s: Camera %s: Error getting next frame: %s (%d)",
                __FUNCTION__, device->getId().string(), strerror(-res), res);
        return;
    }

    return;
}

bool FrameProcessorBase::processSingleFrame(CaptureResult &result,
                                            const sp<CameraDeviceBase> &device) {
    ALOGV("%s: Camera %s: Process single frame (is empty? %d)",
            __FUNCTION__, device->getId().string(), result.mMetadata.isEmpty());
    return processListeners(result, device) == OK;
}

status_t FrameProcessorBase::processListeners(const CaptureResult &result,
        const sp<CameraDeviceBase> &device) {
    ATRACE_CALL();

    camera_metadata_ro_entry_t entry;

    // Check if this result is partial.
    bool isPartialResult =
            result.mResultExtras.partialResultCount < mNumPartialResults;

    // TODO: instead of getting requestID from CameraMetadata, we should get it
    // from CaptureResultExtras. This will require changing Camera2Device.
    // Currently Camera2Device uses MetadataQueue to store results, which does not
    // include CaptureResultExtras.
    entry = result.mMetadata.find(ANDROID_REQUEST_ID);
    if (entry.count == 0) {
        ALOGE("%s: Camera %s: Error reading frame id", __FUNCTION__, device->getId().string());
        return BAD_VALUE;
    }
    int32_t requestId = entry.data.i32[0];

    List<sp<FilteredListener> > listeners;
    {
        Mutex::Autolock l(mInputMutex);

        List<RangeListener>::iterator item = mRangeListeners.begin();
        // Don't deliver partial results to listeners that don't want them
        while (item != mRangeListeners.end()) {
            if (requestId >= item->minId && requestId < item->maxId &&
                    (!isPartialResult || item->sendPartials)) {
                sp<FilteredListener> listener = item->listener.promote();
                if (listener == 0) {
                    item = mRangeListeners.erase(item);
                    continue;
                } else {
                    listeners.push_back(listener);
                }
            }
            item++;
        }
    }
    ALOGV("%s: Camera %s: Got %zu range listeners out of %zu", __FUNCTION__,
          device->getId().string(), listeners.size(), mRangeListeners.size());

    List<sp<FilteredListener> >::iterator item = listeners.begin();
    for (; item != listeners.end(); item++) {
        (*item)->onResultAvailable(result);
    }
    return OK;
}

Next, let's look at Camera3Device::initialize:
Source: @frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::initialize(sp<CameraProviderManager> manager) {
    ATRACE_CALL();
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);

    //> Step 1: build the ICameraDeviceSession session object, held by a strong pointer
    sp<ICameraDeviceSession> session;
    ATRACE_BEGIN("CameraHal::openSession");
    status_t res = manager->openSession(mId.string(), this,
            /*out*/ &session);

    //> Step 2: create the capture-request queue; the lambda here builds the Fast Message Queue object from the descriptor
    std::shared_ptr<RequestMetadataQueue> queue;
    auto requestQueueRet = session->getCaptureRequestMetadataQueue(
        [&queue](const auto& descriptor) {
            queue = std::make_shared<RequestMetadataQueue>(descriptor);
            if (!queue->isValid() || queue->availableToWrite() <= 0) {
                ALOGE("HAL returns empty request metadata fmq, not use it");
                queue = nullptr;
                // don't use the queue onwards.
            }
        });
   
    IF_ALOGV() {
        session->interfaceChain([](
            ::android::hardware::hidl_vec<::android::hardware::hidl_string> interfaceChain) {
                ALOGV("Session interface chain:");
                for (auto iface : interfaceChain) {
                    ALOGV("  %s", iface.c_str());
                }
            });
    }

    //> Build the HalInterface object
    mInterface = new HalInterface(session, queue);
    std::string providerType;
    mVendorTagId = manager->getProviderTagIdLocked(mId.string());
    //> Step 3: common initialization
    return initializeCommonLocked();
}

Step 1: creating the ICameraDeviceSession

manager->openSession() hands the opened session object to the sp<ICameraDeviceSession> pointer `session`. What does opening a session actually do?

status_t CameraProviderManager::openSession(const std::string &id,
        const sp<hardware::camera::device::V3_2::ICameraDeviceCallback>& callback,
        /*out*/
        sp<hardware::camera::device::V3_2::ICameraDeviceSession> *session) {

    std::lock_guard<std::mutex> lock(mInterfaceMutex);
    //> Look up the device info in the ProviderInfo list mProviders
    auto deviceInfo = findDeviceInfoLocked(id,
            /*minVersion*/ {3,0}, /*maxVersion*/ {4,0});
    if (deviceInfo == nullptr) return NAME_NOT_FOUND;

    auto *deviceInfo3 = static_cast<ProviderInfo::DeviceInfo3*>(deviceInfo);

    Status status;
    hardware::Return<void> ret;
    //> Call the ICameraDevice interface's open(); on the provider side this runs CameraProxy->open().
    ret = deviceInfo3->mInterface->open(callback, [&status, &session]
            (Status s, const sp<device::V3_2::ICameraDeviceSession>& cameraSession) {
                status = s;
                if (status == Status::OK) {
                    *session = cameraSession;
                }
            });

    return mapToStatusT(status);
}

This function obtains a device::V3_2::ICameraDeviceSession (cameraSession) and assigns it to *session.
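The pattern worth noting here is HIDL's synchronous-callback return: openSession() gets its results back by passing a lambda that captures `status` and `session` by reference, and the callee invokes the lambda before returning. A minimal stand-in for that pattern (Session and openDevice are illustrative names, not the real API):

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <string>

// Results come back through a synchronous callback instead of a return value.
enum class Status { OK, INTERNAL_ERROR };

struct Session {
    std::string id;
};

using open_cb = std::function<void(Status, const std::shared_ptr<Session>&)>;

void openDevice(const std::string &id, open_cb _hidl_cb) {
    if (id.empty()) {
        _hidl_cb(Status::INTERNAL_ERROR, nullptr);
        return;
    }
    // Invoke the callback before returning, as a HIDL sync call does.
    _hidl_cb(Status::OK, std::make_shared<Session>(Session{id}));
}
```

The caller's lambda copies the results out into local variables, exactly as openSession() copies `s` and `cameraSession` into `status` and `*session`.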

Step 2: creating the capture-request queue

Next, session->getCaptureRequestMetadataQueue() retrieves the queue, and then the HalInterface object mInterface is constructed. It is this mInterface instance that ties the session and the RequestMetadataQueue together:

Camera3Device::HalInterface::HalInterface(
            sp<ICameraDeviceSession> &session,
            std::shared_ptr<RequestMetadataQueue> queue) :
        mHidlSession(session),
        mRequestMetadataQueue(queue) {}

As the constructor shows, `new HalInterface` stores both the ICameraDeviceSession and the RequestMetadataQueue. ICameraDeviceSession is defined in HIDL:

package android.hardware.camera.device@3.2;

import android.hardware.camera.common@1.0::types;

/**
 * Camera device active session interface.
 *
 * Obtained via ICameraDevice::open(), this interface contains the methods to
 * configure and request captures from an active camera device.
 *
 */
interface ICameraDeviceSession {

    constructDefaultRequestSettings(RequestTemplate type) generates
            (Status status, CameraMetadata requestTemplate);

    configureStreams(StreamConfiguration requestedConfiguration)
            generates (Status status,
                    HalStreamConfiguration halConfiguration);

    processCaptureRequest(vec<CaptureRequest> requests,
            vec<BufferCache> cachesToRemove)
            generates (Status status, uint32_t numRequestProcessed);

    getCaptureRequestMetadataQueue() generates (fmq_sync<uint8_t> queue);

    getCaptureResultMetadataQueue() generates (fmq_sync<uint8_t> queue);
 
    flush() generates (Status status);

    close();
};

The Android build system generates the binding code from this interface:
@out/soong/.intermediates/hardware/interfaces/camera/device/3.2/[email protected]_genc++/gen/android/hardware/camera/device/3.2/CameraDeviceSessionAll.cpp

const char* ICameraDeviceSession::descriptor("[email protected]::ICameraDeviceSession");

__attribute__((constructor))static void static_constructor() {
    ::android::hardware::details::gBnConstructorMap.set(ICameraDeviceSession::descriptor,
            [](void *iIntf) -> ::android::sp<::android::hardware::IBinder> {
                return new BnHwCameraDeviceSession(static_cast<ICameraDeviceSession *>(iIntf));
            });
    ::android::hardware::details::gBsConstructorMap.set(ICameraDeviceSession::descriptor,
            [](void *iIntf) -> ::android::sp<::android::hidl::base::V1_0::IBase> {
                return new BsCameraDeviceSession(static_cast<ICameraDeviceSession *>(iIntf));
            });
};

__attribute__((destructor))static void static_destructor() {
    ::android::hardware::details::gBnConstructorMap.erase(ICameraDeviceSession::descriptor);
    ::android::hardware::details::gBsConstructorMap.erase(ICameraDeviceSession::descriptor);
};
//> The configureStreams interface function
::android::hardware::Return<void> BpHwCameraDeviceSession::_hidl_configureStreams(::android::hardware::IInterface *_hidl_this, 
	::android::hardware::details::HidlInstrumentor *_hidl_this_instrumentor, const StreamConfiguration& requestedConfiguration, configureStreams_cb _hidl_cb) 
{
    #ifdef __ANDROID_DEBUGGABLE__
    bool mEnableInstrumentation = _hidl_this_instrumentor->isInstrumentationEnabled();
    const auto &mInstrumentationCallbacks = _hidl_this_instrumentor->getInstrumentationCallbacks();
    #else
    (void) _hidl_this_instrumentor;
    #endif // __ANDROID_DEBUGGABLE__
    if (_hidl_cb == nullptr) {
        return ::android::hardware::Status::fromExceptionCode(
                ::android::hardware::Status::EX_ILLEGAL_ARGUMENT,
                "Null synchronous callback passed.");
    }

    atrace_begin(ATRACE_TAG_HAL, "HIDL::ICameraDeviceSession::configureStreams::client");
    #ifdef __ANDROID_DEBUGGABLE__
    if (UNLIKELY(mEnableInstrumentation)) {
        std::vector<void *> _hidl_args;
        _hidl_args.push_back((void *)&requestedConfiguration);
        for (const auto &callback: mInstrumentationCallbacks) {
            callback(InstrumentationEvent::CLIENT_API_ENTRY, "android.hardware.camera.device", "3.2", "ICameraDeviceSession", "configureStreams", &_hidl_args);
        }
    }
    #endif // __ANDROID_DEBUGGABLE__

    ::android::hardware::Parcel _hidl_data;
    ::android::hardware::Parcel _hidl_reply;
    ::android::status_t _hidl_err;
    ::android::hardware::Status _hidl_status;

    ::android::hardware::camera::common::V1_0::Status _hidl_out_status;
    const HalStreamConfiguration* _hidl_out_halConfiguration;

    _hidl_err = _hidl_data.writeInterfaceToken(BpHwCameraDeviceSession::descriptor);
    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    size_t _hidl_requestedConfiguration_parent;

    _hidl_err = _hidl_data.writeBuffer(&requestedConfiguration, sizeof(requestedConfiguration), &_hidl_requestedConfiguration_parent);
    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    _hidl_err = writeEmbeddedToParcel(
            requestedConfiguration,
            &_hidl_data,
            _hidl_requestedConfiguration_parent,
            0 /* parentOffset */);

    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    _hidl_err = ::android::hardware::IInterface::asBinder(_hidl_this)->transact(2 /* configureStreams */, _hidl_data, &_hidl_reply);
    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    _hidl_err = ::android::hardware::readFromParcel(&_hidl_status, _hidl_reply);
    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    if (!_hidl_status.isOk()) { return _hidl_status; }

    _hidl_err = _hidl_reply.readUint32((uint32_t *)&_hidl_out_status);
    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    size_t _hidl__hidl_out_halConfiguration_parent;

    _hidl_err = _hidl_reply.readBuffer(sizeof(*_hidl_out_halConfiguration), &_hidl__hidl_out_halConfiguration_parent,  reinterpret_cast<const void **>(&_hidl_out_halConfiguration));
    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    _hidl_err = readEmbeddedFromParcel(
            const_cast<HalStreamConfiguration &>(*_hidl_out_halConfiguration),
            _hidl_reply,
            _hidl__hidl_out_halConfiguration_parent,
            0 /* parentOffset */);

    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    _hidl_cb(_hidl_out_status, *_hidl_out_halConfiguration);

    atrace_end(ATRACE_TAG_HAL);
    #ifdef __ANDROID_DEBUGGABLE__
    if (UNLIKELY(mEnableInstrumentation)) {
        std::vector<void *> _hidl_args;
        _hidl_args.push_back((void *)&_hidl_out_status);
        _hidl_args.push_back((void *)_hidl_out_halConfiguration);
        for (const auto &callback: mInstrumentationCallbacks) {
            callback(InstrumentationEvent::CLIENT_API_EXIT, "android.hardware.camera.device", "3.2", "ICameraDeviceSession", "configureStreams", &_hidl_args);
        }
    }
    #endif // __ANDROID_DEBUGGABLE__

    _hidl_status.setFromStatusT(_hidl_err);
    return ::android::hardware::Return<void>();

_hidl_error:
    _hidl_status.setFromStatusT(_hidl_err);
    return ::android::hardware::Return<void>(_hidl_status);
}
//> The getCaptureRequestMetadataQueue interface function
::android::hardware::Return<void> BpHwCameraDeviceSession::_hidl_getCaptureRequestMetadataQueue(::android::hardware::IInterface *_hidl_this, 
	::android::hardware::details::HidlInstrumentor *_hidl_this_instrumentor, getCaptureRequestMetadataQueue_cb _hidl_cb) 
{
    #ifdef __ANDROID_DEBUGGABLE__
    bool mEnableInstrumentation = _hidl_this_instrumentor->isInstrumentationEnabled();
    const auto &mInstrumentationCallbacks = _hidl_this_instrumentor->getInstrumentationCallbacks();
    #else
    (void) _hidl_this_instrumentor;
    #endif // __ANDROID_DEBUGGABLE__
    if (_hidl_cb == nullptr) {
        return ::android::hardware::Status::fromExceptionCode(
                ::android::hardware::Status::EX_ILLEGAL_ARGUMENT,
                "Null synchronous callback passed.");
    }

    atrace_begin(ATRACE_TAG_HAL, "HIDL::ICameraDeviceSession::getCaptureRequestMetadataQueue::client");
    #ifdef __ANDROID_DEBUGGABLE__
    if (UNLIKELY(mEnableInstrumentation)) {
        std::vector<void *> _hidl_args;
        for (const auto &callback: mInstrumentationCallbacks) {
            callback(InstrumentationEvent::CLIENT_API_ENTRY, "android.hardware.camera.device", "3.2", "ICameraDeviceSession", "getCaptureRequestMetadataQueue", &_hidl_args);
        }
    }
    #endif // __ANDROID_DEBUGGABLE__

    ::android::hardware::Parcel _hidl_data;
    ::android::hardware::Parcel _hidl_reply;
    ::android::status_t _hidl_err;
    ::android::hardware::Status _hidl_status;

    const ::android::hardware::MQDescriptorSync<uint8_t>* _hidl_out_queue;

    _hidl_err = _hidl_data.writeInterfaceToken(BpHwCameraDeviceSession::descriptor);
    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    _hidl_err = ::android::hardware::IInterface::asBinder(_hidl_this)->transact(4 /* getCaptureRequestMetadataQueue */, _hidl_data, &_hidl_reply);
    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    _hidl_err = ::android::hardware::readFromParcel(&_hidl_status, _hidl_reply);
    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    if (!_hidl_status.isOk()) { return _hidl_status; }

    size_t _hidl__hidl_out_queue_parent;

    _hidl_err = _hidl_reply.readBuffer(sizeof(*_hidl_out_queue), &_hidl__hidl_out_queue_parent,  reinterpret_cast<const void **>(&_hidl_out_queue));

    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    _hidl_err = ::android::hardware::readEmbeddedFromParcel(
            const_cast<::android::hardware::MQDescriptorSync<uint8_t> &>(*_hidl_out_queue),
            _hidl_reply,
            _hidl__hidl_out_queue_parent,
            0 /* parentOffset */);

    if (_hidl_err != ::android::OK) { goto _hidl_error; }

    _hidl_cb(*_hidl_out_queue);

    atrace_end(ATRACE_TAG_HAL);
    #ifdef __ANDROID_DEBUGGABLE__
    if (UNLIKELY(mEnableInstrumentation)) {
        std::vector<void *> _hidl_args;
        _hidl_args.push_back((void *)_hidl_out_queue);
        for (const auto &callback: mInstrumentationCallbacks) {
            callback(InstrumentationEvent::CLIENT_API_EXIT, "android.hardware.camera.device", "3.2", "ICameraDeviceSession", "getCaptureRequestMetadataQueue", &_hidl_args);
        }
    }
    #endif // __ANDROID_DEBUGGABLE__

    _hidl_status.setFromStatusT(_hidl_err);
    return ::android::hardware::Return<void>();

_hidl_error:
    _hidl_status.setFromStatusT(_hidl_err);
    return ::android::hardware::Return<void>(_hidl_status);
}
//> Entry point for getCaptureRequestMetadataQueue
::android::hardware::Return<void> BpHwCameraDeviceSession::getCaptureRequestMetadataQueue(getCaptureRequestMetadataQueue_cb _hidl_cb){
    ::android::hardware::Return<void>  _hidl_out = ::android::hardware::camera::device::V3_2::BpHwCameraDeviceSession::_hidl_getCaptureRequestMetadataQueue(this, this, _hidl_cb);

    return _hidl_out;
}
//> Entry point for configureStreams
::android::hardware::Return<void> BpHwCameraDeviceSession::configureStreams(const StreamConfiguration& requestedConfiguration, configureStreams_cb _hidl_cb){
    ::android::hardware::Return<void>  _hidl_out = ::android::hardware::camera::device::V3_2::BpHwCameraDeviceSession::_hidl_configureStreams(this, this, requestedConfiguration, _hidl_cb);

    return _hidl_out;
}

The build system generates CameraDeviceSessionAll.cpp from the .hal file; only the configureStreams and getCaptureRequestMetadataQueue functions are excerpted above — for now it is enough to know these interface methods exist. The ICameraDeviceSession object obtained when the session was opened is stored in sp<ICameraDeviceSession> mHidlSession.

std::shared_ptr<RequestMetadataQueue> mRequestMetadataQueue is declared in Camera3Device.h as a member of HalInterface, a class nested inside Camera3Device:

class Camera3Device :
            public CameraDeviceBase,
            virtual public hardware::camera::device::V3_2::ICameraDeviceCallback,
            private camera3_callback_ops 
{
	//> type alias via `using`
	using RequestMetadataQueue = hardware::MessageQueue<uint8_t, hardware::kSynchronizedReadWrite>;
    
    class HalInterface : public camera3::Camera3StreamBufferFreedListener {
	     std::shared_ptr<RequestMetadataQueue> mRequestMetadataQueue;
    };
};
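Why does this queue matter for performance? RequestMetadataQueue is a synchronized fast message queue (FMQ): once its descriptor has been shared across the HIDL boundary, both processes operate on the same mapped buffer, so per-request metadata does not cost a binder transaction per call. The sketch below models only the single-reader/single-writer ring-buffer idea on top of a plain vector; the real FMQ lives in shared memory:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Toy model of a synchronized FMQ: one writer, one reader, bounded capacity,
// writes refused (not overwritten) when the buffer is full.
class ByteRingQueue {
public:
    explicit ByteRingQueue(size_t capacity)
        : mBuf(capacity), mRead(0), mWrite(0), mCount(0) {}

    size_t availableToWrite() const { return mBuf.size() - mCount; }

    bool write(const uint8_t *data, size_t len) {
        if (len > availableToWrite()) return false;  // writer never overwrites
        for (size_t i = 0; i < len; i++) {
            mBuf[mWrite] = data[i];
            mWrite = (mWrite + 1) % mBuf.size();
        }
        mCount += len;
        return true;
    }

    bool read(uint8_t *out, size_t len) {
        if (len > mCount) return false;
        for (size_t i = 0; i < len; i++) {
            out[i] = mBuf[mRead];
            mRead = (mRead + 1) % mBuf.size();
        }
        mCount -= len;
        return true;
    }

private:
    std::vector<uint8_t> mBuf;
    size_t mRead, mWrite, mCount;
};
```

This also explains the `availableToWrite() <= 0` validity check in Camera3Device::initialize: a queue the framework can never write to is useless, so the framework falls back to sending settings inline.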

Step 3: common initialization, initializeCommonLocked

Now let's look at the initializeCommonLocked() function:

status_t Camera3Device::initializeCommonLocked() {

    /**  1. Start up the status tracker thread */
    mStatusTracker = new StatusTracker(this);
    status_t res = mStatusTracker->run(String8::format("C3Dev-%s-Status", mId.string()).string());
    if (res != OK) {
        SET_ERR_L("Unable to start status tracking thread: %s (%d)",
                strerror(-res), res);
        mInterface->close();
        mStatusTracker.clear();
        return res;
    }

    /** 2. Register the in-flight map with the status tracker */
    mInFlightStatusId = mStatusTracker->addComponent();

    /** 3. Create the buffer manager */
    mBufferManager = new Camera3BufferManager();

    mTagMonitor.initialize(mVendorTagId);

    /** 4. Start up the request queue thread */
    mRequestThread = new RequestThread(this, mStatusTracker, mInterface);
    res = mRequestThread->run(String8::format("C3Dev-%s-ReqQueue", mId.string()).string());
    if (res != OK) {
        SET_ERR_L("Unable to start request queue thread: %s (%d)",
                strerror(-res), res);
        mInterface->close();
        mRequestThread.clear();
        return res;
    }
    /** 5. Create the PreparerThread */
    mPreparerThread = new PreparerThread();

    //> 6. Update the state to STATUS_UNCONFIGURED
    internalUpdateStatusLocked(STATUS_UNCONFIGURED);
    mNextStreamId = 0;
    mDummyStreamId = NO_STREAM;
    mNeedConfig = true;
    mPauseStateNotify = false;

    // Measure the clock domain offset between camera and video/hw_composer
    camera_metadata_entry timestampSource =
            mDeviceInfo.find(ANDROID_SENSOR_INFO_TIMESTAMP_SOURCE);
    if (timestampSource.count > 0 && timestampSource.data.u8[0] ==
            ANDROID_SENSOR_INFO_TIMESTAMP_SOURCE_REALTIME) {
        mTimestampOffset = getMonoToBoottimeOffset();
    }

    // Will the HAL be sending in early partial result metadata?
    camera_metadata_entry partialResultsCount =
            mDeviceInfo.find(ANDROID_REQUEST_PARTIAL_RESULT_COUNT);
    if (partialResultsCount.count > 0) {
        mNumPartialResults = partialResultsCount.data.i32[0];
        mUsePartialResult = (mNumPartialResults > 1);
    }

    camera_metadata_entry configs =
            mDeviceInfo.find(ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS);
    for (uint32_t i = 0; i < configs.count; i += 4) {
        if (configs.data.i32[i] == HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED &&
                configs.data.i32[i + 3] ==
                ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS_INPUT) {
            mSupportedOpaqueInputSizes.add(Size(configs.data.i32[i + 1],
                    configs.data.i32[i + 2]));
        }
    }

    return OK;
}
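The RequestThread started in step 4 is essentially a producer/consumer loop: callers enqueue capture requests, and the dedicated thread drains them in FIFO order, handing each one to the HAL interface. A minimal model of that loop (names are illustrative; the real thread runs until the device closes):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <vector>

struct CaptureRequest { int id; };

// Toy request loop: submit() is the producer side, runUntil() plays the
// consumer thread's role and records the order requests were "sent to HAL".
class RequestLoop {
public:
    void submit(CaptureRequest r) {
        std::lock_guard<std::mutex> lock(mMutex);
        mQueue.push(r);
        mCond.notify_one();
    }

    std::vector<int> runUntil(size_t count) {
        std::vector<int> processed;
        while (processed.size() < count) {
            std::unique_lock<std::mutex> lock(mMutex);
            mCond.wait(lock, [this] { return !mQueue.empty(); });
            processed.push_back(mQueue.front().id);  // "send to HAL"
            mQueue.pop();
        }
        return processed;
    }

private:
    std::mutex mMutex;
    std::condition_variable mCond;
    std::queue<CaptureRequest> mQueue;
};
```

Decoupling submission from HAL delivery this way is what lets the app queue repeating preview requests while the HAL paces itself.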

The function above starts the RequestThread. We'll stop here for now — Camera3Device is a large body of code — and summarize Camera3Device::initialize in stages:

(1) CameraProviderManager::openSession() creates the ICameraDeviceSession object and assigns it to `session`; the session pointer then holds the ICameraDeviceSession proxy and can use all session functionality through it. openSession() also registers the ICameraDeviceCallback: when the camera thread on the CameraProvider side captures an event, information is returned through this callback.

(2) session->getCaptureRequestMetadataQueue() creates the fast message queue RequestMetadataQueue (a MessageQueue<uint8_t, kSynchronizedReadWrite> alias),
and triggers the corresponding method on the CameraDeviceSession object on the CameraProvider side:

 struct TrampolineSessionInterface_3_3 : public ICameraDeviceSession {
        TrampolineSessionInterface_3_3(sp<CameraDeviceSession> parent) :
                mParent(parent) {}      

        virtual Return<void> configureStreams(
                const V3_2::StreamConfiguration& requestedConfiguration,
                V3_3::ICameraDeviceSession::configureStreams_cb _hidl_cb) override {
            return mParent->configureStreams(requestedConfiguration, _hidl_cb);
        }

        virtual Return<void> getCaptureRequestMetadataQueue(
                V3_3::ICameraDeviceSession::getCaptureRequestMetadataQueue_cb _hidl_cb) override  {
            return mParent->getCaptureRequestMetadataQueue(_hidl_cb);
        }

    private:
        sp<CameraDeviceSession> mParent;
    };

From this we can see the path a session operation takes: user app --> CameraService --> CameraProvider --> CameraDeviceSession. CameraDeviceSession.hal establishes the session's client/server framework between cameraserver and the CameraProvider — this is the new structure the API2 interface introduces.

(3) With mHidlSession and mRequestMetadataQueue as constructor arguments, the HalInterface object mInterface is created. It is a member of the camera client object, so the client's internal threads can operate on mInterface.

(4) initializeCommonLocked() sets up the client's runtime environment and starts the RequestThread, which processes the app's requests and responds to callbacks from the CameraProvider side.

At this point the whole connectDevice() method has run to completion: the app holds a camera device, and the openCamera() flow is finished.

To actually capture images, the app calls CameraDevice.createCaptureSession() to create a session. The framework then calls configureStreams() through the ICameraDeviceSession proxy and performs a series of operations — cancelRequest, beginConfigure, deleteStream, createStream, and finally endConfigure — to configure the data streams.

At this point the user sees the camera preview on screen: the CameraDevice is in the preview state, and the app can take photos, shoot bursts, or record video.

What CameraDevice configureStreams actually does

During CameraProvider startup, a Camera object is created. It contains a member of type camera3_device_t mDevice, defined as follows:
@hardware/libhardware/include/hardware/camera3.h

typedef struct camera3_device {
    /**
     * common.version must equal CAMERA_DEVICE_API_VERSION_3_0 to identify this
     * device as implementing version 3.0 of the camera device HAL.
     *
     * Performance requirements:
     *
     * Camera open (common.module->common.methods->open) should return in 200ms, and must return
     * in 500ms.
     * Camera close (common.close) should return in 200ms, and must return in 500ms.
     *
     */
    hw_device_t common;
    camera3_device_ops_t *ops;
    void *priv;
} camera3_device_t;

The camera3_device_ops_t *ops member is the set of methods for configuring this camera's streams and session. It is filled in when the Camera object is constructed:
@vendor/nxp-opensource/imx/libcamera3/Camera.cpp

Camera::Camera(int32_t id, int32_t facing, int32_t orientation, char *path)
    : usemx6s(0), mId(id), mStaticInfo(NULL), mBusy(false), mCallbackOps(NULL), mStreams(NULL), mNumStreams(0), mTmpBuf(NULL)
{
    ALOGI("%s:%d: new camera device", __func__, mId);
    android::Mutex::Autolock al(mDeviceLock);

    camera_info::facing = facing;
    camera_info::orientation = orientation;
    strncpy(SensorData::mDevPath, path, CAMAERA_FILENAME_LENGTH);
    SensorData::mDevPath[CAMAERA_FILENAME_LENGTH-1] = 0;

    memset(&mDevice, 0, sizeof(mDevice));
    mDevice.common.tag = HARDWARE_DEVICE_TAG;
#if ANDROID_SDK_VERSION >= 28
    mDevice.common.version = CAMERA_DEVICE_API_VERSION_3_5;
#else
    mDevice.common.version = CAMERA_DEVICE_API_VERSION_3_2;
#endif
    mDevice.common.close = close_device;

    mDevice.ops = const_cast<camera3_device_ops_t *>(&sOps);        //> initialize mDevice.ops 
    mDevice.priv = this;
    memset(&m3aState, 0, sizeof(m3aState));
}

What does mDevice.ops get initialized to?

const camera3_device_ops_t Camera::sOps = {
    .initialize = initialize,
    .configure_streams = configure_streams,
    .register_stream_buffers = register_stream_buffers,
    .construct_default_request_settings
        = construct_default_request_settings,
    .process_capture_request = process_capture_request,
    .get_metadata_vendor_tag_ops = NULL,
    .dump = dump,
    .flush = flush,
    .reserved = {0},
};

Let's look at the configure_streams implementation:

static int32_t configure_streams(const camera3_device_t *dev,
        camera3_stream_configuration_t *stream_list)
{
    return camdev_to_camera(dev)->configureStreams(stream_list);
}
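Because the C-style ops table cannot store C++ member-function pointers, each static entry recovers the C++ object through the priv pointer saved in the device struct — this is what the camdev_to_camera() cast does. A minimal model of the trampoline pattern (types and names here are illustrative, not the HAL's):

```cpp
#include <cassert>

// Stand-in for camera3_device_t: a C struct whose priv field carries the
// C++ object that implements the behavior.
struct hw_device { void *priv; };

class CameraImpl {
public:
    int configureStreams(int numStreams) {
        mConfigured = numStreams;
        return 0;
    }
    int configuredStreams() const { return mConfigured; }
private:
    int mConfigured = 0;
};

// Equivalent of camdev_to_camera(): cast priv back to the C++ object.
static CameraImpl *dev_to_impl(const hw_device *dev) {
    return static_cast<CameraImpl *>(dev->priv);
}

// Static trampoline with a C-compatible signature, suitable for an ops table.
static int configure_streams_entry(const hw_device *dev, int numStreams) {
    return dev_to_impl(dev)->configureStreams(numStreams);
}
```

The constructor shown earlier completes the round trip by storing `this` in mDevice.priv, so every sOps entry can find its object again.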

It dispatches to the concrete camera's configureStreams() method. If the camera subclass does not override it — as is the case for the author's virtual camera — the base Camera class implementation runs:

int32_t Camera::configureStreams(camera3_stream_configuration_t *stream_config)
{
    camera3_stream_t *astream;
    sp<Stream> *newStreams = NULL;

    if (stream_config == NULL) {
        ALOGE("%s:%d: NULL stream configuration array", __func__, mId);
        return -EINVAL;
    }

    ALOGI("%s():%d: stream_config %p, num %d, streams %p, mode %d",
        __func__,
        mId,
        stream_config,
        stream_config->num_streams,
        stream_config->streams,
        stream_config->operation_mode);

    android::Mutex::Autolock al(mDeviceLock);

    if (stream_config->num_streams == 0) {
        ALOGE("%s:%d: Empty stream configuration array", __func__, mId);
        return -EINVAL;
    }

    for(uint32_t i = 0; i < stream_config->num_streams; i++) {
      camera3_stream_t *stream = stream_config->streams[i];
      if(stream == NULL) {
        ALOGE("stream config %d null", i);
        return -EINVAL;
      }

    ALOGI("config %d, type %d, res %dx%d, fmt 0x%x, usage 0x%x, maxbufs %d, priv %p, rotation %d",
        i,
        stream->stream_type,
        stream->width,
        stream->height,
        stream->format,
        stream->usage,
        stream->max_buffers,
        stream->priv,
        stream->rotation);

        if(((int)stream->width <= 0) || ((int)stream->height <= 0) || (stream->format == -1) ||
          !(stream->rotation >=0 && stream->rotation <= 3) ) {
          ALOGE("para error");
          return -EINVAL;
        }
    }

    // Create new stream array
    newStreams = new sp<Stream>[stream_config->num_streams];
    ALOGV("%s:%d: Number of Streams: %d", __func__, mId,
            stream_config->num_streams);

    // Mark all current streams unused for now
    for (int32_t i = 0; i < mNumStreams; i++)
        mStreams[i]->setReuse(false);
    // Fill new stream array with reused streams and new streams
    for (uint32_t i = 0; i < stream_config->num_streams; i++) {
        astream = stream_config->streams[i];
        if (astream->max_buffers > 0) {
            ALOGV("%s:%d: Reusing stream %d", __func__, mId, i);
            newStreams[i] = reuseStream(astream);
        } else {
            ALOGV("%s:%d: Creating new stream %d", __func__, mId, i);
            newStreams[i] = new Stream(mId, astream, this);
        }

        if (newStreams[i] == NULL) {
            ALOGE("%s:%d: Error processing stream %d", __func__, mId, i);
            goto err_out;
        }
        astream->priv = newStreams[i].get();
    }

    // Verify the set of streams in aggregate
    if (!isValidStreamSet(newStreams, stream_config->num_streams)) {
        ALOGE("%s:%d: Invalid stream set", __func__, mId);
        goto err_out;
    }

    // Destroy all old streams and replace stream array with new one
    destroyStreams(mStreams, mNumStreams);
    mStreams = newStreams;
    mNumStreams = stream_config->num_streams;
    ALOGV("%s:%d: replace stream array with new one", __func__, mId);
    return 0;

err_out:
    // Clean up temporary streams, preserve existing mStreams/mNumStreams
    destroyStreams(newStreams, stream_config->num_streams);
    return -EINVAL;
}

This function creates two streams: a preview stream and a capture stream. Next, at the end of the whole call flow, the endConfigure method is invoked. It corresponds to CameraDeviceClient::endConfigure(), which calls Camera3Device::configureStreams(), which in turn passes the request down to the Provider through ICameraDeviceSession's configureStreams_3_4() method.
The implementation is as follows:

binder::Status CameraDeviceClient::endConfigure(int operatingMode) {
    ATRACE_CALL();
    ALOGI("%s: ending configure (%d input stream, %zu output surfaces)",
            __FUNCTION__, mInputStream.configured ? 1 : 0,
            mStreamMap.size());

    binder::Status res;
    if (!(res = checkPidStatus(__FUNCTION__)).isOk()) return res;

    Mutex::Autolock icl(mBinderSerializationLock);

    if (!mDevice.get()) {
        return STATUS_ERROR(CameraService::ERROR_DISCONNECTED, "Camera device no longer alive");
    }

    if (operatingMode < 0) {
        String8 msg = String8::format(
            "Camera %s: Invalid operating mode %d requested", mCameraIdStr.string(), operatingMode);
        ALOGE("%s: %s", __FUNCTION__, msg.string());
        return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                msg.string());
    }

    // Sanitize the high speed session against necessary capability bit.
    bool isConstrainedHighSpeed = (operatingMode == ICameraDeviceUser::CONSTRAINED_HIGH_SPEED_MODE);
    if (isConstrainedHighSpeed) {
        CameraMetadata staticInfo = mDevice->info();
        camera_metadata_entry_t entry = staticInfo.find(ANDROID_REQUEST_AVAILABLE_CAPABILITIES);
        bool isConstrainedHighSpeedSupported = false;
        for(size_t i = 0; i < entry.count; ++i) {
            uint8_t capability = entry.data.u8[i];
            if (capability == ANDROID_REQUEST_AVAILABLE_CAPABILITIES_CONSTRAINED_HIGH_SPEED_VIDEO) {
                isConstrainedHighSpeedSupported = true;
                break;
            }
        }
        if (!isConstrainedHighSpeedSupported) {
            String8 msg = String8::format(
                "Camera %s: Try to create a constrained high speed configuration on a device"
                " that doesn't support it.", mCameraIdStr.string());
            ALOGE("%s: %s", __FUNCTION__, msg.string());
            return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                    msg.string());
        }
    }

    status_t err = mDevice->configureStreams(operatingMode);
    if (err == BAD_VALUE) {
        String8 msg = String8::format("Camera %s: Unsupported set of inputs/outputs provided",
                mCameraIdStr.string());
        ALOGE("%s: %s", __FUNCTION__, msg.string());
        res = STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT, msg.string());
    } else if (err != OK) {
        String8 msg = String8::format("Camera %s: Error configuring streams: %s (%d)",
                mCameraIdStr.string(), strerror(-err), err);
        ALOGE("%s: %s", __FUNCTION__, msg.string());
        res = STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
    }

    return res;
}

At this point the whole data path has been configured, and the App holds the framework's CameraCaptureSession object, so it can start submitting capture requests. Before a request can be submitted, a Request must first be created. The App does this by calling createCaptureRequest on CameraDeviceImpl; the method is implemented in the framework and internally calls the CameraService AIDL interface createDefaultRequest, whose implementation lives in CameraDeviceClient. That in turn calls Camera3Device::createDefaultRequest(), and finally the request is sent down to the CameraProvider through the ICameraDeviceSession proxy's constructDefaultRequestSettings() method, which builds a default CaptureRequest configuration. Once the operation completes, the CameraProvider returns the configuration to CameraService and on to the App. With the CaptureRequest created, image-capture requests can then be submitted.

At this stage the camera-open and stream-configuration phases are complete. Next we walk through how picture data flows when the user captures an image.

Image processing in Camera3Device

In Camera3Device::initialize(), the first step is openSession(), which opens and establishes the Session; the second step runs initializeCommonLocked(), completing the camera open and the stream configuration.

This initializeCommonLocked() function is the entry point that brings up the whole Camera HAL3 framework. The author has marked six places in it, and to lay the framework out clearly we need to spend some time on this code. First, let's get a basic picture of the Camera3Device class: it covers a great deal of ground, and even with the source trimmed there is still a lot left, so readers are encouraged to walk through it. The class definition is as follows:

/**
 * CameraDevice for HAL devices with version CAMERA_DEVICE_API_VERSION_3_0 or higher.
 */
class Camera3Device :
            public CameraDeviceBase,
            virtual public hardware::camera::device::V3_2::ICameraDeviceCallback,
            private camera3_callback_ops {
  public:

    explicit Camera3Device(const String8& id);

    virtual ~Camera3Device();

    // Capture and setStreamingRequest will configure streams if currently in
    // idle state
    status_t capture(CameraMetadata &request, int64_t *lastFrameNumber = NULL) override;
    status_t captureList(const List<const CameraMetadata> &requests,
            const std::list<const SurfaceMap> &surfaceMaps,
            int64_t *lastFrameNumber = NULL) override;
    status_t setStreamingRequest(const CameraMetadata &request,
            int64_t *lastFrameNumber = NULL) override;
    status_t setStreamingRequestList(const List<const CameraMetadata> &requests,
            const std::list<const SurfaceMap> &surfaceMaps,
            int64_t *lastFrameNumber = NULL) override;
    status_t clearStreamingRequest(int64_t *lastFrameNumber = NULL) override;

    status_t waitUntilRequestReceived(int32_t requestId, nsecs_t timeout) override;

    // Actual stream creation/deletion is delayed until first request is submitted
    // If adding streams while actively capturing, will pause device before adding
    // stream, reconfiguring device, and unpausing. If the client create a stream
    // with nullptr consumer surface, the client must then call setConsumers()
    // and finish the stream configuration before starting output streaming.
    status_t createStream(sp<Surface> consumer,
            uint32_t width, uint32_t height, int format,
            android_dataspace dataSpace, camera3_stream_rotation_t rotation, int *id,
            int streamSetId = camera3::CAMERA3_STREAM_SET_ID_INVALID,
            bool isShared = false, uint64_t consumerUsage = 0) override;

    status_t createInputStream(
            uint32_t width, uint32_t height, int format,
            int *id) override;

    status_t configureStreams(int operatingMode =
            static_cast<int>(hardware::camera::device::V3_2::StreamConfigurationMode::NORMAL_MODE))
            override;

    status_t getInputBufferProducer(
            sp<IGraphicBufferProducer> *producer) override;

    status_t createDefaultRequest(int templateId, CameraMetadata *request) override;

    /**
     * Set the deferred consumer surfaces to the output stream and finish the deferred
     * consumer configuration.
     */
    status_t setConsumerSurfaces(int streamId, const std::vector<sp<Surface>>& consumers) override;

  private:

    // internal typedefs ('using' can also declare a type alias, in the form: using a = b;)
    using RequestMetadataQueue = hardware::MessageQueue<uint8_t, hardware::kSynchronizedReadWrite>;
    using ResultMetadataQueue  = hardware::MessageQueue<uint8_t, hardware::kSynchronizedReadWrite>;

    /**
     * Adapter for legacy HAL / HIDL HAL interface calls; calls either into legacy HALv3 or the
     * HIDL HALv3 interfaces.
     */
    class HalInterface : public camera3::Camera3StreamBufferFreedListener {
      public:
        HalInterface(sp<hardware::camera::device::V3_2::ICameraDeviceSession> &session,
                     std::shared_ptr<RequestMetadataQueue> queue);
        HalInterface(const HalInterface &other);
        HalInterface();

        // Caller takes ownership of requestTemplate
        status_t constructDefaultRequestSettings(camera3_request_template_t templateId,
                /*out*/ camera_metadata_t **requestTemplate);
        status_t configureStreams(/*inout*/ camera3_stream_configuration *config);
        status_t processCaptureRequest(camera3_capture_request_t *request);
        status_t processBatchCaptureRequests(
                std::vector<camera3_capture_request_t*>& requests,
                /*out*/uint32_t* numRequestProcessed);
        
      private:
        //> Two members worth paying attention to
        sp<hardware::camera::device::V3_2::ICameraDeviceSession> mHidlSession;
        std::shared_ptr<RequestMetadataQueue> mRequestMetadataQueue;

        // The output HIDL request still depends on input camera3_capture_request_t
        // Do not free input camera3_capture_request_t before output HIDL request
        void wrapAsHidlRequest(camera3_capture_request_t* in,
                /*out*/hardware::camera::device::V3_2::CaptureRequest* out,
                /*out*/std::vector<native_handle_t*>* handlesCreated);

        status_t pushInflightBufferLocked(int32_t frameNumber, int32_t streamId,
                buffer_handle_t *buffer, int acquireFence);
        // Cache of buffer handles keyed off (frameNumber << 32 | streamId)
        // value is a pair of (buffer_handle_t*, acquire_fence FD)
        std::unordered_map<uint64_t, std::pair<buffer_handle_t*, int>> mInflightBufferMap;

        struct BufferHasher {
            size_t operator()(const buffer_handle_t& buf) const {
                if (buf == nullptr)
                    return 0;

                size_t result = 1;
                result = 31 * result + buf->numFds;
                for (int i = 0; i < buf->numFds; i++) {
                    result = 31 * result + buf->data[i];
                }
                return result;
            }
        };

        struct BufferComparator {
            bool operator()(const buffer_handle_t& buf1, const buffer_handle_t& buf2) const {
                if (buf1->numFds == buf2->numFds) {
                    for (int i = 0; i < buf1->numFds; i++) {
                        if (buf1->data[i] != buf2->data[i]) {
                            return false;
                        }
                    }
                    return true;
                }
                return false;
            }
        };

        std::mutex mBufferIdMapLock; // protecting mBufferIdMaps and mNextBufferId
        typedef std::unordered_map<const buffer_handle_t, uint64_t,
                BufferHasher, BufferComparator> BufferIdMap;
        // stream ID -> per stream buffer ID map
        std::unordered_map<int, BufferIdMap> mBufferIdMaps;
        // method to extract buffer's unique ID
        // TODO: we should switch to use gralloc mapper's getBackingStore API
        //       once we ran in binderized gralloc mode, but before that is ready,
        //       we need to rely on the conventional buffer queue behavior where
        //       buffer_handle_t's FD won't change.
        // return pair of (newlySeenBuffer?, bufferId)
        std::pair<bool, uint64_t> getBufferId(const buffer_handle_t& buf, int streamId);

        virtual void onBufferFreed(int streamId, const native_handle_t* handle) override;

        std::vector<std::pair<int, uint64_t>> mFreedBuffers;
    };

    sp<HalInterface> mInterface;

    CameraMetadata             mDeviceInfo;

    CameraMetadata             mRequestTemplateCache[CAMERA3_TEMPLATE_COUNT];

    // Mapping of stream IDs to stream instances
    typedef KeyedVector<int, sp<camera3::Camera3OutputStreamInterface> >
            StreamSet;

    StreamSet                  mOutputStreams;
    sp<camera3::Camera3Stream> mInputStream;

     class CaptureRequest : public LightRefBase<CaptureRequest> {
      public:
        CameraMetadata                      mSettings;
        sp<camera3::Camera3Stream>          mInputStream;
        camera3_stream_buffer_t             mInputBuffer;
        Vector<sp<camera3::Camera3OutputStreamInterface> >
                                            mOutputStreams;
        SurfaceMap                          mOutputSurfaces;
        CaptureResultExtras                 mResultExtras;
        // The number of requests that should be submitted to HAL at a time.
        // For example, if batch size is 8, this request and the following 7
        // requests will be submitted to HAL at a time. The batch size for
        // the following 7 requests will be ignored by the request thread.
        int                                 mBatchSize;
        //  Whether this request is from a repeating or repeating burst.
        bool                                mRepeating;
    };
    typedef List<sp<CaptureRequest> > RequestList;

    /**
     * Implementation of android::hardware::camera::device::V3_2::ICameraDeviceCallback
     */

    hardware::Return<void> processCaptureResult(
            const hardware::hidl_vec<
                    hardware::camera::device::V3_2::CaptureResult>& results) override;
    hardware::Return<void> notify(
            const hardware::hidl_vec<
                    hardware::camera::device::V3_2::NotifyMsg>& msgs) override;

    /**
     * Build a CaptureRequest request from the CameraDeviceBase request
     * settings.
     */
    sp<CaptureRequest> createCaptureRequest(const CameraMetadata &request,
                                            const SurfaceMap &surfaceMap);

    /**
     * Take the currently-defined set of streams and configure the HAL to use
     * them. This is a long-running operation (may be several hundered ms).
     */
    status_t           configureStreamsLocked(int operatingMode);

    struct RequestTrigger {
        // Metadata tag number, e.g. android.control.aePrecaptureTrigger
        uint32_t metadataTag;
        // Metadata value, e.g. 'START' or the trigger ID
        int32_t entryValue;

        // The last part of the fully qualified path, e.g. afTrigger
        const char *getTagName() const {
            return get_camera_metadata_tag_name(metadataTag) ?: "NULL";
        }

        // e.g. TYPE_BYTE, TYPE_INT32, etc.
        int getTagType() const {
            return get_camera_metadata_tag_type(metadataTag);
        }
    };

    /** Internal thread of Camera3Device:
     * Thread for managing capture request submission to HAL device.
     */
    class RequestThread : public Thread {
      public:
        RequestThread(wp<Camera3Device> parent,
                sp<camera3::StatusTracker> statusTracker,
                sp<HalInterface> interface);
        ~RequestThread();

        void     setNotificationListener(wp<NotificationListener> listener);

        /**
         * Call after stream (re)-configuration is completed.
         */
        void     configurationComplete(bool isConstrainedHighSpeed);

        /**
         * Set or clear the list of repeating requests. Does not block
         * on either. Use waitUntilPaused to wait until request queue
         * has emptied out.
         */
        status_t setRepeatingRequests(const RequestList& requests,
                                      /*out*/
                                      int64_t *lastFrameNumber = NULL);
        status_t clearRepeatingRequests(/*out*/
                                        int64_t *lastFrameNumber = NULL);

        status_t queueRequestList(List<sp<CaptureRequest> > &requests,
                                  /*out*/
                                  int64_t *lastFrameNumber = NULL);

        /**
         * Remove all queued and repeating requests, and pending triggers
         */
        status_t clear(/*out*/int64_t *lastFrameNumber = NULL);

        /**
         * Flush all pending requests in HAL.
         */
        status_t flush();

        /**
         * Queue a trigger to be dispatched with the next outgoing
         * process_capture_request. The settings for that request only
         * will be temporarily rewritten to add the trigger tag/value.
         * Subsequent requests will not be rewritten (for this tag).
         */
        status_t queueTrigger(RequestTrigger trigger[], size_t count);

        /**
         * Pause/unpause the capture thread. Doesn't block, so use
         * waitUntilPaused to wait until the thread is paused.
         */
        void     setPaused(bool paused);

        /**
         * Wait until thread processes the capture request with settings'
         * android.request.id == requestId.
         *
         * Returns TIMED_OUT in case the thread does not process the request
         * within the timeout.
         */
        status_t waitUntilRequestProcessed(int32_t requestId, nsecs_t timeout);

        /**
         * Shut down the thread. Shutdown is asynchronous, so thread may
         * still be running once this method returns.
         */
        virtual void requestExit();

        /**
         * Get the latest request that was sent to the HAL
         * with process_capture_request.
         */
        CameraMetadata getLatestRequest() const;

        /**
         * Returns true if the stream is a target of any queued or repeating
         * capture request
         */
        bool isStreamPending(sp<camera3::Camera3StreamInterface>& stream);

        // dump processCaptureRequest latency
        void dumpCaptureRequestLatency(int fd, const char* name) {
            mRequestLatency.dump(fd, name);
        }

      protected:

        virtual bool threadLoop();

      private:
        static const String8& getId(const wp<Camera3Device> &device);

        status_t           queueTriggerLocked(RequestTrigger trigger);
        // Mix-in queued triggers into this request
        int32_t            insertTriggers(const sp<CaptureRequest> &request);
        // Purge the queued triggers from this request,
        //  restoring the old field values for those tags.
        status_t           removeTriggers(const sp<CaptureRequest> &request);

        // HAL workaround: Make sure a trigger ID always exists if
        // a trigger does
        status_t          addDummyTriggerIds(const sp<CaptureRequest> &request);

        static const nsecs_t kRequestTimeout = 50e6; // 50 ms

        // Used to prepare a batch of requests.
        struct NextRequest {
            sp<CaptureRequest>              captureRequest;
            camera3_capture_request_t       halRequest;
            Vector<camera3_stream_buffer_t> outputBuffers;
            bool                            submitted;
        };

        // Wait for the next batch of requests and put them in mNextRequests. mNextRequests will
        // be empty if it times out.
        void waitForNextRequestBatch();

        // Waits for a request, or returns NULL if times out. Must be called with mRequestLock hold.
        sp<CaptureRequest> waitForNextRequestLocked();

        // Prepare HAL requests and output buffers in mNextRequests. Return TIMED_OUT if getting any
        // output buffer timed out. If an error is returned, the caller should clean up the pending
        // request batch.
        status_t prepareHalRequests();

        // Return buffers, etc, for requests in mNextRequests that couldn't be fully constructed and
        // send request errors if sendRequestError is true. The buffers will be returned in the
        // ERROR state to mark them as not having valid data. mNextRequests will be cleared.
        void cleanUpFailedRequests(bool sendRequestError);

        // Stop the repeating request if any of its output streams is abandoned.
        void checkAndStopRepeatingRequest();

        // Pause handling
        bool               waitIfPaused();
        void               unpauseForNewRequests();

        // Relay error to parent device object setErrorState
        void               setErrorState(const char *fmt, ...);

        // If the input request is in mRepeatingRequests. Must be called with mRequestLock hold
        bool isRepeatingRequestLocked(const sp<CaptureRequest>&);

        // Clear repeating requests. Must be called with mRequestLock held.
        status_t clearRepeatingRequestsLocked(/*out*/ int64_t *lastFrameNumber = NULL);

        // Send requests in mNextRequests to the HAL one by one. Returns true on success.
        bool sendRequestsOneByOne();

        // Send requests in mNextRequests to the HAL in a batch. Returns true on success.
        bool sendRequestsBatch();

        // Calculate the expected maximum duration for a request
        nsecs_t calculateMaxExpectedDuration(const camera_metadata_t *request);

        wp<Camera3Device>  mParent;
        wp<camera3::StatusTracker>  mStatusTracker;
        sp<HalInterface>   mInterface;

        wp<NotificationListener> mListener;

        const String8&     mId;       // The camera ID
        int                mStatusId; // The RequestThread's component ID for
                                      // status tracking

        Mutex              mRequestLock;
        Condition          mRequestSignal;
        RequestList        mRequestQueue;
        RequestList        mRepeatingRequests;
        // The next batch of requests being prepped for submission to the HAL, no longer
        // on the request queue. Read-only even with mRequestLock held, outside
        // of threadLoop
        Vector<NextRequest> mNextRequests;

        // Flag indicating if we should prepare video stream for video requests.
        bool               mPrepareVideoStream;

        static const int32_t kRequestLatencyBinSize = 40; // in ms
        CameraLatencyHistogram mRequestLatency;
    };
    sp<RequestThread> mRequestThread;

    /**
     * In-flight queue for tracking completion of capture requests.
     */

    struct InFlightRequest {
        // Decremented by calls to process_capture_result with valid output
        // and input buffers
        int     numBuffersLeft;
        CaptureResultExtras resultExtras;
        // If this request has any input buffer
        bool hasInputBuffer;

        // The last metadata that framework receives from HAL and
        // not yet send out because the shutter event hasn't arrived.
        // It's added by process_capture_result and sent when framework
        // receives the shutter event.
        CameraMetadata pendingMetadata;

        // The metadata of the partial results that framework receives from HAL so far
        // and has sent out.
        CameraMetadata collectedPartialResult;

        // Buffers are added by process_capture_result when output buffers
        // return from HAL but framework has not yet received the shutter
        // event. They will be returned to the streams when framework receives
        // the shutter event.
        Vector<camera3_stream_buffer_t> pendingOutputBuffers;

        // Default constructor needed by KeyedVector
        InFlightRequest() :
                shutterTimestamp(0),
                sensorTimestamp(0),
                requestStatus(OK),
                haveResultMetadata(false),
                numBuffersLeft(0),
                hasInputBuffer(false),
                hasCallback(true),
                maxExpectedDuration(kDefaultExpectedDuration),
                skipResultMetadata(false) {
        }

        InFlightRequest(int numBuffers, CaptureResultExtras extras, bool hasInput,
                bool hasAppCallback, nsecs_t maxDuration) :
                shutterTimestamp(0),
                sensorTimestamp(0),
                requestStatus(OK),
                haveResultMetadata(false),
                numBuffersLeft(numBuffers),
                resultExtras(extras),
                hasInputBuffer(hasInput),
                hasCallback(hasAppCallback),
                maxExpectedDuration(maxDuration),
                skipResultMetadata(false) {
        }
    };

    // Map from frame number to the in-flight request state
    typedef KeyedVector<uint32_t, InFlightRequest> InFlightMap;

    /**
     * Tracking for idle detection
     */
    sp<camera3::StatusTracker> mStatusTracker;

    /**
     * Graphic buffer manager for output streams. Each device has a buffer manager, which is used
     * by the output streams to get and return buffers if these streams are registered to this
     * buffer manager.
     */
    sp<camera3::Camera3BufferManager> mBufferManager;

    /**
     * Thread for preparing streams
     */
    class PreparerThread : private Thread, public virtual RefBase {
      public:
        PreparerThread();
        ~PreparerThread();

        void setNotificationListener(wp<NotificationListener> listener);

        /**
         * Queue up a stream to be prepared. Streams are processed by a background thread in FIFO
         * order.  Pre-allocate up to maxCount buffers for the stream, or the maximum number needed
         * for the pipeline if maxCount is ALLOCATE_PIPELINE_MAX.
         */
        status_t prepare(int maxCount, sp<camera3::Camera3StreamInterface>& stream);

        /**
         * Cancel all current and pending stream preparation
         */
        status_t clear();

      private:
        Mutex mLock;

        virtual bool threadLoop();

        // Guarded by mLock

        wp<NotificationListener> mListener;
        List<sp<camera3::Camera3StreamInterface> > mPendingStreams;
        bool mActive;
        bool mCancelNow;

        // Only accessed by threadLoop and the destructor

        sp<camera3::Camera3StreamInterface> mCurrentStream;
    };
    sp<PreparerThread> mPreparerThread;

    /**
     * Callback functions from HAL device
     */
    void processCaptureResult(const camera3_capture_result *result);

    // helper function to return the output buffers to the streams.
    void returnOutputBuffers(const camera3_stream_buffer_t *outputBuffers,
            size_t numBuffers, nsecs_t timestamp);

    // Send a partial capture result.
    void sendPartialCaptureResult(const camera_metadata_t * partialResult,
            const CaptureResultExtras &resultExtras, uint32_t frameNumber);

    // Send a total capture result given the pending metadata and result extras,
    // partial results, and the frame number to the result queue.
    void sendCaptureResult(CameraMetadata &pendingMetadata,
            CaptureResultExtras &resultExtras,
            CameraMetadata &collectedPartialResult, uint32_t frameNumber,
            bool reprocess);
    
    /**
     * Static callback forwarding methods from HAL to instance
     */
    static callbacks_process_capture_result_t sProcessCaptureResult;
};

Having read through the class definition, let's summarize Camera3Device:
(1). The HAL interface inner class HalInterface, which holds the std::unordered_map members mBufferIdMaps and mInflightBufferMap;
(2). The capture-request inner class CaptureRequest, whose members include the input and output streams, the settings metadata, the capture result extras, and a SurfaceMap;
(3). The inner struct RequestTrigger, covering capture-request triggers;
(4). The inner thread class RequestThread : public Thread, which handles capture requests and their feedback events;
(5). The inner struct InFlightRequest, annotated as the "In-flight queue for tracking completion of capture requests";
(6). The inner thread class PreparerThread : private Thread, annotated as the "Thread for preparing streams";
(7). The FrameProcessorBase thread class, responsible for frame results and event listening, i.e. the data path Camera --> CameraProvider --> CameraService --> App.

Now let's return to the initializeCommonLocked() function called from Camera3Device::initialize(). The author marked six places in this function; we will walk through their source code one by one.

Place 1: creating the StatusTracker thread

Source path:
@frameworks/av/services/camera/libcameraservice/device3/StatusTracker.cpp

StatusTracker::StatusTracker(wp<Camera3Device> parent) :
        mComponentsChanged(false),
        mParent(parent),
        mNextComponentId(0),
        mIdleFence(new Fence()),
        mDeviceState(IDLE) {
}

//> Thread body
bool StatusTracker::threadLoop() {
    status_t res;

    // Wait for state updates
    {
        Mutex::Autolock pl(mPendingLock);
        while (mPendingChangeQueue.size() == 0 && !mComponentsChanged) {
            res = mPendingChangeSignal.waitRelative(mPendingLock,
                    kWaitDuration);
            if (exitPending()) return false;
            if (res != OK) {
                if (res != TIMED_OUT) {
                    ALOGE("%s: Error waiting on state changes: %s (%d)",
                            __FUNCTION__, strerror(-res), res);
                }
                // TIMED_OUT is expected
                break;
            }
        }
    }

    // After new pending states appear, or timeout, check if we're idle.  Even
    // with timeout, need to check to account for fences that may still be
    // clearing out
    sp<Camera3Device> parent;
    {
        Mutex::Autolock pl(mPendingLock);
        Mutex::Autolock l(mLock);

        // Collect all pending state updates and see if the device
        // collectively transitions between idle and active for each one

        // First pass for changed components or fence completions
        ComponentState prevState = getDeviceStateLocked();
        if (prevState != mDeviceState) {
            // Only collect changes to overall device state
            mStateTransitions.add(prevState);
        }
        // For each pending component state update, check if we've transitioned
        // to a new overall device state
        for (size_t i = 0; i < mPendingChangeQueue.size(); i++) {
            const StateChange &newState = mPendingChangeQueue[i];
            ssize_t idx = mStates.indexOfKey(newState.id);
            // Ignore notices for unknown components
            if (idx >= 0) {
                // Update single component state
                mStates.replaceValueAt(idx, newState.state);
                mIdleFence = Fence::merge(String8("idleFence"),
                        mIdleFence, newState.fence);
                // .. and see if overall device state has changed
                ComponentState newState = getDeviceStateLocked();
                if (newState != prevState) {
                    mStateTransitions.add(newState);
                }
                prevState = newState;
            }
        }
        mPendingChangeQueue.clear();
        mComponentsChanged = false;

        // Store final state after all pending state changes are done with

        mDeviceState = prevState;
        parent = mParent.promote();
    }

    // Notify parent for all intermediate transitions
    if (mStateTransitions.size() > 0 && parent.get()) {
        for (size_t i = 0; i < mStateTransitions.size(); i++) {
            bool idle = (mStateTransitions[i] == IDLE);
            ALOGV("Camera device is now %s", idle ? "idle" : "active");
            parent->notifyStatus(idle);
        }
    }
    mStateTransitions.clear();

    return true;
}

Place 2: mStatusTracker->addComponent()

The source is shown below; the path is the same as above.

int StatusTracker::addComponent() {
    int id;
    ssize_t err;
    {
        Mutex::Autolock l(mLock);
        id = mNextComponentId++;
        ALOGV("%s: Adding new component %d", __FUNCTION__, id);

        err = mStates.add(id, IDLE);
        ALOGE_IF(err < 0, "%s: Can't add new component %d: %s (%zd)",
                __FUNCTION__, id, strerror(-err), err);
    }

    if (err >= 0) {
        Mutex::Autolock pl(mPendingLock);
        mComponentsChanged = true;
        mPendingChangeSignal.signal();
    }

    return err < 0 ? err : id;
}

Place 3: Create buffer manager

Source path:
@frameworks/av/services/camera/libcameraservice/device3/Camera3BufferManager.h
Let's first look at the class definition:

class Camera3BufferManager: public virtual RefBase {
public:
    explicit Camera3BufferManager();

    virtual ~Camera3BufferManager();

    // Register a stream (with its stream info) to a stream set in this manager.
    status_t registerStream(wp<Camera3OutputStream>& stream, const StreamInfo &streamInfo);

    // Unregister a stream from its stream set.
    status_t unregisterStream(int streamId, int streamSetId);

    // Obtain a buffer for the given stream, reusing or allocating one within the stream set.
    status_t getBufferForStream(int streamId, int streamSetId, sp<GraphicBuffer>* gb, int* fenceFd);

    // Notify the manager that a buffer was returned to the buffer queue; sets
    // *shouldFreeBuffer when the caller should free it to stay under the watermark.
    status_t onBufferReleased(int streamId, int streamSetId, /*out*/bool* shouldFreeBuffer);

    // Notify the manager that 'count' buffers were removed from the given stream.
    status_t onBuffersRemoved(int streamId, int streamSetId, size_t count);

    /**
     * This method notifies the manager that a buffer is freed from the buffer queue, usually
     * because onBufferReleased signals the caller to free a buffer via the shouldFreeBuffer flag.
     */
    void notifyBufferRemoved(int streamId, int streamSetId);

    /**
     * Dump the buffer manager statistics.
     */
    void     dump(int fd, const Vector<String16> &args) const;

private:
    // allocatedBufferWaterMark will be decreased when:
    //   numAllocatedBuffersThisSet > numHandoutBuffersThisSet + BUFFER_WATERMARK_DEC_THRESHOLD
    // This allows the watermark to go back down after a burst of buffer requests
    static const int BUFFER_WATERMARK_DEC_THRESHOLD = 3;

    // onBufferReleased will set shouldFreeBuffer to true when:
    //   numAllocatedBuffersThisSet > allocatedBufferWaterMark AND
    //   numAllocatedBuffersThisStream > numHandoutBuffersThisStream + BUFFER_FREE_THRESHOLD
    // So after a burst of buffer requests and back to steady state, the buffer queue should have
    // (BUFFER_FREE_THRESHOLD + steady state handout buffer count) buffers.
    static const int BUFFER_FREE_THRESHOLD = 3;

    /**
     * Lock to synchronize the access to the methods of this class.
     */
    mutable Mutex mLock;

    static const size_t kMaxBufferCount = BufferQueueDefs::NUM_BUFFER_SLOTS;

    struct GraphicBufferEntry {
        sp<GraphicBuffer> graphicBuffer;
        int fenceFd;
        explicit GraphicBufferEntry(const sp<GraphicBuffer>& gb = 0, int fd = -1) :
            graphicBuffer(gb),
            fenceFd(fd) {}
    };

    /**
     * A buffer entry (indexed by stream ID) represents a single physically allocated buffer. For
     * Gralloc V0, since each physical buffer is associated with one stream, this is
     * a single entry map. For Gralloc V1, one physical buffer can be shared between different
     * streams in one stream set, so this entry may include multiple entries, where the different
     * graphic buffers have the same common Gralloc backing store.
     */
    typedef int StreamId;
    typedef KeyedVector<StreamId, GraphicBufferEntry> BufferEntry;

    typedef std::list<BufferEntry> BufferList;

    /**
     * Stream info map (indexed by stream ID) tracks all the streams registered to a particular
     * stream set.
     */
    typedef KeyedVector<StreamId, StreamInfo> InfoMap;

    /**
     * Stream set buffer count map (indexed by stream ID) tracks all buffer counts of the streams
     * registered to a particular stream set.
     */
    typedef KeyedVector<StreamId, size_t> BufferCountMap;

    /**
     * StreamSet keeps track of the stream info, free buffer list and hand-out buffer counts for
     * each stream set.
     */
    struct StreamSet {
        /**
         * High watermark for the number of buffers allocated in this set; it is
         * raised on demand and lowered again after bursts (see
         * BUFFER_WATERMARK_DEC_THRESHOLD above).
         */
        size_t allocatedBufferWaterMark;

        /**
         * The max allowed buffer count for this stream set. It is the max of total number of
         * buffers for each stream. This is the upper bound of the allocatedBufferWaterMark.
         */
        size_t maxAllowedBufferCount;

        /**
         * The stream info for all streams in this set
         */
        InfoMap streamInfoMap;
        /**
         * The count of the buffers that were handed out to the streams of this set.
         */
        BufferCountMap handoutBufferCountMap;
        /**
         * The count of the buffers that are attached to the streams of this set.
         * An attached buffer may be free or handed out
         */
        BufferCountMap attachedBufferCountMap;

        StreamSet() {
            allocatedBufferWaterMark = 0;
            maxAllowedBufferCount = 0;
        }
    };

    /**
     * Stream set map managed by this buffer manager.
     */
    typedef int StreamSetId;
    KeyedVector<StreamSetId, StreamSet> mStreamSetMap;
    KeyedVector<StreamId, wp<Camera3OutputStream>> mStreamMap;

    // TODO: There is no easy way to query the Gralloc version in this code yet, we have different
    // code paths for different Gralloc versions, hardcode something here for now.
    const uint32_t mGrallocVersion = GRALLOC_DEVICE_API_VERSION_0_1;

    /**
     * Check if this stream was successfully registered already. This method needs to be called with
     * mLock held.
     */
    bool checkIfStreamRegisteredLocked(int streamId, int streamSetId) const;

    /**
     * Check if other streams in the stream set has extra buffer available to be freed, and
     * free one if so.
     */
    status_t checkAndFreeBufferOnOtherStreamsLocked(int streamId, int streamSetId);
};

This class manages the buffers shared between camera output streams and their display Surfaces.
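
The free/keep decision that onBufferReleased makes can be reduced to the two comparisons documented in the header above. Below is a minimal sketch of that rule only; the function and parameter names are illustrative, not AOSP APIs.

```cpp
#include <cstddef>

// Mirrors BUFFER_FREE_THRESHOLD from the header above.
static const size_t kBufferFreeThreshold = 3;

// onBufferReleased sets shouldFreeBuffer to true when the stream set is over
// its allocated-buffer watermark AND this stream holds more allocated buffers
// than it has handed out, beyond the free threshold.
bool shouldFreeBuffer(size_t allocatedInSet, size_t waterMark,
                      size_t allocatedInStream, size_t handedOutInStream) {
    return allocatedInSet > waterMark &&
           allocatedInStream > handedOutInStream + kBufferFreeThreshold;
}
```

After a burst subsides, this rule trims the queue back toward (BUFFER_FREE_THRESHOLD + steady-state handout count) buffers, as the header comment describes.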

Point 4: Creating the RequestThread

Source path:
@frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

Camera3Device::RequestThread::RequestThread(wp<Camera3Device> parent,
        sp<StatusTracker> statusTracker,
        sp<HalInterface> interface) :
        Thread(/*canCallJava*/false),
        mParent(parent),
        mStatusTracker(statusTracker),
        mInterface(interface),
        mListener(nullptr),
        mId(getId(parent)),
        mReconfigured(false),
        mDoPause(false),
        mPaused(true),
        mFrameNumber(0),
        mLatestRequestId(NAME_NOT_FOUND),
        mCurrentAfTriggerId(0),
        mCurrentPreCaptureTriggerId(0),
        mRepeatingLastFrameNumber(
            hardware::camera2::ICameraDeviceUser::NO_IN_FLIGHT_REPEATING_FRAMES),
        mPrepareVideoStream(false),
        mRequestLatency(kRequestLatencyBinSize) {
    mStatusId = statusTracker->addComponent();
}

//> Thread main loop
bool Camera3Device::RequestThread::threadLoop() {
    ATRACE_CALL();
    status_t res;

    // Handle paused state.
    if (waitIfPaused()) {
        return true;
    }

    // Wait for the next batch of requests.
    //> Fetch the new CaptureRequest contents into the mNextRequests container
    //> (blocks on Condition mRequestSignal / mDoPauseSignal)
    waitForNextRequestBatch();
    if (mNextRequests.size() == 0) {
        return true;
    }

    // Get the latest request ID, if any
    int latestRequestId;
    camera_metadata_entry_t requestIdEntry = mNextRequests[mNextRequests.size() - 1].
            captureRequest->mSettings.find(ANDROID_REQUEST_ID);
    if (requestIdEntry.count > 0) {
        latestRequestId = requestIdEntry.data.i32[0];
    } else {
        ALOGW("%s: Did not have android.request.id set in the request.", __FUNCTION__);
        latestRequestId = NAME_NOT_FOUND;
    }

    // Prepare a batch of HAL requests and output buffers.
    res = prepareHalRequests();
    if (res == TIMED_OUT) {
        // Not a fatal error if getting output buffers time out.
        cleanUpFailedRequests(/*sendRequestError*/ true);
        // Check if any stream is abandoned.
        checkAndStopRepeatingRequest();
        return true;
    } else if (res != OK) {
        cleanUpFailedRequests(/*sendRequestError*/ false);
        return false;
    }

    // Inform waitUntilRequestProcessed thread of a new request ID
    {
        Mutex::Autolock al(mLatestRequestMutex);

        mLatestRequestId = latestRequestId;
        mLatestRequestSignal.signal();
    }

    // Submit a batch of requests to HAL.
    // Use flush lock only when submitting multiple requests in a batch.
    // TODO: The problem with flush lock is flush() will be blocked by process_capture_request()
    // which may take a long time to finish so synchronizing flush() and
    // process_capture_request() defeats the purpose of cancelling requests ASAP with flush().
    // For now, only synchronize for high speed recording and we should figure something out for
    // removing the synchronization.
    bool useFlushLock = mNextRequests.size() > 1;

    if (useFlushLock) {
        mFlushLock.lock();
    }

    ALOGVV("%s: %d: submitting %zu requests in a batch.", __FUNCTION__, __LINE__,
            mNextRequests.size());

    bool submitRequestSuccess = false;
    nsecs_t tRequestStart = systemTime(SYSTEM_TIME_MONOTONIC);
    if (mInterface->supportBatchRequest()) {
        submitRequestSuccess = sendRequestsBatch();
    } else {
        submitRequestSuccess = sendRequestsOneByOne();
    }
    nsecs_t tRequestEnd = systemTime(SYSTEM_TIME_MONOTONIC);
    mRequestLatency.add(tRequestStart, tRequestEnd);

    if (useFlushLock) {
        mFlushLock.unlock();
    }

    // Unset as current request
    {
        Mutex::Autolock l(mRequestLock);
        mNextRequests.clear();
    }

    return submitRequestSuccess;
}

This is the thread class that drives capture request submission.
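
The core handshake of the loop above — a producer enqueues requests and signals, the thread blocks in waitForNextRequestBatch() and then drains the whole queue as one batch — can be sketched with standard C++ primitives. All names here are illustrative (MiniRequestQueue is not an AOSP class), and plain strings stand in for CaptureRequest objects.

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <string>
#include <vector>

class MiniRequestQueue {
public:
    // submitRequestList analog: enqueue a request and wake the thread,
    // like unpauseForNewRequests() signalling mRequestSignal.
    void enqueue(std::string req) {
        { std::lock_guard<std::mutex> l(mLock);
          mQueue.push_back(std::move(req)); }
        mSignal.notify_one();
    }
    // waitForNextRequestBatch analog: block until work arrives (or stop),
    // then drain the queue and return it as one batch.
    std::vector<std::string> waitForBatch() {
        std::unique_lock<std::mutex> l(mLock);
        mSignal.wait(l, [&]{ return mStop || !mQueue.empty(); });
        std::vector<std::string> batch(mQueue.begin(), mQueue.end());
        mQueue.clear();
        return batch;
    }
    void stop() {
        { std::lock_guard<std::mutex> l(mLock); mStop = true; }
        mSignal.notify_all();
    }
private:
    std::mutex mLock;
    std::condition_variable mSignal;
    std::deque<std::string> mQueue;
    bool mStop = false;
};
```

The Mutex/Condition pair in the real RequestThread plays the same role as the std::mutex and std::condition_variable here.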

Point 5: Creating the PreparerThread

Source path:
@frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

/**
 * PreparerThread inner class methods
 */

Camera3Device::PreparerThread::PreparerThread() :
        Thread(/*canCallJava*/false), mListener(nullptr),
        mActive(false), mCancelNow(false) {
}

//> Thread main loop
bool Camera3Device::PreparerThread::threadLoop() {
    status_t res;
    {
        Mutex::Autolock l(mLock);
        if (mCurrentStream == nullptr) {
            // End thread if done with work
            if (mPendingStreams.empty()) {
                ALOGV("%s: Preparer stream out of work", __FUNCTION__);
                // threadLoop _must not_ re-acquire mLock after it sets mActive to false; would
                // cause deadlock with prepare()'s requestExitAndWait triggered by !mActive.
                mActive = false;
                return false;
            }

            // Get next stream to prepare
            auto it = mPendingStreams.begin();
            mCurrentStream = *it;
            mPendingStreams.erase(it);
            ATRACE_ASYNC_BEGIN("stream prepare", mCurrentStream->getId());
            ALOGV("%s: Preparing stream %d", __FUNCTION__, mCurrentStream->getId());
        } else if (mCancelNow) {
            mCurrentStream->cancelPrepare();
            ATRACE_ASYNC_END("stream prepare", mCurrentStream->getId());
            ALOGV("%s: Cancelling stream %d prepare", __FUNCTION__, mCurrentStream->getId());
            mCurrentStream.clear();
            mCancelNow = false;
            return true;
        }
    }

    res = mCurrentStream->prepareNextBuffer();
    if (res == NOT_ENOUGH_DATA) return true;
    if (res != OK) {
        // Something bad happened; try to recover by cancelling prepare and
        // signalling listener anyway
        ALOGE("%s: Stream %d returned error %d (%s) during prepare", __FUNCTION__,
                mCurrentStream->getId(), res, strerror(-res));
        mCurrentStream->cancelPrepare();
    }

    // This stream has finished, notify listener
    Mutex::Autolock l(mLock);
    sp<NotificationListener> listener = mListener.promote();
    if (listener != NULL) {
        ALOGV("%s: Stream %d prepare done, signaling listener", __FUNCTION__,
                mCurrentStream->getId());
        listener->notifyPrepared(mCurrentStream->getId());
    }

    ATRACE_ASYNC_END("stream prepare", mCurrentStream->getId());
    mCurrentStream.clear();

    return true;
}
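
The scheduling logic of the PreparerThread — pick one pending stream, call prepareNextBuffer() until it reports no more work, then move on to the next — can be condensed into a small sketch. MiniStream and prepareAll() are illustrative stand-ins, not AOSP types.

```cpp
#include <list>

// Illustrative stream: prepareNextBuffer() pre-allocates one buffer per call
// and returns true while more work remains, loosely mirroring the real
// Camera3Stream::prepareNextBuffer() contract.
struct MiniStream {
    int id;
    int buffersLeft;            // buffers still to pre-allocate
    bool prepareNextBuffer() {
        if (buffersLeft > 0) { --buffersLeft; }
        return buffersLeft > 0;
    }
};

// Drain all pending streams the way threadLoop() does: one buffer per
// iteration, popping a stream once it is fully prepared. Returns the total
// number of prepare calls made.
int prepareAll(std::list<MiniStream>& pending) {
    int calls = 0;
    while (!pending.empty()) {
        MiniStream& s = pending.front();
        ++calls;
        if (!s.prepareNextBuffer()) {   // stream finished preparing
            pending.pop_front();
        }
    }
    return calls;
}
```

In the real thread each iteration returns to threadLoop() so cancellation (mCancelNow) can interleave between buffers, which this condensed loop omits.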

Point 6: STATUS_UNCONFIGURED status updates

Source path:
@frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

void Camera3Device::internalUpdateStatusLocked(Status status) {
    mStatus = status;
    mRecentStatusUpdates.add(mStatus); //> Vector mRecentStatusUpdates
    mStatusChanged.broadcast();  //> Condition mStatusChanged
}
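
The pattern here — record every status change in a history vector and broadcast on a condition variable, so waiters can scan for the state they need rather than only seeing the latest value — can be sketched as follows. MiniDeviceStatus and its methods are illustrative names, not the AOSP class.

```cpp
#include <condition_variable>
#include <mutex>
#include <vector>

enum class Status { UNINITIALIZED, UNCONFIGURED, CONFIGURED, ACTIVE };

class MiniDeviceStatus {
public:
    // internalUpdateStatusLocked() analog: store the new status, append it
    // to the history (mRecentStatusUpdates analog), and broadcast
    // (mStatusChanged.broadcast() analog).
    void update(Status s) {
        std::lock_guard<std::mutex> l(mLock);
        mStatus = s;
        mRecent.push_back(s);
        mChanged.notify_all();
    }
    // Scan the history for a wanted state under the lock, the way waiters
    // check mRecentStatusUpdates so a transient state is not missed.
    bool sawStatus(Status wanted) {
        std::lock_guard<std::mutex> l(mLock);
        for (Status s : mRecent)
            if (s == wanted) return true;
        return false;
    }
    Status current() {
        std::lock_guard<std::mutex> l(mLock);
        return mStatus;
    }
private:
    std::mutex mLock;
    std::condition_variable mChanged;
    Status mStatus = Status::UNINITIALIZED;
    std::vector<Status> mRecent;
};
```

Keeping the history is the point: a waiter that wakes late can still observe that the device passed through, say, UNCONFIGURED, even if it has already moved on.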

The code above covers the core of the Camera3Device class. The author summarizes the capture data flow in prose below, since a full source walkthrough would run far too long.

Summary

(1). Opening the camera configures two pipelines, one for preview and one for still capture. The main difference between them lies in the priority CameraService gives their CaptureRequests: a still-capture Request normally outranks preview. Concretely, while preview Requests are being issued continuously, an incoming capture Request is dispatched first by the RequestThread inside Camera3Device.

(2). Submitting a capture Request to CameraService is handled by CameraDeviceClient's submitRequestList method. That method calls Camera3Device's setStreamingRequestList to hand the request to Camera3Device, which appends it to the RequestThread's fast message queue (RequestQueue) and wakes the RequestThread. Once awakened, the thread dequeues the Request and forwards it to CameraProvider through the previously obtained ICameraDeviceSession proxy's processCaptureRequest_3_4 method. The call is non-blocking, so it returns as soon as submission succeeds, and the App side then waits for the results to come back;

(3). Results are returned asynchronously, in two parts: event callbacks and data callbacks, with the data further split into MetaData and ImageData. After a CaptureRequest is issued, the first thing returned from the CameraProvider side is the ShutterNotify event. Because Camera3Device was registered with CameraProvider as the ICameraDeviceCallback implementation, Camera3Device's notify method delivers the event into the CameraService side. It is then relayed up through successive callbacks, sent out via CameraDeviceClient's notifyShutter method, and finally returned to the Framework layer through the onCaptureStarted method of the CameraDeviceCallbacks interface that the Framework passed in when the device was opened, and from there to the App;

(4). After the ShutterNotify event has been reported, whenever MetaData is generated, CameraProvider delivers it to the CameraService side via ICameraDeviceCallback's processCaptureResult_3_4 method, whose implementation is Camera3Device's processCaptureResult_3_4. Through a chain of calls this reaches sendCaptureResult, which places the Result into a fast message queue (mResultQueue) and notifies the FrameProcessorBase thread to take the Result out and deliver it to CameraDeviceClient; from there, the onResultReceived method of the internal CameraDeviceCallbacks remote proxy uploads the result to the Framework layer and on to the App for processing;

(5). When ImageData is produced, it follows a similar path into Camera3Device, but is handed to Camera3OutputStream via the returnOutputBuffers method. That stream, acting as the producer in the BufferQueue producer-consumer model, calls the producer's queue method to notify the consumer to consume the buffer; the consumer is the Surface owned on the App side by a class such as ImageReader, so the App can finally pull the image data out for post-processing;
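
The producer-consumer handoff in point (5) can be sketched with a minimal queue model. This is an illustrative model only — the real libgui BufferQueue API (dequeue/queue/acquire/release with slots and fences) is considerably richer.

```cpp
#include <deque>
#include <mutex>
#include <optional>

// Illustrative stand-in for a filled graphic buffer.
struct MiniBuffer { int frameNumber; };

class MiniBufferQueue {
public:
    // Producer side (Camera3OutputStream analog): queue a filled buffer.
    void queueBuffer(MiniBuffer b) {
        std::lock_guard<std::mutex> l(mLock);
        mFilled.push_back(b);
    }
    // Consumer side (ImageReader's Surface analog): acquire the next filled
    // buffer, or an empty optional if none is ready yet.
    std::optional<MiniBuffer> acquireBuffer() {
        std::lock_guard<std::mutex> l(mLock);
        if (mFilled.empty()) return std::nullopt;
        MiniBuffer b = mFilled.front();
        mFilled.pop_front();
        return b;
    }
private:
    std::mutex mLock;
    std::deque<MiniBuffer> mFilled;
};
```

The design point is that the camera pipeline never copies image data up through Binder: only buffer ownership moves between producer and consumer, which is a large part of why api2's stream model outperforms api1's callback model.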

These source-reading notes may contain inaccuracies and are offered only as a reference for reading the source yourself; please forgive any omissions or errors. If this article helped or inspired you, please give it a like to encourage the author. Thanks!

Appendix: logcat output from the virtual camera driver

08-24 05:39:54.359 +0000  5250  5250 I CameraManagerGlobal: Connecting to camera service
08-24 05:39:54.361 +0000  3551  3634 V CameraService: addListener: Add listener 0xee299a00
08-24 05:39:54.362 +0000  3551  3634 V CameraService: supportsCameraApi: for camera ID = 0
08-24 05:39:54.362 +0000  3551  3634 V CameraService: supportsCameraApi: Camera id 0 uses HAL3.2 or newer, supports api1/api2 directly
08-24 05:39:54.363 +0000  4730  5292 D OpenGLRenderer: endAllActiveAnimators on 0xe60f222ce800 (RippleDrawable) with handle 0xe60f21d07040
08-24 05:39:54.363 +0000  5250  5250 I Camera2BasicFragment: CameraId= 0 facing= 1
08-24 05:39:54.364 +0000  3551  3634 V CameraService: supportsCameraApi: for camera ID = 1
08-24 05:39:54.364 +0000  3551  3634 V CameraService: supportsCameraApi: Camera id 1 uses HAL3.2 or newer, supports api1/api2 directly
08-24 05:39:54.364 +0000  5250  5250 I Camera2BasicFragment: CameraId= 1 facing= 0
08-24 05:39:54.372 +0000  3551  3634 V CameraService: supportsCameraApi: for camera ID = 1
08-24 05:39:54.372 +0000  3551  3634 V CameraService: supportsCameraApi: Camera id 1 uses HAL3.2 or newer, supports api1/api2 directly
08-24 05:39:54.375 +0000  3551  3634 V CameraService: supportsCameraApi: for camera ID = 1
08-24 05:39:54.375 +0000  3551  3634 V CameraService: supportsCameraApi: Camera id 1 uses HAL3.2 or newer, supports api1/api2 directly
08-24 05:39:54.375 +0000  3551  3634 I CameraService: CameraService::connect call (PID -1 "com.example.android.camera2basic", camera ID 1) for HAL version default and Camera API version 2
08-24 05:39:54.377 +0000  3551  3634 I Camera2ClientBase: Camera 1: Opened. Client: com.example.android.camera2basic (PID 5250, UID 10060)
08-24 05:39:54.377 +0000  3551  3634 V Camera3-Device: Camera3Device: Created device for camera 1
08-24 05:39:54.377 +0000  3551  3634 I CameraDeviceClient: CameraDeviceClient 1: Opened
08-24 05:39:54.377 +0000  3551  3634 V CameraService: startCameraOps: Start camera ops, package name = com.example.android.camera2basic, client UID = 10060
08-24 05:39:54.380 +0000  3551  3634 V CameraService: updateStatus: Status has changed for camera ID 1 from 0x1 to 0xfffffffe
08-24 05:39:54.382 +0000  3551  3634 V Camera3-Device: initialize: Initializing HIDL device for camera 1
08-24 05:39:54.382 +0000  3519  3633 V FslCameraHAL: openDev: module=0xe6ef9008, name=1, device=0xe5e8252c
08-24 05:39:54.382 +0000  3519  3633 I FslCameraHAL: openDev:214: Opening '/dev/video4' camera device
08-24 05:39:54.413 +0000  3551  3634 V Camera3-Device: Session interface chain:
08-24 05:39:54.413 +0000  3551  3634 V Camera3-Device:   android.hardware.camera.device@3.3::ICameraDeviceSession
08-24 05:39:54.413 +0000  3551  3634 V Camera3-Device:   android.hardware.camera.device@3.2::ICameraDeviceSession
08-24 05:39:54.413 +0000  3551  3634 V Camera3-Device:   android.hidl.base@1.0::IBase
08-24 05:39:54.415 +0000  3551  3634 V Camera2-FrameProcessorBase: registerListener: Registering listener for frame id range 0 - 2147483647
08-24 05:39:54.418 +0000  3551  3634 V CameraDeviceClient: createDefaultRequest (templateId = 0x1)
08-24 05:39:54.418 +0000  3551  3634 V Camera3-Device: createDefaultRequest: for template 1
08-24 05:39:54.419 +0000  3519  3632 I FslCameraHAL: constructDefaultRequestSettings():1: type=1

08-24 05:39:54.422 +0000  3551  3634 V CameraDeviceClient: waitUntilIdle
08-24 05:39:54.422 +0000  3551  3634 V Camera3-Device: waitUntilDrainedLocked: Already idle

08-24 05:39:54.422 +0000  3551  3634 V CameraDeviceClient: waitUntilIdle Done
08-24 05:39:54.422 +0000  3551  3634 I CameraDeviceClient: beginConfigure: Not implemented yet.
08-24 05:39:54.423 +0000  3551  3634 W CameraDeviceClient: createSurfaceFromGbp: Camera 1 with consumer usage flag: 256: Forcing asynchronous mode for stream
08-24 05:39:54.423 +0000  3551  3634 W CameraDeviceClient: createSurfaceFromGbp: Camera 1: Overriding format 0x1 to IMPLEMENTATION_DEFINED

08-24 05:39:54.423 +0000  3551  3634 V Camera3-Device: Camera 1: Creating new stream 0: 720 x 480, format 34, dataspace 0 rotation 0 consumer usage 0, isShared 0
08-24 05:39:54.423 +0000  3551  3634 V Camera3-Device: Camera 1: Created new stream

08-24 05:39:54.423 +0000  3551  3634 V CameraDeviceClient: createStream: mStreamMap add binder 0xee2acc80 streamId 0, surfaceId 0
08-24 05:39:54.423 +0000  3551  3634 V CameraDeviceClient: createStream: Camera 1: Successfully created a new stream ID 0 for output surface (720 x 480) with format 0x22.
08-24 05:39:54.423 +0000  3551  3634 V CameraDeviceClient: getRotationTransformLocked: begin

08-24 05:39:54.424 +0000  3551  3634 V Camera3-Device: Camera 1: Creating new stream 1: 720 x 480, format 33, dataspace 146931712 rotation 0 consumer usage 0, isShared 0
08-24 05:39:54.424 +0000  3551  3634 V Camera3-Device: Camera 1: Created new stream
08-24 05:39:54.424 +0000  3551  3634 V CameraDeviceClient: createStream: mStreamMap add binder 0xee2acd00 streamId 1, surfaceId 0
08-24 05:39:54.424 +0000  3551  3634 V CameraDeviceClient: createStream: Camera 1: Successfully created a new stream ID 1 for output surface (720 x 480) with format 0x21.
08-24 05:39:54.424 +0000  3551  3634 V CameraDeviceClient: getRotationTransformLocked: begin
08-24 05:39:54.424 +0000  3551  3634 I CameraDeviceClient: endConfigure: ending configure (0 input stream, 2 output surfaces)

08-24 05:39:54.424 +0000  3551  3634 V Camera3-Device: configureStreams: E
08-24 05:39:54.424 +0000  3551  3634 V Camera3-Device: configureStreamsLocked: Camera 1: Starting stream configuration
08-24 05:39:54.425 +0000  3551  3634 V Camera3-Device: configureStreams: v3.3 device found
08-24 05:39:54.425 +0000  3519  3632 I FslCameraHAL: configureStreams():1: stream_config 0xe5f7f418, num 2, streams 0xe75171e0, mode 0
08-24 05:39:54.425 +0000  3519  3632 I FslCameraHAL: config 0, type 0, res 720x480, fmt 0x22, usage 0x100, maxbufs 0, priv 0x0, rotation 0
08-24 05:39:54.425 +0000  3519  3632 I FslCameraHAL: config 1, type 0, res 720x480, fmt 0x21, usage 0x3, maxbufs 0, priv 0x0, rotation 0

08-24 05:39:54.425 +0000  3519  3632 I FslCameraHAL: Stream >> create preview stream
08-24 05:39:54.425 +0000  3519  3632 I FslCameraHAL: Stream >> stream: w:720, h:480, format:0x0, usage:0x20302, buffers:3
08-24 05:39:54.425 +0000  3519  3632 I FslCameraHAL: Stream >> new JpegBuilder() 
08-24 05:39:54.425 +0000  3519  3632 I FslCameraHAL: Stream create capture stream
08-24 05:39:54.425 +0000  3519  3632 I FslCameraHAL: Stream >> stream: w:720, h:480, format:0x21, usage:0x20303, buffers:2
08-24 05:39:54.425 +0000  3519  3632 I FslCameraHAL: Stream >> new JpegBuilder() 


08-24 05:39:54.425 +0000  3551  3634 E Camera3-Device: configureStreams: Stream 1: DataSpace override not allowed for format 0x21

08-24 05:39:54.428 +0000  3551  3634 D Camera3-Device: Set real time priority for request queue thread (tid 5545)
08-24 05:39:54.428 +0000  3551  3634 V Camera3-Device: configureStreamsLocked: Camera 1: Stream configuration complete
08-24 05:39:54.440 +0000  3551  3634 I CameraDeviceClient: submitRequestList-start of function. Request list size 1
08-24 05:39:54.440 +0000  3551  3634 V CameraDeviceClient: submitRequestList: Camera 1: Appending output stream 0 surface 0 to request
08-24 05:39:54.440 +0000  3551  3634 V CameraDeviceClient: submitRequestList: Camera 1: Creating request with ID 0 (1 of 1)
08-24 05:39:54.440 +0000  3551  3634 V Camera3-Device: convertMetadataListToRequestListLocked: requestId = 0
08-24 05:39:54.440 +0000  3551  3634 V Camera3-Device: unpauseForNewRequests: RequestThread: Going active
08-24 05:39:54.440 +0000  3551  5545 V Camera3-Device: prepareHalRequests: Request (frame num 0) had AF trigger 0x0
08-24 05:39:54.440 +0000  3551  5544 V Camera3-Device: notifyStatus: Camera 1: Now active
08-24 05:39:54.440 +0000  3551  3634 V Camera3-Device: Camera 1: Capture request 0 enqueued
08-24 05:39:54.440 +0000  3551  3634 V CameraDeviceClient: submitRequestList: Camera 1: End of function

08-24 05:39:54.481 +0000  3551  5545 I display : open gpu gralloc module!
08-24 05:39:54.486 +0000  3551  5545 V Camera3-Device: stream 0 now have 1 buffer caches, buf 0xedea03c0

08-24 05:39:54.494 +0000  3519  3632 E FslCameraHAL: configure: invalid stream parameters
08-24 05:39:54.494 +0000  3519  3620 E FslCameraHAL: invalid state:0x201 go into start state
08-24 05:39:54.496 +0000  3551  5546 V Camera2-FrameProcessorBase: processNewFrames: Camera 1: Process new frames
08-24 05:39:54.496 +0000  3551  5546 V Camera2-FrameProcessorBase: processSingleFrame: Camera 1: Process single frame (is empty? 0)
08-24 05:39:54.496 +0000  3551  5546 V Camera2-FrameProcessorBase: processListeners: Camera 1: Got 1 range listeners out of 1
08-24 05:39:54.496 +0000  3551  5546 V CameraDeviceClient: onResultAvailable
08-24 05:39:54.499 +0000  3551  5545 V Camera3-Device: stream 0 now have 2 buffer caches, buf 0xedea0780

08-24 05:39:54.523 +0000  3517  5551 W StreamHAL: Error from HAL stream in function get_presentation_position: Operation not permitted
08-24 05:39:54.523 +0000  3517  5551 W StreamHAL: Error from HAL stream in function get_presentation_position: Operation not permitted

