CamX ConfigureStreams Flow

Table of Contents

    • frameworks/
    • hardware/interfaces/camera/
  • 1. configure_streams Flow
    • vendor\qcom\proprietary\camx\
    • 1.1 ConfigureStreams
    • 1.2 QueryCHIModuleOverride
    • 1.3 chi_initialize_override_session
    • 1.4 chxextensionmodule.cpp-->InitializeOverrideSession
    • 1.5 chxusecaseutils.cpp-->GetMatchingUsecase
    • return usecaseId
    • 1.6 UsecaseFactory::CreateUsecaseObject
    • 1.7 AdvancedCameraUsecase
    • 1.8 AdvancedCameraUsecase::Initialize
    • 1.9 AdvancedCameraUsecase::PreUsecaseSelection(FeatureSetup)
    • 1.10 AdvancedCameraUsecase::SelectFeatures
    • 1.11 CameraUsecaseBase::Initialize
    • 1.12 chxfeaturezsl::Create
    • 1.13 CameraUsecaseBase::CreatePipeline
    • 1.14 Pipeline::Create
    • 1.15 Pipeline::Initialize
    • 1.16 Pipeline::SetupRealtimePreviewPipelineDescriptor
    • 1.17 Pipeline::CreateDescriptor
    • chxextensionmodule.cpp-->CreatePipelineDescriptor
    • camxchi.cpp-->ChiCreatePipelineDescriptor
    • camxchicontext.cpp-->CreatePipelineDescriptor
    • camxsession.cpp-->Create
    • camxpipeline.cpp-->Initialize
    • camxpipeline.cpp-->CreateNodes
    • camxnode.cpp-->Create
    • camxnode.cpp-->Initialize
    • camxhwfactory.cpp-->CreateNode
    • camxtitan17xfactory.cpp-->HwCreateNode
    • create pipeline ok
    • CameraUsecaseBase::StartDeferThread()

frameworks/

frameworks/base/core/java/android/hardware/camera2/CameraDevice.java

public void createCaptureSession(
        SessionConfiguration config) throws CameraAccessException {
    throw new UnsupportedOperationException("No default implementation");
}

frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp

binder::Status CameraDeviceClient::endConfigure(int operatingMode,
        const hardware::camera2::impl::CameraMetadataNative& sessionParams,
        std::vector<int>* offlineStreamIds /*out*/) {
        // hand the stream configuration off to Camera3Device::configureStreams
        status_t err = mDevice->configureStreams(sessionParams, operatingMode);
}

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::HalInterface::configureStreams(const camera_metadata_t *sessionParams,
        camera3_stream_configuration *config, const std::vector<uint32_t>& bufferSizes) {
         auto err = mHidlSession_3_4->configureStreams_3_4(requestedConfiguration3_4, configStream34Cb);
}


hardware/interfaces/camera/

hardware/interfaces/camera/device/3.4/default/CameraDeviceSession.cpp

Return<void> CameraDeviceSession::configureStreams_3_4(
        const StreamConfiguration& requestedConfiguration,
        ICameraDeviceSession::configureStreams_3_4_cb _hidl_cb)  {
    configureStreams_3_4_Impl(requestedConfiguration, _hidl_cb);
    return Void();
}

void CameraDeviceSession::configureStreams_3_4_Impl(
        const StreamConfiguration& requestedConfiguration,
        ICameraDeviceSession::configureStreams_3_4_cb _hidl_cb,
        uint32_t streamConfigCounter, bool useOverriddenFields)  {
        // dispatch into the camera HAL (CamX) via the device's ops table
        status_t ret = mDevice->ops->configure_streams(mDevice, &stream_list);
}

1. configure_streams Flow

The HAL-layer configure_streams(…) method actually dispatches to ConfigureStreams(…) for handling.
vendor/qcom/proprietary/camx/src/core/hal/camxhal3.cpp
When the camera app starts, after acquiring and opening the camera device, the app calls CameraDevice.createCaptureSession to obtain a CameraDeviceSession. Through the Camera API v2 standard interface this notifies Camera Service, which invokes its CameraDeviceClient.endConfigure method. That method in turn uses the HIDL interface ICameraDeviceSession::configureStreams_3_4 to tell the Provider to handle this configuration request. Inside the Provider, the configure_streams method of the camera3_device_t structure (obtained during the open flow) passes the stream configuration down into CamX-CHI, which then completes the actual stream configuration work.
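
Condensed, the whole configuration call chain looks like this (a roadmap assembled from the code excerpts in this article; HIDL transport details omitted):

// App (Java)      CameraDevice.createCaptureSession(config)
// CameraService   CameraDeviceClient::endConfigure()
//                   -> Camera3Device::configureStreams()
//                     -> HalInterface::configureStreams()   // HIDL configureStreams_3_4
// Provider        CameraDeviceSession::configureStreams_3_4_Impl()
//                   -> mDevice->ops->configure_streams(mDevice, &stream_list)
// CamX            camxhal3entry.cpp: configure_streams()
//                   -> HALDevice::ConfigureStreams() -> CHI override (InitializeOverrideSession)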

First, the following concepts need to be understood:

UseCase: vendor/qcom/proprietary/chi-cdk/vendor/chioverride/default/chxusecase.h (a class diagram is shown there). Usecase has many derived classes in CamX: CamX creates a different usecase object for each kind of stream combination, and the object manages feature selection and creates the pipelines and sessions.

1. The classes CameraUsecaseBase, UsecaseDefault, UsecaseDualCamera, UsecaseQuadCFA, UsecaseTorch, and UsecaseMultiVRCamera all derive from the common base class Usecase.
2. The class AdvancedCameraUsecase derives from CameraUsecaseBase.
3. The class UsecaseMultiCamera derives from AdvancedCameraUsecase.
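
As a minimal C++ sketch of that inheritance (illustrative only; the real declarations with their full state live in chxusecase.h and chxadvancedcamerausecase.h):

class Usecase { /* base: stream config, request/result handling */ };

class CameraUsecaseBase    : public Usecase {};             // pipeline/session bookkeeping
class UsecaseDefault       : public Usecase {};
class UsecaseDualCamera    : public Usecase {};
class UsecaseQuadCFA       : public Usecase {};
class UsecaseTorch         : public Usecase {};
class UsecaseMultiVRCamera : public Usecase {};

class AdvancedCameraUsecase : public CameraUsecaseBase {};  // feature selection, ZSL
class UsecaseMultiCamera    : public AdvancedCameraUsecase {};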

ChiFeature: vendor/qcom/proprietary/chi-cdk/vendor/chioverride/default/chxfeature.h. The usecase selects the appropriate features, each associated with a group of pipelines; when a request arrives, the matching feature is chosen for that request.

Node: vendor/qcom/proprietary/camx/src/core/camxnode.h (class diagram below). Node is a very important parent class in CamX: an intermediate stage that processes the camera requests a pipeline dispatches to it. The more important Node subclasses are marked in the class diagram.

pipeline: an ordered collection of Nodes; a request issued to a pipeline is dispatched to each Node for processing.

session: a collection of related pipelines; it manages those pipelines and uses them to process requests.
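
The containment between Node, pipeline, and session can be sketched like this (simplified illustrative types; the real classes are in camxnode.h, camxpipeline.h, and camxsession.h):

#include <vector>

struct Node     { /* one processing stage, e.g. Sensor, IFE, IPE, JPEG */ };

struct Pipeline
{
    std::vector<Node*> nodes;         // ordered chain of Nodes a request flows through
};

struct Session
{
    std::vector<Pipeline*> pipelines; // several related pipelines managed as one unit
};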

AdvancedCameraUsecase is the most commonly used usecase. Below, a call-flow walk-through and the details of the functions invoked along the way explain the important parts of this usecase; this is also the overall flow from configure_streams to the UseCase.

vendor\qcom\proprietary\camx\

vendor\qcom\proprietary\camx\src\core\hal\camxhal3entry.cpp


////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// configure_streams
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

int configure_streams(
    const struct camera3_device*    pCamera3DeviceAPI,
    camera3_stream_configuration_t* pStreamConfigsAPI)
{
    JumpTableHAL3* pHAL3 = static_cast<JumpTableHAL3*>(g_dispatchHAL3.GetJumpTable());

    CAMX_ASSERT(pHAL3);
    CAMX_ASSERT(pHAL3->configure_streams);

    return pHAL3->configure_streams(pCamera3DeviceAPI, pStreamConfigsAPI);
}
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// configure_streams
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
static int configure_streams(
    const struct camera3_device*    pCamera3DeviceAPI,
    camera3_stream_configuration_t* pStreamConfigsAPI)
{
    CAMX_ENTRYEXIT_SCOPE(CamxLogGroupHAL, SCOPEEventHAL3ConfigureStreams);

    CamxResult result = CamxResultSuccess;

    CAMX_ASSERT(NULL != pCamera3DeviceAPI);
    CAMX_ASSERT(NULL != pCamera3DeviceAPI->priv);
    CAMX_ASSERT(NULL != pStreamConfigsAPI);
    CAMX_ASSERT(pStreamConfigsAPI->num_streams > 0);
    CAMX_ASSERT(NULL != pStreamConfigsAPI->streams);

    if ((NULL != pCamera3DeviceAPI)          &&
        (NULL != pCamera3DeviceAPI->priv)    &&
        (NULL != pStreamConfigsAPI)          &&
        (pStreamConfigsAPI->num_streams > 0) &&
        (NULL != pStreamConfigsAPI->streams))
    {
        HALDevice* pHALDevice = GetHALDevice(pCamera3DeviceAPI);
        uint32_t numStreams      = pStreamConfigsAPI->num_streams;
        UINT32   logicalCameraId = pHALDevice->GetCameraId();
        UINT32   cameraId        = pHALDevice->GetFwCameraId();
        BINARY_LOG(LogEvent::HAL3_ConfigSetup, numStreams, logicalCameraId, cameraId);
        for (UINT32 stream = 0; stream < pStreamConfigsAPI->num_streams; stream++)
        {
            CAMX_ASSERT(NULL != pStreamConfigsAPI->streams[stream]);
            if (NULL == pStreamConfigsAPI->streams[stream])
            {
                CAMX_LOG_ERROR(CamxLogGroupHAL, "Invalid argument 2 for configure_streams()");
                // HAL interface requires -EINVAL (EInvalidArg) for invalid arguments
                result = CamxResultEInvalidArg;
                break;
            }
            else
            {
                camera3_stream_t& rConfigStream = *pStreamConfigsAPI->streams[stream];
                BINARY_LOG(LogEvent::HAL3_StreamInfo, rConfigStream);
                CAMX_LOG_INFO(CamxLogGroupHAL, "  stream[%d] = %p - info:", stream,
                    pStreamConfigsAPI->streams[stream]);
                CAMX_LOG_INFO(CamxLogGroupHAL, "            format       : %d, %s",
                    pStreamConfigsAPI->streams[stream]->format,
                    FormatToString(pStreamConfigsAPI->streams[stream]->format));
                CAMX_LOG_INFO(CamxLogGroupHAL, "            width        : %d",
                    pStreamConfigsAPI->streams[stream]->width);
                CAMX_LOG_INFO(CamxLogGroupHAL, "            height       : %d",
                    pStreamConfigsAPI->streams[stream]->height);
                CAMX_LOG_INFO(CamxLogGroupHAL, "            stream_type  : %08x, %s",
                    pStreamConfigsAPI->streams[stream]->stream_type,
                    StreamTypeToString(pStreamConfigsAPI->streams[stream]->stream_type));
                CAMX_LOG_INFO(CamxLogGroupHAL, "            usage        : %08x",
                    pStreamConfigsAPI->streams[stream]->usage);
                CAMX_LOG_INFO(CamxLogGroupHAL, "            max_buffers  : %d",
                    pStreamConfigsAPI->streams[stream]->max_buffers);
                CAMX_LOG_INFO(CamxLogGroupHAL, "            rotation     : %08x, %s",
                    pStreamConfigsAPI->streams[stream]->rotation,
                    RotationToString(pStreamConfigsAPI->streams[stream]->rotation));
                CAMX_LOG_INFO(CamxLogGroupHAL, "            data_space   : %08x, %s",
                    pStreamConfigsAPI->streams[stream]->data_space,
                    DataSpaceToString(pStreamConfigsAPI->streams[stream]->data_space));
                CAMX_LOG_INFO(CamxLogGroupHAL, "            priv         : %p",
                    pStreamConfigsAPI->streams[stream]->priv);
                pStreamConfigsAPI->streams[stream]->reserved[0] = NULL;
                pStreamConfigsAPI->streams[stream]->reserved[1] = NULL;
            }
        }
        CAMX_LOG_INFO(CamxLogGroupHAL, "  operation_mode: %d", pStreamConfigsAPI->operation_mode);
        Camera3StreamConfig* pStreamConfigs = reinterpret_cast<Camera3StreamConfig*>(pStreamConfigsAPI);
        //configureStreams
        result = pHALDevice->ConfigureStreams(pStreamConfigs);
        if ((CamxResultSuccess != result) && (CamxResultEInvalidArg != result))
        {
            // HAL interface requires -ENODEV (EFailed) if a fatal error occurs
            result = CamxResultEFailed;
        }

        if (CamxResultSuccess == result)
        {
            for (UINT32 stream = 0; stream < pStreamConfigsAPI->num_streams; stream++)
            {
                CAMX_ASSERT(NULL != pStreamConfigsAPI->streams[stream]);

                if (NULL == pStreamConfigsAPI->streams[stream])
                {
                    CAMX_LOG_ERROR(CamxLogGroupHAL, "Invalid argument 2 for configure_streams()");
                    // HAL interface requires -EINVAL (EInvalidArg) for invalid arguments
                    result = CamxResultEInvalidArg;
                    break;
                }
                else
                {
                    CAMX_LOG_CONFIG(CamxLogGroupHAL, " FINAL stream[%d] = %p - info:", stream,
                        pStreamConfigsAPI->streams[stream]);
                    CAMX_LOG_CONFIG(CamxLogGroupHAL, "            format       : %d, %s",
                        pStreamConfigsAPI->streams[stream]->format,
                        FormatToString(pStreamConfigsAPI->streams[stream]->format));
                    CAMX_LOG_CONFIG(CamxLogGroupHAL, "            width        : %d",
                        pStreamConfigsAPI->streams[stream]->width);
                    CAMX_LOG_CONFIG(CamxLogGroupHAL, "            height       : %d",
                        pStreamConfigsAPI->streams[stream]->height);
                    CAMX_LOG_CONFIG(CamxLogGroupHAL, "            stream_type  : %08x, %s",
                        pStreamConfigsAPI->streams[stream]->stream_type,
                        StreamTypeToString(pStreamConfigsAPI->streams[stream]->stream_type));
                    CAMX_LOG_CONFIG(CamxLogGroupHAL, "            usage        : %08x",
                        pStreamConfigsAPI->streams[stream]->usage);
                    CAMX_LOG_CONFIG(CamxLogGroupHAL, "            max_buffers  : %d",
                        pStreamConfigsAPI->streams[stream]->max_buffers);
                    CAMX_LOG_CONFIG(CamxLogGroupHAL, "            rotation     : %08x, %s",
                        pStreamConfigsAPI->streams[stream]->rotation,
                        RotationToString(pStreamConfigsAPI->streams[stream]->rotation));
                    CAMX_LOG_CONFIG(CamxLogGroupHAL, "            data_space   : %08x, %s",
                        pStreamConfigsAPI->streams[stream]->data_space,
                        DataSpaceToString(pStreamConfigsAPI->streams[stream]->data_space));
                    CAMX_LOG_CONFIG(CamxLogGroupHAL, "            priv         : %p",
                        pStreamConfigsAPI->streams[stream]->priv);
                    CAMX_LOG_CONFIG(CamxLogGroupHAL, "            reserved[0]         : %p",
                        pStreamConfigsAPI->streams[stream]->reserved[0]);
                    CAMX_LOG_CONFIG(CamxLogGroupHAL, "            reserved[1]         : %p",
                        pStreamConfigsAPI->streams[stream]->reserved[1]);

                    Camera3HalStream* pHalStream =
                        reinterpret_cast<Camera3HalStream*>(pStreamConfigsAPI->streams[stream]->reserved[0]);
                    if (pHalStream != NULL)
                    {
                        if (TRUE == HwEnvironment::GetInstance()->GetStaticSettings()->enableHALFormatOverride)
                        {
                            pStreamConfigsAPI->streams[stream]->format =
                                static_cast<HALPixelFormat>(pHalStream->overrideFormat);
                        }
                     }
                }
            }
        }
    }
    else
    {
        CAMX_LOG_ERROR(CamxLogGroupHAL, "Invalid argument(s) for configure_streams()");
        // HAL interface requires -EINVAL (EInvalidArg) for invalid arguments
        result = CamxResultEInvalidArg;
    }

    return Utils::CamxResultToErrno(result);
}

1.1 ConfigureStreams

vendor/qcom/proprietary/camx/src/core/hal/camxhal3.cpp

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// HALDevice::ConfigureStreams
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CamxResult HALDevice::ConfigureStreams(
    Camera3StreamConfig* pStreamConfigs)
{
    CamxResult result = CamxResultSuccess;
    // Validate the incoming stream configurations
    result = CheckValidStreamConfig(pStreamConfigs);
    if ((StreamConfigModeConstrainedHighSpeed == pStreamConfigs->operationMode) ||
        (StreamConfigModeSuperSlowMotionFRC == pStreamConfigs->operationMode))
    {
        SearchNumBatchedFrames (pStreamConfigs, &m_usecaseNumBatchedFrames, &m_FPSValue);
        CAMX_ASSERT(m_usecaseNumBatchedFrames > 1);
    }
    else
    {
        // Not a HFR usecase batch frames value need to set to 1.
        m_usecaseNumBatchedFrames = 1;
    }

    if (CamxResultSuccess == result)
    {
        if (TRUE == m_bCHIModuleInitialized)
        {
            GetCHIAppCallbacks()->chi_teardown_override_session(reinterpret_cast<camera3_device*>(&m_camera3Device), 0, NULL);
            ReleaseStreamConfig();
            DeInitRequestLogger();
        }
        m_bCHIModuleInitialized = CHIModuleInitialize(pStreamConfigs);
        ClearFrameworkRequestBuffer();
        if (FALSE == m_bCHIModuleInitialized)
        {
            CAMX_LOG_ERROR(CamxLogGroupHAL, "CHI Module failed to configure streams");
            result = CamxResultEFailed;
        }
        else
        {
            result = SaveStreamConfig(pStreamConfigs);
            result = InitializeRequestLogger();
            CAMX_LOG_VERBOSE(CamxLogGroupHAL, "CHI Module configured streams ... CHI is in control!");
        }
    }
    return result;
}

1.2 QueryCHIModuleOverride

1.3 chi_initialize_override_session

vendor/qcom/proprietary/chi-cdk/core/chiframework/chxextensioninterface.cpp
The HAL3Module constructor looks up chi_hal_override_entry and calls it; that function initializes all of the CHI override callbacks, chi_initialize_override_session among them. When the HAL receives configure_streams() from the framework, chi_initialize_override_session is invoked; see HALDevice::ConfigureStreams, which calls CHIModuleInitialize with the Camera3StreamConfig as its parameter.
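
A minimal sketch of that bootstrap, assuming the override library name and a simplified callback-table type (the real table is chi_hal_callback_ops_t, declared in chi.h):

#include <dlfcn.h>

struct chi_hal_callback_ops_t;  // simplified forward declaration for this sketch
typedef void (*CHIHALOverrideEntry)(chi_hal_callback_ops_t* pCallbacks);

static void LoadChiOverrideCallbacks(chi_hal_callback_ops_t* pCallbacks)
{
    void* hModule = dlopen("com.qti.chi.override.so", RTLD_NOW); // assumed library name
    if (NULL != hModule)
    {
        CHIHALOverrideEntry pEntry = reinterpret_cast<CHIHALOverrideEntry>(
            dlsym(hModule, "chi_hal_override_entry"));
        if (NULL != pEntry)
        {
            pEntry(pCallbacks); // fills in chi_initialize_override_session and the other callbacks
        }
    }
}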

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// @brief Main entry point
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
static CDKResult chi_initialize_override_session(
    uint32_t                        cameraId,
    const camera3_device_t*         camera3_device,
    const chi_hal_ops_t*            chiHalOps,
    camera3_stream_configuration_t* stream_config,
    int*                            override_config,
    void**                          priv)
{
    ExtensionModule* pExtensionModule = ExtensionModule::GetInstance();

    pExtensionModule->InitializeOverrideSession(cameraId, camera3_device, chiHalOps, stream_config, override_config, priv);

    return CDKResultSuccess;
}

1.4 chxextensionmodule.cpp-->InitializeOverrideSession

File location: vendor/qcom/proprietary/chi-cdk/core/chiframework/chxextensionmodule.cpp

On the first call there is no usecase object yet, so one has to be created.
Before calling SetHALOps and GetMatchingUsecase, this function checks stream_config, the operation mode, and other configuration items; GetMatchingUsecase returns selectedUsecaseId. If selectedUsecaseId is valid, CreateUsecaseObject is called.

1. Get the UsecaseId

// GetMatchingUsecase
selectedUsecaseId = m_pUsecaseSelector->GetMatchingUsecase(&m_logicalCameraInfo[logicalCameraId],
                                                           pStreamConfig);

2. Create the usecase object

m_pSelectedUsecase[logicalCameraId] =
    m_pUsecaseFactory->CreateUsecaseObject(&m_logicalCameraInfo[logicalCameraId],
                                           selectedUsecaseId, pStreamConfig);

The code in full:

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// ExtensionModule::InitializeOverrideSession
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult ExtensionModule::InitializeOverrideSession(
    uint32_t                        logicalCameraId,
    const camera3_device_t*         pCamera3Device,
    const chi_hal_ops_t*            chiHalOps,
    camera3_stream_configuration_t* pStreamConfig,
    int*                            pIsOverrideEnabled,
    VOID**                          pPrivate)
{
    CDKResult          result             = CDKResultSuccess;
    UINT32             modeCount          = 0;
    ChiSensorModeInfo* pAllModes          = NULL;
    UINT32             fps                = *m_pDefaultMaxFPS;
    BOOL               isVideoMode        = FALSE;
    uint32_t           operation_mode;
    static BOOL        fovcModeCheck      = EnableFOVCUseCase();
    UsecaseId          selectedUsecaseId  = UsecaseId::NoMatch;
    UINT               minSessionFps      = 0;
    UINT               maxSessionFps      = 0;
    CDKResult          tagOpResult        = CDKResultEFailed;
    ChiBLMParams       blmParams;

    *pPrivate             = NULL;
    *pIsOverrideEnabled   = FALSE;
    m_aFlushInProgress[logicalCameraId] = FALSE;
    m_firstResult                       = FALSE;
    m_hasFlushOccurred[logicalCameraId] = FALSE;
    blmParams.height                    = 0;
    blmParams.width                     = 0;

    if (NULL == m_hCHIContext)
    {
        m_hCHIContext = g_chiContextOps.pOpenContext();
    }

    ChiVendorTagsOps vendorTagOps = { 0 };
    g_chiContextOps.pTagOps(&vendorTagOps);
    operation_mode                = pStreamConfig->operation_mode >> 16;
    operation_mode                = operation_mode & 0x000F;
    pStreamConfig->operation_mode = pStreamConfig->operation_mode & 0xFFFF;

    UINT numOutputStreams = 0;
    for (UINT32 stream = 0; stream < pStreamConfig->num_streams; stream++)
    {
        if (0 != (pStreamConfig->streams[stream]->usage & GrallocUsageHwVideoEncoder))
        {
            isVideoMode = TRUE;

            if((pStreamConfig->streams[stream]->height * pStreamConfig->streams[stream]->width) >
             (blmParams.height * blmParams.width))
            {
                blmParams.height = pStreamConfig->streams[stream]->height;
                blmParams.width  = pStreamConfig->streams[stream]->width;
            }
        }

        if (CAMERA3_STREAM_OUTPUT == pStreamConfig->streams[stream]->stream_type)
        {
            numOutputStreams++;
        }

        //If video stream not present in that case store Preview/Snapshot Stream info
        if((pStreamConfig->streams[stream]->height > blmParams.height) &&
            (pStreamConfig->streams[stream]->width > blmParams.width)  &&
            (isVideoMode == FALSE))
        {
            blmParams.height = pStreamConfig->streams[stream]->height;
            blmParams.width  = pStreamConfig->streams[stream]->width;
        }
    }

    if (numOutputStreams > MaxExternalBuffers)
    {
        CHX_LOG_ERROR("numOutputStreams(%u) greater than MaxExternalBuffers(%u)", numOutputStreams, MaxExternalBuffers);
        result = CDKResultENotImplemented;
    }

    if ((isVideoMode == TRUE) && (operation_mode != 0))
    {
        UINT32             numSensorModes  = m_logicalCameraInfo[logicalCameraId].m_cameraCaps.numSensorModes;
        CHISENSORMODEINFO* pAllSensorModes = m_logicalCameraInfo[logicalCameraId].pSensorModeInfo;

        if ((operation_mode - 1) >= numSensorModes)
        {
            result = CDKResultEOverflow;
            CHX_LOG_ERROR("operation_mode: %d, numSensorModes: %d", operation_mode, numSensorModes);
        }
        else
        {
            fps = pAllSensorModes[operation_mode - 1].frameRate;
        }
    }

    if (CDKResultSuccess == result)
    {
#if defined(CAMX_ANDROID_API) && (CAMX_ANDROID_API >= 28) //Android-P or better
        camera_metadata_t *metadata = const_cast<camera_metadata_t*>(pStreamConfig->session_parameters);

        camera_metadata_entry_t entry = { 0 };

        // The client may choose to send NULL sesssion parameter, which is fine. For example, torch mode
        // will have NULL session param.
        if (metadata != NULL)
        {
            entry.tag = ANDROID_CONTROL_AE_TARGET_FPS_RANGE;

            int ret = find_camera_metadata_entry(metadata, entry.tag, &entry);

            if(ret == 0) {
                minSessionFps = entry.data.i32[0];
                maxSessionFps = entry.data.i32[1];
                m_usecaseMaxFPS = maxSessionFps;
            }
        }

        CHITAGSOPS   tagOps       = { 0 };
        UINT32       tagLocation  = 0;

        g_chiContextOps.pTagOps(&tagOps);

        tagOpResult = tagOps.pQueryVendorTagLocation(
            "org.codeaurora.qcamera3.sessionParameters",
            "availableStreamMap",
            &tagLocation);

        if (CDKResultSuccess == tagOpResult)
        {
            camera_metadata_entry_t entry = { 0 };

            if (metadata != NULL)
            {
                int ret = find_camera_metadata_entry(metadata, tagLocation, &entry);
            }
        }

        tagOpResult = tagOps.pQueryVendorTagLocation(
            "org.codeaurora.qcamera3.sessionParameters",
            "overrideResourceCostValidation",
            &tagLocation);

        if ((NULL != metadata) && (CDKResultSuccess == tagOpResult))
        {
            camera_metadata_entry_t resourcecostEntry = { 0 };

            if (0 == find_camera_metadata_entry(metadata, tagLocation, &resourcecostEntry))
            {
                BOOL bypassRCV = static_cast<BOOL>(resourcecostEntry.data.u8[0]);

                if (TRUE == bypassRCV)
                {
                    m_pResourcesUsedLock->Lock();
                    m_logicalCameraRCVBypassSet.insert(logicalCameraId);
                    m_pResourcesUsedLock->Unlock();
                }
            }
        }

#endif

        CHIHANDLE    staticMetaDataHandle = const_cast<camera_metadata_t*>(
                                            m_logicalCameraInfo[logicalCameraId].m_cameraInfo.static_camera_characteristics);
        UINT32       metaTagPreviewFPS    = 0;
        UINT32       metaTagVideoFPS      = 0;

        m_previewFPS           = 0;
        m_videoFPS             = 0;
        GetInstance()->GetVendorTagOps(&vendorTagOps);

        result = vendorTagOps.pQueryVendorTagLocation("org.quic.camera2.streamBasedFPS.info", "PreviewFPS",
                                                      &metaTagPreviewFPS);
        if (CDKResultSuccess == result)
        {
            vendorTagOps.pGetMetaData(staticMetaDataHandle, metaTagPreviewFPS, &m_previewFPS,
                                      sizeof(m_previewFPS));
        }

        result = vendorTagOps.pQueryVendorTagLocation("org.quic.camera2.streamBasedFPS.info", "VideoFPS", &metaTagVideoFPS);
        if (CDKResultSuccess == result)
        {
            vendorTagOps.pGetMetaData(staticMetaDataHandle, metaTagVideoFPS, &m_videoFPS,
                                      sizeof(m_videoFPS));
        }

        if ((StreamConfigModeConstrainedHighSpeed == pStreamConfig->operation_mode) ||
            (StreamConfigModeSuperSlowMotionFRC == pStreamConfig->operation_mode))
        {
            if ((StreamConfigModeConstrainedHighSpeed == pStreamConfig->operation_mode) &&
                (30 >= maxSessionFps))
            {
                minSessionFps   = DefaultFrameRateforHighSpeedSession;
                maxSessionFps   = DefaultFrameRateforHighSpeedSession;
                m_usecaseMaxFPS = maxSessionFps;

                CHX_LOG_INFO("minSessionFps = %d maxSessionFps = %d", minSessionFps, maxSessionFps);
            }

            SearchNumBatchedFrames(logicalCameraId, pStreamConfig,
                                   &m_usecaseNumBatchedFrames, &m_HALOutputBufferCombined,
                                   &m_usecaseMaxFPS, maxSessionFps);
            if (480 > m_usecaseMaxFPS)
            {
                m_CurrentpowerHint = PERF_LOCK_POWER_HINT_VIDEO_ENCODE_HFR;
            }
            else
            {
                // For 480FPS or higher, require more aggresive power hint
                m_CurrentpowerHint = PERF_LOCK_POWER_HINT_VIDEO_ENCODE_HFR_480FPS;
            }
        }
        else
        {
            // Not a HFR usecase, batch frames value need to be set to 1.
            m_usecaseNumBatchedFrames = 1;
            m_HALOutputBufferCombined = FALSE;
            if (maxSessionFps == 0)
            {
                m_usecaseMaxFPS = fps;
            }
            if (TRUE == isVideoMode)
            {
                if (30 >= m_usecaseMaxFPS)
                {
                    m_CurrentpowerHint = PERF_LOCK_POWER_HINT_VIDEO_ENCODE;
                }
                else
                {
                    m_CurrentpowerHint = PERF_LOCK_POWER_HINT_VIDEO_ENCODE_60FPS;
                }
            }
            else
            {
                m_CurrentpowerHint = PERF_LOCK_POWER_HINT_PREVIEW;
            }
        }

        if ((NULL != m_pPerfLockManager[logicalCameraId]) && (m_CurrentpowerHint != m_previousPowerHint))
        {
            m_pPerfLockManager[logicalCameraId]->ReleasePerfLock(m_previousPowerHint);
        }

        // Example [B == batch]: (240 FPS / 4 FPB = 60 BPS) / 30 FPS (Stats frequency goal) = 2 BPF i.e. skip every other stats
        *m_pStatsSkipPattern = m_usecaseMaxFPS / m_usecaseNumBatchedFrames / 30;
        if (*m_pStatsSkipPattern < 1)
        {
            *m_pStatsSkipPattern = 1;
        }

        m_VideoHDRMode = (StreamConfigModeVideoHdr == pStreamConfig->operation_mode);

        m_torchWidgetUsecase = (StreamConfigModeQTITorchWidget == pStreamConfig->operation_mode);

        // this check is introduced to avoid set *m_pEnableFOVC == 1 if fovcEnable is disabled in
        // overridesettings & fovc bit is set in operation mode.
        // as well as to avoid set,when we switch Usecases.
        if (TRUE == fovcModeCheck)
        {
            *m_pEnableFOVC = ((pStreamConfig->operation_mode & StreamConfigModeQTIFOVC) == StreamConfigModeQTIFOVC) ? 1 : 0;
        }

        SetHALOps(logicalCameraId, chiHalOps);

        m_logicalCameraInfo[logicalCameraId].m_pCamera3Device = pCamera3Device;
        // GetMatchingUsecase: obtain the UsecaseId
        selectedUsecaseId = m_pUsecaseSelector->GetMatchingUsecase(&m_logicalCameraInfo[logicalCameraId],
                                                                   pStreamConfig);

        // FastShutter mode supported only in ZSL usecase.
        if ((pStreamConfig->operation_mode == StreamConfigModeFastShutter) &&
            (UsecaseId::PreviewZSL         != selectedUsecaseId))
        {
            pStreamConfig->operation_mode = StreamConfigModeNormal;
        }
        m_operationMode[logicalCameraId] = pStreamConfig->operation_mode;
    }

    if (m_pBLMClient != NULL)
    {
        blmParams.numcamera         = m_logicalCameraInfo[logicalCameraId].numPhysicalCameras;
        blmParams.logicalCameraType = m_logicalCameraInfo[logicalCameraId].logicalCameraType;
        blmParams.FPS               = m_usecaseMaxFPS;
        blmParams.selectedusecaseId = selectedUsecaseId;
        blmParams.socId             = GetPlatformID();
        blmParams.isVideoMode       = isVideoMode;

        m_pBLMClient->SetUsecaseBwLevel(blmParams);
    }

    if (UsecaseId::NoMatch != selectedUsecaseId)
    {
        // Create the usecase object
        m_pSelectedUsecase[logicalCameraId] =
            m_pUsecaseFactory->CreateUsecaseObject(&m_logicalCameraInfo[logicalCameraId],
                                                   selectedUsecaseId, pStreamConfig);

        if (NULL != m_pSelectedUsecase[logicalCameraId])
        {
            m_pStreamConfig[logicalCameraId] = static_cast<camera3_stream_configuration_t*>(
                CHX_CALLOC(sizeof(camera3_stream_configuration_t)));
            m_pStreamConfig[logicalCameraId]->streams = static_cast<camera3_stream_t**>(
                CHX_CALLOC(sizeof(camera3_stream_t*) * pStreamConfig->num_streams));
            m_pStreamConfig[logicalCameraId]->num_streams = pStreamConfig->num_streams;

            for (UINT32 i = 0; i< m_pStreamConfig[logicalCameraId]->num_streams; i++)
            {
                m_pStreamConfig[logicalCameraId]->streams[i] = pStreamConfig->streams[i];
            }

            m_pStreamConfig[logicalCameraId]->operation_mode = pStreamConfig->operation_mode;

            if (NULL != pStreamConfig->session_parameters)
            {
                m_pStreamConfig[logicalCameraId]->session_parameters =
                    (const camera_metadata_t *)allocate_copy_camera_metadata_checked(
                    pStreamConfig->session_parameters,
                    get_camera_metadata_size(pStreamConfig->session_parameters));
            }
            // use camera device / used for recovery only for regular session
            m_SelectedUsecaseId[logicalCameraId] = (UINT32)selectedUsecaseId;
            CHX_LOG_CONFIG("Logical cam Id = %d usecase addr = %p", logicalCameraId, m_pSelectedUsecase[
                logicalCameraId]);

            m_pCameraDeviceInfo[logicalCameraId].m_pCamera3Device = pCamera3Device;

            *pIsOverrideEnabled = TRUE;

            m_TeardownInProgress[logicalCameraId]      = FALSE;
            m_RecoveryInProgress[logicalCameraId]      = FALSE;
            m_terminateRecoveryThread[logicalCameraId] = FALSE;

            m_pPCRLock[logicalCameraId]                  = Mutex::Create();
            m_pDestroyLock[logicalCameraId]              = Mutex::Create();
            m_pRecoveryLock[logicalCameraId]             = Mutex::Create();
            m_pTriggerRecoveryLock[logicalCameraId]      = Mutex::Create();
            m_pTriggerRecoveryCondition[logicalCameraId] = Condition::Create();
            m_pRecoveryCondition[logicalCameraId]        = Condition::Create();
            m_recoveryThreadPrivateData[logicalCameraId] = { logicalCameraId, this };

            // Create recovery thread and wait on being signaled
            m_pRecoveryThread[logicalCameraId].pPrivateData = &m_recoveryThreadPrivateData[logicalCameraId];

            result = ChxUtils::ThreadCreate(ExtensionModule::RecoveryThread,
                &m_pRecoveryThread[logicalCameraId],
                &m_pRecoveryThread[logicalCameraId].hThreadHandle);
            if (CDKResultSuccess != result)
            {
                CHX_LOG_ERROR("Failed to create recovery thread for logical camera %d result %d", logicalCameraId, result);
            }
        }
        else
        {
            CHX_LOG_ERROR("For cameraId = %d CreateUsecaseObject failed", logicalCameraId);
            m_logicalCameraInfo[logicalCameraId].m_pCamera3Device = NULL;
        }
    }

    if ((CDKResultSuccess != result) || (UsecaseId::Torch == selectedUsecaseId))
    {
        // reset resource count in failure case or Torch case
        ResetResourceCost(m_logicalCameraInfo[logicalCameraId].cameraId);
    }

    CHX_LOG_INFO(" logicalCameraId = %d, m_totalResourceBudget = %d, activeResourseCost = %d, m_IFEResourceCost = %d",
            logicalCameraId, m_totalResourceBudget, GetActiveResourceCost(), m_IFEResourceCost[logicalCameraId]);

    return result;
}
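
Note the operation_mode handling near the top of this function: the upper 16 bits of pStreamConfig->operation_mode carry a 1-based sensor-mode hint (masked down to 4 bits), while the lower 16 bits keep the standard Android stream configuration mode. A worked example with illustrative values:

#include <cstdint>

// Suppose the framework passed 0x00030000: sensor-mode hint 3, StreamConfigModeNormal.
uint32_t raw            = 0x00030000;
uint32_t operation_mode = (raw >> 16) & 0x000F;  // -> 3
uint32_t config_mode    = raw & 0xFFFF;          // -> 0 (StreamConfigModeNormal)

// For a video session with a non-zero hint, the FPS then comes from the sensor-mode table:
//     fps = pAllSensorModes[operation_mode - 1].frameRate;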

1.5 chxusecaseutils.cpp-->GetMatchingUsecase

vendor\qcom\proprietary\chi-cdk\core\chiusecase\chxusecaseutils.cpp


////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// UsecaseSelector::GetMatchingUsecase
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

UsecaseId UsecaseSelector::GetMatchingUsecase(
    const LogicalCameraInfo*        pCamInfo,
    camera3_stream_configuration_t* pStreamConfig)
{
    UsecaseId usecaseId = UsecaseId::Default;
    UINT32 VRDCEnable = ExtensionModule::GetInstance()->GetDCVRMode();
    if ((pStreamConfig->num_streams == 2) && IsQuadCFASensor(pCamInfo, NULL) &&
        (LogicalCameraType_Default == pCamInfo->logicalCameraType))
    {
        // need to validate preview size <= binning size, otherwise return error

        /// If snapshot size is less than sensor binning size, select defaut zsl usecase.
        /// Only if snapshot size is larger than sensor binning size, select QuadCFA usecase.
        /// Which means for snapshot in QuadCFA usecase,
        ///   - either do upscale from sensor binning size,
        ///   - or change sensor mode to full size quadra mode.
        if (TRUE == QuadCFAMatchingUsecase(pCamInfo, pStreamConfig))
        {
            usecaseId = UsecaseId::QuadCFA;
            CHX_LOG_CONFIG("Quad CFA usecase selected");
            return usecaseId;
        }
    }

    if (pStreamConfig->operation_mode == StreamConfigModeSuperSlowMotionFRC)
    {
        usecaseId = UsecaseId::SuperSlowMotionFRC;
        CHX_LOG_CONFIG("SuperSlowMotionFRC usecase selected");
        return usecaseId;
    }

    /// Reset the usecase flags
    VideoEISV2Usecase   = 0;
    VideoEISV3Usecase   = 0;
    GPURotationUsecase  = FALSE;
    GPUDownscaleUsecase = FALSE;

    if ((NULL != pCamInfo) && (pCamInfo->numPhysicalCameras > 1) && VRDCEnable)
    {
        CHX_LOG_CONFIG("MultiCameraVR usecase selected");
        usecaseId = UsecaseId::MultiCameraVR;
    }
    else if ((NULL != pCamInfo) && (pCamInfo->numPhysicalCameras > 1) && (pStreamConfig->num_streams > 1))
    {
        CHX_LOG_CONFIG("MultiCamera usecase selected");
        usecaseId = UsecaseId::MultiCamera;
    }
    else
    {
        SnapshotStreamConfig snapshotStreamConfig;
        CHISTREAM**          ppChiStreams = reinterpret_cast<CHISTREAM**>(pStreamConfig->streams);
        switch (pStreamConfig->num_streams)
        {
            case 2:
                if (TRUE == IsRawJPEGStreamConfig(pStreamConfig))
                {
                    CHX_LOG_CONFIG("Raw + JPEG usecase selected");
                    usecaseId = UsecaseId::RawJPEG;
                    break;
                }

                /// @todo Enable ZSL by setting overrideDisableZSL to FALSE
                if (FALSE == m_pExtModule->DisableZSL())
                {
                    if (TRUE == IsPreviewZSLStreamConfig(pStreamConfig))
                    {
                        usecaseId = UsecaseId::PreviewZSL;
                        CHX_LOG_CONFIG("ZSL usecase selected");
                    }
                }

                if(TRUE == m_pExtModule->UseGPURotationUsecase())
                {
                    CHX_LOG_CONFIG("GPU Rotation usecase flag set");
                    GPURotationUsecase = TRUE;
                }

                if (TRUE == m_pExtModule->UseGPUDownscaleUsecase())
                {
                    CHX_LOG_CONFIG("GPU Downscale usecase flag set");
                    GPUDownscaleUsecase = TRUE;
                }

                if (TRUE == m_pExtModule->EnableMFNRUsecase())
                {
                    if (TRUE == MFNRMatchingUsecase(pStreamConfig))
                    {
                        usecaseId = UsecaseId::MFNR;
                        CHX_LOG_CONFIG("MFNR usecase selected");
                    }
                }

                if (TRUE == m_pExtModule->EnableHFRNo3AUsecas())
                {
                    CHX_LOG_CONFIG("HFR without 3A usecase flag set");
                    HFRNo3AUsecase = TRUE;
                }

                break;

            case 3:
                VideoEISV2Usecase = m_pExtModule->EnableEISV2Usecase();
                VideoEISV3Usecase = m_pExtModule->EnableEISV3Usecase();
                if (FALSE == m_pExtModule->DisableZSL() && (TRUE == IsPreviewZSLStreamConfig(pStreamConfig)))
                {
                    usecaseId = UsecaseId::PreviewZSL;
                    CHX_LOG_CONFIG("ZSL usecase selected");
                }
                else if(TRUE == IsRawJPEGStreamConfig(pStreamConfig))
                {
                    CHX_LOG_CONFIG("Raw + JPEG usecase selected");
                    usecaseId = UsecaseId::RawJPEG;
                }
                else if((FALSE == IsVideoEISV2Enabled(pStreamConfig)) && (FALSE == IsVideoEISV3Enabled(pStreamConfig)) &&
                    (TRUE == IsVideoLiveShotConfig(pStreamConfig)) && (FALSE == m_pExtModule->DisableZSL()))
                {
                    CHX_LOG_CONFIG("Video With Liveshot, ZSL usecase selected");
                    usecaseId = UsecaseId::VideoLiveShot;
                }

                break;

            case 4:
                GetSnapshotStreamConfiguration(pStreamConfig->num_streams, ppChiStreams, snapshotStreamConfig);
                if ((SnapshotStreamType::HEIC == snapshotStreamConfig.type) && (NULL != snapshotStreamConfig.pRawStream))
                {
                    CHX_LOG_CONFIG("Raw + HEIC usecase selected");
                    usecaseId = UsecaseId::RawJPEG;
                }
                break;

            default:
                CHX_LOG_CONFIG("Default usecase selected");
                break;

        }
    }

    if (TRUE == ExtensionModule::GetInstance()->IsTorchWidgetUsecase())
    {
        CHX_LOG_CONFIG("Torch widget usecase selected");
        usecaseId = UsecaseId::Torch;
    }

    CHX_LOG_INFO("usecase ID:%d",usecaseId);
    return usecaseId;
}

return usecaseId

1.6 UsecaseFactory::CreateUsecaseObject

vendor\qcom\proprietary\chi-cdk\core\chiusecase\chxusecaseutils.cpp
Based on the usecaseId, the correct usecase is created; LogicalCameraInfo and the stream config are passed as parameters to the usecase's create method.

1. Parameters: CameraInfo, usecaseId, pStreamConfig.
2. Depending on the UsecaseId, a different type is chosen for usecase creation:
a. pUsecase = AdvancedCameraUsecase::Create()
b. pUsecase = UsecaseDualCamera::Create()
c. UsecaseQuadCFA::Create()
d. UsecaseTorch::Create()
3. For normal rear-camera photo capture the UsecaseId is PreviewZSL, which goes through AdvancedCameraUsecase::Create().

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// UsecaseFactory::CreateUsecaseObject
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
Usecase* UsecaseFactory::CreateUsecaseObject(
    LogicalCameraInfo*              pLogicalCameraInfo,     ///< camera info
    UsecaseId                       usecaseId,              ///< Usecase Id
    camera3_stream_configuration_t* pStreamConfig)          ///< Stream config
{
    Usecase* pUsecase  = NULL;
    UINT     camera0Id = pLogicalCameraInfo->ppDeviceInfo[0]->cameraId;

    switch (usecaseId)
    {
        case UsecaseId::PreviewZSL:
        case UsecaseId::VideoLiveShot:
            pUsecase = AdvancedCameraUsecase::Create(pLogicalCameraInfo, pStreamConfig, usecaseId);
            break;
        case UsecaseId::MultiCamera:
            {
#if defined(CAMX_ANDROID_API) && (CAMX_ANDROID_API >= 28) //Android-P or better

                LogicalCameraType logicalCameraType = m_pExtModule->GetCameraType(pLogicalCameraInfo->cameraId);

                if (LogicalCameraType_DualApp == logicalCameraType)
                {
                    pUsecase = UsecaseDualCamera::Create(pLogicalCameraInfo, pStreamConfig);
                }
                else
#endif
                {
                    pUsecase = UsecaseMultiCamera::Create(pLogicalCameraInfo, pStreamConfig);
                }
                break;
            }
        case UsecaseId::MultiCameraVR:
            //pUsecase = UsecaseMultiVRCamera::Create(pLogicalCameraInfo, pStreamConfig);
            break;
        case UsecaseId::QuadCFA:
            pUsecase = AdvancedCameraUsecase::Create(pLogicalCameraInfo, pStreamConfig, usecaseId);
            break;
        case UsecaseId::Torch:
            pUsecase = UsecaseTorch::Create(pLogicalCameraInfo, pStreamConfig);
            break;
#if (!defined(LE_CAMERA)) // SuperSlowMotion not supported in LE
        case UsecaseId::SuperSlowMotionFRC:
            pUsecase = UsecaseSuperSlowMotionFRC::Create(pLogicalCameraInfo, pStreamConfig);
            break;
#endif
        default:
            pUsecase = AdvancedCameraUsecase::Create(pLogicalCameraInfo, pStreamConfig, usecaseId);
            break;
    }

    return pUsecase;
}

1.7 AdvancedCameraUsecase

File location: vendor\qcom\proprietary\chi-cdk\core\chiusecase\chxadvancedcamerausecase.cpp

chiusecase\chxadvancedcamerausecase.cpp contains two important classes: CameraUsecaseBase and its subclass AdvancedCameraUsecase; both come up repeatedly in the execution flow below.
This function checks the StreamConfig and then calls the usecase's initialization. If initialization succeeds it returns the usecase handle; if it fails, Destroy is called.

case UsecaseId::PreviewZSL: the single-camera usecase creation path.
1. AdvancedCameraUsecase::Create() is public static.
2. It creates a new pAdvancedCameraUsecase object and calls Initialize():
pAdvancedCameraUsecase = CHX_NEW AdvancedCameraUsecase;
pAdvancedCameraUsecase->Initialize(...)


////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// AdvancedCameraUsecase::Create
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

AdvancedCameraUsecase* AdvancedCameraUsecase::Create(
    LogicalCameraInfo*              pCameraInfo,   ///< Camera info
    camera3_stream_configuration_t* pStreamConfig, ///< Stream configuration
    UsecaseId                       usecaseId)     ///< Identifier for usecase function
{
    CDKResult              result                 = CDKResultSuccess;
    AdvancedCameraUsecase* pAdvancedCameraUsecase = CHX_NEW AdvancedCameraUsecase;

    if ((NULL != pAdvancedCameraUsecase) && (NULL != pStreamConfig))
    {
        result = pAdvancedCameraUsecase->Initialize(pCameraInfo, pStreamConfig, usecaseId);

        if (CDKResultSuccess != result)
        {
            pAdvancedCameraUsecase->Destroy(FALSE);
            pAdvancedCameraUsecase = NULL;
        }
    }
    else
    {
        result = CDKResultEFailed;
    }

    return pAdvancedCameraUsecase;
}


1.8 AdvancedCameraUsecase::Initialize

1. MaxPipelines is 25.
2. Match the features to execute:
AdvancedCameraUsecase::FeatureSetup()


////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// AdvancedCameraUsecase::Initialize
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

CDKResult AdvancedCameraUsecase::Initialize(
    LogicalCameraInfo*              pCameraInfo,   ///< Camera info
    camera3_stream_configuration_t* pStreamConfig, ///< Stream configuration
    UsecaseId                       usecaseId)     ///< Identifier for the usecase function
{
    ATRACE_BEGIN("AdvancedCameraUsecase::Initialize");
    CDKResult result = CDKResultSuccess;

    m_usecaseId                     = usecaseId;
    m_cameraId                      = pCameraInfo->cameraId;
    m_pLogicalCameraInfo            = pCameraInfo;

    m_pResultMutex                  = Mutex::Create();
    m_pSetFeatureMutex              = Mutex::Create();
    m_pRealtimeReconfigDoneMutex    = Mutex::Create();
    m_isReprocessUsecase            = FALSE;
    m_numOfPhysicalDevices          = pCameraInfo->numPhysicalCameras;
    m_isUsecaseCloned               = FALSE;

    for (UINT32 i = 0 ; i < m_numOfPhysicalDevices; i++)
    {
        m_cameraIdMap[i] = pCameraInfo->ppDeviceInfo[i]->cameraId;
    }

    ExtensionModule::GetInstance()->GetVendorTagOps(&m_vendorTagOps);
    CHX_LOG("pGetMetaData:%p, pSetMetaData:%p", m_vendorTagOps.pGetMetaData, m_vendorTagOps.pSetMetaData);
    // Iterate over all usecase names in the usecase XML data and return the handle of the ChiUsecase matching "UsecaseZSL"
    pAdvancedUsecase = GetXMLUsecaseByName(ZSL_USECASE_NAME);

    if (NULL == pAdvancedUsecase)
    {
        CHX_LOG_ERROR("Fail to get ZSL usecase from XML!");
        result = CDKResultEFailed;
    }

    ChxUtils::Memset(m_enabledFeatures, 0, sizeof(m_enabledFeatures));
    ChxUtils::Memset(m_rejectedSnapshotRequestList, 0, sizeof(m_rejectedSnapshotRequestList));

    if (TRUE == IsMultiCameraUsecase())
    {
        m_isRdiStreamImported   = TRUE;
        m_isFdStreamImported    = TRUE;
    }
    else
    {
        m_isRdiStreamImported   = FALSE;
        m_isFdStreamImported    = FALSE;
        m_inputOutputType       = static_cast<UINT32>(InputOutputType::NO_SPECIAL);
    }

    for (UINT32 i = 0; i < m_numOfPhysicalDevices; i++)
    {
        if (FALSE == m_isRdiStreamImported)
        {
            m_pRdiStream[i] = static_cast<CHISTREAM*>(CHX_CALLOC(sizeof(CHISTREAM)));
        }

        if (FALSE == m_isFdStreamImported)
        {
            m_pFdStream[i]  = static_cast<CHISTREAM*>(CHX_CALLOC(sizeof(CHISTREAM)));
        }

        m_pBayer2YuvStream[i] = static_cast<CHISTREAM*>(CHX_CALLOC(sizeof(CHISTREAM)));
        m_pJPEGInputStream[i] = static_cast<CHISTREAM*>(CHX_CALLOC(sizeof(CHISTREAM)));
    }

    for (UINT32 i = 0; i < MaxPipelines; i++)
    {
        m_pipelineToSession[i] = InvalidSessionId;
    }

    m_realtimeSessionId = static_cast<UINT32>(InvalidSessionId);

    if (NULL == pStreamConfig)
    {
        CHX_LOG_ERROR("pStreamConfig is NULL");
        result = CDKResultEFailed;
    }

    if (CDKResultSuccess == result)
    {
        CHX_LOG_INFO("AdvancedCameraUsecase::Initialize usecaseId:%d num_streams:%d", m_usecaseId, pStreamConfig->num_streams);
        CHX_LOG_INFO("CHI Input Stream Configs:");
        for (UINT stream = 0; stream < pStreamConfig->num_streams; stream++)
        {
             CHX_LOG_INFO("\tstream = %p streamType = %d streamFormat = %d streamWidth = %d streamHeight = %d",
                          pStreamConfig->streams[stream],
                          pStreamConfig->streams[stream]->stream_type,
                          pStreamConfig->streams[stream]->format,
                          pStreamConfig->streams[stream]->width,
                          pStreamConfig->streams[stream]->height);

             if (CAMERA3_STREAM_INPUT == pStreamConfig->streams[stream]->stream_type)
             {
                 CHX_LOG_INFO("Reprocess usecase");
                 m_isReprocessUsecase = TRUE;
             }
        }
        result = CreateMetadataManager(m_cameraId, false, NULL, true);
    }

    // Default sensor mode pick hint
    m_defaultSensorModePickHint.sensorModeCaps.value    = 0;
    m_defaultSensorModePickHint.postSensorUpscale       = FALSE;
    m_defaultSensorModePickHint.sensorModeCaps.u.Normal = TRUE;

    if (TRUE == IsQuadCFAUsecase() && (CDKResultSuccess == result))
    {
        CHIDIMENSION binningSize = { 0 };

        // get binning mode sensor output size,
        // if more than one binning mode, choose the largest one
        for (UINT i = 0; i < pCameraInfo->m_cameraCaps.numSensorModes; i++)
        {
            CHX_LOG("i:%d, sensor mode:%d, size:%dx%d",
                i, pCameraInfo->pSensorModeInfo[i].sensorModeCaps.value,
                pCameraInfo->pSensorModeInfo[i].frameDimension.width,
                pCameraInfo->pSensorModeInfo[i].frameDimension.height);

            if (1 == pCameraInfo->pSensorModeInfo[i].sensorModeCaps.u.Normal)
            {
                if ((pCameraInfo->pSensorModeInfo[i].frameDimension.width  > binningSize.width) ||
                    (pCameraInfo->pSensorModeInfo[i].frameDimension.height > binningSize.height))
                {
                    binningSize.width  = pCameraInfo->pSensorModeInfo[i].frameDimension.width;
                    binningSize.height = pCameraInfo->pSensorModeInfo[i].frameDimension.height;
                }
            }
        }

        CHX_LOG("sensor binning mode size:%dx%d", binningSize.width, binningSize.height);

        // For Quad CFA sensor, should use binning mode for preview.
        // So set postSensorUpscale flag here to allow sensor pick binning sensor mode.
        m_QuadCFASensorInfo.sensorModePickHint.sensorModeCaps.value    = 0;
        m_QuadCFASensorInfo.sensorModePickHint.postSensorUpscale       = TRUE;
        m_QuadCFASensorInfo.sensorModePickHint.sensorModeCaps.u.Normal = TRUE;
        m_QuadCFASensorInfo.sensorModePickHint.sensorOutputSize.width  = binningSize.width;
        m_QuadCFASensorInfo.sensorModePickHint.sensorOutputSize.height = binningSize.height;

        // For Quad CFA usecase, should use full size mode for snapshot.
        m_defaultSensorModePickHint.sensorModeCaps.value               = 0;
        m_defaultSensorModePickHint.postSensorUpscale                  = FALSE;
        m_defaultSensorModePickHint.sensorModeCaps.u.QuadCFA           = TRUE;
    }

    if (CDKResultSuccess == result)
    {
        FeatureSetup(pStreamConfig);
        // SelectUsecaseConfig calls ConfigureStream and BuildUsecase, which essentially create the
        // usecase-level streams and obtain the number of pipelines and sessions associated with this
        // usecase; the feature pipelines and their requirements are also captured through these calls.
        result = SelectUsecaseConfig(pCameraInfo, pStreamConfig);
    }

    if ((NULL != m_pChiUsecase) && (CDKResultSuccess == result) && (NULL != m_pPipelineToCamera))
    {
        CHX_LOG_INFO("Usecase %s selected", m_pChiUsecase->pUsecaseName);

        m_pCallbacks = static_cast<ChiCallBacks*>(CHX_CALLOC(sizeof(ChiCallBacks) * m_pChiUsecase->numPipelines));

        CHX_LOG_INFO("Pipelines need to create in advance usecase:%d", m_pChiUsecase->numPipelines);
        for (UINT i = 0; i < m_pChiUsecase->numPipelines; i++)
        {
            CHX_LOG_INFO("[%d/%d], pipeline name:%s, pipeline type:%d, session id:%d, camera id:%d",
                          i,
                          m_pChiUsecase->numPipelines,
                          m_pChiUsecase->pPipelineTargetCreateDesc[i].pPipelineName,
                          GetAdvancedPipelineTypeByPipelineId(i),
                          (NULL != m_pPipelineToSession) ? m_pPipelineToSession[i] : i,
                          m_pPipelineToCamera[i]);
        }

        if (NULL != m_pCallbacks)
        {
            for (UINT i = 0; i < m_pChiUsecase->numPipelines; i++)
            {
                m_pCallbacks[i].ChiNotify                      = AdvancedCameraUsecase::ProcessMessageCb;
                m_pCallbacks[i].ChiProcessCaptureResult        = AdvancedCameraUsecase::ProcessResultCb;
                m_pCallbacks[i].ChiProcessPartialCaptureResult = AdvancedCameraUsecase::ProcessDriverPartialCaptureResultCb;
            }

            result = CameraUsecaseBase::Initialize(m_pCallbacks, pStreamConfig);

            for (UINT index = 0; index < m_pChiUsecase->numPipelines; ++index)
            {
                INT32  pipelineType = GET_PIPELINE_TYPE_BY_ID(m_pipelineStatus[index].pipelineId);
                UINT32 rtIndex      = GET_FEATURE_INSTANCE_BY_ID(m_pipelineStatus[index].pipelineId);

                if (CDKInvalidId == m_metadataClients[index])
                {
                    result = CDKResultEFailed;
                    break;
                }

                if ((rtIndex < MaxRealTimePipelines) && (pipelineType < AdvancedPipelineType::PipelineCount))
                {
                    m_pipelineToClient[rtIndex][pipelineType] = m_metadataClients[index];
                    m_pMetadataManager->SetPipelineId(m_metadataClients[index], m_pipelineStatus[index].pipelineId);
                }
            }
        }

        PostUsecaseCreation(pStreamConfig);

        UINT32 maxRequiredFrameCnt = GetMaxRequiredFrameCntForOfflineInput(0);
        if (TRUE == IsMultiCameraUsecase())
        {
            //todo: it is better to calculate max required frame count according to pipeline,
            // for example,some customer just want to enable MFNR feature for wide sensor,
            // some customer just want to enable SWMF feature for tele sensor.
            // here suppose both sensor enable same feature simply.
            for (UINT i = 0; i < m_numOfPhysicalDevices; i++)
            {
                maxRequiredFrameCnt = GetMaxRequiredFrameCntForOfflineInput(i);
                UpdateValidRDIBufferLength(i, maxRequiredFrameCnt + 1);
                UpdateValidFDBufferLength(i, maxRequiredFrameCnt + 1);
                CHX_LOG_CONFIG("physicalCameraIndex:%d,validBufferLength:%d",
                    i, GetValidBufferLength(i));
            }

        }
        else
        {
            if (m_rdiStreamIndex != InvalidId)
            {
                UpdateValidRDIBufferLength(m_rdiStreamIndex, maxRequiredFrameCnt + 1);
                CHX_LOG_INFO("m_rdiStreamIndex:%d validBufferLength:%d",
                             m_rdiStreamIndex, GetValidBufferLength(m_rdiStreamIndex));
            }
            else
            {
                CHX_LOG_INFO("No RDI stream");
            }

            if (m_fdStreamIndex != InvalidId)
            {
                UpdateValidFDBufferLength(m_fdStreamIndex, maxRequiredFrameCnt + 1);
                CHX_LOG_INFO("m_fdStreamIndex:%d validBufferLength:%d",
                             m_fdStreamIndex, GetValidBufferLength(m_fdStreamIndex));
            }
            else
            {
                CHX_LOG_INFO("No FD stream");
            }
        }
    }
    else
    {
        result = CDKResultEFailed;
    }

    ATRACE_END();

    return result;
}

1.9 AdvancedCameraUsecase::PreUsecaseSelection(FeatureSetup)


////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// AdvancedCameraUsecase::PreUsecaseSelection
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

CDKResult AdvancedCameraUsecase::FeatureSetup(
    camera3_stream_configuration_t* pStreamConfig)
{
    CDKResult result = CDKResultSuccess;

    if ((UsecaseId::PreviewZSL    == m_usecaseId) ||
        (UsecaseId::YUVInBlobOut  == m_usecaseId) ||
        (UsecaseId::VideoLiveShot == m_usecaseId) ||
        (UsecaseId::QuadCFA       == m_usecaseId) ||
        (UsecaseId::RawJPEG       == m_usecaseId))
    {
        SelectFeatures(pStreamConfig);
    }
    else if (UsecaseId::MultiCamera == m_usecaseId)
    {
        SelectFeatures(pStreamConfig);
    }
    return result;
}

1.10 AdvancedCameraUsecase::SelectFeatures

1. Select the features; from this point on the behavior can be customized by the OEM.
2. Every feature that might be used is created here; the one actually used is selected later:
a. enum AdvanceFeatureType defines the feature mask values (see the sketch after this list).
b. enabledAdvanceFeatures is taken from the definition in camxsettings.xml (Enable Advance Feature).
3. Each feature is created, and the first one is assigned by default. m_enabledFeatures[physicalCameraIndex][index] stores the features each physical camera needs (in some cases a single camera needs several features to support it):
m_pActiveFeature = m_enabledFeatures[0][0];
4. m_enabledFeatures is indexed by physicalCameraIndex and by the feature index.
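
A sketch of the mask pattern used here (the enumerator names follow the code below, but the bit values shown are assumptions; the real enum lives in chxadvancedcamerausecase.h):

#include <cstdint>

enum AdvanceFeatureType : uint32_t
{
    AdvanceFeatureNone     = 0,
    AdvanceFeatureZSL      = 1 << 0,
    AdvanceFeatureMFNR     = 1 << 1,
    AdvanceFeatureHDR      = 1 << 2,
    AdvanceFeatureSWMF     = 1 << 3,
    AdvanceFeature2Wrapper = 1 << 4,
};

// The same bit test SelectFeatures() applies to the mask from GetAdvanceFeatureMask():
static bool IsFeatureEnabled(uint32_t enabledAdvanceFeatures, AdvanceFeatureType feature)
{
    return (feature == (enabledAdvanceFeatures & static_cast<uint32_t>(feature)));
}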

// START of OEM to change section
VOID AdvancedCameraUsecase::SelectFeatures(camera3_stream_configuration_t* pStreamConfig)
{
    // OEM to change
    // this function to decide which features to run per the current pStreamConfig and static settings
    INT32  index                  = 0;
    UINT32 enabledAdvanceFeatures = 0;

    enabledAdvanceFeatures = ExtensionModule::GetInstance()->GetAdvanceFeatureMask();
    CHX_LOG("SelectFeatures(), enabled feature mask:%x", enabledAdvanceFeatures);

    // FastShutter support is there for SWMF and MFNR
    if (StreamConfigModeFastShutter == ExtensionModule::GetInstance()->GetOpMode(m_cameraId))
    {
        enabledAdvanceFeatures = AdvanceFeatureSWMF|AdvanceFeatureMFNR;
    }
    CHX_LOG("SelectFeatures(), enabled feature mask:%x", enabledAdvanceFeatures);

    for (UINT32 physicalCameraIndex = 0 ; physicalCameraIndex < m_numOfPhysicalDevices ; physicalCameraIndex++)
    {
        index = 0;
        if ((UsecaseId::PreviewZSL      == m_usecaseId)   ||
            (UsecaseId::MultiCamera     == m_usecaseId)   ||
            (UsecaseId::QuadCFA         == m_usecaseId)   ||
            (UsecaseId::VideoLiveShot   == m_usecaseId)   ||
            (UsecaseId::RawJPEG         == m_usecaseId))
        {
            if (AdvanceFeatureMFNR == (enabledAdvanceFeatures & AdvanceFeatureMFNR))
            {
                m_isOfflineNoiseReprocessEnabled = ExtensionModule::GetInstance()->EnableOfflineNoiseReprocessing();
                m_isFDstreamBuffersNeeded = TRUE;
            }

            if ((AdvanceFeatureSWMF         == (enabledAdvanceFeatures & AdvanceFeatureSWMF))   ||
                (AdvanceFeatureHDR          == (enabledAdvanceFeatures & AdvanceFeatureHDR))    ||
                ((AdvanceFeature2Wrapper    == (enabledAdvanceFeatures & AdvanceFeature2Wrapper))))
            {
                Feature2WrapperCreateInputInfo feature2WrapperCreateInputInfo;
                feature2WrapperCreateInputInfo.pUsecaseBase             = this;
                feature2WrapperCreateInputInfo.pMetadataManager         = m_pMetadataManager;
                feature2WrapperCreateInputInfo.pFrameworkStreamConfig   =
                    reinterpret_cast<ChiStreamConfigInfo*>(pStreamConfig);

                for (UINT32 i = 0; i < feature2WrapperCreateInputInfo.pFrameworkStreamConfig->numStreams; i++)
                {
                    feature2WrapperCreateInputInfo.pFrameworkStreamConfig->pChiStreams[i]->pHalStream = NULL;
                }

                if (NULL == m_pFeature2Wrapper)
                {
                    if (TRUE == IsMultiCameraUsecase())
                    {
                        if (FALSE == IsFusionStreamIncluded(pStreamConfig))
                        {
                            feature2WrapperCreateInputInfo.inputOutputType =
                                static_cast<UINT32>(InputOutputType::YUV_OUT);
                        }

                        for (UINT8 streamIndex = 0; streamIndex < m_numOfPhysicalDevices; streamIndex++)
                        {
                            feature2WrapperCreateInputInfo.internalInputStreams.push_back(m_pRdiStream[streamIndex]);
                            feature2WrapperCreateInputInfo.internalInputStreams.push_back(m_pFdStream[streamIndex]);
                        }

                        m_isFDstreamBuffersNeeded = TRUE;
                    }

                    m_pFeature2Wrapper = Feature2Wrapper::Create(&feature2WrapperCreateInputInfo, physicalCameraIndex);
                }

                m_enabledFeatures[physicalCameraIndex][index] = m_pFeature2Wrapper;
                index++;
            }
        }

        m_enabledFeaturesCount[physicalCameraIndex] = index;
    }

    if (m_enabledFeaturesCount[0] > 0)
    {
        if (NULL == m_pActiveFeature)
        {
            m_pActiveFeature = m_enabledFeatures[0][0];
        }

        CHX_LOG_INFO("num features selected:%d, FeatureType for preview:%d",
            m_enabledFeaturesCount[0], m_pActiveFeature->GetFeatureType());
    }
    else
    {
        CHX_LOG_INFO("No features selected");
    }

    m_pLastSnapshotFeature = m_pActiveFeature;

}

1.11 CameraUsecaseBase::Initialize


/// CameraUsecaseBase::Initialize

CDKResult CameraUsecaseBase::Initialize(
    ChiCallBacks*                   pCallbacks,
    camera3_stream_configuration_t* pStreamConfig)
{
    ATRACE_BEGIN("CameraUsecaseBase::Initialize");

    CDKResult result               = Usecase::Initialize(false);
    BOOL      bReprocessUsecase    = FALSE;

    m_lastResultMetadataFrameNum   = -1;
    m_effectModeValue              = ANDROID_CONTROL_EFFECT_MODE_OFF;
    m_sceneModeValue               = ANDROID_CONTROL_SCENE_MODE_DISABLED;
    m_rtSessionIndex               = InvalidId;

    m_finalPipelineIDForPartialMetaData = InvalidId;

    m_deferOfflineThreadCreateDone = FALSE;
    m_pDeferOfflineDoneMutex       = Mutex::Create();
    m_pDeferOfflineDoneCondition   = Condition::Create();
    m_deferOfflineSessionDone      = FALSE;
    m_pCallBacks                   = pCallbacks;
    m_GpuNodePresence              = FALSE;
    m_debugLastResultFrameNumber   = static_cast<UINT32>(-1);
    m_pEmptyMetaData               = ChxUtils::AndroidMetadata::AllocateMetaData(0,0);
    m_rdiStreamIndex               = InvalidId;
    m_fdStreamIndex                = InvalidId;
    m_isRequestBatchingOn          = false;
    m_batchRequestStartIndex       = UINT32_MAX;
    m_batchRequestEndIndex         = UINT32_MAX;

    ChxUtils::Memset(&m_sessions[0], 0, sizeof(m_sessions));

    // Default to 1-1 mapping of sessions and pipelines
    if (0 == m_numSessions)
    {
        m_numSessions = m_pChiUsecase->numPipelines;
    }

    CHX_ASSERT(0 != m_numSessions);

    if (CDKResultSuccess == result)
    {
        ChxUtils::Memset(m_pClonedStream, 0, (sizeof(ChiStream*)*MaxChiStreams));
        ChxUtils::Memset(m_pFrameworkOutStreams, 0, (sizeof(ChiStream*)*MaxChiStreams));
        m_bCloningNeeded         = FALSE;
        m_numberOfOfflineStreams = 0;

        for (UINT i = 0; i < m_pChiUsecase->numPipelines; i++)
        {
            if (m_pChiUsecase->pPipelineTargetCreateDesc[i].sourceTarget.numTargets > 0)
            {
                bReprocessUsecase = TRUE;
                break;
            }
        }

        for (UINT i = 0; i < m_pChiUsecase->numPipelines; i++)
        {
            if (TRUE == m_pChiUsecase->pPipelineTargetCreateDesc[i].pipelineCreateDesc.isRealTime)
            {
                // Cloning of streams needs when source target stream is enabled and
                // all the streams are connected in both real time and offline pipelines
                // excluding the input stream count
                m_bCloningNeeded = bReprocessUsecase && (UsecaseId::PreviewZSL != m_usecaseId) &&
                    (m_pChiUsecase->pPipelineTargetCreateDesc[i].sinkTarget.numTargets == (m_pChiUsecase->numTargets - 1));
                if (TRUE == m_bCloningNeeded)
                {
                    break;
                }
            }
        }
        CHX_LOG("m_bCloningNeeded = %d", m_bCloningNeeded);
        // here just generate internal buffer index which will be used for feature to related target buffer
        GenerateInternalBufferIndex() ;

        for (UINT i = 0; i < m_pChiUsecase->numPipelines; i++)
        {
            // use mapping if available, otherwise default to 1-1 mapping
            UINT sessionId  = (NULL != m_pPipelineToSession) ? m_pPipelineToSession[i] : i;
            UINT pipelineId = m_sessions[sessionId].numPipelines++;

            // Assign the ID to pipelineID
            m_sessions[sessionId].pipelines[pipelineId].id = i;

            CHX_LOG("Creating Pipeline %s at index %u for session %u, session's pipeline %u, camera id:%d",
                m_pChiUsecase->pPipelineTargetCreateDesc[i].pPipelineName, i, sessionId, pipelineId, m_pPipelineToCamera[i]);

            result = CreatePipeline(m_pPipelineToCamera[i],
                                    &m_pChiUsecase->pPipelineTargetCreateDesc[i],
                                    &m_sessions[sessionId].pipelines[pipelineId],
                                    pStreamConfig);

            if (CDKResultSuccess != result)
            {
                CHX_LOG_ERROR("Failed to Create Pipeline %s at index %u for session %u, session's pipeline %u, camera id:%d",
                    m_pChiUsecase->pPipelineTargetCreateDesc[i].pPipelineName, i, sessionId, pipelineId, m_pPipelineToCamera[i]);
                break;
            }

            m_sessions[sessionId].pipelines[pipelineId].isHALInputStream = PipelineHasHALInputStream(&m_pChiUsecase->pPipelineTargetCreateDesc[i]);

            if (FALSE == m_GpuNodePresence)
            {
                for (UINT nodeIndex = 0;
                        nodeIndex < m_pChiUsecase->pPipelineTargetCreateDesc[i].pipelineCreateDesc.numNodes; nodeIndex++)
                {
                    UINT32 nodeIndexId =
                                m_pChiUsecase->pPipelineTargetCreateDesc[i].pipelineCreateDesc.pNodes->nodeId;
                    if (255 == nodeIndexId)
                    {
                        if (NULL != m_pChiUsecase->pPipelineTargetCreateDesc[i].pipelineCreateDesc.pNodes->pNodeProperties)
                        {
                            const CHAR* gpuNodePropertyValue = "com.qti.node.gpu";
                            const CHAR* nodePropertyValue = (const CHAR*)
                                m_pChiUsecase->pPipelineTargetCreateDesc[i].pipelineCreateDesc.pNodes->pNodeProperties->pValue;
                            if (!strcmp(gpuNodePropertyValue, nodePropertyValue))
                            {
                                m_GpuNodePresence = TRUE;
                                break;
                            }
                        }
                    }
                }
            }

            PipelineCreated(sessionId, pipelineId);

        }
        if (CDKResultSuccess == result)
        {
            //create internal buffer
            CreateInternalBufferManager();

            //If Session's Pipeline has HAL input stream port,
            //create it on main thread to return important Stream
            //information during configure_stream call.
            result = CreateSessionsWithInputHALStream(pCallbacks);
        }

        if (CDKResultSuccess == result)
        {
            result = StartDeferThread();
        }

        if (CDKResultSuccess == result)
        {
            result = CreateRTSessions(pCallbacks);
        }

        if (CDKResultSuccess == result)
        {
            INT32 frameworkBufferCount = BufferQueueDepth;

            for (UINT32 sessionIndex = 0; sessionIndex < m_numSessions; ++sessionIndex)
            {
                PipelineData* pPipelineData = m_sessions[sessionIndex].pipelines;

                for (UINT32 pipelineIndex = 0; pipelineIndex < m_sessions[sessionIndex].numPipelines; pipelineIndex++)
                {
                    Pipeline* pPipeline = pPipelineData[pipelineIndex].pPipeline;
                    if (TRUE == pPipeline->IsRealTime())
                    {
                        m_metadataClients[pPipelineData[pipelineIndex].id] =
                             m_pMetadataManager->RegisterClient(
                                pPipeline->IsRealTime(),
                                pPipeline->GetTagList(),
                                pPipeline->GetTagCount(),
                                pPipeline->GetPartialTagCount(),
                                pPipeline->GetMetadataBufferCount() + BufferQueueDepth,
                                ChiMetadataUsage::RealtimeOutput);

                        pPipelineData[pipelineIndex].pPipeline->SetMetadataClientId(
                            m_metadataClients[pPipelineData[pipelineIndex].id]);

                        // update tag filters
                        PrepareHFRTagFilterList(pPipelineData[pipelineIndex].pPipeline->GetMetadataClientId());
                        frameworkBufferCount += pPipeline->GetMetadataBufferCount();
                    }
                    ChiMetadata* pMetadata = pPipeline->GetDescriptorMetadata();
                    result = pMetadata->SetTag("com.qti.chi.logicalcamerainfo", "NumPhysicalCameras", &m_numOfPhysicalDevices,
                        sizeof(m_numOfPhysicalDevices));
                    if (CDKResultSuccess != result)
                    {
                        CHX_LOG_ERROR("Failed to set metadata tag NumPhysicalCameras");
                    }
                }
            }

            m_pMetadataManager->InitializeFrameworkInputClient(frameworkBufferCount);
        }
    }

    ATRACE_END();
    return result;
}
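
A quick worked example of the session/pipeline mapping loop above: with numPipelines = 3 and m_pPipelineToSession = {0, 1, 1}, pipeline 0 becomes session 0's pipeline 0, while pipelines 1 and 2 become session 1's pipelines 0 and 1; when m_pPipelineToSession is NULL, the default 1-1 mapping puts every pipeline in its own session.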

1.12 chxfeaturezsl::Create

The matching feature is selected here according to the usecase and other conditions; the front camera, for example, selects FeatureZsl mode.
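
FeatureZSL itself follows the two-phase Create/Initialize factory idiom used throughout CHI. Below is a minimal sketch of that idiom, with the Initialize parameter assumed (the actual chxfeaturezsl.cpp signature may carry more arguments):

// Sketch of the CHI feature factory idiom, not the full chxfeaturezsl.cpp.
FeatureZSL* FeatureZSL::Create(
    CameraUsecaseBase* pUsecaseBase)
{
    FeatureZSL* pFeature = CHX_NEW FeatureZSL;

    if (NULL != pFeature)
    {
        CDKResult result = pFeature->Initialize(pUsecaseBase);

        if (CDKResultSuccess != result)
        {
            // Partially-initialized feature: tear it down and report failure
            pFeature->Destroy(FALSE);
            pFeature = NULL;
        }
    }

    return pFeature;
}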

1.13 CameraUsecaseBase::CreatePipeline


/// CameraUsecaseBase::CreatePipeline

CDKResult CameraUsecaseBase::CreatePipeline(
    UINT32                              cameraId,
    ChiPipelineTargetCreateDescriptor*  pPipelineDesc,
    PipelineData*                       pPipelineData,
    camera3_stream_configuration_t*     pStreamConfig)
{
    CDKResult result = CDKResultSuccess;

    pPipelineData->pPipeline = Pipeline::Create(cameraId, PipelineType::Default, pPipelineDesc->pPipelineName);

    if (NULL != pPipelineData->pPipeline)
    {
        UINT                         numStreams  = 0;
        ChiTargetPortDescriptorInfo* pSinkTarget = &pPipelineDesc->sinkTarget;
        ChiTargetPortDescriptorInfo* pSrcTarget  = &pPipelineDesc->sourceTarget;

        ChiPortBufferDescriptor pipelineOutputBuffer[MaxChiStreams];
        ChiPortBufferDescriptor pipelineInputBuffer[MaxChiStreams];

        ChxUtils::Memset(pipelineOutputBuffer, 0, sizeof(pipelineOutputBuffer));
        ChxUtils::Memset(pipelineInputBuffer, 0, sizeof(pipelineInputBuffer));

        UINT32 tagId = ExtensionModule::GetInstance()->GetVendorTagId(VendorTag::FastShutterMode);
        UINT8 isFSMode = 0;
        if (StreamConfigModeFastShutter == ExtensionModule::GetInstance()->GetOpMode(m_cameraId))
        {
            isFSMode = 1;
        }

        if (TRUE == pPipelineData->pPipeline->HasSensorNode(&pPipelineDesc->pipelineCreateDesc))
        {
            ChiMetadata* pMetadata = pPipelineData->pPipeline->GetDescriptorMetadata();
            if (NULL != pMetadata)
            {
                CSIDBinningInfo binningInfo ={ 0 };
                CameraCSIDTrigger(&binningInfo, pPipelineDesc);

                result = pMetadata->SetTag("org.quic.camera.ifecsidconfig",
                                           "csidbinninginfo",
                                           &binningInfo,
                                           sizeof(binningInfo));
                if (CDKResultSuccess != result)
                {
                    CHX_LOG_ERROR("Failed to set metadata ifecsidconfig");
                    result = CDKResultSuccess;
                }
            }
        }

        result = pPipelineData->pPipeline->SetVendorTag(tagId, static_cast<VOID*>(&isFSMode), 1);
        if (CDKResultSuccess != result)
        {
            CHX_LOG_ERROR("Failed to set metadata FSMode");
            result = CDKResultSuccess;
        }

        if (NULL != pStreamConfig)
        {
            pPipelineData->pPipeline->SetAndroidMetadata(pStreamConfig);
        }

        for (UINT sinkIdx = 0; sinkIdx < pSinkTarget->numTargets; sinkIdx++)
        {
            ChiTargetPortDescriptor* pSinkTargetDesc = &pSinkTarget->pTargetPortDesc[sinkIdx];


            UINT previewFPS  = ExtensionModule::GetInstance()->GetPreviewFPS();
            UINT videoFPS    = ExtensionModule::GetInstance()->GetVideoFPS();
            UINT pipelineFPS = ExtensionModule::GetInstance()->GetUsecaseMaxFPS();

            pSinkTargetDesc->pTarget->pChiStream->streamParams.streamFPS = pipelineFPS;

            // override ChiStream FPS value for Preview/Video streams with stream-specific values only IF
            // APP has set valid stream-specific fps
            if (UsecaseSelector::IsPreviewStream(reinterpret_cast<camera3_stream_t*>(pSinkTargetDesc->pTarget->pChiStream)))
            {
                pSinkTargetDesc->pTarget->pChiStream->streamParams.streamFPS = (previewFPS == 0) ? pipelineFPS : previewFPS;
            }
            else if (UsecaseSelector::IsVideoStream(reinterpret_cast<camera3_stream_t*>(pSinkTargetDesc->pTarget->pChiStream)))
            {
                pSinkTargetDesc->pTarget->pChiStream->streamParams.streamFPS = (videoFPS == 0) ? pipelineFPS : videoFPS;
            }

            if ((pSrcTarget->numTargets > 0) && (TRUE == m_bCloningNeeded))
            {
                m_pFrameworkOutStreams[m_numberOfOfflineStreams] = pSinkTargetDesc->pTarget->pChiStream;
                m_pClonedStream[m_numberOfOfflineStreams]        = static_cast<CHISTREAM*>(CHX_CALLOC(sizeof(CHISTREAM)));

                ChxUtils::Memcpy(m_pClonedStream[m_numberOfOfflineStreams], pSinkTargetDesc->pTarget->pChiStream, sizeof(CHISTREAM));

                pipelineOutputBuffer[sinkIdx].pStream     = m_pClonedStream[m_numberOfOfflineStreams];
                pipelineOutputBuffer[sinkIdx].pNodePort   = pSinkTargetDesc->pNodePort;
                pipelineOutputBuffer[sinkIdx].numNodePorts= pSinkTargetDesc->numNodePorts;
                pPipelineData->pStreams[numStreams++]     = pipelineOutputBuffer[sinkIdx].pStream;
                m_numberOfOfflineStreams++;

                CHX_LOG("CloningNeeded sinkIdx %d numStreams %d pStream %p nodePortId %d",
                        sinkIdx,
                        numStreams-1,
                        pipelineOutputBuffer[sinkIdx].pStream,
                        pipelineOutputBuffer[sinkIdx].pNodePort[0].nodePortId);
            }
            else
            {
                pipelineOutputBuffer[sinkIdx].pStream      = pSinkTargetDesc->pTarget->pChiStream;
                pipelineOutputBuffer[sinkIdx].pNodePort    = pSinkTargetDesc->pNodePort;
                pipelineOutputBuffer[sinkIdx].numNodePorts = pSinkTargetDesc->numNodePorts;
                pPipelineData->pStreams[numStreams++]   = pipelineOutputBuffer[sinkIdx].pStream;
                CHX_LOG("sinkIdx %d numStreams %d pStream %p format %u %d:%d nodePortID %d",
                        sinkIdx,
                        numStreams - 1,
                        pipelineOutputBuffer[sinkIdx].pStream,
                        pipelineOutputBuffer[sinkIdx].pStream->format,
                        pipelineOutputBuffer[sinkIdx].pNodePort[0].nodeId,
                        pipelineOutputBuffer[sinkIdx].pNodePort[0].nodeInstanceId,
                        pipelineOutputBuffer[sinkIdx].pNodePort[0].nodePortId);
            }
        }

        for (UINT sourceIdx = 0; sourceIdx < pSrcTarget->numTargets; sourceIdx++)
        {
            UINT                     i              = 0;
            ChiTargetPortDescriptor* pSrcTargetDesc = &pSrcTarget->pTargetPortDesc[sourceIdx];

            pipelineInputBuffer[sourceIdx].pStream = pSrcTargetDesc->pTarget->pChiStream;

            pipelineInputBuffer[sourceIdx].pNodePort    = pSrcTargetDesc->pNodePort;
            pipelineInputBuffer[sourceIdx].numNodePorts = pSrcTargetDesc->numNodePorts;

            for (i = 0; i < numStreams; i++)
            {
                if (pPipelineData->pStreams[i] == pipelineInputBuffer[sourceIdx].pStream)
                {
                    break;
                }
            }
            if (numStreams == i)
            {
                pPipelineData->pStreams[numStreams++] = pipelineInputBuffer[sourceIdx].pStream;
            }

            for (UINT portIndex = 0; portIndex < pipelineInputBuffer[sourceIdx].numNodePorts; portIndex++)
            {
                CHX_LOG("sourceIdx %d portIndex %d numStreams %d pStream %p format %u %d:%d nodePortID %d",
                        sourceIdx,
                        portIndex,
                        numStreams - 1,
                        pipelineInputBuffer[sourceIdx].pStream,
                        pipelineInputBuffer[sourceIdx].pStream->format,
                        pipelineInputBuffer[sourceIdx].pNodePort[portIndex].nodeId,
                        pipelineInputBuffer[sourceIdx].pNodePort[portIndex].nodeInstanceId,
                        pipelineInputBuffer[sourceIdx].pNodePort[portIndex].nodePortId);
            }
        }
        pPipelineData->pPipeline->SetOutputBuffers(pSinkTarget->numTargets, &pipelineOutputBuffer[0]);
        pPipelineData->pPipeline->SetInputBuffers(pSrcTarget->numTargets, &pipelineInputBuffer[0]);
        pPipelineData->pPipeline->SetPipelineNodePorts(&pPipelineDesc->pipelineCreateDesc);
        pPipelineData->pPipeline->SetPipelineName(pPipelineDesc->pPipelineName);

        CHX_LOG("set sensor mode pick hint: %p", GetSensorModePickHint(pPipelineData->id));
        pPipelineData->pPipeline->SetSensorModePickHint(GetSensorModePickHint(pPipelineData->id));

        pPipelineData->numStreams       = numStreams;

        result = pPipelineData->pPipeline->CreateDescriptor();
    }

    return result;
}
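
Note the cloning branch above: when the usecase also has source (input) targets and m_bCloningNeeded is TRUE, each framework sink stream is shallow-copied into a freshly allocated CHISTREAM (CHX_CALLOC + Memcpy) and the realtime pipeline is wired to the clone, while m_pFrameworkOutStreams keeps the original framework stream so offline results can later be mapped back to it.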

1.14 Pipeline::Create

vendor\qcom\proprietary\chi-cdk\core\chiframework\chxpipeline.cpp


// Pipeline::Create

Pipeline* Pipeline::Create(
    UINT32       cameraId,
    PipelineType type,
    const CHAR*  pName)
{
    Pipeline* pPipeline = CHX_NEW Pipeline;

    if (NULL != pPipeline)
    {
        pPipeline->Initialize(cameraId, type);

        pPipeline->m_pPipelineName = pName;
    }

    return pPipeline;
}

1.15 Pipeline::Initialize


// Pipeline::Initialize

CDKResult Pipeline::Initialize(
    UINT32       cameraId,
    PipelineType type)
{
    CDKResult result = CDKResultSuccess;

    m_cameraId              = cameraId;
    m_type                  = type;
    m_pipelineActivated     = FALSE;
    m_isDeferFinalizeNeeded = FALSE;
    m_pSensorModePickhint   = NULL;
    m_isNameAllocated       = FALSE;

    m_pPipelineDescriptorMetadata = ChiMetadata::Create();
    if (NULL == m_pPipelineDescriptorMetadata)
    {
        result = CDKResultENoMemory;
        CHX_LOG_ERROR("Failed to allocate memory for Pipeline Metadata");
    }

    if (m_type == PipelineType::OfflinePreview)
    {
        m_numInputBuffers  = 1; // Sensor - so no input buffer
        m_numOutputBuffers = 1; // Preview
        SetupRealtimePreviewPipelineDescriptor();
    }

    return result;
}

1.16 Pipeline::SetupRealtimePreviewPipelineDescriptor

This hand-built descriptor only applies when the Pipeline was created with PipelineType::OfflinePreview (see Initialize above); the usecase pipelines from CreatePipeline use PipelineType::Default and take their topology from the usecase descriptor via SetPipelineNodePorts(). The BPS, JPEG, and JPEG Aggregator nodes and their links are compiled out under #if 0, leaving a single IPE node (nodeId 65538) whose sink port 8 links directly to a sink buffer (destination nodeId 2). The other IDs seen below correspond to CamX node types: 65539 = BPS, 65537 = JPEG encoder, 6 = JPEG Aggregator.


/// Pipeline::SetupRealtimePreviewPipelineDescriptor

VOID Pipeline::SetupRealtimePreviewPipelineDescriptor()
{
    m_pipelineDescriptor.size       = sizeof(CHIPIPELINECREATEDESCRIPTOR);
    m_pipelineDescriptor.numNodes   = 1;
    m_pipelineDescriptor.pNodes     = &m_nodes[0];
    m_pipelineDescriptor.numLinks   = 1;
    m_pipelineDescriptor.pLinks     = &m_links[0];
    m_pipelineDescriptor.isRealTime = FALSE;

    // Nodes
    UINT32 nodeIndex = 0;
#if 0
    // ---------------------------------------------------------------------------
    // ---------------------------------- BPS ------------------------------------
    // ---------------------------------------------------------------------------
    m_nodes[nodeIndex].nodeId                      = 65539;
    m_nodes[nodeIndex].nodeInstanceId              = 0;
    m_nodes[nodeIndex].nodeAllPorts.numInputPorts  = 1;
    m_nodes[nodeIndex].nodeAllPorts.pInputPorts    = &m_inputPorts[BPSNode];
    m_nodes[nodeIndex].nodeAllPorts.numOutputPorts = 1;
    m_nodes[nodeIndex].nodeAllPorts.pOutputPorts   = &m_outputPorts[BPSNode];

    // BPS output port
    m_outputPorts[BPSNode].portId                  = 1;
    m_outputPorts[BPSNode].isSinkPort              = FALSE;
    m_outputPorts[BPSNode].isOutputStreamBuffer    = FALSE;
    // BPS input port
    m_inputPorts[BPSNode].portId                   = 0;
    m_inputPorts[BPSNode].isInputStreamBuffer      = TRUE;

    // ---------------------------------------------------------------------------
    // ---------------------------------- IPE ------------------------------------
    // ---------------------------------------------------------------------------
    nodeIndex++;
#endif

    m_nodes[nodeIndex].nodeId                      = 65538;
    m_nodes[nodeIndex].nodeInstanceId              = 0;
    m_nodes[nodeIndex].nodeAllPorts.numInputPorts  = 1;
    m_nodes[nodeIndex].nodeAllPorts.pInputPorts    = &m_inputPorts[IPENode];
    m_nodes[nodeIndex].nodeAllPorts.numOutputPorts = 1;
    m_nodes[nodeIndex].nodeAllPorts.pOutputPorts   = &m_outputPorts[IPENode];

    // IPE output port
    m_outputPorts[IPENode].portId                  = 8;
    m_outputPorts[IPENode].isSinkPort              = TRUE;
    m_outputPorts[IPENode].isOutputStreamBuffer    = TRUE;
    // IPE input port
    m_inputPorts[IPENode].portId                   = 0;
    m_inputPorts[IPENode].isInputStreamBuffer      = TRUE;

#if 0
    // ---------------------------------------------------------------------------
    // ---------------------------------- JPEG -----------------------------------
    // ---------------------------------------------------------------------------
    nodeIndex++;

    m_nodes[nodeIndex].nodeId                        = 65537;
    m_nodes[nodeIndex].nodeInstanceId                = 0;
    m_nodes[nodeIndex].nodeAllPorts.numInputPorts    = 1;
    m_nodes[nodeIndex].nodeAllPorts.pInputPorts      = &m_inputPorts[JPEGNode];
    m_nodes[nodeIndex].nodeAllPorts.numOutputPorts   = 1;
    m_nodes[nodeIndex].nodeAllPorts.pOutputPorts     = &m_outputPorts[JPEGNode];

    // JPEG output port
    m_outputPorts[JPEGNode].portId                   = 1;
    m_outputPorts[JPEGNode].isSinkPort               = FALSE;
    m_outputPorts[JPEGNode].isOutputStreamBuffer     = FALSE;
    // JPEG input port
    m_inputPorts[JPEGNode].portId                    = 0;
    m_inputPorts[JPEGNode].isInputStreamBuffer       = FALSE;

    // ---------------------------------------------------------------------------
    // ---------------------------------- JPEG AGRREGATOR ------------------------
    // ---------------------------------------------------------------------------
    nodeIndex++;

    m_nodes[nodeIndex].nodeId                        = 6;
    m_nodes[nodeIndex].nodeInstanceId                = 0;
    m_nodes[nodeIndex].nodeAllPorts.numInputPorts    = 1;
    m_nodes[nodeIndex].nodeAllPorts.pInputPorts      = &m_inputPorts[JPEGAgrregatorNode];
    m_nodes[nodeIndex].nodeAllPorts.numOutputPorts   = 1;
    m_nodes[nodeIndex].nodeAllPorts.pOutputPorts     = &m_outputPorts[JPEGAgrregatorNode];

    // JPEG output port
    m_outputPorts[JPEGAgrregatorNode].portId                = 1;
    m_outputPorts[JPEGAgrregatorNode].isSinkPort            = TRUE;
    m_outputPorts[JPEGAgrregatorNode].isOutputStreamBuffer  = TRUE;
    // JPEG input port
    m_inputPorts[JPEGAgrregatorNode].portId                 = 0;
    m_inputPorts[JPEGAgrregatorNode].isInputStreamBuffer    = FALSE;
#endif
    // ---------------------------------------------------------------------------
    // --------------------------------- Links -----------------------------------
    // ---------------------------------------------------------------------------

#if 0
    // BPS --> IPE
    m_links[0].srcNode.nodeId                     = 65539;
    m_links[0].srcNode.nodeInstanceId             = 0;
    m_links[0].srcNode.nodePortId                 = 1;
    m_links[0].numDestNodes                       = 1;
    m_links[0].pDestNodes                         = &m_linkNodeDescriptors[0];

    m_linkNodeDescriptors[0].nodeId               = 65538;
    m_linkNodeDescriptors[0].nodeInstanceId       = 0;
    m_linkNodeDescriptors[0].nodePortId           = 0;

    m_links[0].bufferProperties.bufferFlags       = BufferMemFlagHw;
    m_links[0].bufferProperties.bufferFormat      = ChiFormatUBWCTP10;
    m_links[0].bufferProperties.bufferHeap        = BufferHeapIon;
    m_links[0].bufferProperties.bufferQueueDepth  = 8;

    // IPE --> JPEG
    m_links[1].srcNode.nodeId                     = 65538;
    m_links[1].srcNode.nodeInstanceId             = 0;
    m_links[1].srcNode.nodePortId                 = 8;
    m_links[1].numDestNodes                       = 1;
    m_links[1].pDestNodes                         = &m_linkNodeDescriptors[1];

    m_linkNodeDescriptors[1].nodeId               = 65537;
    m_linkNodeDescriptors[1].nodeInstanceId       = 0;
    m_linkNodeDescriptors[1].nodePortId           = 0;

    m_links[1].bufferProperties.bufferFlags       = (BufferMemFlagHw | BufferMemFlagLockable);
    m_links[1].bufferProperties.bufferFormat      = ChiFormatYUV420NV12;
    m_links[1].bufferProperties.bufferHeap        = BufferHeapIon;
    m_links[1].bufferProperties.bufferQueueDepth  = 8;

    // JPEG --> JPEG Agrregator
    m_links[2].srcNode.nodeId                     = 65537;
    m_links[2].srcNode.nodeInstanceId             = 0;
    m_links[2].srcNode.nodePortId                 = 1;
    m_links[2].numDestNodes                       = 1;
    m_links[2].pDestNodes                         = &m_linkNodeDescriptors[2];

    m_linkNodeDescriptors[2].nodeId               = 6;
    m_linkNodeDescriptors[2].nodeInstanceId       = 0;
    m_linkNodeDescriptors[2].nodePortId           = 0;

    m_links[2].bufferProperties.bufferFlags       = (BufferMemFlagHw | BufferMemFlagLockable);
    m_links[2].bufferProperties.bufferFormat      = ChiFormatYUV420NV12;
    m_links[2].bufferProperties.bufferHeap        = BufferHeapIon;
    m_links[2].bufferProperties.bufferQueueDepth  = 8;

    // JPEG Aggregator --> Sink Buffer
    m_links[3].srcNode.nodeId                     = 6;
    m_links[3].srcNode.nodeInstanceId             = 0;
    m_links[3].srcNode.nodePortId                 = 1;
    m_links[3].numDestNodes                       = 1;
    m_links[3].pDestNodes                         = &m_linkNodeDescriptors[3];

    m_linkNodeDescriptors[3].nodeId               = 2;
    m_linkNodeDescriptors[3].nodeInstanceId       = 0;
    m_linkNodeDescriptors[3].nodePortId           = 0;
#endif

    m_links[0].srcNode.nodeId                     = 65538;
    m_links[0].srcNode.nodeInstanceId             = 0;
    m_links[0].srcNode.nodePortId                 = 8;
    m_links[0].numDestNodes                       = 1;
    m_links[0].pDestNodes                         = &m_linkNodeDescriptors[0];

    m_linkNodeDescriptors[0].nodeId               = 2;
    m_linkNodeDescriptors[0].nodeInstanceId       = 0;
    m_linkNodeDescriptors[0].nodePortId           = 0;
}

1.17 Pipeline::CreateDescriptor


// Pipeline::CreateDescriptor

CDKResult Pipeline::CreateDescriptor()
{
    CDKResult          result                    = CDKResultSuccess;
    PipelineCreateData pipelineCreateData        = { 0 };

    m_pipelineDescriptor.isRealTime              = HasSensorNode(&m_pipelineDescriptor);

    // m_cameraId from usecase side must be correct, even for pipelines without sensor Node
    m_pipelineDescriptor.cameraId                = m_cameraId;

    pipelineCreateData.pPipelineName             = m_pPipelineName;
    pipelineCreateData.numOutputs                = m_numOutputBuffers;
    pipelineCreateData.pOutputDescriptors        = &m_pipelineOutputBuffer[0];
    pipelineCreateData.numInputs                 = m_numInputBuffers;
    pipelineCreateData.pInputOptions             = &m_pipelineInputOptions[0];
    pipelineCreateData.pPipelineCreateDescriptor = &m_pipelineDescriptor;

    pipelineCreateData.pPipelineCreateDescriptor->numBatchedFrames          =
        ExtensionModule::GetInstance()->GetNumBatchedFrames();
    pipelineCreateData.pPipelineCreateDescriptor->HALOutputBufferCombined   =
        ExtensionModule::GetInstance()->GetHALOutputBufferCombined();
    pipelineCreateData.pPipelineCreateDescriptor->maxFPSValue               =
        ExtensionModule::GetInstance()->GetUsecaseMaxFPS();

    const CHAR* pClientName = "Chi::Pipeline::CreateDescriptor";
    SetTuningUsecase();

    m_pPipelineDescriptorMetadata->AddReference(pClientName);
    m_pipelineDescriptor.hPipelineMetadata = m_pPipelineDescriptorMetadata->GetHandle();


    CHX_LOG_CONFIG("Pipeline[%s] pipeline pointer %p numInputs=%d, numOutputs=%d stream w x h: %d x %d",
            m_pPipelineName, this, pipelineCreateData.numInputs, pipelineCreateData.numOutputs,
            pipelineCreateData.pOutputDescriptors->pStream->width,
            pipelineCreateData.pOutputDescriptors->pStream->height);

    // Update stats skip pattern in node property with value from override

    for (UINT node = 0; node < m_pipelineDescriptor.numNodes; node++)
    {
        ChiNode* pChiNode = &m_pipelineDescriptor.pNodes[node];

        for (UINT i = 0; i < pChiNode->numProperties; i++)
        {
            if (pChiNode->pNodeProperties[i].id == NodePropertyStatsSkipPattern)
            {
                m_statsSkipPattern = ExtensionModule::GetInstance()->GetStatsSkipPattern();
                pChiNode->pNodeProperties[i].pValue = &m_statsSkipPattern;
            }
            if (pChiNode->pNodeProperties[i].id == NodePropertyEnableFOVC)
            {
                m_enableFOVC = ExtensionModule::GetInstance()->EnableFOVCUseCase();
                pChiNode->pNodeProperties[i].pValue = &m_enableFOVC;
            }
        }

    }

    m_hPipelineHandle = ExtensionModule::GetInstance()->CreatePipelineDescriptor(&pipelineCreateData);

    m_pPipelineDescriptorMetadata->ReleaseReference(pClientName);

    if (NULL == m_hPipelineHandle)
    {
        result = CDKResultEFailed;
        CHX_LOG_ERROR("Fail due to NULL pipeline handle");
    }
    else
    {
        if (FALSE == ExtensionModule::GetInstance()->IsTorchWidgetUsecase())
        {
            // sensor mode selection not required for torch widget usecase.
            DesiredSensorMode desiredSensorMode = { 0 };
            desiredSensorMode.frameRate = ExtensionModule::GetInstance()->GetUsecaseMaxFPS();
            if (ExtensionModule::GetInstance()->GetVideoHDRMode())
            {
                desiredSensorMode.sensorModeCaps.u.ZZHDR = 1;
            }
            else if (SelectInSensorHDR3ExpUsecase::InSensorHDR3ExpPreview ==
                     ExtensionModule::GetInstance()->SelectInSensorHDR3ExpUsecase())
            {
                desiredSensorMode.sensorModeCaps.u.IHDR = 1;
            }

            UINT index = FindHighestWidthInputIndex(m_pipelineInputOptions, m_numInputBuffers);
            // @todo Select the highest width/height from all the input buffer requirements
            desiredSensorMode.optimalWidth  = m_pipelineInputOptions[index].bufferOptions.optimalDimension.width;
            desiredSensorMode.optimalHeight = m_pipelineInputOptions[index].bufferOptions.optimalDimension.height;
            desiredSensorMode.maxWidth      = m_pipelineInputOptions[index].bufferOptions.maxDimension.width;
            desiredSensorMode.maxHeight     = m_pipelineInputOptions[index].bufferOptions.maxDimension.height;
            desiredSensorMode.minWidth      = m_pipelineInputOptions[index].bufferOptions.minDimension.width;
            desiredSensorMode.minHeight     = m_pipelineInputOptions[index].bufferOptions.minDimension.height;
            desiredSensorMode.forceMode     = ExtensionModule::GetInstance()->GetForceSensorMode();

            if (NULL != m_pSensorModePickhint)
            {
                CHX_LOG("input option:%dx%d, upscale:%d, override optimal size:%dx%d, sensor mode caps:%x",
                    desiredSensorMode.optimalWidth, desiredSensorMode.optimalHeight,
                    m_pSensorModePickhint->postSensorUpscale,
                    m_pSensorModePickhint->sensorOutputSize.width,
                    m_pSensorModePickhint->sensorOutputSize.height,
                    m_pSensorModePickhint->sensorModeCaps.value);

                if ((TRUE == m_pSensorModePickhint->postSensorUpscale) &&
                    (m_pSensorModePickhint->sensorOutputSize.width  < desiredSensorMode.optimalWidth) &&
                    (m_pSensorModePickhint->sensorOutputSize.height < desiredSensorMode.optimalHeight))
                {
                    desiredSensorMode.optimalWidth  = m_pSensorModePickhint->sensorOutputSize.width;
                    desiredSensorMode.optimalHeight = m_pSensorModePickhint->sensorOutputSize.height;
                    desiredSensorMode.maxWidth      = desiredSensorMode.optimalWidth;
                    desiredSensorMode.maxHeight     = desiredSensorMode.optimalHeight;
                    desiredSensorMode.minWidth      = desiredSensorMode.optimalWidth;
                    desiredSensorMode.minHeight     = desiredSensorMode.optimalHeight;
                }

                if (0 != m_pSensorModePickhint->sensorModeCaps.value)
                {
                    desiredSensorMode.sensorModeCaps.value = m_pSensorModePickhint->sensorModeCaps.value;
                }
            }
            if (StreamConfigModeFastShutter == ExtensionModule::GetInstance()->GetOpMode(m_cameraId))
            {
                desiredSensorMode.sensorModeCaps.u.FS = 1;
            }

            m_pSelectedSensorMode                   = ChxSensorModeSelect::FindBestSensorMode(m_cameraId, &desiredSensorMode);
            m_pSelectedSensorMode->batchedFrames    = ExtensionModule::GetInstance()->GetNumBatchedFrames();
            m_pSelectedSensorMode->HALOutputBufferCombined = ExtensionModule::GetInstance()->GetHALOutputBufferCombined();
        }

        if (TRUE == m_pipelineDescriptor.isRealTime)
        {
            m_pipelineInfo.pipelineInputInfo.isInputSensor              = TRUE;
            m_pipelineInfo.pipelineInputInfo.sensorInfo.cameraId        = m_cameraId;
            m_pipelineInfo.pipelineInputInfo.sensorInfo.pSensorModeInfo = m_pSelectedSensorMode;
            CHX_LOG_CONFIG("Pipeline[%s] Pipeline pointer %p Selected sensor Mode W=%d, H=%d",
                            m_pPipelineName,
                            this,
                            m_pipelineInfo.pipelineInputInfo.sensorInfo.pSensorModeInfo->frameDimension.width,
                            m_pipelineInfo.pipelineInputInfo.sensorInfo.pSensorModeInfo->frameDimension.height);
        }
        else
        {
            m_pipelineInfo.pipelineInputInfo.isInputSensor                           = FALSE;
            m_pipelineInfo.pipelineInputInfo.inputBufferInfo.numInputBuffers         = m_numInputBuffers;
            m_pipelineInfo.pipelineInputInfo.inputBufferInfo.pInputBufferDescriptors = GetInputBufferDescriptors();
        }

        m_pipelineInfo.hPipelineDescriptor                = reinterpret_cast<CHIPIPELINEDESCRIPTOR>(m_hPipelineHandle);
        m_pipelineInfo.pipelineOutputInfo.hPipelineHandle = NULL;
        m_pipelineInfo.pipelineResourcePolicy             = m_resourcePolicy;
        m_pipelineInfo.isDeferFinalizeNeeded              = m_isDeferFinalizeNeeded;
    }

    return result;
}

chxextensionmodule.cpp-->CreatePipelineDescriptor


/// ExtensionModule::CreatePipelineDescriptor
CHIPIPELINEDESCRIPTOR ExtensionModule::CreatePipelineDescriptor(
    PipelineCreateData* pPipelineCreateData) ///< Pipeline create descriptor
{
    return (g_chiContextOps.pCreatePipelineDescriptor(m_hCHIContext,
                                                      pPipelineCreateData->pPipelineName,
                                                      pPipelineCreateData->pPipelineCreateDescriptor,
                                                      pPipelineCreateData->numOutputs,
                                                      pPipelineCreateData->pOutputDescriptors,
                                                      pPipelineCreateData->numInputs,
                                                      pPipelineCreateData->pInputOptions));
}
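
g_chiContextOps is the CHI context ops table that CamX populates at load time: the override module maps the CamX library, resolves its exported ChiEntry symbol, and invokes it, which is why pCreatePipelineDescriptor dispatches into camxchi.cpp's ChiCreatePipelineDescriptor below. A rough sketch of that wiring, with the library path and wrapper function assumed for illustration:

// Sketch only: the ChiEntry export is the real contract; the path and the
// helper name here are illustrative.
typedef VOID (*PCHIENTRY)(CHICONTEXTOPS* pChiContextOps);

static VOID LoadCamXContextOps()    // hypothetical helper
{
    OSLIBRARYHANDLE hCamxModule = ChxUtils::LibMap("/vendor/lib64/hw/camera.qcom.so");

    if (NULL != hCamxModule)
    {
        PCHIENTRY funcPChiEntry =
            reinterpret_cast<PCHIENTRY>(ChxUtils::LibGetAddr(hCamxModule, "ChiEntry"));

        if (NULL != funcPChiEntry)
        {
            funcPChiEntry(&g_chiContextOps);    // CamX fills in the ops table
        }
    }
}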

camxchi.cpp-->ChiCreatePipelineDescriptor


/// ChiCreatePipelineDescriptor

static CHIPIPELINEDESCRIPTOR ChiCreatePipelineDescriptor(
    CHIHANDLE                          hChiContext,
    const CHAR*                        pPipelineName,
    const CHIPIPELINECREATEDESCRIPTOR* pCreateDescriptor,
    UINT32                             numOutputs,
    CHIPORTBUFFERDESCRIPTOR*           pOutputBufferDescriptors,
    UINT32                             numInputs,
    CHIPIPELINEINPUTOPTIONS*           pInputBufferOptions)
{
    CDKResult result = CDKResultSuccess;

    CAMX_ASSERT(NULL != hChiContext);
    CAMX_ASSERT(NULL != pCreateDescriptor);
    CAMX_ASSERT(NULL != pOutputBufferDescriptors);
    CAMX_ASSERT(NULL != pInputBufferOptions);

    ChiNode* pChiNode = &pCreateDescriptor->pNodes[0];

    // Number of input can not be Zero for offline case.
    // Ignore this check for Torch widget node.
    /// @todo (CAMX-3119) remove Torch check below and handle this in generic way.
    if ((NULL != pCreateDescriptor && FALSE == pCreateDescriptor->isRealTime && 0 == numInputs) &&
        ((NULL != pChiNode) && (Torch != pChiNode->nodeId)))
    {
        result = CDKResultEInvalidArg;
        CAMX_LOG_ERROR(CamxLogGroupHAL, "Number of Input cannot be zero for offline use cases!");
    }

    PipelineDescriptor* pPipelineDescriptor = NULL;

    if ((CDKResultSuccess == result)                    &&
        (NULL             != hChiContext)               &&
        (NULL             != pCreateDescriptor)         &&
        (NULL             != pPipelineName)             &&
        (NULL             != pOutputBufferDescriptors)  &&
        (NULL             != pInputBufferOptions))
    {
        ChiContext* pChiContext = GetChiContext(hChiContext);

        pPipelineDescriptor = pChiContext->CreatePipelineDescriptor(pPipelineName,
                                                                    pCreateDescriptor,
                                                                    numOutputs,
                                                                    pOutputBufferDescriptors,
                                                                    numInputs,
                                                                    pInputBufferOptions);
    }
    else
    {
        CAMX_LOG_ERROR(CamxLogGroupHAL, "Invalid input parameters!");
    }

    return reinterpret_cast<CHIPIPELINEDESCRIPTOR>(pPipelineDescriptor);
}

camxchicontext.cpp-->CreatePipelineDescriptor


// ChiContext::CreatePipelineDescriptor

PipelineDescriptor* ChiContext::CreatePipelineDescriptor(
    const CHAR*                        pPipelineName,
    const ChiPipelineCreateDescriptor* pPipelineCreateDescriptor,
    UINT32                             numOutputs,
    ChiPortBufferDescriptor*           pOutputBufferDescriptor,
    UINT32                             numInputs,
    CHIPIPELINEINPUTOPTIONS*           pPipelineInputOptions)
{
    CamxResult               result                    = CamxResultSuccess;
    PipelineCreateInputData  pipelineCreateInputData   = { 0 };
    PipelineCreateOutputData pipelineCreateOutputData  = { 0 };
    PipelineDescriptor*      pPipelineDescriptor       = NULL;

    if ((NULL == pPipelineName)                                     ||
        (NULL == pPipelineCreateDescriptor)                         ||
        ((0   != numOutputs) && (NULL == pOutputBufferDescriptor)))
    {
        CAMX_LOG_ERROR(CamxLogGroupChi, "Invalid input arg pPipelineName=%s pPipelineCreateDescriptor=%p",
                       pPipelineName, pPipelineCreateDescriptor);
        result = CamxResultEInvalidArg;
    }

    if (CamxResultSuccess == result)
    {
        pPipelineDescriptor = static_cast<PipelineDescriptor*>(CAMX_CALLOC(sizeof(PipelineDescriptor)));
    }

    if (NULL != pPipelineDescriptor)
    {
        pPipelineDescriptor->flags.isRealTime = pPipelineCreateDescriptor->isRealTime;

        UINT                  numBatchedFrames           = pPipelineCreateDescriptor->numBatchedFrames;
        BOOL                  HALOutputBufferCombined    = pPipelineCreateDescriptor->HALOutputBufferCombined;
        UINT                  maxFPSValue                = pPipelineCreateDescriptor->maxFPSValue;
        OverrideOutputFormat  overrideImpDefinedFormat   = { {0} };

        CAMX_LOG_INFO(CamxLogGroupHAL, "numBatchedFrames:%d HALOutputBufferCombined:%d maxFPSValue:%d",
                      numBatchedFrames, HALOutputBufferCombined, maxFPSValue);

        pPipelineDescriptor->numBatchedFrames        = numBatchedFrames;
        pPipelineDescriptor->HALOutputBufferCombined = HALOutputBufferCombined;

        pPipelineDescriptor->maxFPSValue      = maxFPSValue;
        pPipelineDescriptor->cameraId         = pPipelineCreateDescriptor->cameraId;
        pPipelineDescriptor->pPrivData        = NULL;
        pPipelineDescriptor->pSessionMetadata = reinterpret_cast<MetaBuffer*>(pPipelineCreateDescriptor->hPipelineMetadata);

        OsUtils::StrLCpy(pPipelineDescriptor->pipelineName, pPipelineName, MaxStringLength256);

        for (UINT streamId = 0; streamId < numOutputs; streamId++)
        {
            ChiStream*          pChiStream          = pOutputBufferDescriptor[streamId].pStream;
            if (NULL != pChiStream)
            {
                pChiStream->pHalStream = NULL;
            }
            GrallocUsage64      grallocUsage        = GetGrallocUsage(pChiStream);
            BOOL                isVideoHwEncoder    = (GrallocUsageHwVideoEncoder ==
                                                      (GrallocUsageHwVideoEncoder & grallocUsage));

            // Override preview output format to UBWCTP10 if the session has video HDR
            if (TRUE == isVideoHwEncoder)
            {
                BOOL isFormatUBWCTP10 = ((grallocUsage & GrallocUsageUBWC) == GrallocUsageUBWC) &&
                    ((grallocUsage & GrallocUsage10Bit) == GrallocUsage10Bit);

                if (TRUE == isFormatUBWCTP10)
                {
                    overrideImpDefinedFormat.isHDR = 1;
                }
            }
        }

        for (UINT streamId = 0; streamId < numOutputs; streamId++)
        {
            if (NULL == pOutputBufferDescriptor[streamId].pStream)
            {
                CAMX_LOG_ERROR(CamxLogGroupChi, "Invalid input pStream for streamId=%d", streamId);
                result = CamxResultEInvalidArg;
                break;
            }

            /// @todo (CAMX-1797) Need to fix the reinterpret_cast
            ChiStream*          pChiStream          = pOutputBufferDescriptor[streamId].pStream;
            Camera3Stream*      pHAL3Stream         = reinterpret_cast<Camera3Stream*>(pChiStream);
            ChiStreamWrapper*   pChiStreamWrapper   = NULL;
            GrallocProperties   grallocProperties;
            Format              selectedFormat;

            overrideImpDefinedFormat.isRaw = pOutputBufferDescriptor[streamId].bIsOverrideImplDefinedWithRaw;

            if (0 != (GetGrallocUsage(pChiStream) & GrallocUsageProtected))
            {
                pPipelineDescriptor->flags.isSecureMode = TRUE;
            }

            if (numBatchedFrames > 1)
            {
                pPipelineDescriptor->flags.isHFRMode = TRUE;
            }

            // override preview dataspace if the session has video hdr10
            if ((GrallocUsageHwComposer == (GetGrallocUsage(pChiStream) & GrallocUsageHwComposer)) &&
                (TRUE == overrideImpDefinedFormat.isHDR))
            {
                pChiStream->dataspace = DataspaceStandardBT2020_PQ;
            }

            grallocProperties.colorSpace         = static_cast<ColorSpace>(pChiStream->dataspace);
            grallocProperties.pixelFormat        = pChiStream->format;
            grallocProperties.grallocUsage       = GetGrallocUsage(pChiStream);
            grallocProperties.isInternalBuffer   = TRUE;
            grallocProperties.isRawFormat        = pOutputBufferDescriptor[streamId].bIsOverrideImplDefinedWithRaw;
            grallocProperties.staticFormat       = HwEnvironment::GetInstance()->GetStaticSettings()->outputFormat;
            grallocProperties.isMultiLayerFormat = ((TRUE == HALOutputBufferCombined) &&
                                                    (GrallocUsageHwVideoEncoder ==
                                                     (GrallocUsageHwVideoEncoder & GetGrallocUsage(pChiStream))));

            result = ImageFormatUtils::GetFormat(grallocProperties, selectedFormat);

            if (CamxResultSuccess == result)
            {
                CAMX_LOG_VERBOSE(CamxLogGroupCore,
                    "GetFormat: pixelFormat %d, outputFormat %d, rawformat %d, selectedFormat %d usage 0x%llx",
                    grallocProperties.pixelFormat, grallocProperties.staticFormat,
                    grallocProperties.isRawFormat, selectedFormat, grallocProperties.grallocUsage);
                pChiStreamWrapper = CAMX_NEW ChiStreamWrapper(pHAL3Stream, streamId, selectedFormat);
            }
            else
            {
                CAMX_LOG_ERROR(CamxLogGroupCore,
                    "GetFormat failed, pixelFormat %d, outputFormat %d, rawformat %d usage %llu",
                    grallocProperties.pixelFormat, grallocProperties.staticFormat,
                    grallocProperties.isRawFormat, grallocProperties.grallocUsage);
            }

            CAMX_ASSERT(NULL != pChiStreamWrapper);

            if (NULL != pChiStreamWrapper)
            {
                auto* pOutputDesc = &pOutputBufferDescriptor[streamId];
                if (TRUE == pOutputDesc->hasValidBufferNegotiationOptions)
                {
                    pChiStreamWrapper->SetBufferNegotiationOptions(pOutputDesc->pBufferNegotiationsOptions);
                }

                UINT32 maxBuffer;
                if (pPipelineCreateDescriptor->HALOutputBufferCombined == TRUE)
                {
                    maxBuffer = 1;
                }
                else
                {
                    maxBuffer = numBatchedFrames;
                }

                if (ChiExternalNode == pOutputBufferDescriptor[streamId].pNodePort[0].nodeId)
                {

                    CAMX_LOG_INFO(CamxLogGroupChi, "StreamId=%d is associated with external Node %d and portID:%d",
                        streamId,
                        pOutputBufferDescriptor[streamId].pNodePort[0].nodeId,
                        pOutputBufferDescriptor[streamId].pNodePort[0].nodePortId);

                    SetChiStreamInfo(pChiStreamWrapper, maxBuffer, TRUE);
                }
                else
                {
                    SetChiStreamInfo(pChiStreamWrapper, maxBuffer, FALSE);
                }

                pChiStream->pPrivateInfo = pChiStreamWrapper;
            }
            else
            {
                CAMX_LOG_ERROR(CamxLogGroupCore, "Can't allocate StreamWrapper");
                result = CamxResultENoMemory;
                break;
            }
        }
    }
    else
    {
        CAMX_LOG_ERROR(CamxLogGroupChi, "Out of memory");
        result = CamxResultENoMemory;
    }

    if (CamxResultSuccess == result)
    {
        // Unfortunately, we don't know the lifetime of the objects being pointed to, so we have to assume they will not
        // exist after this function call, and certainly not by the call to CamX::Session::Initialize, so we might as well
        // perform the conversion here, and keep the data in the format we expect
        result = ProcessPipelineCreateDesc(pPipelineCreateDescriptor,
                                           numOutputs,
                                           pOutputBufferDescriptor,
                                           pPipelineDescriptor);
    }

    if (result == CamxResultSuccess)
    {
        SetPipelineDescriptorOutput(pPipelineDescriptor, numOutputs, pOutputBufferDescriptor);

        pipelineCreateInputData.pPipelineDescriptor    = pPipelineDescriptor;
        pipelineCreateInputData.pChiContext            = this;
        pipelineCreateInputData.isSecureMode           = pPipelineDescriptor->flags.isSecureMode;
        pipelineCreateOutputData.pPipelineInputOptions = pPipelineInputOptions;

        result = Pipeline::Create(&pipelineCreateInputData, &pipelineCreateOutputData);

        if (CamxResultSuccess != result)
        {
            if (NULL != pipelineCreateOutputData.pPipeline)
            {
                pipelineCreateOutputData.pPipeline->Destroy();
            }
        }
        else
        {
            pPipelineDescriptor->pPrivData = pipelineCreateOutputData.pPipeline;
        }
    }

    if (CamxResultSuccess == result)
    {
        if ((FALSE == pPipelineCreateDescriptor->isRealTime) && (numInputs < pipelineCreateOutputData.numInputs))
        {
            CAMX_LOG_ERROR(CamxLogGroupHAL, "Number inputs %d are not matching per pipeline descriptor", numInputs);
        }
        SetPipelineDescriptorInputOptions(pPipelineDescriptor, pipelineCreateOutputData.numInputs, pPipelineInputOptions);
    }
    else
    {
        if (NULL != pPipelineDescriptor)
        {
            DestroyPipelineDescriptor(pPipelineDescriptor);
            pPipelineDescriptor = NULL;
        }
        CAMX_LOG_ERROR(CamxLogGroupChi, "Pipeline descriptor creation failed");
    }

    return pPipelineDescriptor;
}

camxsession.cpp-->Create
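
ChiContext::CreatePipelineDescriptor above already hands off to the CamX-side Pipeline::Create, whose Initialize is listed next; sessions are created later (CreateRTSessions in CameraUsecaseBase::Initialize) but follow the same two-phase factory idiom. A minimal sketch, with the SessionCreateData parameter assumed from camxsession.h:

// Sketch only: allocate the Session, run Initialize(), destroy on failure.
Session* Session::Create(
    SessionCreateData* pCreateData)
{
    CamxResult result   = CamxResultEFailed;
    Session*   pSession = CAMX_NEW Session;

    if (NULL != pSession)
    {
        result = pSession->Initialize(pCreateData);

        if (CamxResultSuccess != result)
        {
            pSession->Destroy();
            pSession = NULL;
        }
    }

    return pSession;
}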

camxpipeline.cpp-->Initialize


/// Pipeline::Initialize

CamxResult Pipeline::Initialize(
    PipelineCreateInputData*  pPipelineCreateInputData,
    PipelineCreateOutputData* pPipelineCreateOutputData)
{
    CamxResult result = CamxResultEFailed;

    m_pChiContext                   = pPipelineCreateInputData->pChiContext;
    m_flags.isSecureMode            = pPipelineCreateInputData->isSecureMode;
    m_flags.isHFRMode               = pPipelineCreateInputData->pPipelineDescriptor->flags.isHFRMode;
    m_flags.isInitialConfigPending  = TRUE;
    m_pThreadManager                = pPipelineCreateInputData->pChiContext->GetThreadManager();
    m_pPipelineDescriptor           = pPipelineCreateInputData->pPipelineDescriptor;
    m_pipelineIndex                 = pPipelineCreateInputData->pipelineIndex;
    m_cameraId                      = m_pPipelineDescriptor->cameraId;
    m_hCSLLinkHandle                = CSLInvalidHandle;
    m_numConfigDoneNodes            = 0;
    m_lastRequestId                 = 0;
    m_configDoneCount               = 0;
    m_hCSLLinkHandle                = 0;
    m_HALOutputBufferCombined       = m_pPipelineDescriptor->HALOutputBufferCombined;
    m_lastSubmittedShutterRequestId = 0;
    m_pTuningManager                = HwEnvironment::GetInstance()->GetTuningDataManager(m_cameraId);
    m_sensorSyncMode                = NoSync;

    // Create lock and condition for config done
    m_pConfigDoneLock       = Mutex::Create("PipelineConfigDoneLock");
    m_pWaitForConfigDone    = Condition::Create("PipelineWaitForConfigDone");

    // Resource lock, used to syncronize acquire resources and release resources
    m_pResourceAcquireReleaseLock = Mutex::Create("PipelineResourceAcquireReleaseLock");
    if (NULL == m_pResourceAcquireReleaseLock)
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
        return CamxResultENoMemory;
    }

    m_pWaitForStreamOnDone = Condition::Create("PipelineWaitForStreamOnDone");
    if (NULL == m_pWaitForStreamOnDone)
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
        return CamxResultENoMemory;
    }

    m_pStreamOnDoneLock = Mutex::Create("PipelineStreamOnDoneLock");
    if (NULL == m_pStreamOnDoneLock)
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
        return CamxResultENoMemory;
    }

    m_pNodesRequestDoneLock = Mutex::Create("PipelineAllNodesRequestDone");
    if (NULL == m_pNodesRequestDoneLock)
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
        return CamxResultENoMemory;
    }

    m_pWaitAllNodesRequestDone = Condition::Create("PipelineWaitAllNodesRequestDone");
    if (NULL == m_pWaitAllNodesRequestDone)
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
        return CamxResultENoMemory;
    }

    // Create external Sensor when sensor module is enabled
    // External Sensor Module is created so as to test CAMX ability to work with OEMs
    // who has external sensor (ie they do all sensor configuration outside of driver
    // and there is no sensor node in the pipeline )
    HwContext* pHwcontext = pPipelineCreateInputData->pChiContext->GetHwContext();
    if (TRUE == pHwcontext->GetStaticSettings()->enableExternalSensorModule)
    {
        m_pExternalSensor = ExternalSensor::Create();
        CAMX_ASSERT(NULL != m_pExternalSensor);
    }

    CAMX_ASSERT(NULL != m_pConfigDoneLock);
    CAMX_ASSERT(NULL != m_pWaitForConfigDone);
    CAMX_ASSERT(NULL != m_pResourceAcquireReleaseLock);

    OsUtils::SNPrintF(m_pipelineIdentifierString, sizeof(m_pipelineIdentifierString), "%s_%d",
        GetPipelineName(), GetPipelineId());

    // We can't defer UsecasePool since we are publishing preview dimension to it.
    m_pUsecasePool  = MetadataPool::Create(PoolType::PerUsecase, m_pipelineIndex, NULL, 1, GetPipelineIdentifierString(), 0);

    if (NULL != m_pUsecasePool)
    {
        m_pUsecasePool->UpdateRequestId(0); // Usecase pool created, mark the slot as valid
    }
    else
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
        return CamxResultENoMemory;
    }

    SetLCRRawformatPorts();

    SetNumBatchedFrames(m_pPipelineDescriptor->numBatchedFrames, m_pPipelineDescriptor->maxFPSValue);

    m_pCSLSyncIDToRequestId = static_cast<UINT64*>(CAMX_CALLOC(sizeof(UINT64) * MaxPerRequestInfo * GetBatchedHALOutputNum()));

    if (NULL == m_pCSLSyncIDToRequestId)
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
        return CamxResultENoMemory;
    }

    m_pStreamBufferBlob = static_cast<StreamBufferInfo*>(CAMX_CALLOC(sizeof(StreamBufferInfo) * GetBatchedHALOutputNum() *
                                                                     MaxPerRequestInfo));
    if (NULL == m_pStreamBufferBlob)
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
        return CamxResultENoMemory;
    }

    for (UINT i = 0; i < MaxPerRequestInfo; i++)
    {
        m_perRequestInfo[i].pSequenceId = static_cast<UINT32*>(CAMX_CALLOC(sizeof(UINT32) * GetBatchedHALOutputNum()));

        if (NULL == m_perRequestInfo[i].pSequenceId)
        {
            CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
            return CamxResultENoMemory;
        }
        m_perRequestInfo[i].request.pStreamBuffers = &m_pStreamBufferBlob[i * GetBatchedHALOutputNum()];
    }

    MetadataSlot*    pMetadataSlot                = m_pUsecasePool->GetSlot(0);
    MetaBuffer*      pInitializationMetaBuffer    = m_pPipelineDescriptor->pSessionMetadata;
    MetaBuffer*      pMetadataSlotDstBuffer       = NULL;

    // Copy metadata published by the Chi Usecase to this pipeline's UsecasePool
    if (NULL != pInitializationMetaBuffer)
    {
        result = pMetadataSlot->GetMetabuffer(&pMetadataSlotDstBuffer);

        if (CamxResultSuccess == result)
        {
            pMetadataSlotDstBuffer->Copy(pInitializationMetaBuffer, TRUE);
        }
        else
        {
            CAMX_LOG_ERROR(CamxLogGroupMeta, "Cannot copy! Error Code: %u", result);
        }
    }
    else
    {
        CAMX_LOG_WARN(CamxLogGroupMeta, "No init metadata found!");
    }

    if (CamxResultSuccess == result)
    {
        UINT32      metaTag             = 0;
        UINT        sleepStaticSetting  = HwEnvironment::GetInstance()->GetStaticSettings()->induceSleepInChiNode;

        result                          = VendorTagManager::QueryVendorTagLocation(
                                                "org.quic.camera.induceSleepInChiNode",
                                                "InduceSleep",
                                                &metaTag);

        if (CamxResultSuccess == result)
        {
            result  = pMetadataSlot->SetMetadataByTag(metaTag, &sleepStaticSetting, 1, "camx_session");

            if (CamxResultSuccess != result)
            {
                CAMX_LOG_ERROR(CamxLogGroupCore, "Failed to set Induce sleep result %d", result);
            }
        }
    }

    GetCameraRunningOnBPS(pMetadataSlot);

    ConfigureMaxPipelineDelay(m_pPipelineDescriptor->maxFPSValue,
        (FALSE == m_flags.isCameraRunningOnBPS) ? DefaultMaxIFEPipelineDelay : DefaultMaxBPSPipelineDelay);

    QueryEISCaps();

    PublishOutputDimensions();
    PublishTargetFPS();

    if (CamxResultSuccess == result)
    {
        result = PopulatePSMetadataSet();
    }

    result = CreateNodes(pPipelineCreateInputData, pPipelineCreateOutputData);

    // set frame delay in session metadata
    if (CamxResultSuccess == result)
    {
        UINT32 metaTag    = 0;
        UINT32 frameDelay = DetermineFrameDelay();
        result            = VendorTagManager::QueryVendorTagLocation(
                            "org.quic.camera.eislookahead", "FrameDelay", &metaTag);
        if (CamxResultSuccess == result)
        {
            MetaBuffer* pSessionMetaBuffer = m_pPipelineDescriptor->pSessionMetadata;
            if (NULL != pSessionMetaBuffer)
            {
                result = pSessionMetaBuffer->SetTag(metaTag, &frameDelay, 1, sizeof(UINT32));
            }
            else
            {
                result = CamxResultEInvalidPointer;
                CAMX_LOG_ERROR(CamxLogGroupCore, "Session metadata pointer null");
            }
        }
    }

    // set EIS enabled flag in session metadata
    if (CamxResultSuccess == result)
    {
        UINT32  metaTag     = 0;
        BOOL    bEnabled    = IsEISEnabled();
        result              = VendorTagManager::QueryVendorTagLocation("org.quic.camera.eisrealtime", "Enabled", &metaTag);

        // write the enabled flag only if it's set to TRUE. IsEISEnabled may return FALSE when vendor tag is not published too
        if ((TRUE == bEnabled) && (CamxResultSuccess == result))
        {
            MetaBuffer* pSessionMetaBuffer = m_pPipelineDescriptor->pSessionMetadata;
            if (NULL != pSessionMetaBuffer)
            {
                result = pSessionMetaBuffer->SetTag(metaTag, &bEnabled, 1, sizeof(BYTE));
            }
            else
            {
                result = CamxResultEInvalidPointer;
                CAMX_LOG_ERROR(CamxLogGroupCore, "Session metadata pointer null");
            }
        }
    }

    // set EIS minimal total margin in session metadata
    if (CamxResultSuccess == result)
    {
        UINT32 metaTag = 0;
        MarginRequest margin = { 0 };

        result = DetermineEISMiniamalTotalMargin(&margin);

        if (CamxResultSuccess == result)
        {
            result = VendorTagManager::QueryVendorTagLocation("org.quic.camera.eisrealtime", "MinimalTotalMargins", &metaTag);
        }

        if (CamxResultSuccess == result)
        {
            MetaBuffer* pSessionMetaBuffer = m_pPipelineDescriptor->pSessionMetadata;
            if (NULL != pSessionMetaBuffer)
            {
                result = pSessionMetaBuffer->SetTag(metaTag, &margin, 1, sizeof(MarginRequest));
            }
            else
            {
                result = CamxResultEInvalidPointer;
                CAMX_LOG_ERROR(CamxLogGroupCore, "Session metadata pointer null");
            }
        }
    }

    if (CamxResultSuccess == result)
    {
        for (UINT i = 0; i < m_nodeCount; i++)
        {
            result = FilterAndUpdatePublishSet(m_ppNodes[i]);
        }
    }

    if (HwEnvironment::GetInstance()->GetStaticSettings()->numMetadataResults > SingleMetadataResult)
    {
        m_bPartialMetadataEnabled = TRUE;
    }

    if ((TRUE == m_bPartialMetadataEnabled) && (TRUE == m_flags.isHFRMode))
    {
        CAMX_LOG_CONFIG(CamxLogGroupCore, "Disable partial metadata in HFR mode");
        m_bPartialMetadataEnabled = FALSE;
    }

    if (CamxResultSuccess == result)
    {
        m_pPerRequestInfoLock = Mutex::Create("PipelineRequestInfo");
        if (NULL != m_pPerRequestInfoLock)
        {
            if (IsRealTime())
            {
                m_metaBufferDelay = Utils::MaxUINT32(
                    GetMaxPipelineDelay(),
                    DetermineFrameDelay());
            }
            else
            {
                m_metaBufferDelay = 0;
            }
        }
        else
        {
            result = CamxResultENoMemory;
        }
    }

    if (CamxResultSuccess == result)
    {
        if (IsRealTime())
        {
            m_metaBufferDelay = Utils::MaxUINT32(
                GetMaxPipelineDelay(),
                DetermineFrameDelay());
        }
        else
        {
            m_metaBufferDelay = 0;
        }

        UpdatePublishTags();
    }

    if (CamxResultSuccess == result)
    {
        pPipelineCreateOutputData->pPipeline = this;
        SetPipelineStatus(PipelineStatus::INITIALIZED);
        auto& rPipelineName = m_pipelineIdentifierString;
        UINT  pipelineId    = GetPipelineId();
        BOOL  isRealtime    = IsRealTime();
        auto  hPipeline     = m_pPipelineDescriptor;
        BINARY_LOG(LogEvent::Pipeline_Initialize, rPipelineName, pipelineId, hPipeline);
    }

    return result;
}
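
The three EIS blocks above (FrameDelay, Enabled, MinimalTotalMargins) repeat one pattern: resolve the vendor tag location, null-check the session MetaBuffer, then write the value with SetTag. A hypothetical helper (not in the CamX source) that condenses the pattern using only the calls already shown; the size is passed explicitly because the Enabled flag is written as a BYTE even though bEnabled is a BOOL:

// Hypothetical helper (not in the CamX source) condensing the pattern above.
// It relies only on calls already shown: QueryVendorTagLocation + MetaBuffer::SetTag.
template <typename T>
static CamxResult SetSessionVendorTag(
    MetaBuffer* pSessionMetaBuffer,   // i.e. m_pPipelineDescriptor->pSessionMetadata
    const CHAR* pSectionName,         // e.g. "org.quic.camera.eisrealtime"
    const CHAR* pTagName,             // e.g. "Enabled"
    T*          pValue,
    UINT32      valueSize)            // explicit size: the Enabled flag is written as a BYTE
{
    UINT32     metaTag = 0;
    CamxResult result  = VendorTagManager::QueryVendorTagLocation(pSectionName, pTagName, &metaTag);

    if (CamxResultSuccess == result)
    {
        if (NULL != pSessionMetaBuffer)
        {
            result = pSessionMetaBuffer->SetTag(metaTag, pValue, 1, valueSize);
        }
        else
        {
            result = CamxResultEInvalidPointer;
            CAMX_LOG_ERROR(CamxLogGroupCore, "Session metadata pointer null");
        }
    }

    return result;
}

With it, the frame-delay block above would reduce to SetSessionVendorTag(m_pPipelineDescriptor->pSessionMetadata, "org.quic.camera.eislookahead", "FrameDelay", &frameDelay, sizeof(UINT32)).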

camxpipeline.cpp-->CreateNodes


/// Pipeline::CreateNodes

CamxResult Pipeline::CreateNodes(
    PipelineCreateInputData*  pCreateInputData,
    PipelineCreateOutputData* pCreateOutputData)
{
    /// @todo (CAMX-423) Break it into smaller functions

    CAMX_UNREFERENCED_PARAM(pCreateOutputData);

    CamxResult                result                    = CamxResultSuccess;
    const PipelineDescriptor* pPipelineDescriptor       = pCreateInputData->pPipelineDescriptor;
    const PerPipelineInfo*    pPipelineInfo             = &pPipelineDescriptor->pipelineInfo;
    UINT                      numInPlaceSinkBufferNodes = 0;
    Node*                     pInplaceSinkBufferNode[MaxNodeType];
    UINT                      numBypassableNodes        = 0;
    Node*                     pBypassableNodes[MaxNodeType];
    ExternalComponentInfo*    pExternalComponentInfo    = HwEnvironment::GetInstance()->GetExternalComponent();
    UINT                      numExternalComponents     = HwEnvironment::GetInstance()->GetNumExternalComponent();

    CAMX_ASSERT(NULL == m_ppNodes);

    m_nodeCount                        = pPipelineInfo->numNodes;
    m_ppNodes                          = static_cast<Node**>(CAMX_CALLOC(sizeof(Node*) * m_nodeCount));
    m_ppOrderedNodes = static_cast<Node**>(CAMX_CALLOC(sizeof(Node*) * m_nodeCount));

    CAMX_ASSERT(NULL != m_ppOrderedNodes);

    if ((NULL != m_ppNodes) &&
        (NULL != m_ppOrderedNodes))
    {
        NodeCreateInputData createInputData  = { 0 };

        createInputData.pPipeline    = this;
        createInputData.pChiContext  = pCreateInputData->pChiContext;

        UINT nodeIndex = 0;

        CAMX_LOG_CONFIG(CamxLogGroupCore,
                      "Topology: Creating Pipeline %s, numNodes %d isSensorInput %d isRealTime %d",
                      GetPipelineIdentifierString(),
                      m_nodeCount,
                      IsSensorInput(),
                      IsRealTime());

        for (UINT numNodes = 0; numNodes < m_nodeCount; numNodes++)
        {
            NodeCreateOutputData createOutputData = { 0 };
            createInputData.pNodeInfo         = &(pPipelineInfo->pNodeInfo[numNodes]);
            createInputData.pipelineNodeIndex = numNodes;

            for (UINT propertyIndex = 0; propertyIndex < createInputData.pNodeInfo->nodePropertyCount; propertyIndex++)
            {
                for (UINT index = 0; index < numExternalComponents; index++)
                {
                    if ((pExternalComponentInfo[index].nodeAlgoType == ExternalComponentNodeAlgo::COMPONENTALGORITHM) &&
                        (NodePropertyCustomLib == createInputData.pNodeInfo->pNodeProperties[propertyIndex].id))
                    {
                        CHAR matchString[FILENAME_MAX] = {0};
                        OsUtils::SNPrintF(matchString, FILENAME_MAX, "%s.%s",
                            static_cast<CHAR*>(createInputData.pNodeInfo->pNodeProperties[propertyIndex].pValue),
                            SharedLibraryExtension);

                        if (OsUtils::StrNICmp(pExternalComponentInfo[index].pComponentName,
                            matchString,
                            OsUtils::StrLen(pExternalComponentInfo[index].pComponentName)) == 0)
                        {
                            if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOAF)
                            {
                                createInputData.pAFAlgoCallbacks = &pExternalComponentInfo[index].AFAlgoCallbacks;
                            }
                            else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOAEC)
                            {
                                createInputData.pAECAlgoCallbacks = &pExternalComponentInfo[index].AECAlgoCallbacks;
                            }
                            else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOAWB)
                            {
                                createInputData.pAWBAlgoCallbacks = &pExternalComponentInfo[index].AWBAlgoCallbacks;
                            }
                            else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOAFD)
                            {
                                createInputData.pAFDAlgoCallbacks = &pExternalComponentInfo[index].AFDAlgoCallbacks;
                            }
                            else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOASD)
                            {
                                createInputData.pASDAlgoCallbacks = &pExternalComponentInfo[index].ASDAlgoCallbacks;
                            }
                            else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOPD)
                            {
                                createInputData.pPDLibCallbacks = &pExternalComponentInfo[index].PDLibCallbacks;
                            }
                            else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOHIST)
                            {
                                createInputData.pHistAlgoCallbacks = &pExternalComponentInfo[index].histAlgoCallbacks;
                            }
                            else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOTRACK)
                            {
                                createInputData.pTrackerAlgoCallbacks = &pExternalComponentInfo[index].trackerAlgoCallbacks;
                            }
                        }
                    }
                    else if ((pExternalComponentInfo[index].nodeAlgoType == ExternalComponentNodeAlgo::COMPONENTHVX) &&
                        (NodePropertyCustomLib == createInputData.pNodeInfo->pNodeProperties[propertyIndex].id) &&
                        (OsUtils::StrStr(pExternalComponentInfo[index].pComponentName,
                        static_cast<CHAR*>(createInputData.pNodeInfo->pNodeProperties[propertyIndex].pValue)) != NULL))
                    {
                        createInputData.pHVXAlgoCallbacks = &pExternalComponentInfo[index].HVXAlgoCallbacks;
                    }
                }
            }

            result = Node::Create(&createInputData, &createOutputData);

            if (CamxResultSuccess == result)
            {
                CAMX_LOG_CONFIG(CamxLogGroupCore,
                              "Topology::%s Node::%s Type %d numInputPorts %d numOutputPorts %d",
                              GetPipelineIdentifierString(),
                              createOutputData.pNode->NodeIdentifierString(),
                              createOutputData.pNode->Type(),
                              createInputData.pNodeInfo->inputPorts.numPorts,
                              createInputData.pNodeInfo->outputPorts.numPorts);

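                // NOTE: the full camxpipeline.cpp most likely has an intervening call
                // here (elided from this excerpt) that queries the node's metadata
                // publish list; that call is what the failure check below guards,
                // since result was already known to be success at this point.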
                if (CamxResultSuccess != result)
                {
                    CAMX_LOG_WARN(CamxLogGroupCore, "[%s] Cannot get publish list for %s",
                                  GetPipelineIdentifierString(), createOutputData.pNode->NodeIdentifierString());
                }

                if (StatsProcessing == createOutputData.pNode->Type())
                {
                    m_flags.hasStatsNode = TRUE;
                }

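                // 0x10000 (65536) is the IFE node type ID; 0x10001 just below is the
                // hardware JPEG encoder (core code compares the raw IDs here)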
                if (0x10000 == createOutputData.pNode->Type())
                {
                    m_flags.hasIFENode = TRUE;
                }

                if ((JPEGAggregator == createOutputData.pNode->Type()) || (0x10001 == createOutputData.pNode->Type()))
                {
                    m_flags.hasJPEGNode = TRUE;
                }

                m_ppNodes[nodeIndex] = createOutputData.pNode;

                if ((TRUE == createOutputData.createFlags.isSinkBuffer) ||
                    (TRUE == createOutputData.createFlags.isSinkNoBuffer))
                {
                    m_nodesSinkOutPorts.nodeIndices[m_nodesSinkOutPorts.numNodes] = nodeIndex;
                    m_nodesSinkOutPorts.numNodes++;
                }

                if ((TRUE == createOutputData.createFlags.isSinkBuffer) && (TRUE == createOutputData.createFlags.isInPlace))
                {
                    pInplaceSinkBufferNode[numInPlaceSinkBufferNodes] = createOutputData.pNode;
                    numInPlaceSinkBufferNodes++;
                }

                if (TRUE == createOutputData.createFlags.isBypassable)
                {
                    pBypassableNodes[numBypassableNodes] = createOutputData.pNode;
                    numBypassableNodes++;
                }

                if ((TRUE == createOutputData.createFlags.isSourceBuffer) || (Sensor == m_ppNodes[nodeIndex]->Type()))
                {
                    m_nodesSourceInPorts.nodeIndices[m_nodesSourceInPorts.numNodes] = nodeIndex;
                    m_nodesSourceInPorts.numNodes++;
                }

                if (TRUE == createOutputData.createFlags.willNotifyConfigDone)
                {
                    m_numConfigDoneNodes++;
                }

                if (TRUE == createOutputData.createFlags.hasDelayedNotification)
                {
                    m_isDelayedPipeline = TRUE;
                }

                nodeIndex++;
            }
            else
            {
                break;
            }
        }

        if (CamxResultSuccess == result)
        {
            // Set the input link of the nodes - basically connects output port of one node to input port of another
            for (UINT nodeIndexInner = 0; nodeIndexInner < m_nodeCount; nodeIndexInner++)
            {
                const PerNodeInfo* pXMLNode = &pPipelineInfo->pNodeInfo[nodeIndexInner];

                for (UINT inputPortIndex = 0; inputPortIndex < pXMLNode->inputPorts.numPorts; inputPortIndex++)
                {
                    const InputPortInfo* pInputPortInfo = &pXMLNode->inputPorts.pPortInfo[inputPortIndex];

                    if (FALSE == m_ppNodes[nodeIndexInner]->IsSourceBufferInputPort(inputPortIndex))
                    {
                        m_ppNodes[nodeIndexInner]->SetInputLink(inputPortIndex,
                                                           pInputPortInfo->portId,
                                                           m_ppNodes[pInputPortInfo->parentNodeIndex],
                                                           pInputPortInfo->parentOutputPortId);

                        m_ppNodes[nodeIndexInner]->SetUpLoopBackPorts(inputPortIndex);

                        /// In the parent node's output port, Save this node as one of the output node connected to it.
                        m_ppNodes[pInputPortInfo->parentNodeIndex]->AddOutputNodes(pInputPortInfo->parentOutputPortId,
                                                                                   m_ppNodes[nodeIndexInner]);

                        /// Update access device index list for the source port based on current nodes device index list
                        /// At this point the source node which maintains the output buffer manager have the access information
                        /// required for buffer manager creation.
                        m_ppNodes[pInputPortInfo->parentNodeIndex]->AddOutputDeviceIndices(
                            pInputPortInfo->parentOutputPortId,
                            m_ppNodes[nodeIndexInner]->DeviceIndices(),
                            m_ppNodes[nodeIndexInner]->DeviceIndexCount());

                        const ImageFormat* pImageFormat = m_ppNodes[nodeIndexInner]->GetInputPortImageFormat(inputPortIndex);
                        if (NULL != pImageFormat)
                        {
                            CAMX_LOG_CONFIG(CamxLogGroupCore,
                                          "Topology: Pipeline[%s] "
                                          "Link: Node::%s(outPort %d) --> (inPort %d) Node::%s using format %d",
                                          GetPipelineIdentifierString(),
                                          m_ppNodes[pInputPortInfo->parentNodeIndex]->NodeIdentifierString(),
                                          pInputPortInfo->parentOutputPortId,
                                          pInputPortInfo->portId,
                                          m_ppNodes[nodeIndexInner]->NodeIdentifierString(),
                                          pImageFormat->format);
                        }
                        else
                        {
                            CAMX_LOG_ERROR(CamxLogGroupCore, "Node::%s Invalid pImageFormat",
                                           m_ppNodes[nodeIndexInner]->NodeIdentifierString());
                        }
                    }
                    else
                    {
                        m_ppNodes[nodeIndexInner]->SetupSourcePort(inputPortIndex, pInputPortInfo->portId);
                    }
                }
                if (TRUE == m_ppNodes[nodeIndexInner]->IsLoopBackNode())
                {
                    m_ppNodes[nodeIndexInner]->EnableParentOutputPorts();
                }
            }
        }

        /// @todo (CAMX-1015) Look into non recursive implementation
        if (CamxResultSuccess == result)
        {
            for (UINT index = 0; index < m_nodesSinkOutPorts.numNodes; index++)
            {
                if (NULL != m_ppNodes[m_nodesSinkOutPorts.nodeIndices[index]])
                {
                    m_ppNodes[m_nodesSinkOutPorts.nodeIndices[index]]->TriggerOutputPortStreamIdSetup();
                }
            }
        }
    }
    else
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "m_ppNodes or m_ppOrderedNodes is Null");
        result = CamxResultENoMemory;
    }

    // Bypass node processing
    if (CamxResultSuccess == result)
    {
        for (UINT index = 0; index < numBypassableNodes; index++)
        {
            pBypassableNodes[index]->BypassNodeProcessing();
        }
    }

    if (CamxResultSuccess == result)
    {
        for (UINT index = 0; index < numInPlaceSinkBufferNodes; index++)
        {
            pInplaceSinkBufferNode[index]->TriggerInplaceProcessing();
        }
    }

    if (CamxResultSuccess == result)
    {
        for (UINT index = 0; index < m_nodesSinkOutPorts.numNodes; index++)
        {
            CAMX_ASSERT((NULL != m_ppNodes) && (NULL != m_ppNodes[m_nodesSinkOutPorts.nodeIndices[index]]));

            Node* pNode = m_ppNodes[m_nodesSinkOutPorts.nodeIndices[index]];

            result = pNode->TriggerBufferNegotiation();

            if (CamxResultSuccess != result)
            {
                CAMX_LOG_WARN(CamxLogGroupCore, "Unable to satisfy node input buffer requirements, retrying with NV12");
                break;
            }
        }
        if (CamxResultSuccess != result)
        {
            result = RenegotiateInputBufferRequirement(pCreateInputData, pCreateOutputData);
        }
    }

    if (CamxResultSuccess != result)
    {
        CAMX_ASSERT_ALWAYS();
        CAMX_LOG_ERROR(CamxLogGroupCore, "%s Creating Nodes Failed. Going to Destroy sequence", GetPipelineIdentifierString());
        DestroyNodes();
    }
    else
    {
        UINT numInputs = 0;

        for (UINT index = 0; index < m_nodesSourceInPorts.numNodes; index++)
        {
            Node*                    pNode         = m_ppNodes[m_nodesSourceInPorts.nodeIndices[index]];
            ChiPipelineInputOptions* pInputOptions = &pCreateOutputData->pPipelineInputOptions[numInputs];

            numInputs += pNode->FillPipelineInputOptions(pInputOptions);
        }

        pCreateOutputData->numInputs = numInputs;
    }

    return result;
}
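
CreateNodes builds the topology in phases: instantiate every node, link each non-source input port to its parent's output port via SetInputLink/AddOutputNodes, set up stream IDs from the sink ports, then drive buffer negotiation backwards from the sinks with TriggerBufferNegotiation, falling back to RenegotiateInputBufferRequirement (the NV12 retry) when a node's requirements cannot be met. A self-contained toy model of that sink-driven back-propagation (illustrative only, not CamX APIs):

#include <algorithm>
#include <vector>

// Toy model of sink-driven buffer negotiation: each node folds in the
// requirement coming from its consumers and pushes it toward the sources.
struct ToyNode
{
    std::vector<ToyNode*> parents;   // filled during the link pass (SetInputLink analogue)
    unsigned              requiredW = 0;
    unsigned              requiredH = 0;

    void NegotiateUpstream(unsigned w, unsigned h)
    {
        // Keep the largest requirement any downstream consumer has asked for.
        requiredW = std::max(requiredW, w);
        requiredH = std::max(requiredH, h);
        for (ToyNode* pParent : parents)
        {
            pParent->NegotiateUpstream(requiredW, requiredH);   // recurse toward the source
        }
    }
};

int main()
{
    ToyNode sensor;
    ToyNode ife;
    ToyNode ipe;

    ife.parents = { &sensor };            // link pass: child records its parent
    ipe.parents = { &ife };

    ipe.NegotiateUpstream(1920, 1080);    // TriggerBufferNegotiation analogue, sink first

    return (1920 == sensor.requiredW) ? 0 : 1;
}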

camxnode.cpp-->Create


// Node::Create

CamxResult Node::Create(
    const NodeCreateInputData* pCreateInputData,
    NodeCreateOutputData*      pCreateOutputData)
{
    CAMX_ENTRYEXIT_NAME(CamxLogGroupCore, "NodeCreate");
    CamxResult result = CamxResultSuccess;

    const HwFactory* pFactory = HwEnvironment::GetInstance()->GetHwFactory();
    Node*            pNode    = pFactory->CreateNode(pCreateInputData, pCreateOutputData);

    if (pNode != NULL)
    {
        result = pNode->Initialize(pCreateInputData, pCreateOutputData);

        if (CamxResultSuccess != result)
        {
            pNode->Destroy();
            pNode = NULL;
        }
    }
    else
    {
        result = CamxResultENoMemory;
    }

    pCreateOutputData->pNode = pNode;

    return result;
}

camxnode.cpp-->Initialize


// Node::Initialize

CamxResult Node::Initialize(
    const NodeCreateInputData* pCreateInputData,
    NodeCreateOutputData*      pCreateOutputData)
{
    CamxResult         result          = CamxResultSuccess;
    const PerNodeInfo* pNodeCreateInfo = NULL;

    if ((NULL == pCreateInputData)              ||
        (NULL == pCreateInputData->pNodeInfo)   ||
        (NULL == pCreateInputData->pPipeline)   ||
        (NULL == pCreateInputData->pChiContext) ||
        (NULL == pCreateOutputData))
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "pCreateInputData, pCreateOutputData - %p, %p is NULL",
                       pCreateInputData, pCreateOutputData);
        result = CamxResultEInvalidArg;
    }

    if (CamxResultSuccess == result)
    {
        pNodeCreateInfo = pCreateInputData->pNodeInfo;

        m_nodeType    = pNodeCreateInfo->nodeId;
        m_instanceId  = pNodeCreateInfo->instanceId;
        m_maxjpegsize = 0;

        OsUtils::SNPrintF(m_nodeIdentifierString, sizeof(m_nodeIdentifierString), "%s_%s%d",
                          pCreateInputData->pPipeline->GetPipelineName(), Name(), InstanceID());

        CAMX_TRACE_SYNC_BEGIN_F(CamxLogGroupCore, "NodeInitialize: %s", m_nodeIdentifierString);

        m_pPipeline               = pCreateInputData->pPipeline;
        m_pUsecasePool            = m_pPipeline->GetPerFramePool(PoolType::PerUsecase);
        m_pipelineNodeIndex       = pCreateInputData->pipelineNodeIndex;
        m_inputPortsData.numPorts = pNodeCreateInfo->inputPorts.numPorts;
        m_pHwContext              = pCreateInputData->pChiContext->GetHwContext();
        m_pChiContext             = pCreateInputData->pChiContext;
        m_seqIdNodeStartTime      = 0;

        memset(m_responseTimeSum, 0, sizeof(m_responseTimeSum));
        memset(m_numberOfNodeResponses, 0, sizeof(m_numberOfNodeResponses));

        if (NodeClass::Bypass == pNodeCreateInfo->nodeClass)
        {
            m_nodeFlags.isInplace                       = FALSE;
            pCreateOutputData->createFlags.isInPlace    = FALSE;
            m_nodeFlags.isBypassable                    = TRUE;
            pCreateOutputData->createFlags.isBypassable = TRUE;
        }
        else if (NodeClass::Inplace == pNodeCreateInfo->nodeClass)
        {
            m_nodeFlags.isInplace                       = TRUE;
            pCreateOutputData->createFlags.isInPlace    = TRUE;
            m_nodeFlags.isBypassable                    = FALSE;
            pCreateOutputData->createFlags.isBypassable = FALSE;
        }
        else
        {
            m_nodeFlags.isInplace                       = FALSE;
            m_nodeFlags.isBypassable                    = FALSE;
            pCreateOutputData->createFlags.isBypassable = FALSE;
            pCreateOutputData->createFlags.isInPlace    = FALSE;
        }

        CAMX_LOG_INFO(CamxLogGroupCore, "[%s] Is node identifier %d(:%s):%d bypassable = %d inplace = %d",
                      m_pPipeline->GetPipelineName(), m_nodeType, m_pNodeName, m_instanceId,
                      m_nodeFlags.isBypassable, m_nodeFlags.isInplace);

        m_nodeFlags.isRealTime   = pCreateInputData->pPipeline->IsRealTime();
        m_nodeFlags.isSecureMode = pCreateInputData->pPipeline->IsSecureMode();
        pCreateOutputData->createFlags.isDeferNotifyPipelineCreate = FALSE;

        if (0 != m_nodeExtComponentCount)
        {
            result = HwEnvironment::GetInstance()->SearchExternalComponent(&m_nodeExtComponents[0], m_nodeExtComponentCount);
        }

        if (m_inputPortsData.numPorts > 0)
        {
            m_inputPortsData.pInputPorts = static_cast<InputPort*>(CAMX_CALLOC(sizeof(InputPort) * m_inputPortsData.numPorts));
            m_bufferNegotiationData.pInputPortNegotiationData =
                static_cast<InputPortNegotiationData*>(
                    CAMX_CALLOC(sizeof(InputPortNegotiationData) * m_inputPortsData.numPorts));

            if ((NULL == m_inputPortsData.pInputPorts) || (NULL == m_bufferNegotiationData.pInputPortNegotiationData))
            {
                result = CamxResultENoMemory;
            }
        }
    }

    if (CamxResultSuccess == result)
    {
        m_outputPortsData.numPorts = pNodeCreateInfo->outputPorts.numPorts;

        if (m_outputPortsData.numPorts > 0)
        {
            m_outputPortsData.pOutputPorts =
                static_cast<OutputPort*>(CAMX_CALLOC(sizeof(OutputPort) * m_outputPortsData.numPorts));

            if (NULL == m_outputPortsData.pOutputPorts)
            {
                CAMX_ASSERT_ALWAYS_MESSAGE("Node::Initialize Cannot allocate memory for output ports");

                result = CamxResultENoMemory;
            }
            else
            {
                m_bufferNegotiationData.pOutputPortNegotiationData =
                    static_cast<OutputPortNegotiationData*>(
                        CAMX_CALLOC(sizeof(OutputPortNegotiationData) * m_outputPortsData.numPorts));

                if (NULL == m_bufferNegotiationData.pOutputPortNegotiationData)
                {
                    CAMX_ASSERT_ALWAYS_MESSAGE("Node::Initialize Cannot allocate memory for buffer negotiation data");

                    result = CamxResultENoMemory;
                }
            }
        }
    }
    else
    {
        CAMX_ASSERT_ALWAYS_MESSAGE("Node::Initialize Cannot allocate memory for input ports");

        result = CamxResultENoMemory;
    }

    if (CamxResultSuccess == result)
    {
        for (UINT outputPortIndex = 0; outputPortIndex < m_outputPortsData.numPorts; outputPortIndex++)
        {
            /// @todo (CAMX-359) - Have a set of utility functions to extract information from the Usecase structure instead
            ///                    of this multi-level-indirection
            const OutputPortInfo* pOutputPortCreateInfo = &pNodeCreateInfo->outputPorts.pPortInfo[outputPortIndex];
            OutputPort*           pOutputPort           = &m_outputPortsData.pOutputPorts[outputPortIndex];

            pOutputPort->portId                         = pOutputPortCreateInfo->portId;
            pOutputPort->numInputPortsConnected         = pOutputPortCreateInfo->portLink.numInputPortsConnected;
            pOutputPort->flags.isSecurePort             = m_nodeFlags.isSecureMode;
            pOutputPort->portSourceTypeId               = pOutputPortCreateInfo->portSourceTypeId;
            pOutputPort->numSourcePortsMapped           = pOutputPortCreateInfo->numSourceIdsMapped;
            pOutputPort->pMappedSourcePortIds           = pOutputPortCreateInfo->pMappedSourcePortIds;
            pOutputPort->flags.isSinkInplaceBuffer      = (TRUE == m_nodeFlags.isInplace) ?
                                                              IsSinkInplaceBufferPort(pOutputPortCreateInfo->portLink) : FALSE;

            OutputPortNegotiationData* pOutputPortNegotiationData    =
                &m_bufferNegotiationData.pOutputPortNegotiationData[outputPortIndex];

            pOutputPortNegotiationData->outputPortIndex              = outputPortIndex;
            // This will be filled in by the derived node at the end of buffer negotiation
            pOutputPortNegotiationData->pFinalOutputBufferProperties = &pOutputPort->bufferProperties;

            /// @note If an output port has a SINK destination, it can be the only destination for that link
            if ((TRUE == pOutputPortCreateInfo->flags.isSinkBuffer) || (TRUE == pOutputPortCreateInfo->flags.isSinkNoBuffer))
            {
                m_outputPortsData.sinkPortIndices[m_outputPortsData.numSinkPorts] = outputPortIndex;

                if (TRUE == pOutputPortCreateInfo->flags.isSinkBuffer)
                {
                    ChiStreamWrapper* pChiStreamWrapper =
                        m_pPipeline->GetOutputStreamWrapper(static_cast<UINT32>(Type()),
                                                            static_cast<UINT32>(InstanceID()),
                                                            static_cast<UINT32>(pOutputPort->portId));

                    if (NULL != pChiStreamWrapper)
                    {
                        pOutputPort->flags.isSinkBuffer = TRUE;
                        /// @todo (CAMX-1797) sinkTarget, sinkTargetStreamId - need to remove
                        pOutputPort->sinkTargetStreamId = pChiStreamWrapper->GetStreamIndex();

                        ChiStream* pStream              = reinterpret_cast<ChiStream*>(pChiStreamWrapper->GetNativeStream());
                        pOutputPort->streamData.portFPS = pStream->streamParams.streamFPS;

                        if ((TRUE == pChiStreamWrapper->IsVideoStream()) || (TRUE == pChiStreamWrapper->IsPreviewStream()))
                        {
                            pOutputPort->streamData.streamTypeBitmask |= RealtimeStream;
                        }
                        else
                        {
                            pOutputPort->streamData.streamTypeBitmask |= OfflineStream;
                        }
                        m_streamTypeBitmask |= pOutputPort->streamData.streamTypeBitmask;

                        pCreateOutputData->createFlags.isSinkBuffer = TRUE;

                        CAMX_ASSERT(CamxInvalidStreamId != pOutputPort->sinkTargetStreamId);

                        pOutputPort->enabledInStreamMask = (1 << pOutputPort->sinkTargetStreamId);

                        InitializeSinkPortBufferProperties(outputPortIndex, pCreateInputData, pOutputPortCreateInfo);
                    }
                    else
                    {
                        CAMX_LOG_ERROR(CamxLogGroupCore, "Node::%s pChiStreamWrapper for Port Index at %d is null",
                                       NodeIdentifierString(), outputPortIndex);
                        result = CamxResultEInvalidPointer;
                        break;
                    }
                }
                else
                {
                    pCreateOutputData->createFlags.isSinkNoBuffer = TRUE;
                    pOutputPort->flags.isSinkNoBuffer             = TRUE;
                    /// @note SinkNoBuffer is enabled in all streams
                    pOutputPort->enabledInStreamMask              = ((1 << MaxNumStreams) - 1);
                }

                /// @note If an output port has a SINK destination (input port), it can be its only destination
                CAMX_ASSERT(0 == pOutputPortNegotiationData->numInputPortsNotification);

                pOutputPortNegotiationData->numInputPortsNotification++;

                if (pOutputPortNegotiationData->numInputPortsNotification == pOutputPort->numInputPortsConnected)
                {
                    // When all the input ports connected to the output port have notified the output port, it means the
                    // output port has all the buffer requirements it needs to make a decision for the buffer on that output
                    // port
                    m_bufferNegotiationData.numOutputPortsNotified++;
                }

                m_outputPortsData.numSinkPorts++;
            }
            else
            {
                InitializeNonSinkPortBufferProperties(outputPortIndex, &pOutputPortCreateInfo->portLink);
            }
        }
    }

    if (CamxResultSuccess == result)
    {
        for (UINT inputPortIndex = 0; inputPortIndex < m_inputPortsData.numPorts; inputPortIndex++)
        {
            const InputPortInfo* pInputPortInfo = &pNodeCreateInfo->inputPorts.pPortInfo[inputPortIndex];
            InputPort*           pInputPort     = &m_inputPortsData.pInputPorts[inputPortIndex];

            pInputPort->portSourceTypeId = pInputPortInfo->portSourceTypeId;
            pInputPort->portId           = pInputPortInfo->portId;

            if (TRUE == pInputPortInfo->flags.isSourceBuffer)
            {
                pCreateOutputData->createFlags.isSourceBuffer = TRUE;
                pInputPort->flags.isSourceBuffer              = TRUE;

                pInputPort->ppImageBuffers    = static_cast<ImageBuffer**>(CAMX_CALLOC(sizeof(ImageBuffer*) *
                                                                               MaxRequestQueueDepth));
                pInputPort->phFences          = static_cast<CSLFence*>(CAMX_CALLOC(sizeof(CSLFence) *
                                                                           MaxRequestQueueDepth));
                pInputPort->pIsFenceSignaled  = static_cast<UINT*>(CAMX_CALLOC(sizeof(UINT) *
                                                                       MaxRequestQueueDepth));
                pInputPort->pFenceSourceFlags = static_cast<CamX::InputPort::FenceSourceFlags*>(CAMX_CALLOC(sizeof(UINT) *
                                                                                                    MaxRequestQueueDepth));

                if ((NULL == pInputPort->ppImageBuffers)    ||
                    (NULL == pInputPort->phFences)          ||
                    (NULL == pInputPort->pIsFenceSignaled)  ||
                    (NULL == pInputPort->pFenceSourceFlags))    // pFenceSourceFlags is allocated above and must be checked too
                {
                    result = CamxResultENoMemory;
                    break;
                }
                else
                {
                    BufferManagerCreateData createData = { };

                    createData.deviceIndices[0]                             = 0;
                    createData.deviceCount                                  = 0;
                    createData.maxBufferCount                               = MaxRequestQueueDepth;
                    createData.immediateAllocBufferCount                    = MaxRequestQueueDepth;
                    createData.allocateBufferMemory                         = FALSE;
                    createData.numBatchedFrames                             = 1;
                    createData.bufferManagerType                            = BufferManagerType::CamxBufferManager;
                    createData.linkProperties.pNode                         = this;
                    createData.linkProperties.isPartOfRealTimePipeline      = m_pPipeline->HasIFENode();
                    createData.linkProperties.isPartOfPreviewVideoPipeline  = m_pPipeline->HasIFENode();
                    createData.linkProperties.isPartOfSnapshotPipeline      = m_pPipeline->HasJPEGNode();
                    createData.linkProperties.isFromIFENode                 = (0x10000 == m_nodeType) ? TRUE : FALSE;

                    CHAR bufferManagerName[MaxStringLength256];
                    OsUtils::SNPrintF(bufferManagerName, sizeof(bufferManagerName), "%s_InputPort%d_%s",
                                      NodeIdentifierString(),
                                      pInputPort->portId, GetInputPortName(pInputPort->portId));

                    result = CreateImageBufferManager(bufferManagerName, &createData, &pInputPort->pImageBufferManager);

                    if (CamxResultSuccess != result)
                    {
                        CAMX_LOG_ERROR(CamxLogGroupCore, "[%s] Create ImageBufferManager failed", bufferManagerName);
                        break;
                    }

                    for (UINT bufferIndex = 0; bufferIndex < MaxRequestQueueDepth; bufferIndex++)
                    {
                        pInputPort->ppImageBuffers[bufferIndex] = pInputPort->pImageBufferManager->GetImageBuffer();

                        if (NULL == pInputPort->ppImageBuffers[bufferIndex])
                        {
                            result = CamxResultENoMemory;
                            break;
                        }
                    }

                    if (CamxResultSuccess != result)
                    {
                        break;
                    }
                }
            }
        }

        if (CamxResultSuccess == result)
        {
            // Initialize the derived hw/sw node objects
            result = ProcessingNodeInitialize(pCreateInputData, pCreateOutputData);

            CAMX_ASSERT(MaxDependentFences >= pCreateOutputData->maxInputPorts);

            if (MaxDependentFences < pCreateOutputData->maxInputPorts)
            {
                CAMX_LOG_ERROR(CamxLogGroupCore, "Node::%s ERROR: Need to increase the value of MaxDependentFences",
                               NodeIdentifierString());

                result = CamxResultEFailed;
            }
        }

        if (CamxResultSuccess == result)
        {
            m_maxOutputPorts                         = pCreateOutputData->maxOutputPorts;
            m_maxInputPorts                          = pCreateOutputData->maxInputPorts;
            m_nodeFlags.isInplace                    = pCreateOutputData->createFlags.isInPlace;
            m_nodeFlags.callNotifyConfigDone         = pCreateOutputData->createFlags.willNotifyConfigDone;
            m_nodeFlags.canDRQPreemptOnStopRecording = pCreateOutputData->createFlags.canDRQPreemptOnStopRecording;
            m_nodeFlags.hasDelayedNotification       = pCreateOutputData->createFlags.hasDelayedNotification;
            m_nodeFlags.isDeferNotifyPipelineCreate  = pCreateOutputData->createFlags.isDeferNotifyPipelineCreate;

            // Cache the buffer composite info of the derived node
            Utils::Memcpy(&m_bufferComposite, &pCreateOutputData->bufferComposite, sizeof(BufferGroup));

            for (UINT request = 0; request < MaxRequestQueueDepth; request++)
            {
                if (0 != m_inputPortsData.numPorts)
                {
                    m_perRequestInfo[request].activePorts.pInputPorts =
                        static_cast<PerRequestInputPortInfo*>(CAMX_CALLOC(sizeof(PerRequestInputPortInfo) *
                                                                          m_inputPortsData.numPorts));

                    if (NULL == m_perRequestInfo[request].activePorts.pInputPorts)
                    {
                        result = CamxResultENoMemory;
                        break;
                    }
                }

                if (m_outputPortsData.numPorts > 0)
                {
                    m_perRequestInfo[request].activePorts.pOutputPorts =
                        static_cast<PerRequestOutputPortInfo*>(CAMX_CALLOC(sizeof(PerRequestOutputPortInfo) *
                                                                           m_outputPortsData.numPorts));

                    if (NULL == m_perRequestInfo[request].activePorts.pOutputPorts)
                    {
                        result = CamxResultENoMemory;
                        break;
                    }

                    for (UINT i = 0; i < m_outputPortsData.numPorts; i++)
                    {
                        m_perRequestInfo[request].activePorts.pOutputPorts[i].ppImageBuffer =
                            static_cast<ImageBuffer**>(CAMX_CALLOC(sizeof(ImageBuffer*) *
                                                                   m_pPipeline->GetBatchedHALOutputNum()));

                        if (NULL == m_perRequestInfo[request].activePorts.pOutputPorts[i].ppImageBuffer)
                        {
                            result = CamxResultENoMemory;
                            break;
                        }
                    }
                }
            }
        }
    }

    if (CamxResultSuccess == result)
    {
        CHAR  nodeMutexResource[Mutex::MaxResourceNameSize];

        OsUtils::SNPrintF(nodeMutexResource,
                          Mutex::MaxResourceNameSize,
                          "EPRLock_%s_%d",
                          Name(),
                          InstanceID());

        m_pProcessRequestLock = Mutex::Create(nodeMutexResource);
        m_pBufferReleaseLock  = Mutex::Create("BufferReleaseLock");
        if (NULL == m_pProcessRequestLock)
        {
            result = CamxResultENoMemory;
            CAMX_ASSERT("Node process request mutex creation failed");
        }

        if (NULL == m_pBufferReleaseLock)
        {
            result = CamxResultENoMemory;
            CAMX_ASSERT("Buffer release mutex creation failed");
        }

        m_pFenceCreateReleaseLock = Mutex::Create("Fence_Create_Release");
        if (NULL == m_pFenceCreateReleaseLock)
        {
            result = CamxResultENoMemory;
            CAMX_ASSERT("Node fence mutex creation failed");
        }

        OsUtils::SNPrintF(nodeMutexResource,
            Mutex::MaxResourceNameSize,
            "NodeBufferRequestLock_%s_%d",
            Name(),
            InstanceID());

        m_pBufferRequestLock = Mutex::Create(nodeMutexResource);
        if (NULL == m_pBufferRequestLock)
        {
            result = CamxResultENoMemory;
            CAMX_ASSERT("Node buffer request mutex creation failed");
        }

        m_pCmdBufferManagersLock = Mutex::Create("CmdBufferManagersLock");
        if (NULL == m_pCmdBufferManagersLock)
        {
            result = CamxResultENoMemory;
        }
    }

    if (CamxResultSuccess == result)
    {
        if (TRUE == HwEnvironment::GetInstance()->GetStaticSettings()->watermarkImage)
        {
            const StaticSettings* pSettings         = HwEnvironment::GetInstance()->GetStaticSettings();

            switch (Type())
            {
                /// @todo (CAMX-2875) Need a good way to do comparisons with HWL node types and ports.
                case IFE: // IFE  Only watermark on IFE output
                    m_pWatermarkPattern = static_cast<WatermarkPattern*>(CAMX_CALLOC(sizeof(WatermarkPattern)));

                    if (NULL != m_pWatermarkPattern)
                    {
                        const CHAR* pOffset        = pSettings->watermarkOffset;
                        const UINT  length         = CAMX_ARRAY_SIZE(pSettings->watermarkOffset);
                        CHAR        offset[length] = { '\0' };
                        CHAR*       pContext       = NULL;
                        const CHAR* pXOffsetToken  = NULL;
                        const CHAR* pYOffsetToken  = NULL;

                        if (NULL != pOffset && 0 != pOffset[0])
                        {
                            OsUtils::StrLCpy(offset, pOffset, length);
                            pXOffsetToken = OsUtils::StrTokReentrant(offset, "xX", &pContext);
                            pYOffsetToken = OsUtils::StrTokReentrant(NULL, "xX", &pContext);
                            if ((NULL != pXOffsetToken) && (NULL != pYOffsetToken))
                            {
                                m_pWatermarkPattern->watermarkOffsetX = OsUtils::StrToUL(pXOffsetToken, NULL, 0);
                                m_pWatermarkPattern->watermarkOffsetY = OsUtils::StrToUL(pYOffsetToken, NULL, 0);
                            }
                        }
                        result = ImageDump::InitializeWatermarkPattern(m_pWatermarkPattern);
                    }
                    else
                    {
                        CAMX_LOG_ERROR(CamxLogGroupCore, "Node::%s Unable to allocate watermark structure. Out of memory",
                                       NodeIdentifierString());
                        result = CamxResultENoMemory;
                    }
                    break;
                default:
                    break;
            }
        }
    }

    result = CacheVendorTagLocation();

#if CAMX_CONTINGENCY_INDUCER_ENABLE
    if (CamxResultSuccess == result)
    {
        m_pContingencyInducer = CAMX_NEW ContingencyInducer();
        if (NULL != m_pContingencyInducer)
        {
            m_pContingencyInducer->Initialize(pCreateInputData->pChiContext,
                                          pCreateInputData->pPipeline->GetPipelineName(),
                                          Name());
        }
        else
        {
            CAMX_LOG_ERROR(CamxLogGroupCore, "Node::%s Unable to allocate Inducer. Out of memory", NodeIdentifierString());
        }
    }
#endif // CONTINGENCY_INDUCER_ENABLE

    CAMX_TRACE_SYNC_END(CamxLogGroupCore);
    CAMX_ASSERT(CamxResultSuccess == result);

    HwCameraInfo    cameraInfo;
    HwEnvironment::GetInstance()->GetCameraInfo(GetPipeline()->GetCameraId(), &cameraInfo);

    if (NULL != cameraInfo.pSensorCaps)
    {
        // store active pixel array info to base node
        m_activePixelArrayWidth  = cameraInfo.pSensorCaps->activeArraySize.width;
        m_activePixelArrayHeight = cameraInfo.pSensorCaps->activeArraySize.height;
    }
    else
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "Node::%s NULL pSensorCaps pointer", NodeIdentifierString());
        result = CamxResultEInvalidPointer;
    }

    if (NULL != pCreateOutputData)
    {
        NodeCreateFlags& rNodeCreateFlags = pCreateOutputData->createFlags;
        UINT             nodeType         = Type();
        UINT             nodeInstanceId   = InstanceID();
        auto             hPipeline        = m_pPipeline->GetPipelineDescriptor();
        BINARY_LOG(LogEvent::Node_Initialize, this, hPipeline, nodeType, nodeInstanceId, result, rNodeCreateFlags);
    }

    return result;
}
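
Node::Initialize also parses the watermarkOffset static setting, a string of the form "<x>x<y>" (for example "64x32"), by tokenizing on 'x'/'X' and converting each token with base-0 StrToUL. A self-contained equivalent, assuming OsUtils::StrTokReentrant and OsUtils::StrToUL wrap the POSIX strtok_r and strtoul (the helper name is illustrative):

#include <cstdio>
#include <cstdlib>
#include <cstring>

// Parse "<x>x<y>" (e.g. "64x32") the way the watermark code above does,
// using the POSIX strtok_r that OsUtils::StrTokReentrant presumably wraps.
static bool ParseWatermarkOffset(const char* pOffset, unsigned long* pX, unsigned long* pY)
{
    char  buffer[64];
    char* pContext = nullptr;

    if ((nullptr == pOffset) || ('\0' == pOffset[0]))
    {
        return false;   // empty setting: keep default offsets, as above
    }

    std::snprintf(buffer, sizeof(buffer), "%s", pOffset);   // tokenizing modifies the buffer

    const char* pXTok = strtok_r(buffer,  "xX", &pContext);
    const char* pYTok = strtok_r(nullptr, "xX", &pContext);

    if ((nullptr == pXTok) || (nullptr == pYTok))
    {
        return false;
    }

    *pX = std::strtoul(pXTok, nullptr, 0);   // base 0, mirroring OsUtils::StrToUL above
    *pY = std::strtoul(pYTok, nullptr, 0);
    return true;
}

int main()
{
    unsigned long x = 0;
    unsigned long y = 0;

    if (ParseWatermarkOffset("64x32", &x, &y))
    {
        std::printf("watermark offset = (%lu, %lu)\n", x, y);   // prints (64, 32)
    }

    return 0;
}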

camxhwfactory.cpp-->CreateNode

The HwFactory object itself is created earlier, during the configure_streams flow.


// HwFactory::CreateNode

Node* HwFactory::CreateNode(
    const NodeCreateInputData* pCreateInputData,
    NodeCreateOutputData*      pCreateOutputData
    ) const
{
    // Virtual call to derived HWL node creation function
    return HwCreateNode(pCreateInputData, pCreateOutputData);
}

camxtitan17xfactory.cpp-->HwCreateNode

HwCreateNode instantiates the various Node objects; these nodes do the heavy lifting in all subsequent stream processing. A distilled model of the two-level factory follows the function below.


/// Titan17xFactory::HwCreateNode

Node* Titan17xFactory::HwCreateNode(
    const NodeCreateInputData* pCreateInputData,
    NodeCreateOutputData*      pCreateOutputData
    ) const
{
    Node* pNode = NULL;

    switch (pCreateInputData->pNodeInfo->nodeId)
    {
        case AutoFocus:
            pNode = AutoFocusNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case AutoWhiteBalance:
            pNode = AWBNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case BPS:
            pNode = BPSNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case IFE:
            pNode = IFENode::Create(pCreateInputData, pCreateOutputData);
            break;
        case IPE:
            pNode = IPENode::Create(pCreateInputData, pCreateOutputData);
            break;
        case Sensor:
            pNode = SensorNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case StatsProcessing:
            pNode = StatsProcessingNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case JPEG:
            pNode = JPEGEncNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case JPEGAggregator:
            pNode = JPEGAggrNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case StatsParse:
            pNode = StatsParseNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case ChiExternalNode:
            pNode = ChiNodeWrapper::Create(pCreateInputData, pCreateOutputData);
            break;
#if (!defined(LE_CAMERA)) // FD disabled LE_CAMERA
        case FDHw:
            pNode = FDHwNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case FDManager:
            pNode = FDManagerNode::Create(pCreateInputData, pCreateOutputData);
            break;
#endif // FD disabled LE_CAMERA
        case Tracker:
            pNode = TrackerNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case OfflineStats:
            pNode = OfflineStatsNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case Torch:
            pNode = TorchNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case LRME:
            pNode = LRMENode::Create(pCreateInputData, pCreateOutputData);
            break;
        case RANSAC:
            pNode = RANSACNode::Create(pCreateInputData, pCreateOutputData);
            break;
        case HistogramProcess:
            pNode = HistogramProcessNode::Create(pCreateInputData, pCreateOutputData);
            break;
#if CVPENABLED
        case CVP:
            pNode = CVPNode::Create(pCreateInputData, pCreateOutputData);
            break;
#endif // CVPENABLED
        default:
            CAMX_ASSERT_ALWAYS_MESSAGE("Unexpected node type");
            break;
    }

    return pNode;
}
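
This two-level factory keeps core code target-agnostic: Pipeline and Node::Create see only the HwFactory base class, while each hardware generation supplies its own HwCreateNode switch. Distilled to its essentials (the names and the int nodeId parameter are illustrative, not the CamX signatures):

// Minimal model of the HwFactory / Titan17xFactory split shown above.
struct Node
{
    virtual ~Node() = default;
};

struct IFENode : public Node { };
struct IPENode : public Node { };

class HwFactory
{
public:
    virtual ~HwFactory() = default;

    // What core code calls (HwFactory::CreateNode above): a thin shim that
    // virtual-dispatches into the hardware-specific factory.
    Node* CreateNode(int nodeId) const
    {
        return HwCreateNode(nodeId);
    }

protected:
    virtual Node* HwCreateNode(int nodeId) const = 0;
};

class Titan17xFactory final : public HwFactory
{
protected:
    // Each hardware generation supplies its own switch (Titan17xFactory::HwCreateNode above).
    Node* HwCreateNode(int nodeId) const override
    {
        switch (nodeId)
        {
            case 0:  return new IFENode();
            case 1:  return new IPENode();
            default: return nullptr;    // unexpected node type
        }
    }
};

Supporting a new SoC then means adding one subclass, which is exactly the role camxtitan17xfactory.cpp plays for the Titan family.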

create pipeline ok

CameraUsecaseBase::StartDeferThread()

Reference: https://blog.csdn.net/shangbolei/article/details/106653371
