Android P Camera HAL3 Flow Analysis (2)

We use a TextureView to display the camera preview. In Camera2, both preview and still-capture data are requested through a CameraCaptureSession:
    private void startPreview() {
        SurfaceTexture mSurfaceTexture = mTextureView.getSurfaceTexture();
        mSurfaceTexture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
        //create a Surface from the view's SurfaceTexture
        Surface previewSurface = new Surface(mSurfaceTexture);
        try {
            mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            //bind the request to the surface
            mCaptureRequestBuilder.addTarget(previewSurface);
            //preview data is delivered to both previewSurface and mImageReader
            mCameraDevice.createCaptureSession(Arrays.asList(previewSurface, mImageReader.getSurface()),
                    new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(CameraCaptureSession session) {
                    try {
                        //build the request
                        mCaptureRequest = mCaptureRequestBuilder.build();
                        //keep a reference to the session
                        mCameraCaptureSession = session;
                        //start the preview
                        mCameraCaptureSession.setRepeatingRequest(mCaptureRequest, null, mCameraHandler);
                    } catch (CameraAccessException e) {
                        e.printStackTrace();
                    }
                }
                @Override
                public void onConfigureFailed(CameraCaptureSession session) {

                }
            }, mCameraHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

The createCaptureSession stage

Once the camera opens successfully, the framework hands a CameraDevice back to the application through the onOpened(@NonNull CameraDevice camera) method of the CameraDevice.StateCallback interface. That CameraDevice is really a CameraDeviceImpl, and the subsequent createCaptureSession call is implemented on it.
frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java
    public void createCaptureSession(List<Surface> outputs,
            CameraCaptureSession.StateCallback callback, Handler handler) {
        List<OutputConfiguration> outConfigurations = new ArrayList<>(outputs.size());
        for (Surface surface : outputs) {
            outConfigurations.add(new OutputConfiguration(surface));
        }
        createCaptureSessionInternal(null, outConfigurations, callback,
                checkAndWrapHandler(handler), /*operatingMode*/ICameraDeviceUser.NORMAL_MODE,
                /*sessionParams*/ null);
    }

Let's look at this method's parameters:

The first parameter is a List of Surface objects. These Surfaces are what the streams are created from; unless there are special requirements, two are enough — one for preview and one for still capture.
The preview Surface backs the preview area. During buffer rotation, the preview buffers are obtained from this Surface, so it must be valid; otherwise session creation fails and the preview area goes black, a situation we run into regularly in day-to-day work.
For the capture Surface we normally use an ImageReader, a system-provided class whose construction already creates a Surface for us. After a capture succeeds, the onImageAvailable callback of ImageReader.OnImageAvailableListener hands us the ImageReader; acquiring an Image from it and calling getPlanes() yields the Plane array — typically we take the first Plane and call getBuffer() on it to obtain the bytes of the captured photo.
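The copy-out step described above is plain java.nio once Android has delivered the image; with the Android classes (ImageReader, Image) elided, a minimal sketch of extracting a plane's bytes looks like this:

```java
import java.nio.ByteBuffer;

public class PlaneCopy {
    // Copy a plane's ByteBuffer into a byte[] — the same step performed on
    // image.getPlanes()[0].getBuffer() inside onImageAvailable on Android.
    static byte[] copyOut(ByteBuffer planeBuffer) {
        byte[] bytes = new byte[planeBuffer.remaining()];
        planeBuffer.get(bytes);
        return bytes;
    }

    public static void main(String[] args) {
        // Stand-in for a JPEG plane; real data would begin with the SOI marker 0xFF 0xD8.
        ByteBuffer fakePlane = ByteBuffer.wrap(new byte[]{(byte) 0xFF, (byte) 0xD8, 0x00, 0x01});
        byte[] out = copyOut(fakePlane);
        System.out.println(out.length); // 4
    }
}
```

The resulting byte[] can be written directly to a .jpg file, since the BLOB stream already contains encoded JPEG data.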

The second parameter, callback, is of type CameraCaptureSession.StateCallback, an inner class of CameraCaptureSession.java. As with openCamera, once the session is created the framework returns a CameraCaptureSession through this interface's onConfigured(@NonNull CameraCaptureSession session) method; the actual implementation is a CameraCaptureSessionImpl. We use it for most session work: abortCaptures() to discard pending captures, capture() to take a picture, setRepeatingRequest() to start preview, and stopRepeating() to stop it. The design mirrors openCamera exactly.

The third parameter, handler, plays the same role as in openCamera: it keeps callbacks from switching threads. Whichever worker thread of the application calls createCaptureSession, the framework posts its callbacks through this handler onto that thread's Looper.

With the parameters covered, back to the code: the method simply converts the Surface list we passed in into OutputConfiguration objects and forwards everything to createCaptureSessionInternal. Its first argument, inputConfig, is normally null, so we only care about the output configurations.

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java
    private void createCaptureSessionInternal(InputConfiguration inputConfig,
            List<OutputConfiguration> outputConfigurations,
            CameraCaptureSession.StateCallback callback, Executor executor,
            int operatingMode, CaptureRequest sessionParams) {
        synchronized(mInterfaceLock) {
            boolean isConstrainedHighSpeed =
                    (operatingMode == ICameraDeviceUser.CONSTRAINED_HIGH_SPEED_MODE);
            mCurrentSession.replaceSessionClose();
            boolean configureSuccess = true;
            Surface input = null;
            configureSuccess = configureStreamsChecked(inputConfig, outputConfigurations,
                    operatingMode, sessionParams);

            // Fire onConfigured if configureOutputs succeeded, fire onConfigureFailed otherwise.
            CameraCaptureSessionCore newSession = null;
            if (isConstrainedHighSpeed) {
                ArrayList<Surface> surfaces = new ArrayList<>(outputConfigurations.size());
                for (OutputConfiguration outConfig : outputConfigurations) {
                    surfaces.add(outConfig.getSurface());
                }
                StreamConfigurationMap config =
                    getCharacteristics().get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                SurfaceUtils.checkConstrainedHighSpeedSurfaces(surfaces, /*fpsRange*/null, config);

                newSession = new CameraConstrainedHighSpeedCaptureSessionImpl(mNextSessionId++,
                        callback, executor, this, mDeviceExecutor, configureSuccess,
                        mCharacteristics);
            } else {
                newSession = new CameraCaptureSessionImpl(mNextSessionId++, input,
                        callback, executor, this, mDeviceExecutor, configureSuccess);
            }
            mCurrentSession = newSession;
            mSessionStateCallback = mCurrentSession.getDeviceStateCallback();
        }
    }

This method's job is to configure the surfaces. Its key line is configureSuccess = configureStreamsChecked(inputConfig, outputConfigurations, operatingMode, sessionParams), which performs the actual surface configuration; configureSuccess is true on success and false otherwise. A session implementation object is then created — normally the else branch runs and a CameraCaptureSessionImpl is constructed.
-----------------------------------------------------------------------------------------------------------------------------------

The CameraCaptureSessionImpl implementation
frameworks/base/core/java/android/hardware/camera2/impl/CameraCaptureSessionImpl.java
    CameraCaptureSessionImpl(int id, Surface input,
            CameraCaptureSession.StateCallback callback, Executor stateExecutor,
            android.hardware.camera2.impl.CameraDeviceImpl deviceImpl,
            Executor deviceStateExecutor, boolean configureSuccess) {
        mInput = input;
        mStateExecutor = stateExecutor;
        mStateCallback = createUserStateCallbackProxy(mStateExecutor, callback);//create a proxy around the user's callback

        mDeviceExecutor = deviceStateExecutor;
        mDeviceImpl = deviceImpl;

        if (configureSuccess) {
            mStateCallback.onConfigured(this);//invoke the callback
            mConfigureSuccess = true;
        }
    }
    private StateCallback createUserStateCallbackProxy(Executor executor, StateCallback callback) {
        return new CallbackProxies.SessionStateCallbackProxy(executor, callback);
    }
frameworks/base/core/java/android/hardware/camera2/impl/CallbackProxies.java
public class CallbackProxies {
    public static class SessionStateCallbackProxy
            extends CameraCaptureSession.StateCallback {
        public SessionStateCallbackProxy(Executor executor,
                CameraCaptureSession.StateCallback callback) {
            mExecutor = executor;
            mCallback = callback;
        }
        public void onConfigured(CameraCaptureSession session) {
            final long ident = Binder.clearCallingIdentity();
            mExecutor.execute(() -> mCallback.onConfigured(session));//hand the session to the app through its callback
        }
    }
}
Through the flow above, the application obtains a CameraCaptureSession and can then use it for preview and capture.
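Stripped of the Android types, the proxy created by createUserStateCallbackProxy boils down to one rule: never invoke the user's callback inline, always dispatch through the user-supplied Executor. A self-contained sketch of that pattern (the names here are illustrative, not the real framework classes):

```java
import java.util.concurrent.Executor;

public class ProxyDemo {
    interface StateCallback { void onConfigured(String session); }

    // Analogue of CallbackProxies.SessionStateCallbackProxy: every invocation
    // is posted onto the Executor, so user code runs on the thread the app chose.
    static StateCallback proxy(Executor executor, StateCallback callback) {
        return session -> executor.execute(() -> callback.onConfigured(session));
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Executor direct = Runnable::run; // stands in for the app's Handler-backed executor
        StateCallback user = session -> log.append("configured:").append(session);
        proxy(direct, user).onConfigured("session-1");
        System.out.println(log); // configured:session-1
    }
}
```

In the framework, the Executor wraps the Handler the app passed to createCaptureSession, which is how callbacks land on the app's Looper thread.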
-----------------------------------------------------------------------------------------------------------------------------------

The configureStreamsChecked implementation
frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java
    public boolean configureStreamsChecked(InputConfiguration inputConfig,
            List<OutputConfiguration> outputs, int operatingMode, CaptureRequest sessionParams) {
        checkInputConfiguration(inputConfig);//inputConfig is null here, so this check does nothing
        synchronized(mInterfaceLock) {
            //Check the cached output-stream list: configurations already present are reused,
            //anything else must have a stream created for it.
            boolean success = false;

            //set of output configurations that still need streams created
            HashSet<OutputConfiguration> addSet = new HashSet<>(outputs);
            //streamIds to delete, so mConfiguredOutputs only holds streams that are still valid
            List<Integer> deleteList = new ArrayList<>();
            //mConfiguredOutputs is the in-memory cache mapping streamId to output configuration
            for (int i = 0; i < mConfiguredOutputs.size(); ++i) {
                int streamId = mConfiguredOutputs.keyAt(i);
                OutputConfiguration outConfig = mConfiguredOutputs.valueAt(i);
                if (!outputs.contains(outConfig) || outConfig.isDeferredConfiguration()) {
                    deleteList.add(streamId);
                } else {
                    addSet.remove(outConfig);
                }
            }

            mDeviceExecutor.execute(mCallOnBusy);//surface configuration is about to start: enter the busy state
            stopRepeating();//stop the preview

            try {
                mRemoteDevice.beginConfigure();//tell the binder server in the CameraServer process that configuration begins
                InputConfiguration currentInputConfig = mConfiguredInput.getValue();
                if (inputConfig != currentInputConfig &&
                        (inputConfig == null || !inputConfig.equals(currentInputConfig))) {
                    if (currentInputConfig != null) {
                        mRemoteDevice.deleteStream(mConfiguredInput.getKey());
                        mConfiguredInput = new SimpleEntry<>(
                                REQUEST_ID_NONE, null);
                    }
                    if (inputConfig != null) {
                        int streamId = mRemoteDevice.createInputStream(inputConfig.getWidth(),
                                inputConfig.getHeight(), inputConfig.getFormat());
                        mConfiguredInput = new SimpleEntry<>(
                                streamId, inputConfig);
                    }
                }
                //delete stale output streams
                for (Integer streamId : deleteList) {
                    mRemoteDevice.deleteStream(streamId);
                    mConfiguredOutputs.delete(streamId);
                }

                for (OutputConfiguration outConfig : outputs) {//create streams for the requested surfaces
                    if (addSet.contains(outConfig)) {
                        int streamId = mRemoteDevice.createStream(outConfig);
                        mConfiguredOutputs.put(streamId, outConfig);
                    }
                }

                if (sessionParams != null) {
                    mRemoteDevice.endConfigure(operatingMode, sessionParams.getNativeCopy());
                } else {
                    mRemoteDevice.endConfigure(operatingMode, null);//tell CameraDeviceClient that configuration is finished
                }
                success = true;
            } finally {
                if (success && outputs.size() > 0) {
                    mDeviceExecutor.execute(mCallOnIdle);
                } else {
                    mDeviceExecutor.execute(mCallOnUnconfigured);
                }
            }
        }
        return success;
    }
The figure below lists the main steps of the stream-configuration function. Since inputConfig is null here, the core path is the output-stream creation highlighted in the pink box.
[Figure 1: main steps of configuring the input/output streams]

Everything between mRemoteDevice.beginConfigure() and mRemoteDevice.endConfigure(operatingMode, null) is IPC telling the service side that input/output streams are being configured. Once endConfigure returns, success is set to true; if the sequence is interrupted anywhere in between, success stays false.
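The reuse bookkeeping at the top of configureStreamsChecked is a plain set difference over the cached configurations: streams that are still requested are kept, stale ones are deleted, and only genuinely new configurations get streams created. A self-contained sketch (plain Java; stream configurations reduced to String for illustration):

```java
import java.util.*;

public class StreamDiff {
    // configured: streamId -> configuration currently held by the device (mConfiguredOutputs)
    // requested:  configurations the new session asks for
    // Returns "delete" (stale streamIds) and "create" (new configs), mirroring deleteList/addSet.
    static Map<String, Object> diff(Map<Integer, String> configured, List<String> requested) {
        Set<String> toCreate = new HashSet<>(requested);
        List<Integer> toDelete = new ArrayList<>();
        for (Map.Entry<Integer, String> e : configured.entrySet()) {
            if (!requested.contains(e.getValue())) {
                toDelete.add(e.getKey());      // stale: delete this stream
            } else {
                toCreate.remove(e.getValue()); // already configured: reuse it
            }
        }
        Map<String, Object> result = new HashMap<>();
        result.put("delete", toDelete);
        result.put("create", toCreate);
        return result;
    }

    public static void main(String[] args) {
        Map<Integer, String> configured = new LinkedHashMap<>();
        configured.put(0, "preview");
        configured.put(1, "jpeg-8M");
        Map<String, Object> r = diff(configured, Arrays.asList("preview", "jpeg-12M"));
        System.out.println(r.get("delete")); // [1]
        System.out.println(r.get("create")); // [jpeg-12M]
    }
}
```

This is why re-creating a session with the same surfaces is cheap: unchanged streams survive the diff and only the changed ones go through deleteStream/createStream.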

mRemoteDevice.createStream(outConfig) is an IPC call that lands in CameraDeviceClient.h, in virtual binder::Status createStream(const hardware::camera2::params::OutputConfiguration &outputConfiguration, /*out*/ int32_t* newStreamId = NULL) override;. The first parameter describes the output surface; the second is an out parameter filled in once the IPC completes.

The for loop takes the GraphicBufferProducers obtained from outputConfiguration.getGraphicBufferProducers(), creates the corresponding Surfaces, validates each one, adds the valid ones to the surfaces collection, and then calls mDevice->createStream to carry on with stream creation. This touches on the Android display system: every buffer that ends up drawn on screen is allocated in graphics memory, while everything else is allocated in ordinary memory. Two modules manage these buffers — framebuffer, which puts rendered buffers on screen, and gralloc, which allocates them. Camera preview buffer rotation is no exception: its buffers are ultimately allocated by gralloc, described at the native layer by a private_handle_t pointer and wrapped by several layers above that. The buffers are shared, but at any moment of their lifecycle they belong to exactly one owner, and ownership keeps changing hands. This is Android's classic producer-consumer model, with BufferProducer as the producer and BufferConsumer as the consumer; each buffer is locked while ownership changes, so no other party can modify it in the meantime, which is what keeps the buffers synchronized.
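The single-owner hand-off described above can be modeled as a small state machine: a buffer slot cycles FREE → DEQUEUED (producer holds it) → QUEUED → ACQUIRED (consumer holds it) → FREE, and a transition attempted from the wrong state is rejected. This is only a toy model of the ownership rule, not the real BufferQueue API:

```java
public class BufferSlot {
    enum State { FREE, DEQUEUED, QUEUED, ACQUIRED }
    private State state = State.FREE;

    // Each transition is legal from exactly one predecessor state; anything
    // else means two owners tried to touch the buffer at the same time.
    void dequeue() { move(State.FREE, State.DEQUEUED); }   // producer takes it
    void queue()   { move(State.DEQUEUED, State.QUEUED); } // producer filled it
    void acquire() { move(State.QUEUED, State.ACQUIRED); } // consumer takes it
    void release() { move(State.ACQUIRED, State.FREE); }   // consumer is done

    private void move(State from, State to) {
        if (state != from) throw new IllegalStateException(state + " -> " + to);
        state = to;
    }

    State state() { return state; }

    public static void main(String[] args) {
        BufferSlot slot = new BufferSlot();
        slot.dequeue(); slot.queue(); slot.acquire(); slot.release();
        System.out.println(slot.state()); // FREE
        try {
            slot.queue();                 // queue without dequeue: rejected
        } catch (IllegalStateException e) {
            System.out.println("rejected");
        }
    }
}
```

In the real system the "lock" is enforced across processes by the BufferQueue, but the invariant is the same: a buffer advances around this cycle and never has two owners at once.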

frameworks/base/core/java/android/hardware/camera2/impl/ICameraDeviceUserWrapper.java
private final ICameraDeviceUser mRemoteDevice;
    public int createStream(OutputConfiguration outputConfiguration){
        return mRemoteDevice.createStream(outputConfiguration);
    }
frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp
binder::Status CameraDeviceClient::createStream(
        const hardware::camera2::params::OutputConfiguration &outputConfiguration,
        /*out*/
        int32_t* newStreamId) {

    //create the corresponding surfaces
    const std::vector<sp<IGraphicBufferProducer>>& bufferProducers =
            outputConfiguration.getGraphicBufferProducers();
    for (auto& bufferProducer : bufferProducers) {
        sp<IBinder> binder = IInterface::asBinder(bufferProducer);
        ssize_t index = mStreamMap.indexOfKey(binder);
        sp<Surface> surface;
        res = createSurfaceFromGbp(streamInfo, isStreamInfoValid, surface, bufferProducer);
        binders.push_back(IInterface::asBinder(bufferProducer));
        surfaces.push_back(surface);
    }

    //create the stream
    int streamId = camera3::CAMERA3_STREAM_ID_INVALID;
    std::vector<int> surfaceIds;
    err = mDevice->createStream(surfaces, deferredConsumer, streamInfo.width,
            streamInfo.height, streamInfo.format, streamInfo.dataSpace,
            static_cast<camera3_stream_rotation_t>(outputConfiguration.getRotation()),
            &streamId, physicalCameraId, &surfaceIds, outputConfiguration.getSurfaceSetID(),
            isShared);

     {
        for (auto& binder : binders) {
            mStreamMap.add(binder, StreamSurfaceId(streamId, surfaceIds[i]));
            i++;
        }
        mConfiguredOutputs.add(streamId, outputConfiguration);
        mStreamInfoMap[streamId] = streamInfo;
        *newStreamId = streamId;
    }
    return res;
}
binder::Status CameraDeviceClient::createSurfaceFromGbp(
        OutputStreamInfo& streamInfo, bool isStreamInfoValid,
        sp<Surface>& surface, const sp<IGraphicBufferProducer>& gbp) {
   /*create the Surface in the native layer and fill in OutputStreamInfo*/
   surface = new Surface(gbp, useAsync);
    if (!isStreamInfoValid) {
        streamInfo.width = width;
        streamInfo.height = height;
        streamInfo.format = format;
        streamInfo.dataSpace = dataSpace;
        streamInfo.consumerUsage = consumerUsage;
        return binder::Status::ok();
    }
}

The createStream stage

alps/frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp
status_t Camera3Device::createStream(const std::vector<sp<Surface>>& consumers,
        bool hasDeferredConsumer, uint32_t width, uint32_t height, int format,
        android_dataspace dataSpace, camera3_stream_rotation_t rotation, int *id,
        const String8& physicalCameraId,
        std::vector *surfaceIds, int streamSetId, bool isShared, uint64_t consumerUsage) {
    status_t res;
    bool wasActive = false;

    switch (mStatus) {
        case STATUS_ERROR:
            return INVALID_OPERATION;
        case STATUS_UNINITIALIZED:
            return INVALID_OPERATION;
        case STATUS_UNCONFIGURED:
        case STATUS_CONFIGURED:
            // OK
            break;
        case STATUS_ACTIVE:
            ALOGV("%s: Stopping activity to reconfigure streams", __FUNCTION__);
            res = internalPauseAndWaitLocked(maxExpectedDuration);
            wasActive = true;
            break;
    }
    sp<Camera3OutputStream> newStream;

    if (format == HAL_PIXEL_FORMAT_BLOB) { //still capture (JPEG)
        ssize_t blobBufferSize;
        if (dataSpace != HAL_DATASPACE_DEPTH) {
            blobBufferSize = getJpegBufferSize(width, height);
        } else {
            blobBufferSize = getPointCloudBufferSize();
        }
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, blobBufferSize, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, streamSetId);
    } else if (format == HAL_PIXEL_FORMAT_RAW_OPAQUE) {
        ssize_t rawOpaqueBufferSize = getRawOpaqueBufferSize(width, height);
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, rawOpaqueBufferSize, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, streamSetId);
    } else if (isShared) {
        newStream = new Camera3SharedOutputStream(mNextStreamId, consumers,
                width, height, format, consumerUsage, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, streamSetId);
    } else if (consumers.size() == 0 && hasDeferredConsumer) { //preview with a deferred surface
        newStream = new Camera3OutputStream(mNextStreamId,
                width, height, format, consumerUsage, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, streamSetId);
    } else { //callback
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, streamSetId);
    }

    size_t consumerCount = consumers.size();
    for (size_t i = 0; i < consumerCount; i++) {
        int id = newStream->getSurfaceId(consumers[i]);
        if (surfaceIds != nullptr) {
            surfaceIds->push_back(id);
        }
    }

    newStream->setStatusTracker(mStatusTracker);
    newStream->setBufferManager(mBufferManager);
    res = mOutputStreams.add(mNextStreamId, newStream);

    *id = mNextStreamId++;
    mNeedConfig = true;
    return OK;
}

binder::Status CameraDeviceClient::endConfigure(int operatingMode,
        const hardware::camera2::impl::CameraMetadataNative& sessionParams) {
    mDevice->configureStreams(sessionParams, operatingMode);
}
alps/frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp
status_t Camera3Device::configureStreams(const CameraMetadata& sessionParams, int operatingMode) {
    if (sessionParams.isEmpty() &&
            ((mLastTemplateId > 0) && (mLastTemplateId < CAMERA3_TEMPLATE_COUNT)) &&
            (!mRequestTemplateCache[mLastTemplateId].isEmpty())) {
        return filterParamsAndConfigureLocked(mRequestTemplateCache[mLastTemplateId],
                operatingMode);
    }

    return filterParamsAndConfigureLocked(sessionParams, operatingMode);
}
status_t Camera3Device::filterParamsAndConfigureLocked(const CameraMetadata& sessionParams,
        int operatingMode) {
    //Filter out any incoming session parameters
    const CameraMetadata params(sessionParams);
    camera_metadata_entry_t availableSessionKeys = mDeviceInfo.find(
            ANDROID_REQUEST_AVAILABLE_SESSION_KEYS);
    CameraMetadata filteredParams(availableSessionKeys.count);
    camera_metadata_t *meta = const_cast<camera_metadata_t *>(
            filteredParams.getAndLock());
    set_camera_metadata_vendor_id(meta, mVendorTagId);
    if (availableSessionKeys.count > 0) {
        for (size_t i = 0; i < availableSessionKeys.count; i++) {
            camera_metadata_ro_entry entry = params.find(
                    availableSessionKeys.data.i32[i]);
            if (entry.count > 0) {
                filteredParams.update(entry);
            }
        }
    }
    return configureStreamsLocked(operatingMode, filteredParams);
}

status_t Camera3Device::configureStreamsLocked(int operatingMode,
        const CameraMetadata& sessionParams, bool notifyRequestThread) {
    bool isConstrainedHighSpeed =
            static_cast<int>(StreamConfigurationMode::CONSTRAINED_HIGH_SPEED_MODE) ==
            operatingMode;

    if (mOperatingMode != operatingMode) {
        mNeedConfig = true;
        mIsConstrainedHighSpeedConfiguration = isConstrainedHighSpeed;
        mOperatingMode = operatingMode;
    }

    if (mOutputStreams.size() == 0) {
        addDummyStreamLocked();
    } else {
        tryRemoveDummyStreamLocked();
    }

    mPreparerThread->pause();

    camera3_stream_configuration config;
    config.operation_mode = mOperatingMode;
    config.num_streams = (mInputStream != NULL) + mOutputStreams.size();

    Vector<camera3_stream_t*> streams;
    streams.setCapacity(config.num_streams);
    std::vector<uint32_t> bufferSizes(config.num_streams, 0);


    if (mInputStream != NULL) {
        camera3_stream_t *inputStream;
        inputStream = mInputStream->startConfiguration();
        streams.add(inputStream);
    }

    for (size_t i = 0; i < mOutputStreams.size(); i++) {
        // Don't configure bidi streams twice, nor add them twice to the list
        if (mOutputStreams[i].get() ==
            static_cast<Camera3StreamInterface*>(mInputStream.get())) {
            config.num_streams--;
            continue;
        }

        camera3_stream_t *outputStream;
        outputStream = mOutputStreams.editValueAt(i)->startConfiguration();
        streams.add(outputStream);

        if (outputStream->format == HAL_PIXEL_FORMAT_BLOB &&
                outputStream->data_space == HAL_DATASPACE_V0_JFIF) {
            size_t k = i + ((mInputStream != nullptr) ? 1 : 0); // Input stream if present should
                                                                // always occupy the initial entry.
            bufferSizes[k] = static_cast<uint32_t>(
                    getJpegBufferSize(outputStream->width, outputStream->height));
        }
    }

    config.streams = streams.editArray();

    const camera_metadata_t *sessionBuffer = sessionParams.getAndLock();
    res = mInterface->configureStreams(sessionBuffer, &config, bufferSizes);//delegate the actual configuration to the HalInterface object
    sessionParams.unlock(sessionBuffer);

    if (mInputStream != NULL && mInputStream->isConfiguring()) {
        res = mInputStream->finishConfiguration();
    }

    for (size_t i = 0; i < mOutputStreams.size(); i++) {
        sp<Camera3OutputStreamInterface> outputStream =
            mOutputStreams.editValueAt(i);
        if (outputStream->isConfiguring() && !outputStream->isConsumerConfigurationDeferred()) {
            res = outputStream->finishConfiguration();
        }
    }
    if (notifyRequestThread) {
        mRequestThread->configurationComplete(mIsConstrainedHighSpeedConfiguration, sessionParams);
    }

    // Update device state
    const camera_metadata_t *newSessionParams = sessionParams.getAndLock();
    const camera_metadata_t *currentSessionParams = mSessionParams.getAndLock();
    bool updateSessionParams = (newSessionParams != currentSessionParams) ? true : false;

    if (updateSessionParams)  {
        mSessionParams = sessionParams;
    }

    mNeedConfig = false;
    internalUpdateStatusLocked((mDummyStreamId == NO_STREAM) ?
            STATUS_CONFIGURED : STATUS_UNCONFIGURED);
    auto rc = mPreparerThread->resume();
    return OK;
}
status_t Camera3Device::HalInterface::configureStreams(const camera_metadata_t *sessionParams,
        camera3_stream_configuration *config, const std::vector& bufferSizes) {
    // Convert stream config to HIDL
    std::set<int> activeStreams;
    device::V3_2::StreamConfiguration requestedConfiguration3_2;
    device::V3_4::StreamConfiguration requestedConfiguration3_4;
    requestedConfiguration3_2.streams.resize(config->num_streams);
    requestedConfiguration3_4.streams.resize(config->num_streams);
    for (size_t i = 0; i < config->num_streams; i++) {
        device::V3_2::Stream &dst3_2 = requestedConfiguration3_2.streams[i];
        device::V3_4::Stream &dst3_4 = requestedConfiguration3_4.streams[i];
        camera3_stream_t *src = config->streams[i];

        Camera3Stream* cam3stream = Camera3Stream::cast(src);
        cam3stream->setBufferFreedListener(this);
        int streamId = cam3stream->getId();
        StreamType streamType;
        switch (src->stream_type) {
            case CAMERA3_STREAM_OUTPUT:
                streamType = StreamType::OUTPUT;
                break;
            case CAMERA3_STREAM_INPUT:
                streamType = StreamType::INPUT;
                break;
        }
        dst3_2.id = streamId;
        dst3_2.streamType = streamType;
        dst3_2.width = src->width;
        dst3_2.height = src->height;
        dst3_2.format = mapToPixelFormat(src->format);
        dst3_2.usage = mapToConsumerUsage(cam3stream->getUsage());
        dst3_2.dataSpace = mapToHidlDataspace(src->data_space);
        dst3_2.rotation = mapToStreamRotation((camera3_stream_rotation_t) src->rotation);
        dst3_4.v3_2 = dst3_2;
        dst3_4.bufferSize = bufferSizes[i];
        if (src->physical_camera_id != nullptr) {
            dst3_4.physicalCameraId = src->physical_camera_id;
        }

        activeStreams.insert(streamId);
        // Create Buffer ID map if necessary
        if (mBufferIdMaps.count(streamId) == 0) {
            mBufferIdMaps.emplace(streamId, BufferIdMap{});
        }
    }

    // remove BufferIdMap for deleted streams
    for(auto it = mBufferIdMaps.begin(); it != mBufferIdMaps.end();) {
        int streamId = it->first;
        bool active = activeStreams.count(streamId) > 0;
        if (!active) {
            it = mBufferIdMaps.erase(it);
        } else {
            ++it;
        }
    }

    StreamConfigurationMode operationMode;
    res = mapToStreamConfigurationMode(
            (camera3_stream_configuration_mode_t) config->operation_mode,
            /*out*/ &operationMode);

    requestedConfiguration3_2.operationMode = operationMode;
    requestedConfiguration3_4.operationMode = operationMode;
    requestedConfiguration3_4.sessionParams.setToExternal(
            reinterpret_cast<uint8_t*>(const_cast<camera_metadata_t*>(sessionParams)),
            get_camera_metadata_size(sessionParams));

    // Invoke configureStreams
    device::V3_3::HalStreamConfiguration finalConfiguration;
    common::V1_0::Status status;

    // See if we have v3.4 or v3.3 HAL
    if (mHidlSession_3_4 != nullptr) {
        device::V3_4::HalStreamConfiguration finalConfiguration3_4;
        auto err = mHidlSession_3_4->configureStreams_3_4(requestedConfiguration3_4,
            [&status, &finalConfiguration3_4]
            (common::V1_0::Status s, const device::V3_4::HalStreamConfiguration& halConfiguration) {
                finalConfiguration3_4 = halConfiguration;
                status = s;
            });

        finalConfiguration.streams.resize(finalConfiguration3_4.streams.size());
        for (size_t i = 0; i < finalConfiguration3_4.streams.size(); i++) {
            finalConfiguration.streams[i] = finalConfiguration3_4.streams[i].v3_3;
        }
    }

    // And convert output stream configuration from HIDL
    for (size_t i = 0; i < config->num_streams; i++) {
        camera3_stream_t *dst = config->streams[i];
        int streamId = Camera3Stream::cast(dst)->getId();

        // Start scan at i, with the assumption that the stream order matches
        size_t realIdx = i;
        bool found = false;
        for (size_t idx = 0; idx < finalConfiguration.streams.size(); idx++) {
            if (finalConfiguration.streams[realIdx].v3_2.id == streamId) {
                found = true;
                break;
            }
            realIdx = (realIdx >= finalConfiguration.streams.size()) ? 0 : realIdx + 1;
        }
        device::V3_3::HalStream &src = finalConfiguration.streams[realIdx];

        Camera3Stream* dstStream = Camera3Stream::cast(dst);
        dstStream->setFormatOverride(false);
        dstStream->setDataSpaceOverride(false);
        int overrideFormat = mapToFrameworkFormat(src.v3_2.overrideFormat);
        android_dataspace overrideDataSpace = mapToFrameworkDataspace(src.overrideDataSpace);

        {
            dstStream->setFormatOverride((dst->format != overrideFormat) ? true : false);
            dstStream->setDataSpaceOverride((dst->data_space != overrideDataSpace) ? true : false);
            dst->format = overrideFormat;
            dst->data_space = overrideDataSpace;
        }

        if (dst->stream_type == CAMERA3_STREAM_INPUT) {
            dstStream->setUsage(
                    mapConsumerToFrameworkUsage(src.v3_2.consumerUsage));
        } else {
            dstStream->setUsage(
                    mapProducerToFrameworkUsage(src.v3_2.producerUsage));
        }
        dst->max_buffers = src.v3_2.maxBuffers;
    }
    return res;
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/main/hal/device/3.x/device/CameraDevice3SessionImpl.cpp
ThisNamespace::
configureStreams_3_4(const V3_4::StreamConfiguration& requestedConfiguration, configureStreams_3_4_cb _hidl_cb)
{
    WrappedHalStreamConfiguration halStreamConfiguration;
    {
        int err = NO_INIT;
        status = tryRunCommandLocked(getWaitCommandTimeout(), "onConfigureStreamsLocked", [&, this](){
            err = onConfigureStreamsLocked(requestedConfiguration, halStreamConfiguration);
        });
    }
    _hidl_cb(mapToHidlCameraStatus(status), halStreamConfiguration);
    return Void();
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/main/hal/device/3.x/device/CameraDevice3SessionImpl.cpp
onConfigureStreamsLocked(
    const WrappedStreamConfiguration& requestedConfiguration,
    WrappedHalStreamConfiguration& halConfiguration
){
    IAppStreamManager::ConfigAppStreams appStreams;
    auto pAppStreamManager = getSafeAppStreamManager(); //get the mAppStreamManager held by CameraDevice3SessionImpl
    pAppStreamManager->beginConfigureStreams(requestedConfiguration, halConfiguration, appStreams);
    auto pPipelineModel = getSafePipelineModel(); //get the mPipelineModel held by CameraDevice3SessionImpl

    auto pParams = std::make_shared();
#define _CLONE_(dst, src) \
            do { \
                dst.clear(); \
                for ( size_t j=0; j<src.size(); j++ ) { \
                    dst.emplace( std::make_pair(src.keyAt(j), src.valueAt(j) ) ); \
                } \
            } while (0) \

        _CLONE_(pParams->vImageStreams,         appStreams.vImageStreams);
        _CLONE_(pParams->vMetaStreams,          appStreams.vMetaStreams);
        _CLONE_(pParams->vMinFrameDuration,     appStreams.vMinFrameDuration);
        _CLONE_(pParams->vStallFrameDuration,   appStreams.vStallFrameDuration);

    pPipelineModel->configure(pParams);
    pAppStreamManager->endConfigureStreams(halConfiguration);
    return OK;
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/main/hal/device/3.x/app/AppStreamMgr.cpp
AppStreamMgr::
beginConfigureStreams(
    const V3_4::StreamConfiguration& requestedConfiguration,
    V3_4::HalStreamConfiguration& halConfiguration,
    ConfigAppStreams& rStreams
){
    mConfigHandler->beginConfigureStreams(requestedConfiguration, halConfiguration, rStreams);
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/main/hal/device/3.x/app/AppStreamMgr.ConfigHandler.cpp
ThisNamespace::
beginConfigureStreams(
    const V3_4::StreamConfiguration& requestedConfiguration,
    V3_4::HalStreamConfiguration& halConfiguration,
    ConfigAppStreams& rStreams
){
    auto addFrameDuration = [this](auto& rStreams, auto const pStreamInfo) { //define the addFrameDuration helper
        for (size_t j = 0; j < mEntryMinDuration.count(); j+=4) {
            if (mEntryMinDuration.itemAt(j    , Type2Type<MINT64>()) == (MINT64)pStreamInfo->getOriImgFormat() &&
                mEntryMinDuration.itemAt(j + 1, Type2Type<MINT64>()) == (MINT64)pStreamInfo->getLandscapeSize().w &&
                mEntryMinDuration.itemAt(j + 2, Type2Type<MINT64>()) == (MINT64)pStreamInfo->getLandscapeSize().h)
            {
                rStreams.vMinFrameDuration.add(
                    pStreamInfo->getStreamId(),
                    mEntryMinDuration.itemAt(j + 3, Type2Type<MINT64>())
                );
                rStreams.vStallFrameDuration.add(
                    pStreamInfo->getStreamId(),
                    mEntryStallDuration.itemAt(j + 3, Type2Type<MINT64>())
                );
                break;
            }
        }
                break;
            }
        }
        return;
    };
    err = checkStreams(requestedConfiguration.streams);
    mFrameHandler->setOperationMode((uint32_t)requestedConfiguration.operationMode);

    //create the AppMetaStreamInfo
    {
        StreamId_T const streamId = eSTREAMID_END_OF_FWK;
        auto pStreamInfo = createMetaStreamInfo(streamId);
        mFrameHandler->addConfigStream(pStreamInfo);
        rStreams.vMetaStreams.add(streamId, pStreamInfo);
    }

    //create the AppImageStreamInfo for each requested stream
    halConfiguration.streams.resize(requestedConfiguration.streams.size());
    rStreams.vImageStreams.setCapacity(requestedConfiguration.streams.size());
    for ( size_t i = 0; i < requestedConfiguration.streams.size(); i++ )
    {
        const auto& srcStream = requestedConfiguration.streams[i];
              auto& dstStream = halConfiguration.streams[i];
        StreamId_T streamId = srcStream.v3_2.id;
        //
        sp<AppImageStreamInfo> pStreamInfo = mFrameHandler->getConfigImageStream(streamId);
        if ( pStreamInfo == nullptr )
        {
            pStreamInfo = createImageStreamInfo(srcStream, dstStream);
            mFrameHandler->addConfigStream(pStreamInfo.get(), false/*keepBufferCache*/);
        }
        else{
            // Create a new stream to override the old one, since usage/rotation might need to change.
            pStreamInfo = createImageStreamInfo(srcStream, dstStream);
            mFrameHandler->addConfigStream(pStreamInfo.get(), true/*keepBufferCache*/);
        }
        rStreams.vImageStreams.add(streamId, pStreamInfo);
        addFrameDuration(rStreams, pStreamInfo);
    }
    return OK;
}
ThisNamespace::
createImageStreamInfo(
    const V3_4::Stream& rStream,
    V3_4::HalStream& rOutStream
){
    MUINT64 const usageForHal = (GRALLOC_USAGE_SW_READ_OFTEN|GRALLOC_USAGE_SW_WRITE_OFTEN) |
                          GRALLOC1_PRODUCER_USAGE_CAMERA ;
    MUINT64 const usageForHalClient = rStream.v3_2.usage;
    MUINT64 usageForAllocator = usageForHal | usageForHalClient;
    MINT32  const formatToAllocate  = static_cast<MINT32>(rStream.v3_2.format);
    //
    usageForAllocator = (rStream.v3_2.streamType==StreamType::OUTPUT) ? usageForAllocator : usageForAllocator | GRALLOC_USAGE_HW_CAMERA_ZSL;
    //
    IGrallocHelper* pGrallocHelper = mCommonInfo->mGrallocHelper;
    GrallocStaticInfo   grallocStaticInfo;
    GrallocRequest      grallocRequest;
    grallocRequest.usage  = usageForAllocator;
    grallocRequest.format = formatToAllocate;
    MY_LOGD("grallocRequest.format=%d, grallocRequest.usage = 0x%x ", grallocRequest.format, grallocRequest.usage);

    if  ( HAL_PIXEL_FORMAT_BLOB == formatToAllocate ) {
        auto const dataspace = (Dataspace)rStream.v3_2.dataSpace;
        auto const bufferSz  = rStream.bufferSize;

        // For BLOB format with dataSpace Dataspace::JFIF, this must be non-zero and represent the
        // maximal size HAL can lock using android.hardware.graphics.mapper lock API.
        if ( Dataspace::V0_JFIF == dataspace ) {
            if ( CC_UNLIKELY(bufferSz==0) ) {
                MY_LOGW("V0_JFIF with bufferSize(0)");
                IMetadata::IEntry const& entry = mCommonInfo->mMetadataProvider->getMtkStaticCharacteristics().entryFor(MTK_JPEG_MAX_SIZE);
                if  ( entry.isEmpty() ) {
                    MY_LOGW("no static JPEG_MAX_SIZE");
                    grallocRequest.widthInPixels = rStream.v3_2.width * rStream.v3_2.height * 2;
                }
                else {
                    grallocRequest.widthInPixels = entry.itemAt(0, Type2Type<MINT32>());
                }
            } else {
                grallocRequest.widthInPixels = bufferSz;
            }
            grallocRequest.heightInPixels = 1;
            MY_LOGI("BLOB with widthInPixels(%d), heightInPixels(%d), bufferSize(%u)",
                    grallocRequest.widthInPixels, grallocRequest.heightInPixels, rStream.bufferSize);
        }
        else {
            if ( bufferSz!=0 )
                grallocRequest.widthInPixels = bufferSz;
            else
                grallocRequest.widthInPixels = rStream.v3_2.width * rStream.v3_2.height * 2;
            grallocRequest.heightInPixels = 1;
            MY_LOGW("undefined dataspace(0x%x) with bufferSize(%u) in BLOB format -> %dx%d",
                    static_cast<uint32_t>(dataspace), bufferSz, grallocRequest.widthInPixels, grallocRequest.heightInPixels);
        }
    }
    else {
        grallocRequest.widthInPixels  = rStream.v3_2.width;
        grallocRequest.heightInPixels = rStream.v3_2.height;
    }
    //
    err = pGrallocHelper->query(&grallocRequest, &grallocStaticInfo);

    //  stream name = s<streamId>:d<instanceId>:App:<format>:<usage>
    String8 s8StreamName = String8::format("s%d:d%d:App:", rStream.v3_2.id, mCommonInfo->mInstanceId);
    String8 const s8FormatAllocated  = pGrallocHelper->queryPixelFormatName(grallocStaticInfo.format);
    switch  (grallocStaticInfo.format)
    {
    case HAL_PIXEL_FORMAT_BLOB:
    case HAL_PIXEL_FORMAT_YV12:
    case HAL_PIXEL_FORMAT_YCRCB_420_SP:
    case HAL_PIXEL_FORMAT_YCBCR_422_I:
    case HAL_PIXEL_FORMAT_RAW16:
    case HAL_PIXEL_FORMAT_RAW_OPAQUE:
    case HAL_PIXEL_FORMAT_CAMERA_OPAQUE:
        s8StreamName += s8FormatAllocated;
        break;
    }
    //
    s8StreamName += ":";
    s8StreamName += pGrallocHelper->queryGrallocUsageName(usageForHalClient);
    //
    IImageStreamInfo::BufPlanes_t bufPlanes;
    bufPlanes.resize(grallocStaticInfo.planes.size());
    for (size_t i = 0; i < bufPlanes.size(); i++)
    {
        IImageStreamInfo::BufPlane& plane = bufPlanes[i];
        plane.sizeInBytes      = grallocStaticInfo.planes[i].sizeInBytes;
        plane.rowStrideInBytes = grallocStaticInfo.planes[i].rowStrideInBytes;
    }
    //
    rOutStream.v3_3.v3_2.id = rStream.v3_2.id;
    rOutStream.physicalCameraId = rStream.physicalCameraId;
    rOutStream.v3_3.v3_2.overrideFormat =
        (  PixelFormat::IMPLEMENTATION_DEFINED == rStream.v3_2.format
        //[ALPS03443045] Don't override it since there's a bug in API1 -> HAL3.
        //StreamingProcessor::recordingStreamNeedsUpdate always return true for video stream.
        && (GRALLOC_USAGE_HW_VIDEO_ENCODER & rStream.v3_2.usage) == 0
        //we don't have input stream's producer usage to determine the real format.
        && StreamType::OUTPUT == rStream.v3_2.streamType  )
            ? static_cast<PixelFormat>(grallocStaticInfo.format)
            : rStream.v3_2.format;
    rOutStream.v3_3.v3_2.producerUsage = (rStream.v3_2.streamType==StreamType::OUTPUT) ? usageForHal : 0;
    rOutStream.v3_3.v3_2.consumerUsage = (rStream.v3_2.streamType==StreamType::OUTPUT) ? 0 : usageForHal;
    rOutStream.v3_3.v3_2.maxBuffers    = 1;
    rOutStream.v3_3.overrideDataSpace = rStream.v3_2.dataSpace;

    auto const& pStreamInfo = mFrameHandler->getConfigImageStream(rStream.v3_2.id);
    MINT imgFormat = (grallocStaticInfo.format == HAL_PIXEL_FORMAT_BLOB
                    && rStream.v3_2.dataSpace == static_cast<uint32_t>(Dataspace::V0_JFIF)) ?
                    eImgFmt_JPEG : grallocStaticInfo.format;

    imgFormat = ( grallocStaticInfo.format == HAL_PIXEL_FORMAT_RAW16 ? eImgFmt_BAYER10_UNPAK : imgFormat );

    NSCam::ImageBufferInfo imgBufferInfo;
    imgBufferInfo.bufOffset.push_back(0);
    imgBufferInfo.bufPlanes = bufPlanes;
    imgBufferInfo.imgFormat = imgFormat;
    imgBufferInfo.imgSize.w = rStream.v3_2.width;
    imgBufferInfo.imgSize.h = rStream.v3_2.height;

    AppImageStreamInfo::CreationInfo creationInfo =
    {
        .mStreamName        = s8StreamName,
        .mvbufPlanes        = bufPlanes,                 /* alloc stage, TBD if it's YUV format for batch mode SMVR */
        .mImgFormat         = grallocStaticInfo.format,  /* alloc stage, TBD if it's YUV format for batch mode SMVR */
        .mOriImgFormat      = (pStreamInfo.get())? pStreamInfo->getOriImgFormat() : formatToAllocate,
        .mStream            = rStream,
        .mHalStream         = rOutStream,
        .mImageBufferInfo   = imgBufferInfo,
    };
    AppImageStreamInfo* pStream = new AppImageStreamInfo(creationInfo);
    return pStream;
}
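Note how createImageStreamInfo flattens a BLOB (JPEG) stream into a 1-pixel-high allocation whose width is the byte count, preferring the framework-supplied bufferSize, then the static MTK_JPEG_MAX_SIZE, then a width*height*2 heuristic. A compact sketch of just that sizing decision (names and the `jpegMaxSize <= 0` convention for a missing metadata entry are illustrative):

```cpp
#include <cstdint>

struct BlobDims { int32_t widthInPixels; int32_t heightInPixels; };

// For HAL_PIXEL_FORMAT_BLOB the allocator is asked for a widthInPixels x 1
// buffer whose width is the byte size. Preference order mirrors the code
// above: explicit bufferSize, then the static JPEG max size, then the
// width*height*2 fallback.
BlobDims blobAllocDims(uint32_t bufferSize, int64_t jpegMaxSize,
                       uint32_t width, uint32_t height)
{
    BlobDims d{0, 1};  // BLOB buffers are always 1 pixel high
    if (bufferSize != 0)
        d.widthInPixels = static_cast<int32_t>(bufferSize);
    else if (jpegMaxSize > 0)
        d.widthInPixels = static_cast<int32_t>(jpegMaxSize);
    else
        d.widthInPixels = static_cast<int32_t>(width * height * 2);
    return d;
}
```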

-----------------------------------------------------------------------------------------------------------------------------------
alps/vendor/mediatek/proprietary/hardware/mtkcam3/main/hal/device/3.x/app/AppStreamMgr.cpp
AppStreamMgr::
endConfigureStreams(
    V3_4::HalStreamConfiguration& halConfiguration
){
    mConfigHandler->endConfigureStreams(halConfiguration);
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/main/hal/device/3.x/app/AppStreamMgr.ConfigHandler.cpp
ThisNamespace::
endConfigureStreams(
    V3_4::HalStreamConfiguration& halConfiguration
){
    //clear old BatchStreamId
    mBatchHandler->resetBatchStreamId();
    std::unordered_set<StreamId_T> usedStreamIds;
    usedStreamIds.reserve(halConfiguration.streams.size());
    for (size_t i = 0; i < halConfiguration.streams.size(); i++) {
        auto& halStream = halConfiguration.streams[i];
        StreamId_T const streamId = halStream.v3_3.v3_2.id;
        auto pStreamInfo = mFrameHandler->getConfigImageStream(streamId);

        mBatchHandler->checkStreamUsageforBatchMode(pStreamInfo);
        usedStreamIds.insert(streamId);

        // a stream in demand ? => set its maxBuffers
        halStream.v3_3.v3_2.maxBuffers = pStreamInfo->getMaxBufNum();
    }
    mFrameHandler->removeUnusedConfigStream(usedStreamIds);
    return OK;
}
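endConfigureStreams collects the ids that survive this configuration into a set, then drops every previously configured stream that is no longer referenced. A minimal sketch of that pruning step, with a `std::map` standing in for the frame handler's stream table (names are illustrative):

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <unordered_set>

using StreamId_T = int64_t;

// Erase every configured stream whose id is not in the "used" set,
// i.e. streams left over from the previous session configuration.
template <typename V>
void removeUnusedConfigStream(std::map<StreamId_T, V>& table,
                              const std::unordered_set<StreamId_T>& used)
{
    for (auto it = table.begin(); it != table.end(); ) {
        if (used.count(it->first) == 0)
            it = table.erase(it);   // stale stream: drop it
        else
            ++it;
    }
}
```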
-----------------------------------------------------------------------------------------------------------------------------------

alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/model/PipelineModelImpl.cpp
PipelineModelImpl::
configure(
    std::shared_ptr<UserConfigurationParams> const& params
){
    IPipelineModelSessionFactory::CreationParams sessionCfgParams;
    sessionCfgParams.pPipelineStaticInfo      = mPipelineStaticInfo;
    sessionCfgParams.pUserConfigurationParams = params;
    sessionCfgParams.pPipelineModelCallback   = mCallback.promote();
    mSession = IPipelineModelSessionFactory::createPipelineModelSession(sessionCfgParams);
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/model/session/PipelineModelSessionFactory.cpp
IPipelineModelSessionFactory::
createPipelineModelSession(
    CreationParams const& params __unused
){
    //  (2) convert to UserConfiguration
    auto pUserConfiguration = convertToUserConfiguration(
        *params.pPipelineStaticInfo,
        *params.pUserConfigurationParams
    );
    //  (3) pipeline policy
    auto pSettingPolicy = IPipelineSettingPolicyFactory::createPipelineSettingPolicy(
        IPipelineSettingPolicyFactory::CreationParams{
            .pPipelineStaticInfo        = params.pPipelineStaticInfo,
            .pPipelineUserConfiguration = pUserConfiguration,
    });
    //  (4) pipeline session
    auto pSession = decidePipelineModelSession(params, pUserConfiguration, pSettingPolicy);
}

Constructing the pipeline policy
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/policy/

alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/policy/PipelineSettingPolicyFactoryImpl.cpp
IPipelineSettingPolicyFactory::
createPipelineSettingPolicy(
    CreationParams const& params __unused
){
    return decidePolicyAndMake(params, pPolicyTable, pMediatorTable);
}
decidePolicyAndMake(
    IPipelineSettingPolicyFactory::CreationParams const& params __unused,
    std::shared_ptr<PolicyTable> pPolicyTable __unused,
    std::shared_ptr<MediatorTable> pMediatorTable __unused
){
    return MAKE_PIPELINE_POLICY(PipelineSettingPolicyImpl);
}
#define MAKE_PIPELINE_POLICY(_class_, ...) \
    std::make_shared<_class_>( \
        PipelineSettingPolicyImpl::CreationParams{ \
            .pPipelineStaticInfo        = params.pPipelineStaticInfo, \
            .pPipelineUserConfiguration = params.pPipelineUserConfiguration, \
            .pPolicyTable               = pPolicyTable, \
            .pMediatorTable             = pMediatorTable, \
        })
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/policy/PipelineSettingPolicyImpl.cpp
PipelineSettingPolicyImpl::
PipelineSettingPolicyImpl(
    CreationParams const& creationParams
)
    : IPipelineSettingPolicy()
    , mPipelineStaticInfo(creationParams.pPipelineStaticInfo)
    , mPipelineUserConfiguration(creationParams.pPipelineUserConfiguration)
    , mPolicyTable(creationParams.pPolicyTable)
    , mMediatorTable(creationParams.pMediatorTable)
{
}
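The MAKE_PIPELINE_POLICY macro above just packs the factory's arguments into an aggregate CreationParams and constructs the policy with `std::make_shared`. A stripped-down sketch of that construction pattern (all types here are illustrative stand-ins, not the MTK definitions):

```cpp
#include <memory>

struct PipelineStaticInfo { int openId = 0; };

// Aggregate parameter struct, filled with designated initializers at the
// call site, exactly as the macro does.
struct CreationParams {
    std::shared_ptr<PipelineStaticInfo> pPipelineStaticInfo;
};

class PipelineSettingPolicy {
    std::shared_ptr<PipelineStaticInfo> mStaticInfo;
public:
    explicit PipelineSettingPolicy(const CreationParams& p)
        : mStaticInfo(p.pPipelineStaticInfo) {}
    int openId() const { return mStaticInfo ? mStaticInfo->openId : -1; }
};

std::shared_ptr<PipelineSettingPolicy> makePolicy(const CreationParams& p)
{
    return std::make_shared<PipelineSettingPolicy>(p);
}
```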

Constructing the pipeline session
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/model/session/PipelineModelSessionFactory.cpp
decidePipelineModelSession(
    IPipelineModelSessionFactory::CreationParams const& creationParams,
    std::shared_ptr<PipelineUserConfiguration> const& pUserConfiguration,
    std::shared_ptr<IPipelineSettingPolicy> const& pSettingPolicy
){
    auto convertTo_CtorParams = [=]() {
        return PipelineModelSessionBase::CtorParams{
            .staticInfo = {
                .pPipelineStaticInfo    = creationParams.pPipelineStaticInfo,
                .pUserConfiguration     = pUserConfiguration,
            },
            .pPipelineModelCallback     = creationParams.pPipelineModelCallback,
            .pPipelineSettingPolicy     = pSettingPolicy,
        };
    };
    return PipelineModelSessionDefault::makeInstance("Default/", convertTo_CtorParams());
}

alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/model/session/PipelineModelSessionDefault.cpp
makeInstance(
    std::string const& name,
    CtorParams const& rCtorParams __unused
){
    android::sp pSession = new ThisNamespace(name, rCtorParams);
    int const err = pSession->configure();
    return pSession;
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/model/session/PipelineModelSessionBasic.cpp
#define ThisNamespace   PipelineModelSessionBasic
ThisNamespace::
ThisNamespace(
    std::string const& name,
    CtorParams const& rCtorParams)
    : PipelineModelSessionBase(
        {name + std::to_string(rCtorParams.staticInfo.pPipelineStaticInfo->openId)},
        rCtorParams)

alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/model/session/PipelineModelSessionBase.cpp
PipelineModelSessionBase::
PipelineModelSessionBase(
    std::string const&& sessionName,
    CtorParams const& rCtorParams
)
    , mStaticInfo(rCtorParams.staticInfo)
    , mDebugInfo(rCtorParams.debugInfo)
    , mPipelineModelCallback(rCtorParams.pPipelineModelCallback)
    , mPipelineSettingPolicy(rCtorParams.pPipelineSettingPolicy)

alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/model/session/PipelineModelSessionDefault.cpp
configure(){
    return PipelineModelSessionBasic::configure();
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/model/session/PipelineModelSessionBasic.cpp
configure(){
    onConfig_ConfigInfo2()
    onConfig_Capture()
    onConfig_BuildingPipelineContext()
}
onConfig_ConfigInfo2(){
    mConfigInfo2 = std::make_shared<ConfigInfo2>();
    {
        pipelinesetting::ConfigurationOutputParams out{
            .pStreamingFeatureSetting   = &mConfigInfo2->mStreamingFeatureSetting,
            .pCaptureFeatureSetting     = &mConfigInfo2->mCaptureFeatureSetting,
            .pPipelineTopology          = &mConfigInfo2->mPipelineTopology,
        };
        mPipelineSettingPolicy->evaluateConfiguration(out, {}); // configure the processing nodes and features via PipelineSettingPolicyImpl
    }
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/policy/ConfigSettingPolicyMediator.cpp
evaluateConfiguration(
    ConfigurationOutputParams& out,
    ConfigurationInputParams const& in __unused
){
    mPolicyTable->mFeaturePolicy->evaluateConfiguration
    mPolicyTable->fConfigPipelineNodesNeed
    mPolicyTable->fConfigPipelineTopology
    mPolicyTable->fConfigSensorSetting
    mPolicyTable->fConfigP1HwSetting
    mPolicyTable->fConfigP1DmaNeed
    mPolicyTable->fConfigStreamInfo_P1
    mPolicyTable->fConfigStreamInfo_NonP1
}
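The mediator owns no policy logic itself: it simply invokes the slots of the policy table in a fixed order, each one filling part of the output configuration. A toy sketch of that table-of-callables pattern using `std::function` (two illustrative slots stand in for the eight real ones):

```cpp
#include <functional>
#include <string>
#include <vector>

struct ConfigOut { std::vector<std::string> steps; };

// Each slot is an independently pluggable policy; the mediator just
// dispatches them in sequence.
struct PolicyTable {
    std::function<void(ConfigOut&)> fConfigSensorSetting;
    std::function<void(ConfigOut&)> fConfigStreamInfo;
};

void evaluateConfiguration(const PolicyTable& t, ConfigOut& out)
{
    if (t.fConfigSensorSetting) t.fConfigSensorSetting(out);
    if (t.fConfigStreamInfo)    t.fConfigStreamInfo(out);
}
```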
Creating the PipelineContext object mCurrentPipelineContext
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/model/session/PipelineModelSessionBasic.cpp
onConfig_BuildingPipelineContext(){
    BuildPipelineContextInputParams const in{
        .pipelineName               = getSessionName(),
        .pPipelineTopology          = &mConfigInfo2->mPipelineTopology,
        .pStreamingFeatureSetting   = &mConfigInfo2->mStreamingFeatureSetting,
        .pCaptureFeatureSetting     = &mConfigInfo2->mCaptureFeatureSetting,
    };
    buildPipelineContext(mCurrentPipelineContext, in)
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/model/utils/PipelineContextBuilder.cpp
auto buildPipelineContext(
    android::sp<PipelineContext>& out,
    BuildPipelineContextInputParams const& in
){
    android::sp<PipelineContext> pNewPipelineContext = PipelineContext::create(in.pipelineName.c_str());
    // invokes PipelineContextImpl's waitUntilDrained
    pNewPipelineContext->beginConfigure(
                      in.pOldPipelineContext)
    // configure streams
    configContextLocked_Streams(
                    pNewPipelineContext,
                    in.pParsedStreamInfo_P1,
                    in.pZSLProvider,
                    in.pParsedStreamInfo_NonP1,
                    &common)
    // configure nodes
    configContextLocked_Nodes(
                    pNewPipelineContext,
                    in.pOldPipelineContext,
                    in.pStreamingFeatureSetting,
                    in.pCaptureFeatureSetting,
                    in.pParsedStreamInfo_P1,
                    in.pParsedStreamInfo_NonP1,
                    in.pPipelineNodesNeed,
                    in.pSensorSetting,
                    in.pvP1HwSetting,
                    in.batchSize,
                    &common)
    // configure the pipeline topology
    configContextLocked_Pipeline(
                    pNewPipelineContext,
                    in.pPipelineTopology)

    // invokes PipelineContextImpl's config
    pNewPipelineContext->endConfigure(
                          bUsingMultiThreadToBuildPipelineContext)
    out = pNewPipelineContext;
}

Configuring streams
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/pipeline/PipelineContextBuilders.cpp
configContextLocked_Streams(
    sp<PipelineContext> pContext,
    std::vector<ParsedStreamInfo_P1> const* pParsedStreamInfo_P1,
    android::sp pZSLProvider,
    ParsedStreamInfo_NonP1 const* pParsedStreamInfo_NonP1,
    InternalCommonInputParams const* pCommon
)
StreamBuilder::
StreamBuilder(
    eStreamType const type,
    sp<IImageStreamInfo> pStreamInfo
) : mpImpl( new StreamBuilderImpl() ){
    mpImpl->mType             = type;
    mpImpl->mpImageStreamInfo = pStreamInfo;
}
StreamBuilder::
StreamBuilder(
    eStreamType const type,
    sp<IMetaStreamInfo> pStreamInfo
) : mpImpl( new StreamBuilderImpl() ){
    mpImpl->mType            = type;
    mpImpl->mpMetaStreamInfo = pStreamInfo;
}
StreamBuilder::
build(sp<PipelineContext> pContext){
    typedef PipelineContext::PipelineContextImpl        PipelineContextImplT;
    PipelineContextImplT* pContextImpl = pContext->getImpl();
    pContextImpl->updateConfig(mpImpl.get());
}

Configuring nodes
configContextLocked_Nodes(
    sp<PipelineContext> pContext,
    android::sp<PipelineContext> const& pOldPipelineContext,
    StreamingFeatureSetting const* pStreamingFeatureSetting,
    CaptureFeatureSetting const* pCaptureFeatureSetting,
    std::vector<ParsedStreamInfo_P1> const* pParsedStreamInfo_P1,
    ParsedStreamInfo_NonP1 const* pParsedStreamInfo_NonP1,
    PipelineNodesNeed const* pPipelineNodesNeed,
    std::vector<SensorSetting> const* pSensorSetting,
    std::vector<P1HwSetting> const* pvP1HwSetting,
    uint32_t batchSize,
    InternalCommonInputParams const* pCommon
)
{
    for(size_t i = 0; i < pPipelineNodesNeed->needP1Node.size(); i++) {
        if (pPipelineNodesNeed->needP1Node[i]) {
            configContextLocked_P1Node(pContext,
                            pOldPipelineContext,
                            pStreamingFeatureSetting,
                            pPipelineNodesNeed,
                            &(*pParsedStreamInfo_P1)[i],
                            pParsedStreamInfo_NonP1,
                            &(*pSensorSetting)[i],
                            &(*pvP1HwSetting)[i],
                            i,
                            batchSize,
                            useP1NodeCount > 1,
                            bMultiCam_CamSvPath,
                            pCommon,
                            isReConfig);
        }
    }
    if( pPipelineNodesNeed->needP2StreamNode ) {
        bool hasMonoSensor = false;
        for(auto const v : pPipelineStaticInfo->sensorRawType) {
            if(SENSOR_RAW_MONO == v) {
                hasMonoSensor = true;
                break;
            }
        }
        configContextLocked_P2SNode(pContext,
                            pStreamingFeatureSetting,
                            pParsedStreamInfo_P1,
                            pParsedStreamInfo_NonP1,
                            batchSize,
                            useP1NodeCount,
                            hasMonoSensor,
                            pCommon);
    }
    if( pPipelineNodesNeed->needP2CaptureNode ) {
        configContextLocked_P2CNode(pContext,
                            pCaptureFeatureSetting,
                            pParsedStreamInfo_P1,
                            pParsedStreamInfo_NonP1,
                            useP1NodeCount,
                            pCommon);
    }
    if( pPipelineNodesNeed->needFDNode ) {
        configContextLocked_FdNode(pContext,
                            pParsedStreamInfo_P1,
                            pParsedStreamInfo_NonP1,
                            useP1NodeCount,
                            pCommon);
    }
    if( pPipelineNodesNeed->needJpegNode ) {
        configContextLocked_JpegNode(pContext,
                            pParsedStreamInfo_NonP1,
                            useP1NodeCount,
                            pCommon);
    }
    if( pPipelineNodesNeed->needRaw16Node ) {
        configContextLocked_Raw16Node(pContext,
                            pParsedStreamInfo_P1,
                            pParsedStreamInfo_NonP1,
                            useP1NodeCount,
                            pCommon);
    }
    if( pPipelineNodesNeed->needPDENode ) {
        configContextLocked_PDENode(pContext,
                            pParsedStreamInfo_P1,
                            pParsedStreamInfo_NonP1,
                            useP1NodeCount,
                            pCommon);
    }
}
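In other words, configContextLocked_Nodes instantiates only the nodes the policy marked as needed: one P1Node per sensor, plus optional P2 streaming/capture, FD, Jpeg, Raw16 and PDE nodes. A compact sketch of that gating, with strings standing in for the configContextLocked_* calls (the struct and function names are illustrative simplifications):

```cpp
#include <string>
#include <vector>

struct PipelineNodesNeed {
    std::vector<bool> needP1Node;       // one flag per physical sensor
    bool needP2StreamNode  = false;
    bool needP2CaptureNode = false;
    bool needJpegNode      = false;
};

// Build the node list the same way the real code dispatches its
// configContextLocked_* helpers: per-sensor P1 first, then the optional nodes.
std::vector<std::string> configNodes(const PipelineNodesNeed& n)
{
    std::vector<std::string> built;
    for (size_t i = 0; i < n.needP1Node.size(); i++)
        if (n.needP1Node[i]) built.push_back("P1Node#" + std::to_string(i));
    if (n.needP2StreamNode)  built.push_back("P2StreamNode");
    if (n.needP2CaptureNode) built.push_back("P2CaptureNode");
    if (n.needJpegNode)      built.push_back("JpegNode");
    return built;
}
```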

Configuring the pipeline
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/pipeline/PipelineContext.cpp
PipelineContext::
PipelineContext(char const* name)
    : mpImpl( new PipelineContextImpl(name) )

configContextLocked_Pipeline(
    sp<PipelineContext> pContext,
    PipelineTopology const* pPipelineTopology
){
    PipelineBuilder()
    .setRootNode(pPipelineTopology->roots)  // set the root nodes
    .setNodeEdges(pPipelineTopology->edges) // set the node edges
    .build(pContext)
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/pipeline/PipelineContextBuilders.cpp
PipelineBuilder::
PipelineBuilder()
    : mpImpl( new PipelineBuilderImpl() )
{
}
PipelineBuilder::
build(
    sp<PipelineContext> pContext
){
    typedef PipelineContext::PipelineContextImpl        PipelineContextImplT;
    PipelineContextImplT* pContextImpl = pContext->getImpl();
    pContextImpl->updateConfig(mpImpl.get())
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/pipeline/PipelineContextImpl.cpp
PipelineContext::PipelineContextImpl::
updateConfig(PipelineBuilderImpl* pBuilder)
{
    NodeSet const& rootNodes = pBuilder->mRootNodes;
    NodeEdgeSet const& edges = pBuilder->mNodeEdges;

    // update to context
    mpPipelineConfig->setRootNode(rootNodes); // set the root nodes
    mpPipelineConfig->setNodeEdges(edges);    // set the node edges
}
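So the topology is wired up with a fluent builder: the setters collect the root nodes and the directed node edges, and build() hands them to the context's config via updateConfig. A toy version of that flow (the classes here are simplified stand-ins for PipelineBuilder/PipelineContextImpl, not the MTK definitions):

```cpp
#include <cstdint>
#include <set>
#include <utility>
#include <vector>

using NodeId_T = int64_t;

struct PipelineConfig {
    std::set<NodeId_T> roots;                          // entry nodes (e.g. P1)
    std::vector<std::pair<NodeId_T, NodeId_T>> edges;  // directed src -> dst
};

class PipelineBuilder {
    PipelineConfig mCfg;
public:
    PipelineBuilder& setRootNode(std::set<NodeId_T> roots)
    { mCfg.roots = std::move(roots); return *this; }
    PipelineBuilder& setNodeEdges(std::vector<std::pair<NodeId_T, NodeId_T>> edges)
    { mCfg.edges = std::move(edges); return *this; }
    // stands in for pContextImpl->updateConfig(...)
    void build(PipelineConfig& ctx) { ctx = std::move(mCfg); }
};
```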

alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/pipeline/PipelineContext.cpp
PipelineContext::
endConfigure(MBOOL const parallelConfig){
    getImpl()->config(mpOldContext.get() ? mpOldContext->getImpl() : NULL, parallelConfig);
}
alps/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/pipeline/PipelineContextImpl.cpp
PipelineContext::PipelineContextImpl::
config(
    PipelineContextImpl* pOldContext,
    MBOOL const isAsync
)
