Android Camera Framework Study (5): takePicture (STILL_CAPTURE) Flow Analysis

Note: as before, this post is a set of study notes based on the Android 5.1 source code.
If you are interested, feel free to join QQ group 85486140 so we can discuss and learn from each other!
  The earlier posts covered the overall API1 flow, so let's get straight to the point. Four threads are closely involved in takePicture: CaptureSequencer, JpegProcessor, Camera3Device::RequestThread and FrameProcessorBase. As the code below shows, three of them are already running once the Camera2Client object has been initialized; the remaining RequestThread is created when Camera3Device is initialized. They work very closely together; the figure below roughly sketches how they cooperate, with all four threads synchronized through Condition variables.

![](https://imgconvert.csdnimg.cn/aHR0cDovL2ltZy5ibG9nLmNzZG4ubmV0LzIwMTcwODA2MTAxNDU2NDc2?x-oss-process=image/format,png)
A few things to note about this figure (**the state machine discussed here covers only the normal STILL_CAPTURE path**):

1. Every event source is driven by HAL3 returning frames. When HAL3 returns a frame, mResultSignal is signaled, and onFrameAvailable() signals mCaptureAvailableSignal. If no frame comes back, all of the capture-related threads sit blocked waiting. (A minimal sketch of this wait/signal pattern follows this list.)
2. In the STANDARD_START state, the still-capture state machine registers a frame listener with the FrameProcessor thread, as shown in the figure.
3. Before capturing the still frame, the state machine first checks whether AE has converged; if it has not, it must wait for AE convergence before moving on to the capture state.
4. Once the capture request has been issued, the state machine blocks waiting for the picture data to come back, which is exactly what point 1 above emphasized. Each wait is 100 ms; on timeout it loops and waits again, and after roughly 3.5 s in total the application is notified of an error.
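All four threads hand work to each other with the same Condition-variable handshake: the waiting side loops on waitRelative() with a 100 ms timeout while counting down a timeout budget, and the producing side sets a flag and signals. Below is a minimal, self-contained sketch of that pattern; the names (sDataReady, sDataSignal, waitForData, signalData) are made up for illustration and are not AOSP symbols, only the 100 ms kWaitDuration mirrors the framework.

#include <utils/Condition.h>
#include <utils/Errors.h>
#include <utils/Mutex.h>
#include <utils/Timers.h>

using namespace android;

// Illustrative only: these mirror the pattern used by CaptureSequencer and
// JpegProcessor, not the real member names.
static Mutex sMutex;
static Condition sDataSignal;
static bool sDataReady = false;
static const nsecs_t kWaitDuration = 100 * 1000000LL; // 100 ms, as in the framework

// Waiting side (e.g. a *_WAIT state handler): block in 100 ms slices until the
// event arrives or the timeout budget is exhausted.
bool waitForData(int timeoutBudget) {
    Mutex::Autolock l(sMutex);
    while (!sDataReady) {
        status_t res = sDataSignal.waitRelative(sMutex, kWaitDuration);
        if (res == TIMED_OUT && --timeoutBudget <= 0) {
            return false;   // caller gives up and reports an error
        }
    }
    sDataReady = false;     // consume the event
    return true;
}

// Producing side (e.g. a HAL result/buffer callback): publish and wake the waiter.
void signalData() {
    Mutex::Autolock l(sMutex);
    if (!sDataReady) {
        sDataReady = true;
        sDataSignal.signal();
    }
}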

### 1. TakePicture preparation
#### 1) Creating the capture-related threads

status_t Camera2Client::initialize(camera_module_t *module)
{
//----- many lines omitted here -----
    // Frame (result metadata) processing thread
    mFrameProcessor = new FrameProcessor(mDevice, this);
    threadName = String8::format("C2-%d-FrameProc",
            mCameraId);
    mFrameProcessor->run(threadName.string());
    // Thread that runs the still-capture state machine
    mCaptureSequencer = new CaptureSequencer(this);
    threadName = String8::format("C2-%d-CaptureSeq",
            mCameraId);
    mCaptureSequencer->run(threadName.string());
    // Thread that consumes JPEG buffers returned by the HAL and hands them
    // to the capture sequencer
    mJpegProcessor = new JpegProcessor(this, mCaptureSequencer);
    threadName = String8::format("C2-%d-JpegProc",
            mCameraId);
    mJpegProcessor->run(threadName.string());
}

  As you can see, the current Camera2Client object is passed in when the CaptureSequencer object is created. This is because the state-machine handlers later need the Camera2Client to fetch parameters and perform various other operations; we won't dig into that here and will come back to it when analyzing the state-machine handlers.
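For reference, the CaptureSequencer constructor itself does very little: it stores the client as a weak pointer and starts the state machine in IDLE. Roughly as below (a simplified sketch with the member-initializer list abbreviated, not the verbatim 5.1 source):

CaptureSequencer::CaptureSequencer(wp<Camera2Client> client):
        Thread(false),
        mStartCapture(false),
        mBusy(false),
        mNewFrameReceived(false),
        mNewCaptureReceived(false),
        mShutterNotified(false),
        mClient(client),            // weak reference back to Camera2Client
        mCaptureState(IDLE),        // the thread will park in manageIdle() first
        mCaptureId(Camera2Client::kCaptureRequestIdStart) {
}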
  When creating the JpegProcessor object, both the current Camera2Client and the CaptureSequencer we just created are passed in, so it is worth looking at its constructor.

JpegProcessor::JpegProcessor(
    sp<Camera2Client> client,
    wp<CaptureSequencer> sequencer):
        Thread(false),
        mDevice(client->getCameraDevice()),
        mSequencer(sequencer),
        mId(client->getCameraId()),
        mCaptureAvailable(false),
        mCaptureStreamId(NO_STREAM) {
}

As you can see, the CaptureSequencer passed in is stored directly, while the Camera2Client is only used to obtain the Camera3Device object and the current camera id.
#### 2) Creating the JPEG stream
  Below is the method that creates and updates the JPEG stream. As we saw earlier, a default JPEG stream is already created when the preview stream is set up, using the default size at that time. When the application selects a different picture-size, the old stream object is deleted here at capture time and a new JPEG stream is created.

status_t JpegProcessor::updateStream(const Parameters &params) {
    ATRACE_CALL();
    ALOGV("%s", __FUNCTION__);
    status_t res;

    Mutex::Autolock l(mInputMutex);

    sp<CameraDeviceBase> device = mDevice.promote();
    // Find out buffer size for JPEG
    ssize_t maxJpegSize = device->getJpegBufferSize(params.pictureWidth, params.pictureHeight);
    if (mCaptureConsumer == 0) {
        // Create CPU buffer queue endpoint
        sp<IGraphicBufferProducer> producer;
        sp<IGraphicBufferConsumer> consumer;
        BufferQueue::createBufferQueue(&producer, &consumer);
        // Note that the buffer count is 1
        mCaptureConsumer = new CpuConsumer(consumer, 1);
        mCaptureConsumer->setFrameAvailableListener(this);
        mCaptureConsumer->setName(String8("Camera2Client::CaptureConsumer"));
        mCaptureWindow = new Surface(producer);
    }

    // Since ashmem heaps are rounded up to page size, don't reallocate if
    // the capture heap isn't exactly the same size as the required JPEG buffer
    const size_t HEAP_SLACK_FACTOR = 2;
    if (mCaptureHeap == 0 ||
            (mCaptureHeap->getSize() < static_cast<size_t>(maxJpegSize)) ||
            (mCaptureHeap->getSize() >
                    static_cast<size_t>(maxJpegSize) * HEAP_SLACK_FACTOR) ) {
        // Create memory for API consumption
        mCaptureHeap.clear();
        mCaptureHeap =
                new MemoryHeapBase(maxJpegSize, 0, "Camera2Client::CaptureHeap");
    }
    ALOGV("%s: Camera %d: JPEG capture heap now %d bytes; requested %d bytes",
            __FUNCTION__, mId, mCaptureHeap->getSize(), maxJpegSize);

    if (mCaptureStreamId != NO_STREAM) {
        // Check if stream parameters have to change
        uint32_t currentWidth, currentHeight;
        res = device->getStreamInfo(mCaptureStreamId,
                &currentWidth, &currentHeight, 0);
        if (res != OK) {
            ALOGE("%s: Camera %d: Error querying capture output stream info: "
                    "%s (%d)", __FUNCTION__,
                    mId, strerror(-res), res);
            return res;
        }
        // If the stream size has changed, delete the old JPEG stream and create a
        // new one. This typically happens when a different picture-size has been
        // selected in the app.
        if (currentWidth != (uint32_t)params.pictureWidth ||
                currentHeight != (uint32_t)params.pictureHeight) {
            ALOGV("%s: Camera %d: Deleting stream %d since the buffer dimensions changed",
                __FUNCTION__, mId, mCaptureStreamId);
            res = device->deleteStream(mCaptureStreamId);
            // error checking omitted
            mCaptureStreamId = NO_STREAM;
        }
    }
    // This happens when the JPEG stream is created for the first capture request.
    if (mCaptureStreamId == NO_STREAM) {
        // Create stream for HAL production
        res = device->createStream(mCaptureWindow,
                params.pictureWidth, params.pictureHeight,
                HAL_PIXEL_FORMAT_BLOB, &mCaptureStreamId);
        if (res != OK) {
            return res;
        }

    }
    return OK;
}

Note that the first time the JPEG stream is created, a BufferQueue is also set up, and mCaptureConsumer = new CpuConsumer(consumer, 1) uses a buffer count of 1: a STILL_CAPTURE only produces a single image.
### 2. Taking a picture: STILL_CAPTURE
When the app presses the shutter button, the ICamera proxy call ends up here. As you can see, it ultimately starts the capture state machine.

status_t Camera2Client::takePicture(int msgType) {
    ATRACE_CALL();
    Mutex::Autolock icl(mBinderSerializationLock);
    status_t res;
    if ( (res = checkPid(__FUNCTION__) ) != OK) return res;
    //------------------------------------------------------
    // Some work is done here (omitted): it re-checks whether the current
    // picture-size matches the existing JPEG stream; if not, the old JPEG
    // stream object is deleted and a new one is created for the new
    // picture-size. Once that is done, the capture state machine is started.
    res = mCaptureSequencer->startCapture(msgType);
    return res;
}

After startCapture() is called, it sends a signal that wakes up the CaptureSequencer thread (sketched below).

1. When the CaptureSequencer thread first starts running it is in the IDLE state, and the thread waits inside the IDLE state handler.
2. Once startCapture() has been called, the CaptureSequencer thread is woken up and the state machine switches to the START state, whose handler is the method shown after the sketch.
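For reference, here is a simplified sketch of how startCapture() wakes the sequencer out of IDLE (paraphrased from the 5.1 sources, with logging and some error handling trimmed):

// Called from Camera2Client::takePicture(): record the request and signal.
status_t CaptureSequencer::startCapture(int msgType) {
    ATRACE_CALL();
    Mutex::Autolock l(mInputMutex);
    if (mBusy) {
        ALOGE("%s: Already busy capturing!", __FUNCTION__);
        return INVALID_OPERATION;
    }
    if (!mStartCapture) {
        mMsgType = msgType;
        mStartCapture = true;
        mStartCaptureSignal.signal();   // wakes manageIdle() below
    }
    return OK;
}

// IDLE state handler: the sequencer thread parks here until startCapture() fires.
CaptureSequencer::CaptureState CaptureSequencer::manageIdle(
        sp<Camera2Client> &/*client*/) {
    status_t res;
    Mutex::Autolock l(mInputMutex);
    while (!mStartCapture) {
        res = mStartCaptureSignal.waitRelative(mInputMutex, kWaitDuration);
        if (res == TIMED_OUT) break;    // fall through and stay in IDLE
    }
    if (mStartCapture) {
        mBusy = true;
        mStartCapture = false;
        return START;                   // next handler: manageStart()
    }
    return IDLE;
}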

#### 1. Capture state machine: manageStart()

CaptureSequencer::CaptureState CaptureSequencer::manageStart(
        sp<Camera2Client> &client) {
    ALOGV("%s", __FUNCTION__);
    status_t res;
    ATRACE_CALL();
    SharedParameters::Lock l(client->getParameters());
    CaptureState nextState = DONE;
    // The method below builds the capture-request metadata. It fetches a
    // CAMERA2_TEMPLATE_STILL_CAPTURE metadata template from the HAL and updates
    // it with the thumbnail, flash and JPEG-quality settings from the current
    // parameters, making sure the state machine has a usable metadata packet.
    res = updateCaptureRequest(l.mParameters, client);
    // Burst-capture state machine
    if(l.mParameters.lightFx != Parameters::LIGHTFX_NONE &&
            l.mParameters.state == Parameters::STILL_CAPTURE) {
        nextState = BURST_CAPTURE_START;
    }
    else if (l.mParameters.zslMode &&
            l.mParameters.state == Parameters::STILL_CAPTURE &&
            l.mParameters.flashMode != Parameters::FLASH_MODE_ON) {
        // ZSL capture mode; note that ZSL never fires the flash.
        nextState = ZSL_START;
    } else {
        // Normal mode: standard still capture
        nextState = STANDARD_START;
    }
    mShutterNotified = false;

    return nextState;
}

Let's first look at the STANDARD_START path.
#### 2. Capture state machine: manageStandardStart()

CaptureSequencer::CaptureState CaptureSequencer::manageStandardStart(
        sp<Camera2Client> &client) {
    ATRACE_CALL();

    bool isAeConverged = false;
    // Get the onFrameAvailable callback when the requestID == mCaptureId
    // We don't want to get partial results for normal capture, as we need
    // Get ANDROID_SENSOR_TIMESTAMP from the capture result, but partial
    // result doesn't have to have this metadata available.
    // TODO: Update to use the HALv3 shutter notification for remove the
    // need for this listener and make it faster. see bug 12530628.
    client->registerFrameListener(mCaptureId, mCaptureId + 1,
            this,
            /*sendPartials*/false);
    {
        Mutex::Autolock l(mInputMutex);
        isAeConverged = (mAEState == ANDROID_CONTROL_AE_STATE_CONVERGED);
    }
    {
        SharedParameters::Lock l(client->getParameters());
        // Skip AE precapture when it is already converged and not in force flash mode.
        if (l.mParameters.flashMode != Parameters::FLASH_MODE_ON && isAeConverged) {
            return STANDARD_CAPTURE;
        }

        mTriggerId = l.mParameters.precaptureTriggerCounter++;
    }
    client->getCameraDevice()->triggerPrecaptureMetering(mTriggerId);

    mAeInPrecapture = false;
    mTimeoutCount = kMaxTimeoutsForPrecaptureStart;
    return STANDARD_PRECAPTURE_WAIT;
}

This does the following:

1. Registers the listener that will be notified when the capture result is available.
2. Checks whether AE has already converged. If it has (and flash is not forced on), the next state is STANDARD_CAPTURE. Otherwise the next state is STANDARD_PRECAPTURE_WAIT: an AE precapture trigger is sent down and the state machine waits for the HAL to finish AE convergence. (The mAEState checked here is fed in by the FrameProcessor; see the sketch after this list.)
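Where does the mAEState checked above come from? FrameProcessor parses the 3A state out of each result and Camera2Client forwards changes to the sequencer, which caches them. A rough sketch of that notification path (paraphrased, not the verbatim 5.1 source):

// Camera2Client::notifyAutoExposure() (driven by FrameProcessor's 3A handling)
// ends up here; the sequencer simply caches the latest AE state and trigger id.
void CaptureSequencer::notifyAutoExposure(uint8_t newState, int triggerId) {
    ATRACE_CALL();
    Mutex::Autolock l(mInputMutex);
    mAEState = newState;            // read by manageStandardStart()/PrecaptureWait()
    mAETriggerId = triggerId;
    if (!mNewAEState) {
        mNewAEState = true;
        mNewNotifySignal.signal();  // wakes manageStandardPrecaptureWait()
    }
}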
##### 1) Code snippet 1: registering the frame-available listener
status_t Camera2Client::registerFrameListener(int32_t minId, int32_t maxId,
        wp<camera2::FrameProcessor::FilteredListener> listener, bool sendPartials) {
    return mFrameProcessor->registerListener(minId, maxId, listener, sendPartials);
}
//----------------------------------------------------------
struct FilteredListener: virtual public RefBase {
    virtual void onResultAvailable(const CaptureResult &result) = 0;
};

From the parameter types we can see that the listener is a FilteredListener, which must implement onResultAvailable(). CaptureSequencer implements this method, as shown below.

void CaptureSequencer::onResultAvailable(const CaptureResult &result) {
    ATRACE_CALL();
    ALOGV("%s: New result available.", __FUNCTION__);
    Mutex::Autolock l(mInputMutex);
    mNewFrameId = result.mResultExtras.requestId;
    mNewFrame = result.mMetadata;
    if (!mNewFrameReceived) {
        mNewFrameReceived = true;
        mNewFrameSignal.signal();
    }
}

In onResultAvailable() the mNewFrameReceived flag is set to true and the mNewFrameSignal condition is signaled, waking whichever method is waiting on it.

// FrameProcessor inherits from FrameProcessorBase, which is why registerListener
// is not found directly in FrameProcessor itself.
status_t FrameProcessorBase::registerListener(int32_t minId,
        int32_t maxId, wp<FilteredListener> listener, bool sendPartials) {
    Mutex::Autolock l(mInputMutex);
    // Check whether this listener has already been registered for the same
    // range; if so, don't register it again.
    List<RangeListener>::iterator item = mRangeListeners.begin();
    while (item != mRangeListeners.end()) {
        if (item->minId == minId && item->maxId == maxId && item->listener == listener) {
            return OK;
        }
        item++;
    }
    // Store the listener in the list.
    RangeListener rListener = { minId, maxId, listener, sendPartials };
    mRangeListeners.push_back(rListener);
    return OK;
}

This is how the frame listener is registered. Note that it is stored in the mRangeListeners list; a list is used because several listeners, each covering its own request-id range, can be registered at the same time.
#### 3. Capture state machine: manageStandardCapture()

CaptureSequencer::CaptureState CaptureSequencer::manageStandardCapture(
        sp<Camera2Client> &client) {
    status_t res;
    ATRACE_CALL();
    SharedParameters::Lock l(client->getParameters());
    Vector<int32_t> outputStreams;
    uint8_t captureIntent = static_cast<uint8_t>(ANDROID_CONTROL_CAPTURE_INTENT_STILL_CAPTURE);

    /**
     * Set up output streams in the request
     *  - preview
     *  - capture/jpeg
     *  - callback (if preview callbacks enabled)
     *  - recording (if recording enabled)
     */
    // By default the preview and capture stream ids are pushed into the
    // temporary outputStreams vector.
    outputStreams.push(client->getPreviewStreamId());
    outputStreams.push(client->getCaptureStreamId());

    if (l.mParameters.previewCallbackFlags &
            CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK) {
        outputStreams.push(client->getCallbackStreamId());
    }
    // In video-snapshot mode the capture intent is changed to
    // ANDROID_CONTROL_CAPTURE_INTENT_VIDEO_SNAPSHOT.
    if (l.mParameters.state == Parameters::VIDEO_SNAPSHOT) {
        outputStreams.push(client->getRecordingStreamId());
        captureIntent = static_cast<uint8_t>(ANDROID_CONTROL_CAPTURE_INTENT_VIDEO_SNAPSHOT);
    }
    // Store the required stream ids in the metadata so that Camera3Device can
    // look up the corresponding stream objects.
    res = mCaptureRequest.update(ANDROID_REQUEST_OUTPUT_STREAMS,
            outputStreams);
    if (res == OK) { // store the request id
        res = mCaptureRequest.update(ANDROID_REQUEST_ID,
                &mCaptureId, 1);
    }
    if (res == OK) { // store the capture intent in the metadata
        res = mCaptureRequest.update(ANDROID_CONTROL_CAPTURE_INTENT,
                &captureIntent, 1);
    }
    if (res == OK) {
        res = mCaptureRequest.sort();
    }

    if (res != OK) { // on failure the capture ends; normally we never get here
        ALOGE("%s: Camera %d: Unable to set up still capture request: %s (%d)",
                __FUNCTION__, client->getCameraId(), strerror(-res), res);
        return DONE;
    }

    // Create a capture copy since CameraDeviceBase#capture takes ownership
    CameraMetadata captureCopy = mCaptureRequest;
    // Some metadata validity checks are omitted here; they don't affect the analysis.
    /**
     * Clear the streaming request for still-capture pictures
     *   (as opposed to i.e. video snapshots)
     */
    if (l.mParameters.state == Parameters::STILL_CAPTURE) {
        // API definition of takePicture() - stop preview before taking pic
        // For a plain still capture (i.e. non-ZSL), the preview stream is stopped.
        res = client->stopStream();
        if (res != OK) {
            ALOGE("%s: Camera %d: Unable to stop preview for still capture: "
                    "%s (%d)",
                    __FUNCTION__, client->getCameraId(), strerror(-res), res);
            return DONE;
        }
    }
    // TODO: Capture should be atomic with setStreamingRequest here
    // The capture is kicked off here using the copied metadata (the TODO above
    // notes that this should really be atomic with setStreamingRequest).
    // Camera3Device builds a request from this metadata and sends it to the HAL.
    res = client->getCameraDevice()->capture(captureCopy);
    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to submit still image capture request: "
                "%s (%d)",
                __FUNCTION__, client->getCameraId(), strerror(-res), res);
        return DONE;
    }

    mTimeoutCount = kMaxTimeoutsForCaptureEnd;
    return STANDARD_CAPTURE_WAIT;
}

#### 4. Capture state machine: manageStandardCaptureWait()

CaptureSequencer::CaptureState CaptureSequencer::manageStandardCaptureWait(
        sp<Camera2Client> &client) {
    status_t res;
    ATRACE_CALL();
    Mutex::Autolock l(mInputMutex);

    // Wait for new metadata result (mNewFrame)
    // mNewFrameReceived stays false until the capture result arrives. Once it
    // does, FrameProcessor invokes the onResultAvailable() we registered earlier,
    // which sets mNewFrameReceived to true and signals mNewFrameSignal, so the
    // wait below can continue.
    while (!mNewFrameReceived) {
        // Wait up to 100 ms per iteration
        res = mNewFrameSignal.waitRelative(mInputMutex, kWaitDuration);
        if (res == TIMED_OUT) {
            mTimeoutCount--;
            break;
        }
    }

    // Approximation of the shutter being closed
    // - TODO: use the hal3 exposure callback in Camera3Device instead
    // Notify the shutter event: play the shutter sound and send
    // CAMERA_MSG_SHUTTER / CAMERA_MSG_RAW_IMAGE_NOTIFY up to the application.
    if (mNewFrameReceived && !mShutterNotified) {
        SharedParameters::Lock l(client->getParameters());
        /* warning: this also locks a SharedCameraCallbacks */
        shutterNotifyLocked(l.mParameters, client, mMsgType);
        mShutterNotified = true; // the shutter event has been delivered
    }

    // Wait until jpeg was captured by JpegProcessor
    // mNewCaptureSignal is signaled when the JPEG buffer is queued into the
    // BufferQueue. Quick recap: two Condition objects are involved here,
    // mNewFrameSignal and mNewCaptureSignal. When the HAL returns the JPEG
    // frame it returns the buffer first (signaling mNewCaptureSignal) and the
    // metadata result afterwards (signaling mNewFrameSignal), so by the time we
    // reach this point the condition below is usually already satisfied.
    while (mNewFrameReceived && !mNewCaptureReceived) {
        res = mNewCaptureSignal.waitRelative(mInputMutex, kWaitDuration);
        if (res == TIMED_OUT) {
            mTimeoutCount--;
            break;
        }
    }
    if (mTimeoutCount <= 0) {
        ALOGW("Timed out waiting for capture to complete");
        return DONE;
    }
    // If both mNewFrameReceived and mNewCaptureReceived are true, the JPEG
    // frame has really arrived.
    if (mNewFrameReceived && mNewCaptureReceived) {
        // Two checks are performed here (code omitted):
        // 1. the capture id must match, otherwise an error is reported;
        // 2. the timestamp must be consistent, which normally never fails.

        client->removeFrameListener(mCaptureId, mCaptureId + 1, this);

        mNewFrameReceived = false;
        mNewCaptureReceived = false;
        return DONE; // the state machine moves to DONE
    }
    // Not ready yet (usually a 100 ms timeout); keep waiting in this state.
    return STANDARD_CAPTURE_WAIT;
}

#### 5. Capture state machine: manageDone()

CaptureSequencer::CaptureState CaptureSequencer::manageDone(sp<Camera2Client> &client) {
    status_t res = OK;
    ATRACE_CALL();
    mCaptureId++;
    if (mCaptureId >= Camera2Client::kCaptureRequestIdEnd) {
        mCaptureId = Camera2Client::kCaptureRequestIdStart;
    }
    {
        Mutex::Autolock l(mInputMutex);
        mBusy = false;
    }

    int takePictureCounter = 0;
    {
        SharedParameters::Lock l(client->getParameters());
        switch (l.mParameters.state) {
            case Parameters::DISCONNECTED:
                ALOGW("%s: Camera %d: Discarding image data during shutdown ",
                        __FUNCTION__, client->getCameraId());
                res = INVALID_OPERATION;
                break;
            case Parameters::STILL_CAPTURE:
                res = client->getCameraDevice()->waitUntilDrained();
                if (res != OK) {
                    ALOGE("%s: Camera %d: Can't idle after still capture: "
                            "%s (%d)", __FUNCTION__, client->getCameraId(),
                            strerror(-res), res);
                }
                l.mParameters.state = Parameters::STOPPED;
                break;
            case Parameters::VIDEO_SNAPSHOT:
                l.mParameters.state = Parameters::RECORD;
                break;
            default:
                ALOGE("%s: Camera %d: Still image produced unexpectedly "
                        "in state %s!",
                        __FUNCTION__, client->getCameraId(),
                        Parameters::getStateName(l.mParameters.state));
                res = INVALID_OPERATION;
        }
        takePictureCounter = l.mParameters.takePictureCounter;
    }
    sp<ZslProcessorInterface> processor = mZslProcessor.promote();
    if (processor != 0) {
        ALOGV("%s: Memory optimization, clearing ZSL queue",
              __FUNCTION__);
        processor->clearZslQueue();
    }

    /**
     * Fire the jpegCallback in Camera#takePicture(..., jpegCallback)
     */
    if (mCaptureBuffer != 0 && res == OK) {
        ATRACE_ASYNC_END(Camera2Client::kTakepictureLabel, takePictureCounter);

        Camera2Client::SharedCameraCallbacks::Lock
            l(client->mSharedCameraCallbacks);
        ALOGV("%s: Sending still image to client", __FUNCTION__);
        if (l.mRemoteCallback != 0) { // send the JPEG data up to the application, which copies it
            l.mRemoteCallback->dataCallback(CAMERA_MSG_COMPRESSED_IMAGE,
                    mCaptureBuffer, NULL);
        } else {
            ALOGV("%s: No client!", __FUNCTION__);
        }
    }
    mCaptureBuffer.clear();

    return IDLE; // the capture state machine returns to IDLE
}

#### 6. The capture state-machine handler table
You can see there are also handlers for the ZSL capture states; the ZSL state machine will be analyzed in the next post. A sketch of how threadLoop() dispatches through this table follows the array.

const CaptureSequencer::StateManager
        CaptureSequencer::kStateManagers[CaptureSequencer::NUM_CAPTURE_STATES-1] = {
    &CaptureSequencer::manageIdle,
    &CaptureSequencer::manageStart,
    &CaptureSequencer::manageZslStart,
    &CaptureSequencer::manageZslWaiting,
    &CaptureSequencer::manageZslReprocessing,
    &CaptureSequencer::manageStandardStart,
    &CaptureSequencer::manageStandardPrecaptureWait,
    &CaptureSequencer::manageStandardCapture,
    &CaptureSequencer::manageStandardCaptureWait,
    &CaptureSequencer::manageBurstCaptureStart,
    &CaptureSequencer::manageBurstCaptureWait,
    &CaptureSequencer::manageDone,
};
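How the table is used: CaptureSequencer::threadLoop() indexes it with the current state and calls the handler through a member-function pointer; each handler returns the next state. A simplified sketch (logging and error handling omitted, not the verbatim 5.1 source):

bool CaptureSequencer::threadLoop() {
    sp<Camera2Client> client = mClient.promote();
    if (client == 0) return false;

    CaptureState currentState;
    {
        Mutex::Autolock l(mStateMutex);
        currentState = mCaptureState;
    }

    // Dispatch to the handler for the current state; it returns the next state
    // (IDLE, START, STANDARD_START, STANDARD_CAPTURE_WAIT, DONE, ...).
    currentState = (this->*kStateManagers[currentState])(client);

    if (currentState != mCaptureState) {
        Mutex::Autolock l(mStateMutex);
        mCaptureState = currentState;
        mStateChanged.signal();     // lets waiters observe state transitions
    }
    return true;
}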

The capture state machine looks like this:

![](https://imgconvert.csdnimg.cn/aHR0cDovL2ltZy5ibG9nLmNzZG4ubmV0LzIwMTcwODA2MTEzMDU4NTE2?x-oss-process=image/format,png)

### 3. When are the frame-available listeners invoked?
#### 1. When is onResultAvailable() called?
Let's trace this backwards.

void CaptureSequencer::onResultAvailable(const CaptureResult &result) {
    ATRACE_CALL();
    ALOGV("%s: New result available.", __FUNCTION__);
    Mutex::Autolock l(mInputMutex);
    mNewFrameId = result.mResultExtras.requestId;
    mNewFrame = result.mMetadata;
    if (!mNewFrameReceived) {
        mNewFrameReceived = true;
        mNewFrameSignal.signal();
    }
}

As shown below, processListeners() finds the matching frame listeners and invokes their onResultAvailable() callbacks.

status_t FrameProcessorBase::processListeners(const CaptureResult &result,
        const sp<CameraDeviceBase> &device) {
    //------------ many lines omitted ------------
    entry = result.mMetadata.find(ANDROID_REQUEST_ID);
    int32_t requestId = entry.data.i32[0];
    // The requestId is pulled out of the data returned by the HAL and used to
    // look up the listeners registered for this frame. Each matching listener's
    // onResultAvailable() is then invoked, as below.
    List<sp<FilteredListener> >::iterator item = listeners.begin();
    for (; item != listeners.end(); item++) {
        (*item)->onResultAvailable(result);
    }
    return OK;
}

processSingleFrame() calls processListeners() directly:

bool FrameProcessorBase::processSingleFrame(CaptureResult &result,
                                            const sp<CameraDeviceBase> &device) {
    ALOGV("%s: Camera %d: Process single frame (is empty? %d)",
          __FUNCTION__, device->getId(), result.mMetadata.isEmpty());
    return processListeners(result, device) == OK;
}

And processSingleFrame() is in turn called from processNewFrames():

void FrameProcessorBase::processNewFrames(const sp<CameraDeviceBase> &device) {
    status_t res;
    ATRACE_CALL();
    CaptureResult result;
    // Pull result objects (the frames returned by the HAL) out of the device.
    while ( (res = device->getNextResult(&result)) == OK) {
        // TODO: instead of getting frame number from metadata, we should read
        // this from result.mResultExtras when CameraDeviceBase interface is fixed.
        camera_metadata_entry_t entry;
        entry = result.mMetadata.find(ANDROID_REQUEST_FRAME_COUNT);
        // Some error-checking code is omitted here; it doesn't affect the analysis.
        if (!processSingleFrame(result, device)) {
            break;
        }
    }
    return;
}

The snippets above just show the call chain.

bool FrameProcessorBase::threadLoop() {
    status_t res;
    sp<CameraDeviceBase> device;
    {
        device = mDevice.promote();
        if (device == 0) return false;
    }

    res = device->waitForNextFrame(kWaitDuration);
    if (res == OK) {
        processNewFrames(device);
    } else if (res != TIMED_OUT) {
        ALOGE("FrameProcessorBase: Error waiting for new "
                "frames: %s (%d)", strerror(-res), res);
    }

    return true;
}

As you can see, the thread loop calls Camera3Device::waitForNextFrame() directly to wait for a frame result.

status_t Camera3Device::waitForNextFrame(nsecs_t timeout) {
    status_t res;
    Mutex::Autolock l(mOutputLock);

    while (mResultQueue.empty()) {
        res = mResultSignal.waitRelative(mOutputLock, timeout);
        if (res == TIMED_OUT) {
            return res;
        } else if (res != OK) {
            ALOGW("%s: Camera %d: No frame in %" PRId64 " ns: %s (%d)",
                    __FUNCTION__, mId, timeout, strerror(-res), res);
            return res;
        }
    }
    return OK;
}

Here the thread waits on the mResultSignal condition. So when is it signaled? Both of the functions below are invoked when the HAL calls back with result data.

bool Camera3Device::processPartial3AResult(
        uint32_t frameNumber,
        const CameraMetadata& partial, const CaptureResultExtras& resultExtras) {
    //------------- many lines of code omitted -------------
    // We only send the aggregated partial when all 3A related metadata are available
    // For both API1 and API2.
    // TODO: we probably should pass through all partials to API2 unconditionally.
    mResultSignal.signal();       
}

void Camera3Device::sendCaptureResult(CameraMetadata &pendingMetadata,
        CaptureResultExtras &resultExtras,
        CameraMetadata &collectedPartialResult,
        uint32_t frameNumber) {
    //------------- many lines of code omitted -------------
    mResultSignal.signal();
}

So whenever HAL3 returns a frame, the onResultAvailable() callback ends up being invoked.
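For completeness, the getNextResult() drained by processNewFrames() pops entries off the same mResultQueue that waitForNextFrame() blocks on. Roughly as below (a sketch; details may differ slightly from the actual 5.1 code):

status_t Camera3Device::getNextResult(CaptureResult *frame) {
    ATRACE_CALL();
    Mutex::Autolock l(mOutputLock);

    if (mResultQueue.empty()) {
        return NOT_ENOUGH_DATA;     // caller's while-loop stops once the queue is drained
    }
    if (frame == NULL) {
        return BAD_VALUE;
    }

    // Hand the oldest queued result (metadata + extras) to the caller.
    CaptureResult &result = *(mResultQueue.begin());
    frame->mResultExtras = result.mResultExtras;
    frame->mMetadata.acquire(result.mMetadata);
    mResultQueue.erase(mResultQueue.begin());

    return OK;
}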
#### 2. When is onCaptureAvailable() called?

void CaptureSequencer::onCaptureAvailable(nsecs_t timestamp,
        sp<MemoryBase> captureBuffer) {
    ATRACE_CALL();
    ALOGV("%s", __FUNCTION__);
    Mutex::Autolock l(mInputMutex);
    mCaptureTimestamp = timestamp;
    mCaptureBuffer = captureBuffer;
    if (!mNewCaptureReceived) {
        mNewCaptureReceived = true;
        mNewCaptureSignal.signal();
    }
}

First, what does this callback do? Four things, as seen above:

1. Record the timestamp of the frame.
2. Save the frame buffer.
3. Set the mNewCaptureReceived flag to true.
4. Signal the mNewCaptureSignal condition that the capture state machine is waiting on.

This method is called from the JpegProcessor thread.

status_t JpegProcessor::processNewCapture() {
    ATRACE_CALL();
    status_t res;
    sp<MemoryHeapBase> captureHeap;
    sp<MemoryBase> captureBuffer;

    CpuConsumer::LockedBuffer imgBuffer;
    // The BufferQueue created earlier only allows a single buffer, so the
    // acquire below is guaranteed to return the JPEG image buffer.
    res = mCaptureConsumer->lockNextBuffer(&imgBuffer);
    // Some code is omitted here; it doesn't affect the analysis.
    // mCaptureHeap is the ashmem heap allocated when the JPEG stream was updated.
    // TODO: Optimize this to avoid memcopy
    captureBuffer = new MemoryBase(mCaptureHeap, 0, jpegSize);
    void* captureMemory = mCaptureHeap->getBase();
    // Copy the image data into the ashmem heap. (It is not clear why this extra
    // copy is needed; the ION buffer could be used directly.)
    memcpy(captureMemory, imgBuffer.data, jpegSize);
    // Release the buffer, returning it to the BufferQueue.
    mCaptureConsumer->unlockBuffer(imgBuffer);
    sp<CaptureSequencer> sequencer = mSequencer.promote();
    if (sequencer != 0) {
        // Here is where onCaptureAvailable() is invoked, passing in the
        // buffer's timestamp and the buffer itself.
        sequencer->onCaptureAvailable(imgBuffer.timestamp, captureBuffer);
    }
    return OK;
}

The detailed comments are in the code: the JPEG buffer is simply copied once and then handed up towards the application. Next, where is processNewCapture() called from? As shown below, it is called in a loop from the JpegProcessor thread's loop function.

bool JpegProcessor::threadLoop() {
    status_t res;
    {
        Mutex::Autolock l(mInputMutex);
        while (!mCaptureAvailable) {
            // Wait up to 100 ms for the mCaptureAvailableSignal condition
            res = mCaptureAvailableSignal.waitRelative(mInputMutex,
                    kWaitDuration);
            if (res == TIMED_OUT) return true;
        }
        mCaptureAvailable = false;
    }
    do { // captures are processed in this loop
        res = processNewCapture();
    } while (res == OK);

    return true;
}

  The JpegProcessor thread above keeps waiting for the mCaptureAvailableSignal condition, but where is it signaled? When the JPEG stream was set up, JpegProcessor created a BufferQueue and registered itself as the frame-available listener, so every time a buffer is queued (ENQUEUE) into that BufferQueue, onFrameAvailable() is called. A queued buffer means the HAL has filled it with frame data and returned it to the BufferQueue, so the consumer can now acquire it.

void JpegProcessor::onFrameAvailable(const BufferItem& /*item*/) {
    Mutex::Autolock l(mInputMutex);
    if (!mCaptureAvailable) {
        mCaptureAvailable = true;
        mCaptureAvailableSignal.signal(); // here the mCaptureAvailableSignal condition is signaled
    }
}

Quick summary:

1. When the HAL returns JPEG data to the framework, onCaptureAvailable() is called, which sets the relevant flags and signals the corresponding Condition objects.
