Display Device Exploration (3)

Review

Let's review what the previous two sections covered:

  1. The init process created the SurfaceFlinger service process, and the SurfaceFlinger service was then registered with ServiceManager for management
  2. The inheritance hierarchy of SurfaceFlinger
template <typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder{}
class BnSurfaceComposer : public BnInterface<ISurfaceComposer> {}

class BpSurfaceComposer : public BpInterface<ISurfaceComposer>{}
class SurfaceFlinger : public BnSurfaceComposer,
                       private IBinder::DeathRecipient,
                       private HWComposer::EventHandler{}

This mainly shows that SurfaceFlinger is a Binder service, and that it also registers for Binder death notifications (IBinder::DeathRecipient)
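
To make the death-callback part concrete, here is a minimal sketch, assuming the libbinder headers; it is NOT SurfaceFlinger's real code, only an illustration of how an IBinder::DeathRecipient gets wired up:

//Minimal sketch of the IBinder::DeathRecipient idea (names are illustrative)
#include <binder/IBinder.h>
#include <utils/RefBase.h>

using namespace android;

class ClientWatcher : public IBinder::DeathRecipient {
public:
    //called by the Binder framework when the watched remote process dies
    virtual void binderDied(const wp<IBinder>& /*who*/) {
        //clean up whatever state that client owned
    }
};

//usage: after receiving a client's IBinder, register for the death notification
void watchClient(const sp<IBinder>& clientBinder, const sp<ClientWatcher>& watcher) {
    clientBinder->linkToDeath(watcher);
}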

  3. We explored the flow with the following demo code
sp<SurfaceComposerClient> client = new SurfaceComposerClient();//[1]
sp<SurfaceControl> surfaceControl = client->createSurface(String8("resize"), 160, 240, PIXEL_FORMAT_RGB_565, 0);//[2]
sp<Surface> surface = surfaceControl->getSurface();//[3]
SurfaceComposerClient::openGlobalTransaction();
surfaceControl->setLayer(100000);
SurfaceComposerClient::closeGlobalTransaction();
ANativeWindow_Buffer outBuffer;
surface->lock(&outBuffer, NULL);//[4]
  • From [1] we learn:

    sp<ISurfaceComposer> sm(ComposerService::getComposerService());
        ComposerService& instance = ComposerService::getInstance();
            getService(name, &mComposerService)
    sp<ISurfaceComposerClient> conn = sm->createConnection();
        remote()->transact(BnSurfaceComposer::CREATE_CONNECTION, data, &reply);
        sp<Client> client(new Client(this));
    
    1. We can see that the SurfaceFlinger service is obtained through ComposerService
    2. SurfaceFlinger then creates a Client object
      class Client : public BnSurfaceComposerClient{}
      
  • From [2] we know:

    status_t err = mClient->createSurface(name, w, h, format, flags, &handle, &gbp);
        *gbp = interface_cast<IGraphicBufferProducer>(reply.readStrongBinder());
        flinger->createLayer(name, client, w, h, format, flags, handle, gbp);
    return sur = new SurfaceControl(this, handle, gbp);
        createNormalLayer(client, name, w, h, flags, format, handle, gbp, &layer);
            new Layer()
                BufferQueue::createBufferQueue(&producer, &consumer);
                    sp<BufferQueueCore> core(new BufferQueueCore(allocator));
                        BufferSlot mSlots[64] //member array of BufferQueueCore
                    sp<IGraphicBufferProducer> producer(new BufferQueueProducer(core));
                    sp<IGraphicBufferConsumer> consumer(new BufferQueueConsumer(core));
    
    1. The Client is used to create the Layer
    2. A SurfaceControl is created and returned
    3. The Layer, the BufferQueue, the producer/consumer pair and the BufferQueueCore are created (a standalone sketch of this producer/consumer idea follows at the end of this review)
  • From [3] we know:

    return new Surface()
    
  • From [4] we know:

    surface->lock(&outBuffer, NULL);
        status_t err = dequeueBuffer(&out, &fenceFd);
            mGraphicBufferProducer->dequeueBuffer(&buf, &fence, swapIntervalZero, reqWidth, reqHeight, reqFormat, reqUsage);
                //from here on we are on the server (SurfaceFlinger) side
                mCore->mAllocator->createGraphicBuffer(width, height, format, usage, &error)
                        new GraphicBuffer(width, height, format, usage)
                            initSize()
                                allocator.alloc()
            result = mGraphicBufferProducer->requestBuffer(buf, &gbuf)
    

    So now we know that lock() is what allocates the buffer memory.
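
Since the review above compresses a lot, here is a tiny standalone sketch (plain C++, NOT the real android::BufferQueue classes) of the producer/consumer slot model that BufferQueueCore, BufferQueueProducer and BufferQueueConsumer implement: the producer dequeues a free slot, fills it and queues it; the consumer acquires it and later releases it back.

//Standalone sketch of the BufferQueue idea: a fixed array of slots shared
//by a producer and a consumer (all names here are illustrative).
#include <array>
#include <deque>
#include <cassert>

enum class SlotState { FREE, DEQUEUED, QUEUED, ACQUIRED };

struct Slot { SlotState state = SlotState::FREE; /* the GraphicBuffer would live here */ };

struct TinyBufferQueue {
    std::array<Slot, 64> slots;      //mirrors BufferSlot mSlots[64] in BufferQueueCore
    std::deque<int> queued;          //slots waiting to be consumed

    //producer side: grab a free slot to draw into (think Surface::dequeueBuffer)
    int dequeue() {
        for (int i = 0; i < (int)slots.size(); ++i)
            if (slots[i].state == SlotState::FREE) { slots[i].state = SlotState::DEQUEUED; return i; }
        return -1;                   //no free slot: the real code would block/wait
    }
    //producer side: hand the filled slot to the consumer (think Surface::queueBuffer)
    void queue(int i)   { assert(slots[i].state == SlotState::DEQUEUED); slots[i].state = SlotState::QUEUED; queued.push_back(i); }
    //consumer side: take the oldest queued slot (SurfaceFlinger latches it)
    int acquire()       { if (queued.empty()) return -1; int i = queued.front(); queued.pop_front(); slots[i].state = SlotState::ACQUIRED; return i; }
    //consumer side: give the slot back once it is no longer displayed
    void release(int i) { assert(slots[i].state == SlotState::ACQUIRED); slots[i].state = SlotState::FREE; }
};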

Introduction

In this section we look at what needs to happen after that memory has been allocated.

We pick up where we left off in dequeueBuffer: the memory has already been allocated through mGraphicBufferProducer->dequeueBuffer, so the next question is what is done with it. Read on.

mGraphicBufferProducer->requestBuffer()

frameworks/native/libs/gui/IGraphicBufferProducer.cpp

virtual status_t requestBuffer(int bufferIdx, sp<GraphicBuffer>* buf) {
    Parcel data, reply;
    data.writeInterfaceToken(IGraphicBufferProducer::getInterfaceDescriptor());
    data.writeInt32(bufferIdx);
    status_t result = remote()->transact(REQUEST_BUFFER, data, &reply);
    if (result != NO_ERROR) {
        return result;
    }
    bool nonNull = reply.readInt32();
    if (nonNull) {
        *buf = new GraphicBuffer();
        result = reply.read(**buf);
        if(result != NO_ERROR) {
            (*buf).clear();
            return result;
        }
    }
    result = reply.readInt32();
    return result;
}
  • The client issues REQUEST_BUFFER through the Binder system
  • If the reply's non-null flag is set, a GraphicBuffer object is created and filled from the reply
    Call stack:
    surface->lock(&outBuffer, NULL);
        status_t err = dequeueBuffer(&out, &fenceFd);
            mGraphicBufferProducer->requestBuffer(buf, &gbuf)
                remote()->transact(REQUEST_BUFFER...);
                *buf = new GraphicBuffer();
                result = reply.read(**buf);

From the class relationships we know:

IGraphicBufferProducer

class BpGraphicBufferProducer : public BpInterface<IGraphicBufferProducer>{}

class BnGraphicBufferProducer : public BnInterface<IGraphicBufferProducer>{}

class BufferQueueProducer : public BnGraphicBufferProducer,
                            private IBinder::DeathRecipient {}

The Bn side unpacks this transaction in BnGraphicBufferProducer::onTransact:

case REQUEST_BUFFER: {
    CHECK_INTERFACE(IGraphicBufferProducer, data, reply);
    int bufferIdx   = data.readInt32();
    sp<GraphicBuffer> buffer;
    int result = requestBuffer(bufferIdx, &buffer);
    reply->writeInt32(buffer != 0);
    if (buffer != 0) {
        reply->write(*buffer);
    }
    reply->writeInt32(result);
    return NO_ERROR;
}

So we know the call lands on the following server-side implementation:

BufferQueueProducer

status_t BufferQueueProducer::requestBuffer(int slot, sp<GraphicBuffer>* buf) {
    Mutex::Autolock lock(mCore->mMutex);
    if (mCore->mIsAbandoned) {
        return NO_INIT;
    }
    if (slot < 0 || slot >= BufferQueueDefs::NUM_BUFFER_SLOTS) {
        BQ_LOGE("requestBuffer: slot index %d out of range [0, %d)",
                slot, BufferQueueDefs::NUM_BUFFER_SLOTS);
        return BAD_VALUE;
    } else if (mSlots[slot].mBufferState != BufferSlot::DEQUEUED) {
        return BAD_VALUE;
    }

    mSlots[slot].mRequestBufferCalled = true;
    *buf = mSlots[slot].mGraphicBuffer;
    return NO_ERROR;
}

Comparing the two code snippets above:

Snippet 1
mSlots[slot].mRequestBufferCalled = true;
*buf = mSlots[slot].mGraphicBuffer;
Snippet 2
sp<GraphicBuffer> buffer;
int result = requestBuffer(bufferIdx, &buffer);
reply->writeInt32(buffer != 0);
if (buffer != 0) {
    reply->write(*buffer);
}

From the code we can see that the server side writes the buffer back into the reply, and the client then reads it with

 *buf = new GraphicBuffer();
result = reply.read(**buf);

So the server writes the buffer and the client reads it.

Let's now look at what exactly gets written:


status_t Parcel::write(const FlattenableHelperInterface& val)
{
    status_t err;

    // size if needed
    const size_t len = val.getFlattenedSize();
    const size_t fd_count = val.getFdCount();//number of file descriptors

    if ((len > INT32_MAX) || (fd_count > INT32_MAX)) {
        return BAD_VALUE;
    }

    err = this->writeInt32(len);
    if (err) return err;

    err = this->writeInt32(fd_count);
    if (err) return err;

    // payload
    void* const buf = this->writeInplace(pad_size(len));
    if (buf == NULL)
        return BAD_VALUE;

    int* fds = NULL;
    if (fd_count) {
        fds = new int[fd_count];
    }

    //GraphicBuffer::flatten() writes the essential GraphicBuffer fields into buf
    err = val.flatten(buf, len, fds, fd_count);
    for (size_t i=0 ; i<fd_count && err==NO_ERROR ; i++) {
        err = this->writeDupFileDescriptor( fds[i] );
    }

    if (fd_count) {
        delete [] fds;
    }

    return err;
}

We can see that the core is err = val.flatten(buf, len, fds, fd_count); which packs the essential data so that it can be sent through the Binder driver to the client.
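
To make the flatten/unflatten pattern concrete, here is a standalone, simplified sketch (the struct and its fields are hypothetical, not the real GraphicBuffer::flatten): plain fields are packed into a byte buffer, while file descriptors travel in a separate fd array, because an fd cannot be copied as bytes and must be translated by the Binder driver.

//Hypothetical, simplified stand-in for the Flattenable pattern; the real
//GraphicBuffer::flatten() writes more fields plus the native_handle contents.
#include <cstddef>

struct FakeBufferInfo {
    int width, height, stride, format;
    int fd;                                   //the fd backing the pixel memory

    size_t getFlattenedSize() const { return 4 * sizeof(int); }
    size_t getFdCount() const { return 1; }

    //server side: metadata goes into 'buffer', the fd goes into 'fds'
    void flatten(void* buffer, int* fds) const {
        int* p = static_cast<int*>(buffer);
        p[0] = width; p[1] = height; p[2] = stride; p[3] = format;
        fds[0] = fd;                          //Parcel then writeDupFileDescriptor()s it
    }

    //client side: rebuild the object; fds[0] is already fd', a descriptor
    //that the Binder driver installed in *this* process
    void unflatten(const void* buffer, const int* fds) {
        const int* p = static_cast<const int*>(buffer);
        width = p[0]; height = p[1]; stride = p[2]; format = p[3];
        fd = fds[0];
    }
};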

Now let's look at the read side:

virtual status_t requestBuffer(int bufferIdx, sp<GraphicBuffer>* buf) {
    result = reply.read(**buf);
}

status_t Parcel::read(FlattenableHelperInterface& val) const
{
    // size
    const size_t len = this->readInt32();
    const size_t fd_count = this->readInt32();

    if (len > INT32_MAX) {
        return BAD_VALUE;
    }

    // payload
    void const* const buf = this->readInplace(pad_size(len));
    if (buf == NULL)
        return BAD_VALUE;

    int* fds = NULL;
    if (fd_count) {
        fds = new int[fd_count];
    }

    status_t err = NO_ERROR;
    for (size_t i=0 ; i<fd_count && err==NO_ERROR ; i++) {
        fds[i] = dup(this->readFileDescriptor());
        if (fds[i] < 0) {
            err = BAD_VALUE;
            ALOGE("dup() failed in Parcel::read, i is %zu, fds[i] is %d, fd_count is %zu, error: %s",
                i, fds[i], fd_count, strerror(errno));
        }
    }

    if (err == NO_ERROR) {
        err = val.unflatten(buf, len, fds, fd_count);
    }

    if (fd_count) {
        delete [] fds;
    }

    return err;
}

The core of the read is val.unflatten(buf, len, fds, fd_count).

The key outcomes of the read are:

  • A handle is constructed from fd'
  • The virtual address is obtained by calling mmap(fd'), and the mmap result is stored in handle->base (sketched right below)
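
A minimal sketch of that mapping step, modeled on what the AOSP reference gralloc does when it registers a buffer (vendor modules differ; mapBuffer, fd and size here are illustrative names):

//Sketch only: map the received fd' and remember the virtual address.
//In a real gralloc module this happens when the buffer is registered,
//and the result is stored in the private handle's 'base' field.
#include <sys/mman.h>
#include <cstddef>

void* mapBuffer(int fd, size_t size) {
    void* base = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    return (base == MAP_FAILED) ? nullptr : base;   //on success: handle->base = base
}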

Summary

If you feel a little lost at this point, here is the big picture: surface->lock() caused a block of anonymous shared memory to be allocated, represented by an fd; through the remote call the application obtains fd', builds a handle out of fd', and then obtains the buffer's virtual address via mmap on that handle.
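
The fd to fd' step is worth spelling out. A minimal sketch, assuming libbinder's Parcel API (this mirrors what Parcel::write/read above already do for us):

//Sketch of how a file descriptor crosses a process boundary via Binder.
#include <binder/Parcel.h>
#include <unistd.h>

using android::Parcel;

//sender: dup()s the fd and writes it; the Binder driver installs a
//brand-new descriptor in the receiver's fd table
void sendFd(Parcel& data, int fd) {
    data.writeDupFileDescriptor(fd);
}

//receiver: readFileDescriptor() returns that new descriptor (fd'); it is
//owned by the Parcel, so dup() it to keep it (exactly what Parcel::read does)
int receiveFd(const Parcel& reply) {
    return dup(reply.readFileDescriptor());
}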

So the call stack so far looks like this:

    surface->lock(&outBuffer, NULL);
        status_t err = dequeueBuffer(&out, &fenceFd);
            mGraphicBufferProducer->dequeueBuffer(&buf, &fence, swapIntervalZero, reqWidth, reqHeight, reqFormat, reqUsage);
                //from here on we are on the server (SurfaceFlinger) side
                mCore->mAllocator->createGraphicBuffer(width, height, format, usage, &error)
                        new GraphicBuffer(width, height, format, usage)
                            initSize()
                                allocator.alloc()
            result = mGraphicBufferProducer->requestBuffer(buf, &gbuf)
                remote()->transact(REQUEST_BUFFER...);
                *buf = new GraphicBuffer();
                result = reply.read(**buf);
    sp<GraphicBuffer> backBuffer(GraphicBuffer::getSelf(out));//[1]
    backBuffer->lockAsync(..., &vaddr, fenceFd);//[2]

The out in [1] is exactly the buffer we obtained from mGraphicBufferProducer->dequeueBuffer(&buf, ...); we wrap it as backBuffer and then call GraphicBuffer::lockAsync(..., &vaddr). Let's look at what this function actually does.

GraphicBuffer::lockAsync()

status_t GraphicBuffer::lockAsync(uint32_t inUsage, const Rect& rect,
        void** vaddr, int fenceFd)
{
    if (rect.left < 0 || rect.right  > width ||
        rect.top  < 0 || rect.bottom > height) {
        return BAD_VALUE;
    }
    status_t res = getBufferMapper().lockAsync(handle, inUsage, rect, vaddr,fenceFd);
    return res;
}

This ends up calling:

status_t GraphicBufferMapper::lockAsync(buffer_handle_t handle,
        uint32_t usage, const Rect& bounds, void** vaddr, int fenceFd)
{
    ATRACE_CALL();
    status_t err;

    if (mAllocMod->common.module_api_version >= GRALLOC_MODULE_API_VERSION_0_3) {
        err = mAllocMod->lockAsync(mAllocMod, handle, static_cast<int>(usage),
                bounds.left, bounds.top, bounds.width(), bounds.height(),
                vaddr, fenceFd);
    } else {
        if (fenceFd >= 0) {
            sync_wait(fenceFd, -1);
            close(fenceFd);
        }
        err = mAllocMod->lock(mAllocMod, handle, static_cast<int>(usage),
                bounds.left, bounds.top, bounds.width(), bounds.height(),
                vaddr);
    }
    return err;
}

At this point the gralloc module's API version matters: if it is at least GRALLOC_MODULE_API_VERSION_0_3, mAllocMod->lockAsync() is called; otherwise mAllocMod->lock() is called.

In our case lock() is the one that gets called.
Because what we load is the HAL module:

GraphicBufferMapper::GraphicBufferMapper()
    : mAllocMod(0)
{
    hw_module_t const* module;
    int err = hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module);
    if (err == 0) {
        mAllocMod = reinterpret_cast<gralloc_module_t const*>(module);
    }
}

So we cannot follow the code any further here, because every vendor's implementation is different; but from the available documentation we know that it writes handle->base, i.e. the virtual address, into vaddr.

So what GraphicBufferMapper::lockAsync() ultimately does is return handle->base by writing it into vaddr.
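
For reference, here is a simplified sketch of what a typical gralloc lock() looks like, modeled on the AOSP reference module in hardware/libhardware/modules/gralloc (vendor implementations differ; MyPrivateHandle is a hypothetical stand-in for the vendor's private_handle_t):

//Sketch only: a gralloc0-style lock() that simply hands back the address
//that was mmap()'ed earlier and stored in the private handle.
#include <cstdint>

struct MyPrivateHandle {     //hypothetical; real vendors define their own private_handle_t
    int       fd;            //shared-memory fd backing the buffer
    uintptr_t base;          //virtual address filled in at register/map time
};

int sketchGrallocLock(const MyPrivateHandle* hnd, void** vaddr) {
    *vaddr = reinterpret_cast<void*>(hnd->base);   //this is what ends up in lock()'s vaddr
    return 0;
}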

So far, then, we have covered:

  • The key data structures shared between the APP and SurfaceFlinger
  • How the APP creates a SurfaceFlinger Client
  • How the APP requests the creation of a Surface
  • How the APP's lock(buffer) request works: the framework
  • How the APP's lock(buffer) request works: allocating the buffer
  • How the APP's lock(buffer) request works: obtaining the buffer information

In the next section we will draw a diagram to tie all of this together.
