Android Binder in Detail (2)

2.3 Starting the SampleService (implementing main())

Starting a service really just means writing an executable that runs the service.
Again taking surfaceflinger as the example, its main() lives in frameworks/native/services/surfaceflinger/main_surfaceflinger.cpp:

int main(int argc, char** argv) {
    // When SF is launched in its own process, limit the number of
    // binder threads to 4.
    ProcessState::self()->setThreadPoolMaxThreadCount(4);

    // start the thread pool
    sp<ProcessState> ps(ProcessState::self());
    ps->startThreadPool();

    // instantiate surfaceflinger
    sp<SurfaceFlinger> flinger = new SurfaceFlinger();

#if defined(HAVE_PTHREADS)
    setpriority(PRIO_PROCESS, 0, PRIORITY_URGENT_DISPLAY);
#endif
    set_sched_policy(0, SP_FOREGROUND);

    // initialize before clients can connect
    flinger->init();

    // publish surface flinger
    sp<IServiceManager> sm(defaultServiceManager());
    sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false);

    // run in this thread
    flinger->run();

    return 0;
}
Skimming this function, the interesting parts involve ProcessState and the ServiceManager; the rest is mostly surfaceflinger-specific initialization.
IServiceManager::addService() registers the service with the servicemanager; how that works internally is something we will analyze later.

For now, let's look at ProcessState.


2.3.1 ProcessState

The ProcessState header is frameworks/native/include/binder/ProcessState.h. Below we look at the functions called above: ProcessState::self(), setThreadPoolMaxThreadCount(), and startThreadPool().

ProcessState::self()

This function needs little explanation: a static accessor plus a private constructor and destructor is the classic singleton pattern.
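For reference, self() looks roughly like this in the sources of this era (slightly condensed):

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);   // guard the global instance
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;         // constructed once per process
    return gProcess;
}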
In other words, there is exactly one ProcessState object per process, accessed through self(). Let's see what the ProcessState constructor does:

ProcessState::ProcessState()
    : mDriverFD(open_driver())
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // availabla).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
}
The constructor mainly initializes mDriverFD and mVMStart.

First mDriverFD, which comes from open_driver():

static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR);
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
        size_t maxThreads = 15;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}
Here we see that /dev/binder, the binder virtual device (which we will simply call "binder" from here on), gets opened.
The binder version is then fetched and compared, followed by a BINDER_SET_MAX_THREADS ioctl.
From the log string we can tell BINDER_SET_MAX_THREADS sets some maximum thread count, but exactly which threads it refers to is still vague; we will set that aside and keep an eye on it when we analyze the binder driver.
Now for mVMStart:
its initialization is just an mmap: a BINDER_VM_SIZE-sized region is mapped from mDriverFD (that is, from binder), and the return value is a pointer to that region.

Nothing in the code touches this address directly, so how it is actually used will only become clear in the deeper analysis later.
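As a side note, BINDER_VM_SIZE is defined near the top of ProcessState.cpp; in the sources of this era it is roughly:

#define BINDER_VM_SIZE ((1*1024*1024) - (4096 *2))   // ~1MB minus two pages

so each process maps a bit under 1MB of address space for receiving binder transactions.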

ProcessState::setThreadPoolMaxThreadCount()

status_t ProcessState::setThreadPoolMaxThreadCount(size_t maxThreads) {
    status_t result = NO_ERROR;
    if (ioctl(mDriverFD, BINDER_SET_MAX_THREADS, &maxThreads) == -1) {
        result = -errno;
        ALOGE("Binder ioctl to set max threads failed: %s", strerror(-result));
    }
    return result;
}

This issues the same BINDER_SET_MAX_THREADS ioctl as open_driver(). Its exact meaning is still unclear; in SampleService we can simply mimic this call for now.

ProcessState::startThreadPool()
The important part of startThreadPool() is that it calls spawnPooledThread() to start a new PoolThread; let's see what PoolThread does.
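A condensed sketch of that call chain (based on the sources of this era; slightly abbreviated):

void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {        // only the first call does anything
        mThreadPoolStarted = true;
        spawnPooledThread(true);      // isMain = true for this first thread
    }
}

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        sp<Thread> t = new PoolThread(isMain);
        t->run();                     // enters threadLoop() below
    }
}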
PoolThread's threadLoop() is in frameworks/native/libs/binder/ProcessState.cpp:

virtual bool threadLoop()
{
    IPCThreadState::self()->joinThreadPool(mIsMain);
    return false;
}

threadLoop() just calls IPCThreadState::joinThreadPool(), so to understand what it does we need to look at IPCThreadState.


2.3.2 IPCThreadState

The IPCThreadState header is frameworks/native/include/binder/IPCThreadState.h.


2.3.2.1 static IPCThreadState* self()

IPCThreadState's constructor and destructor are both private, and its objects can only be obtained through self(). It looks like another singleton, but it actually works differently:

IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }
    
    if (gShutdown) return NULL;
    
    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        if (pthread_key_create(&gTLS, threadDestructor) != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mMyThreadId(androidGetTid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
self() does something different here: it uses thread-local storage.
pthread_key_create(&gTLS, threadDestructor) in self() creates the TLS key for gTLS; the constructor assigns the current thread's slot with pthread_setspecific(gTLS, this); and self() reads it back with (IPCThreadState*)pthread_getspecific(k).

In other words, IPCThreadState has one unique object per thread, obtained through self().
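If the thread-local storage pattern is unfamiliar, here is a minimal standalone illustration (plain C++/pthreads, not Android code) of the same pthread_key_create()/pthread_getspecific()/pthread_setspecific() mechanics:

#include <pthread.h>
#include <stdio.h>

static pthread_key_t gKey;
static pthread_once_t gKeyOnce = PTHREAD_ONCE_INIT;

static void destroyValue(void* value) { delete (int*)value; }
static void makeKey() { pthread_key_create(&gKey, destroyValue); }

// Returns a counter that is unique to the calling thread.
static int* perThreadCounter() {
    pthread_once(&gKeyOnce, makeKey);            // create the key exactly once
    int* v = (int*)pthread_getspecific(gKey);    // read this thread's slot
    if (v == NULL) {
        v = new int(0);                          // first call on this thread
        pthread_setspecific(gKey, v);            // store into this thread's slot
    }
    return v;
}

static void* worker(void*) {
    ++*perThreadCounter();                        // increments this thread's own counter
    printf("counter = %d\n", *perThreadCounter()); // always prints 1 per thread
    return NULL;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}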


2.3.2.2 IPCThreadState::joinThreadPool()

Now for joinThreadPool(), which threadLoop() calls:

void IPCThreadState::joinThreadPool(bool isMain)
{
    ......
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();

        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }
        
        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if(result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);
    ......
}
joinThreadPool() contains a do...while loop; this is the main body of the thread and the focus of our analysis.
The loop mainly calls two functions, processPendingDerefs() and getAndExecuteCommand(). processPendingDerefs() just releases the pending references queued in mPendingWeakDerefs and mPendingStrongDerefs, and its result does not affect the loop.
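A condensed sketch of processPendingDerefs() (based on the sources of this era; details may vary):

void IPCThreadState::processPendingDerefs()
{
    if (mIn.dataPosition() >= mIn.dataSize()) {   // only when mIn is drained
        size_t numPending = mPendingWeakDerefs.size();
        for (size_t i = 0; i < numPending; i++) {
            RefBase::weakref_type* refs = mPendingWeakDerefs[i];
            refs->decWeak(mProcess.get());        // drop the queued weak refs
        }
        mPendingWeakDerefs.clear();

        numPending = mPendingStrongDerefs.size();
        for (size_t i = 0; i < numPending; i++) {
            mPendingStrongDerefs[i]->decStrong(mProcess.get()); // drop strong refs
        }
        mPendingStrongDerefs.clear();
    }
}

Now let's see what getAndExecuteCommand() does: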
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();
    if (result >= NO_ERROR) {
        ......
        result = executeCommand(cmd);
        ......
    }

    return result;
}
getAndExecuteCommand() mainly calls two functions, talkWithDriver() and executeCommand(); let's look at each in turn.

IPCThreadState::talkWithDriver()

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    
    binder_write_read bwr;
    
    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    
    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    
    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }
    
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
                        << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < (ssize_t)mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }
    
    return err;
}
The core of this function is the call ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr). Its arguments:
  • mProcess->mDriverFD: mProcess points to the ProcessState, whose mDriverFD we already saw is the fd of /dev/binder.
  • BINDER_WRITE_READ: the ioctl command; the name suggests it reads and/or writes some data.
  • bwr: records the data to read/write, backed by the buffers of the mIn and mOut Parcels.

Together with the comments, talkWithDriver()'s job is clear: it reads data from and writes data to the binder device, and that data lives in the buffers of the mIn and mOut Parcels.
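For reference, bwr here is the binder driver's binder_write_read structure; in the kernel headers of this era it looks roughly like this (later kernels switched the field types to binder_size_t/binder_uintptr_t):

struct binder_write_read {
    signed long   write_size;     /* bytes of commands available in write_buffer */
    signed long   write_consumed; /* bytes the driver consumed */
    unsigned long write_buffer;   /* here: mOut.data() */
    signed long   read_size;      /* capacity of read_buffer */
    signed long   read_consumed;  /* bytes the driver filled in */
    unsigned long read_buffer;    /* here: mIn.data() */
};

A single BINDER_WRITE_READ ioctl can therefore both send outgoing commands and receive incoming ones, which is why one talkWithDriver() call serves both directions.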

IPCThreadState::executeCommand()

This function dispatches on the different commands; we will skip the details. As getAndExecuteCommand() shows, the command passed to executeCommand() is read out of mIn, and we saw in talkWithDriver() that mIn's buffer holds the data read from the binder device. So executeCommand() processes the commands received from the binder device.

At this point we understand IPCThreadState::joinThreadPool(): ProcessState::startThreadPool() starts a thread that talks to the binder device and executes the commands it reads.


2.3.3 Service Startup Summary

We have now covered the main functions in surfaceflinger's main(). Its startup boils down to a few steps:

  • Call ProcessState::startThreadPool() to start the thread that talks to binder.
  • Initialize surfaceflinger.
  • Call IServiceManager::addService() to register the service.
  • Enter surfaceflinger's message loop.

Our SampleService, however, only needs to talk to binder; it has no message loop of its own. Following surfaceflinger's approach would force us to add an empty message loop, which is ugly. Can we instead talk to binder directly from the main thread?
Of course. mediaserver's startup is a good reference: besides starting a worker thread with ProcessState::startThreadPool(), it also calls IPCThreadState::joinThreadPool() so that the main thread talks to binder too. For a simple service like ours, we can just call IPCThreadState::joinThreadPool() directly on the main thread.
To sum up, the key steps for starting a service:

  • Initialize the service.
  • Call IServiceManager::addService() to register the service.
  • Use ProcessState::startThreadPool() and/or IPCThreadState::joinThreadPool() to talk to the binder device.

Here is SampleService's main():

int main(int argc, char** argv) {

    // Just copy surfaceflinger's setting for now; we will look into it later.
    ProcessState::self()->setThreadPoolMaxThreadCount(4);

    sp<SampleService> samplesrv = new SampleService();

#if defined(HAVE_PTHREADS)
    setpriority(PRIO_PROCESS, 0, PRIORITY_URGENT_DISPLAY);
#endif
    set_sched_policy(0, SP_FOREGROUND);

    // publish SampleService
    sp<IServiceManager> sm(defaultServiceManager());
    sm->addService(String16("SampleService"), samplesrv, false);

    IPCThreadState::self()->joinThreadPool();
    return 0;
}
That completes the service-side implementation, and naturally we want to build it and try it out. Unfortunately, the build fails, complaining that the class BpSampleService cannot be found.
BpSampleService belongs to the client side, but it is referenced inside ISampleService's IMPLEMENT_META_INTERFACE, and SampleService inherits ISampleService. So even though SampleService never uses the class itself, the build still depends on it.
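The reference is easy to spot once you know what the macro generates; we walk through asInterface() in section 2.4.1 below, but as a preview, IMPLEMENT_META_INTERFACE(SampleService, "...") expands to code of roughly this shape:

android::sp<ISampleService> ISampleService::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<ISampleService> intr;
    if (obj != NULL) {
        intr = static_cast<ISampleService*>(
            obj->queryLocalInterface(ISampleService::descriptor).get());
        if (intr == NULL) {
            intr = new BpSampleService(obj);   // <-- the reference the build needs
        }
    }
    return intr;
}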

So be it; let's build the client side first.


2.4 Sample Client Implementation

The client's job is to call the service's interface. Our goal is a sample that calls the service interface, implementing the necessary classes along the way. We will use ComposerService as the reference.


2.4.1 ComposerService

For ComposerService, we only care about how it obtains an interface through which it can operate surfaceflinger.
ComposerService's member sp<ISurfaceComposer> mComposerService holds a pointer to an ISurfaceComposer; tracing it, we find it is assigned in ComposerService::connectLocked(), implemented in frameworks/native/libs/gui/SurfaceComposerClient.cpp:

void ComposerService::connectLocked() {
    const String16 name("SurfaceFlinger");
    while (getService(name, &mComposerService) != NO_ERROR) {
        usleep(250000);
    }
    assert(mComposerService != NULL);

    // Create the death listener.
    class DeathObserver : public IBinder::DeathRecipient {
        ComposerService& mComposerService;
        virtual void binderDied(const wp<IBinder>& who) {
            ALOGW("ComposerService remote (surfaceflinger) died [%p]",
                  who.unsafe_get());
            mComposerService.composerServiceDied();
        }
     public:
        DeathObserver(ComposerService& mgr) : mComposerService(mgr) { }
    };

    mDeathObserver = new DeathObserver(*const_cast<ComposerService*>(this));
    mComposerService->asBinder()->linkToDeath(mDeathObserver);
}
Two calls here deserve attention: getService() and mComposerService->asBinder()->linkToDeath(mDeathObserver).

getService(name, &mComposerService)

getService(name, &mComposerService) yields an object through which the service interface can be called directly. Its implementation:

template<typename INTERFACE>
status_t getService(const String16& name, sp<INTERFACE>* outService)
{
    const sp<IServiceManager> sm = defaultServiceManager();
    if (sm != NULL) {
        *outService = interface_cast<INTERFACE>(sm->getService(name));
        if ((*outService) != NULL) return NO_ERROR;
    }
    return NAME_NOT_FOUND;
}
This is a template function; substituting ISurfaceComposer for INTERFACE gives:
status_t getService(const String16& name, sp<ISurfaceComposer>* outService)
{
    const sp<IServiceManager> sm = defaultServiceManager();
    if (sm != NULL) {
        *outService = interface_cast<ISurfaceComposer>(sm->getService(name));
        if ((*outService) != NULL) return NO_ERROR;
    }
    return NAME_NOT_FOUND;
}
The logic is simple: obtain the ServiceManager interface, then call its getService() to get the target service. But sm->getService() returns an sp<IBinder>, which cannot call the service interface directly; only after an interface_cast do we get a usable interface. What does that conversion do?
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
With INTERFACE substituted, INTERFACE::asInterface becomes ISurfaceComposer::asInterface, which is exactly the function generated by the IMPLEMENT_META_INTERFACE macro we used when defining the service interface. Here is the macro with SurfaceComposer substituted in:
android::sp<ISurfaceComposer> ISurfaceComposer::asInterface(            \
            const android::sp<android::IBinder>& obj)                   \
    {                                                                   \
        android::sp<ISurfaceComposer> intr;                             \
        if (obj != NULL) {                                              \
            intr = static_cast<ISurfaceComposer*>(                      \
                obj->queryLocalInterface(                               \
                       ISurfaceComposer::descriptor).get());            \
            if (intr == NULL) {                                         \
                intr = new BpSurfaceComposer(obj);                      \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }
Now it is clear: the result of sm->getService() is used to construct a BpSurfaceComposer object. We will look at BpSurfaceComposer in detail later.
mComposerService->asBinder()->linkToDeath(mDeathObserver)
linkToDeath() takes an IBinder::DeathRecipient as its parameter. Looks familiar? We also inherited this interface in SampleService without knowing what it was for; now we can clear that up.
linkToDeath() is originally declared in frameworks/native/include/binder/IBinder.h, and the comment there says it all:
/**
     * Register the @a recipient for a notification if this binder
     * goes away.  If this binder object unexpectedly goes away
     * (typically because its hosting process has been killed),
     * then DeathRecipient::binderDied() will be called with a reference
     * to this.
     *
     * The @a cookie is optional -- if non-NULL, it should be a
     * memory address that you own (that is, you know it is unique).
     *
     * @note You will only receive death notifications for remote binders,
     * as local binders by definition can't die without you dying as well.
     * Trying to use this function on a local binder will result in an
     * INVALID_OPERATION code being returned and nothing happening.
     *
     * @note This link always holds a weak reference to its recipient.
     *
     * @note You will only receive a weak reference to the dead
     * binder.  You should not try to promote this to a strong reference.
     * (Nor should you need to, as there is nothing useful you can
     * directly do with it now that it has passed on.)
     */
    virtual status_t        linkToDeath(const sp<DeathRecipient>& recipient,
                                        void* cookie = NULL,
                                        uint32_t flags = 0) = 0;
So linkToDeath() registers a callback that fires when the service shuts down (or dies), and IBinder::DeathRecipient is the interface definition for that callback. Here it registers a callback for surfaceflinger's exit.
Recall that our SampleService, imitating SurfaceFlinger, also inherited IBinder::DeathRecipient; that now looks completely unnecessary. Searching SurfaceFlinger shows it inherits the interface because it needs to watch for the WindowManager going away (in SurfaceFlinger::bootFinished()), not because a service itself needs it.
We can therefore update SampleService and drop the IBinder::DeathRecipient inheritance.

ComposerService summary
After analyzing ComposerService, it is clear the client side needs only two actions:

  • Call status_t getService(const String16& name, sp<INTERFACE>* outService) from IServiceManager.h to obtain a pointer to the corresponding interface.
  • If the service's status needs to be monitored, call linkToDeath() to register a callback.

Here is the client's main():

static bool deathCalled  = false;
class DeathCallBack : public IBinder::DeathRecipient {
public:
    virtual void binderDied(const wp<IBinder>& who) {
        ALOGE("SampleService remote died");
        deathCalled  = true;
    }
};


int main(int argc, char** argv) {
    ALOGE("Client get SampleService:");
    sp<ISampleService> sampleSrv;
    if(getService(String16("SampleService"), &sampleSrv) != NO_ERROR){
        ALOGE("get SampleService fail");
        return 0;
    }
    
    ALOGE("Client Register callback:");
    sp<IBinder::DeathRecipient> deathCb(new DeathCallBack());
    sampleSrv->asBinder()->linkToDeath(deathCb);

    ALOGE("Client call sayHello:");
    sampleSrv->sayHello(String8("SampleClient"));

    do{
        // loop to test the death callback
        sleep(1);
        if(deathCalled)
            ALOGE("service died!");
    }while(1);

    return 1;
}


2.4.2 BpSurfaceComposer

Now let's look at BpSurfaceComposer. It inherits BpInterface<ISurfaceComposer>; the declaration of the BpInterface template is in frameworks/native/:

template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
                                BpInterface(const sp<IBinder>& remote);

protected:
    virtual IBinder*            onAsBinder();
};
BpInterface in turn inherits INTERFACE and BpRefBase; for BpInterface<ISurfaceComposer>, that means inheriting ISurfaceComposer and BpRefBase. We already analyzed ISurfaceComposer, and BpRefBase looks unrelated to any particular service, so we won't dwell on it for now.
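One detail of BpRefBase is worth keeping in mind, though: it holds the IBinder we got from getService(), and it provides the remote() accessor used in the transact() calls below. A condensed sketch of its declaration (from the Binder headers of this era; abbreviated):

class BpRefBase : public virtual RefBase
{
protected:
    BpRefBase(const sp<IBinder>& o);              // stores the remote binder
    inline IBinder* remote() { return mRemote; }  // used by BpXXX methods
    // ......
private:
    IBinder* const mRemote;
    // ......
};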
The members BpInterface declares are all already implemented, so we can go straight to the BpSurfaceComposer implementation.

BpSurfaceComposer is implemented in frameworks/native/libs/gui/ISurfaceComposer.cpp:

class BpSurfaceComposer : public BpInterface<ISurfaceComposer>
{
public:
    BpSurfaceComposer(const sp<IBinder>& impl)
        : BpInterface<ISurfaceComposer>(impl)
    {
    }

    ......

    virtual sp<IBinder> getBuiltInDisplay(int32_t id)
    {
        Parcel data, reply;
        data.writeInterfaceToken(ISurfaceComposer::getInterfaceDescriptor());
        data.writeInt32(id);
        remote()->transact(BnSurfaceComposer::GET_BUILT_IN_DISPLAY, data, &reply);
        return reply.readStrongBinder();
    }
    ......
};
BpSurfaceComposer implements only the ISurfaceComposer interface plus a constructor, and the constructor body is empty, so we just need to see how the ISurfaceComposer methods are implemented.
When we wrote BnSurfaceComposer's onTransact() earlier, we studied the getBuiltInDisplay() call; let's analyze the same method here.
        data.writeInterfaceToken(ISurfaceComposer::getInterfaceDescriptor());
        data.writeInt32(id);
These two lines write the interface descriptor and the getBuiltInDisplay() argument into the data parcel.
        remote()->transact(BnSurfaceComposer::GET_BUILT_IN_DISPLAY, data, &reply);
        return reply.readStrongBinder();
reply holds the return value, so we can be sure that remote()->transact(BnSurfaceComposer::GET_BUILT_IN_DISPLAY, data, &reply); is what ends up calling the corresponding service method.
transact()'s first argument is GET_BUILT_IN_DISPLAY; looks familiar? Compare the matching case in BnSurfaceComposer's onTransact():
status_t BnSurfaceComposer::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
    ......
        case GET_BUILT_IN_DISPLAY: {
            CHECK_INTERFACE(ISurfaceComposer, data, reply);
            int32_t id = data.readInt32();
            sp display(getBuiltInDisplay(id));
            reply->writeStrongBinder(display);
            return NO_ERROR;
        }
    ......
    }
    // should be unreachable
    return NO_ERROR;
}
Comparing the two functions makes it obvious: the code, data, and reply on the two sides correspond one-to-one. BpSurfaceComposer's transact() travels through binder into BnSurfaceComposer's onTransact(), which then calls the service-side getBuiltInDisplay().
The client-side implementation of an interface function is therefore clear:
  • Declare two Parcels, data and reply, to hold the arguments and the return value.
  • Write the INTERFACE descriptor and the arguments into data.
  • Call remote()->transact(command, data, &reply);
  • Read the return value out of reply and return it.

Following these steps, here is our BpSampleService:

class BpSampleService : public BpInterface<ISampleService> {
public:
    BpSampleService(const sp<IBinder>& impl)
        : BpInterface<ISampleService>(impl)
    {
    }

    virtual int sayHello(const String8& clientName){
        Parcel data,reply;
        data.writeInterfaceToken(ISampleService::getInterfaceDescriptor());
        data.writeString8(clientName);
        remote()->transact(BnSampleService::SAY_HELLO, data, &reply);
        int ret = reply.readInt32();
        ALOGD("sayHello return %d",ret);
        return ret;
    }
};


2.5 Binder Native API Summary

With both the service and client parts implemented, let's build and run them to see whether everything works.
First, the Android.mk to build it all:

LOCAL_PATH:= $(call my-dir)
include $(CLEAR_VARS)

LOCAL_SRC_FILES:= \
    ISampleService.cpp 

LOCAL_SHARED_LIBRARIES := \
    libcutils \
    libutils \
    liblog \
    libbinder 

LOCAL_MODULE:= libSampleService

include $(BUILD_SHARED_LIBRARY)
#####################################################################
# build executable
include $(CLEAR_VARS)

LOCAL_SRC_FILES:= \
    SampleService.cpp

LOCAL_SHARED_LIBRARIES := \
    libbinder \
    libutils \
    libSampleService \
    libcutils \
    liblog 

LOCAL_MODULE_TAGS := optional

LOCAL_MODULE:= sampleservice

include $(BUILD_EXECUTABLE)

#####################################################################
# build executable
include $(CLEAR_VARS)

LOCAL_SRC_FILES:= \
    SampleClient.cpp

LOCAL_SHARED_LIBRARIES := \
    libbinder \
    libutils \
    libSampleService \
    libcutils \
    liblog 

LOCAL_MODULE_TAGS := optional

LOCAL_MODULE:= sampleclient

include $(BUILD_EXECUTABLE)

The build produces three artifacts: libSampleService, sampleservice, and sampleclient. Push libSampleService to the device's /system/lib, and sampleservice and sampleclient to /system/bin.

Checking that the service starts

root@dev:/system/bin # sampleservice &                                        
[1] 1214
root@dev:/system/bin # dumpsys -l
Currently running services:
  SampleService
  SurfaceFlinger
  accessibility
  account
  activity
  ……
Our SampleService shows up in dumpsys -l, so the service is up and running.

Checking that the interface call works
130|root@dev:/system/bin # sampleclient &                                     
[3] 1262
08-21 13:34:06.333 E/        ( 1262): get SampleService ok
08-21 13:34:06.333 D/        ( 1235): Hello SampleClient
08-21 13:34:06.333 D/        ( 1262): sayHello return 1
The run looks correct: "Hello SampleClient" is printed from the service-side interface, and "sayHello return 1" is the client printing the value it got back.

Checking the DeathRecipient response
This case fails: after killing the service, the DeathRecipient log never appears. Comparing with the surfaceflinger usage shows nothing special, so there must be some hidden precondition. We will set this problem aside for now,
and examine DeathRecipient as its own topic in the detailed analysis later.


OK, this simplest of services basically works now. To recap this part:
Implementing the service

  • Write class IXXX, inheriting IInterface; it declares the interfaces the service provides.
  • Write class BnXXX, inheriting BnInterface<IXXX>, and implement onTransact(). This class handles the information sent from the client and dispatches it to the corresponding service call.
  • Write class XXX, inheriting BnXXX, and implement all of the service interfaces.
  • Write the service's main(), calling ProcessState::startThreadPool() and/or IPCThreadState::joinThreadPool() to talk to the binder device.

Client calling the service

  • Write BpXXX, inheriting BpInterface<IXXX>, and implement all of the service interface functions. The client uses this class directly; it forwards the client's call information to the service side.
  • Call getService(const String16& name, sp<INTERFACE>* outService) to obtain an sp<IXXX> object, through which the service interface can be called directly.

Among these, BnXXX and BpXXX are mirror-image classes: BpXXX sends the client's call information into binder, while BnXXX reads the call information back out of binder and invokes the corresponding interface.
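To tie the pieces together, the ISampleService header used throughout this example would look roughly like this (a condensed sketch; the exact descriptor string passed to IMPLEMENT_META_INTERFACE in ISampleService.cpp is your own choice):

// ISampleService.h (sketch)
#include <binder/IInterface.h>
#include <binder/Parcel.h>
#include <utils/String8.h>

namespace android {

class ISampleService : public IInterface {
public:
    DECLARE_META_INTERFACE(SampleService);
    enum { SAY_HELLO = IBinder::FIRST_CALL_TRANSACTION };  // transaction code used by Bp/Bn
    virtual int sayHello(const String8& clientName) = 0;   // the service interface
};

class BnSampleService : public BnInterface<ISampleService> {
public:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags = 0);
};

}; // namespace android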


Source Code Download

The SampleService source code is available at https://git.oschina.net/sky-Z/SampleService.git.

-------------------------------------------

by sky

