Binder: Implementation Principles and Source Code Analysis

References:
https://blog.csdn.net/zhwadezh/article/details/79310119
https://blog.csdn.net/liwei405499/article/details/42775319

I. Introduction to Binder

Binder is an implementation of inter-process communication (IPC). It is specific to Android and does not exist in stock Linux.

1. Existing IPC Mechanisms

Pipe: a buffer of one page is allocated at creation time, so capacity is quite limited;
Message queue: data is copied twice, costing extra CPU; unsuitable for frequent communication or large payloads;
Shared memory: no copying; the shared buffer is mapped directly into each process's virtual address space, so it is fast; but the operating system provides no inter-process synchronization, so the processes must synchronize themselves with other primitives;
Socket: a general-purpose interface with low transfer efficiency, mainly used for communication between machines or across networks;
Semaphore: usually used as a lock, preventing other processes from touching a shared resource while one process is accessing it; mainly a synchronization primitive between processes and between threads of the same process.
Signal: not suitable for exchanging data; better suited to process control, e.g. reporting illegal memory access or killing a process.

2. Why Android Uses Binder

Android is built on Linux, so in theory it should use Linux's built-in IPC mechanisms: pipes, semaphores, shared memory, message queues, and sockets. The Binder mechanism Android actually uses is not part of Linux. That Android chose Binder instead of inheriting the existing Linux IPC mechanisms suggests Binder has real advantages.

To offer application developers rich functionality, Android makes heavy use of Client-Server communication, for example for media playback and the various sensors. Client-Server communication is the core of Android IPC: an application only needs to act as a Client and connect to these Servers to use their services.

The following points explain why Android chose Binder:

Communication model: what we want is Client-Server communication, but among Linux's five IPC mechanisms only sockets support it directly. We could layer protocols on top of the other four to get Client-Server semantics, but that adds system complexity, and on a phone, a constrained and resource-scarce environment, reliability would be hard to guarantee;
Performance: sockets, as a general-purpose interface, have low transfer efficiency and high overhead, and are mainly used for cross-network IPC and low-speed local IPC; message queues and pipes use store-and-forward: data is first copied from the sender into a kernel buffer, then from the kernel buffer into the receiver's buffer, i.e. at least two copies; shared memory needs no copying but is complex to control and hard to use; Binder needs only one copy;
Security: Android is an open platform whose applications come from many sources, so securing the smart device is essential. Traditional Linux IPC has no security measures and relies entirely on upper-layer protocols, which shows in two ways. First, the receiver cannot obtain a reliable UID/PID (user ID/process ID) of the sender and so cannot authenticate it; with traditional IPC the sender can only write a UID/PID into the data packet itself, which is unreliable and easy for malicious programs to exploit. Second, the access points of traditional IPC are open, so private channels cannot be established: any program that knows an endpoint's address can connect to it, and malicious programs cannot be stopped from guessing a receiver's address to obtain a connection.
  For these reasons, Android needed a new IPC mechanism that satisfies the system's requirements on communication model, transfer performance, and security. That mechanism is Binder.

In summary, Binder is a communication mechanism based on the Client-Server model: transfers require only one copy, the sender's UID/PID can be attached for identification, and both named and anonymous Binders are supported, giving high security. Binder is a core mechanism of Android and runs through nearly the whole system; Android can essentially be viewed as a C/S architecture built on Binder communication. Binder is like a network, connecting the parts of the Android system together.

II. Binder

1. IPC Fundamentals

Looking at the IPC mechanism from the process perspective:
[Figure 1: IPC from the process perspective]
Each Android process runs only within its own virtual address space. For a 4 GB virtual address space, 3 GB is user space and 1 GB is kernel space (the size of kernel space can be adjusted via configuration parameters). User space is not shared between processes, but kernel space is. When a Client process communicates with a Server process, it is precisely this shareable kernel memory that carries the low-level communication; the Client and Server processes typically interact with the driver in kernel space through calls such as ioctl.

2. Binder Structure

(1) The Binder communication architecture

From the component point of view, Binder involves the Client, the Server, the ServiceManager, and the binder driver, where the ServiceManager manages the system's various services. The architecture is shown below:
[Figure 2: Binder communication architecture]
Binder communication has a client-server structure:
1. On the surface, the client obtains a proxy interface for the server and calls the server through it directly;
2. In fact, the methods defined in the proxy interface correspond one-to-one with the methods defined in the server;
3. When the client calls a method of the proxy interface, the proxy method packs the client's arguments into a Parcel object;
4. The proxy interface sends that Parcel to the binder driver in the kernel;
5. The server reads the request data from the binder driver; if the request is addressed to itself, it unpacks the Parcel, processes the request, and returns the result;
6. The whole call is synchronous: the client blocks while the server is processing.

(2) The four roles in Binder communication

Client process: the process that uses a service.

Server process: the process that provides a service.

ServiceManager process: the ServiceManager turns a textual Binder name into the Client's reference to that Binder, so that a Client can obtain a reference to a Binder entity in a Server by its name.

Binder driver: the driver provides the low-level support: establishing Binder communication between processes, passing Binders between processes, Binder reference counting, and passing and exchanging data packets between processes.

3. How Binder Is Used

Binder usage consists of the following parts:
(1) Registering a service: a Server process first registers its Service with the ServiceManager. In this step the Server is the client and the ServiceManager is the server.
(2) Obtaining a service: before using a Service, a Client process must first obtain it from the ServiceManager. In this step the Client is the client and the ServiceManager is the server.
(3) Using a service: the Client uses the obtained Service information to establish a communication channel to the Server process hosting the Service, and can then interact with the Service directly. In this step the Client is the client and the Server is the server.

In the Binder architecture diagram, the interactions between Client, Server, and Service Manager are drawn with dashed lines because they do not interact with each other directly; each of them interacts with the Binder driver, and that is how IPC is achieved. The Binder driver lives in kernel space, while Client, Server, and Service Manager live in user space.

Below we analyze these three parts against the source code.

III. Source Code Analysis

The analysis here uses the Android 6.0 source tree, taking MediaService as the example.

1. Registering a Service

MediaService is an application; its source lives at:
frameworks\av\media\mediaserver\main_mediaserver.cpp

int main(int argc __unused, char** argv)
{
    .....
    .....

    sp<ProcessState> proc(ProcessState::self()); // obtain the ProcessState instance
    sp<IServiceManager> sm = defaultServiceManager(); // obtain the ServiceManager object
    // (actually returns a BpServiceManager object)
    MediaPlayerService::instantiate(); // set up the MediaPlayerService service

    ProcessState::self()->startThreadPool(); // start the process's thread pool
    IPCThreadState::self()->joinThreadPool(); // add this thread to that pool
}

My understanding: the rough flow of service registration is that the server first obtains a BpBinder object, uses it to reach the ServiceManager (i.e. obtains a BpServiceManager object), and then adds its own service to the ServiceManager's service list. The ServiceManager then loops waiting for clients to request services, while the server also loops waiting for request data from the Binder driver, executing requests and returning results.

1.1 ProcessState::self()

The first call is ProcessState::self(), whose result is assigned to proc. When the program finishes, proc automatically deletes its contents, so the previously allocated resources are released automatically.
ProcessState lives in frameworks\native\libs\binder\ProcessState.cpp

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex); // lock protection
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState; // create the ProcessState object
    return gProcess;
}

Let's look at the ProcessState constructor:

ProcessState::ProcessState() // ProcessState constructor
    : mDriverFD(open_driver()) // open_driver() opens the binder driver and returns an fd
    , mVMStart(MAP_FAILED) // start address of the mapped memory
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // availabla).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        // you can look up mmap's details yourself; roughly, this maps the fd into
        // memory, so memcpy-style operations on the memory stand in for write/read on the fd
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
}

open_driver() opens the /dev/binder device and returns an fd, which is assigned to mDriverFD:

static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR);
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads); // tell the kernel via ioctl
        // that this fd supports at most 15 threads (DEFAULT_MAX_BINDER_THREADS)
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}

So ProcessState::self() is done. What did it actually do?

1. It opened the /dev/binder device, giving the process a channel for interacting with the kernel binder mechanism;
2. It mapped the fd into memory; since the device fd is passed in, this memory region is presumably shared with the binder device.

Next comes defaultServiceManager().

1.2 defaultServiceManager()

defaultServiceManager lives in frameworks\native\libs\binder\IServiceManager.cpp:

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
    
    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            // the real gDefaultServiceManager is created here
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
                // ProcessState::self()->getContextObject(NULL) == new BpBinder(0)
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }
    return gDefaultServiceManager;
}

First comes ProcessState::self()->getContextObject(NULL); note the argument is NULL. Let's look at the implementation of ProcessState's member function getContextObject():

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0); // returns an IBinder object, actually a BpBinder
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }

            b = new BpBinder(handle); // create a BpBinder object
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}

Next let's look at BpBinder(handle), with handle = 0; BpBinder lives in frameworks\native\libs\binder\BpBinder.cpp

BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    ALOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}

This calls into the IPCThreadState class, so let's look at IPCThreadState,
located in frameworks\native\libs\binder\IPCThreadState.cpp

IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) { // false on first entry
restart:
        const pthread_key_t k = gTLS; // TLS = Thread Local Storage
        // each thread has its own private copy of this storage, and threads do
        // not share it, so no synchronization mechanism is needed
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        // fetch the IPCThreadState object saved in thread-local storage
        if (st) return st;
        return new IPCThreadState; // new an IPCThreadState object
    }
    
    if (gShutdown) return NULL;
    
    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        if (pthread_key_create(&gTLS, threadDestructor) != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
// the IPCThreadState constructor
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mMyThreadId(gettid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this); // store this object via pthread_setspecific
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256); // set the Parcels' data capacity
}

void IPCThreadState::incWeakHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle); // write the handle into the parcel, which is used
                             // to exchange data with the Binder driver
}

At this point new BpBinder(0) is complete, so the return value of ProcessState::self()->getContextObject(NULL) is new BpBinder(0).

Next comes interface_cast<IServiceManager>( ProcessState::self()->getContextObject(NULL) ), i.e. gDefaultServiceManager = interface_cast<IServiceManager>( new BpBinder(0) ).
This part is intricate and important, so read it carefully. interface_cast is in frameworks\native\include\binder\IInterface.h:

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

interface_cast returns INTERFACE::asInterface(obj), where INTERFACE here is IServiceManager, so this is IServiceManager::asInterface( new BpBinder(0) ); we therefore need to look into IServiceManager.
The IServiceManager class is defined in frameworks\native\include\binder\IServiceManager.h, and the implementations of its member functions are not found there, so presumably a derived class implements them. The class is defined as follows:

class IServiceManager : public IInterface
{
public:
    DECLARE_META_INTERFACE(ServiceManager); // a macro; very important

    /**
     * Retrieve an existing service, blocking for a few seconds
     * if it doesn't yet exist.
     */
    virtual sp<IBinder>         getService( const String16& name) const = 0;

    /**
     * Retrieve an existing service, non-blocking.
     */
    virtual sp<IBinder>         checkService( const String16& name) const = 0;

    /**
     * Register a service.
     */
    virtual status_t            addService( const String16& name,
                                            const sp<IBinder>& service,
                                            bool allowIsolated = false) = 0;

    /**
     * Return list of all existing services.
     */
    virtual Vector<String16>    listServices() = 0;

    enum {
        GET_SERVICE_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION,
        CHECK_SERVICE_TRANSACTION,
        ADD_SERVICE_TRANSACTION,
        LIST_SERVICES_TRANSACTION,
    };
};

The DECLARE_META_INTERFACE(ServiceManager) macro is defined in IInterface.h:

#define DECLARE_META_INTERFACE(INTERFACE)                               \
    static const android::String16 descriptor;                          \
    static android::sp<I##INTERFACE> asInterface(                       \
            const android::sp<android::IBinder>& obj);                  \
    virtual const android::String16& getInterfaceDescriptor() const;    \
    I##INTERFACE();                                                     \
    virtual ~I##INTERFACE();

Where there is DECLARE_META_INTERFACE there must be IMPLEMENT_META_INTERFACE; it is invoked in frameworks\native\libs\binder\IServiceManager.cpp:

IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");

Its implementation is also in IInterface.h, as follows:

#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const android::String16 I##INTERFACE::descriptor(NAME);             \
    const android::String16&                                            \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    android::sp<I##INTERFACE> I##INTERFACE::asInterface(                \
            const android::sp<android::IBinder>& obj)                   \
    {                                                                   \
        android::sp<I##INTERFACE> intr;                                 \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }

Substituting ServiceManager into the two macros yields:

#define DECLARE_META_INTERFACE(ServiceManager)                          \
    static const android::String16 descriptor; // add a descriptor      \
    static android::sp<IServiceManager> asInterface( // declare asInterface \
            const android::sp<android::IBinder>& obj);                  \
    virtual const android::String16& getInterfaceDescriptor() const;    \
    IServiceManager(); // constructor                                   \
    virtual ~IServiceManager(); // destructor

#define IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager") \
    const android::String16 IServiceManager::descriptor("android.os.IServiceManager"); \
    const android::String16&                                            \
            IServiceManager::getInterfaceDescriptor() const {           \
        return IServiceManager::descriptor;                             \
    }                                                                   \
    android::sp<IServiceManager> IServiceManager::asInterface(          \
            const android::sp<android::IBinder>& obj)                   \
    {                                                                   \
        android::sp<IServiceManager> intr;                              \
        if (obj != NULL) {                                              \
            intr = static_cast<IServiceManager*>(                       \
                obj->queryLocalInterface(                               \
                        IServiceManager::descriptor).get());            \
            if (intr == NULL) {                                         \
                intr = new BpServiceManager(obj); // new a BpServiceManager object \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    IServiceManager::IServiceManager() { }                              \
    IServiceManager::~IServiceManager() { }

So IServiceManager::asInterface( new BpBinder(0) ) returns BpServiceManager( new BpBinder(0) ). The p stands for proxy: Bp is BinderProxy, so BpServiceManager is the Binder proxy of the ServiceManager. Now let's look at BpServiceManager, defined in IServiceManager.cpp:

class BpServiceManager : public BpInterface<IServiceManager>
{// this inheritance means BpServiceManager inherits both BpInterface and
 // IServiceManager, so IServiceManager's addService must be implemented in this class
public:
    BpServiceManager(const sp<IBinder>& impl) // the constructor essentially
        // initializes the base class BpInterface
        : BpInterface<IServiceManager>(impl) // impl here is new BpBinder(0)
    {
    }
};

Turning to IInterface.h for the implementation of the BpInterface class:

template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase // substitute IServiceManager for INTERFACE
{
public:
                                BpInterface(const sp<IBinder>& remote); // constructor

protected:
    virtual IBinder*            onAsBinder();
};

BpInterface's constructor is defined as follows:

template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote) // likewise, this initializes the base class BpRefBase;
    // remote here is new BpBinder(0), i.e. BpRefBase(new BpBinder(0))
{
}

Continuing the chase into frameworks\native\libs\binder\Binder.cpp for BpRefBase; its constructor is:

BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0) // so here mRemote is new BpBinder(0)
    // o.get() is sp's method for obtaining the underlying data pointer;
    // all you need to know is that it returns the raw pointer held by the sp
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);           // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
    }
}

With that, defaultServiceManager() has finished executing.
We now know that new BpServiceManager(new BpBinder(0)) effectively assigns new BpBinder(0) to the member variable mRemote,
and that interface_cast<IServiceManager>( ProcessState::self()->getContextObject(NULL) ) returns new BpServiceManager(new BpBinder(0)), i.e. gDefaultServiceManager = new BpServiceManager(new BpBinder(0)).
So the return value of defaultServiceManager() is new BpServiceManager(new BpBinder(0)).

To summarize: the server process first obtains a ProcessState object; only then can it obtain a BpBinder, through which it can interact with the Binder driver; and through that it obtains a BpServiceManager object.
Here MediaService acts as the client and ServiceManager as the server. For the client to communicate with the server, it first obtains a Binder proxy, BpBinder, and through it a proxy for the ServiceManager, BpServiceManager; that is how MediaService communicates with ServiceManager.

Now back to main_mediaserver.cpp:

int main(int argc __unused, char** argv)
{
    .....
    .....

    sp<ProcessState> proc(ProcessState::self()); // obtain the ProcessState instance
    sp<IServiceManager> sm = defaultServiceManager(); // obtain the ServiceManager object
    // (actually returns a BpServiceManager object)
    MediaPlayerService::instantiate(); // set up the MediaPlayerService service

    ProcessState::self()->startThreadPool(); // start the process's thread pool
    IPCThreadState::self()->joinThreadPool(); // add this thread to that pool
}

Above we completed defaultServiceManager() and obtained a BpServiceManager object. Next we look at MediaPlayerService::instantiate(), which sets up the MediaPlayerService service.

1.3 MediaPlayerService::instantiate()

MediaPlayerService lives in frameworks\av\media\libmediaplayerservice\MediaPlayerService.cpp

void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}

This adds the MediaPlayerService service to the ServiceManager's list. String16("media.player") turns the service's name into a string; a client that wants this service simply looks up media.player. In instantiate(),
defaultServiceManager() returns a BpServiceManager object, whose member function addService is then called with a MediaPlayerService object as a parameter. Peeling from the inside out, let's first look at the MediaPlayerService constructor:

MediaPlayerService::MediaPlayerService()
{
    ALOGV("MediaPlayerService created");
    mNextConnId = 1;

    mBatteryAudio.refCount = 0;
    for (int i = 0; i < NUM_AUDIO_DEVICES; i++) {
        mBatteryAudio.deviceOn[i] = 0;
        mBatteryAudio.lastTime[i] = 0;
        mBatteryAudio.totalTime[i] = 0;
    }
    // speaker is on by default
    mBatteryAudio.deviceOn[SPEAKER] = 1;

    // reset battery stats
    // if the mediaserver has crashed, battery stats could be left
    // in bad state, reset the state upon service start.
    BatteryNotifier& notifier(BatteryNotifier::getInstance());
    notifier.noteResetVideo();
    notifier.noteResetAudio();

    MediaPlayerFactory::registerBuiltinFactories();
}

MediaPlayerService derives from BnMediaPlayerService, so what is actually new'ed here is a BnMediaPlayerService object. At this point we have seen both BpServiceManager and BnMediaPlayerService, and what happens here amounts to adding the BnMediaPlayerService to the ServiceManager's service list through BpServiceManager's addService. Bn means Binder Native, the counterpart of Bp: since Bp's p means proxy, there must be something at the other end for the proxy to talk to, and that is Bn.
BnMediaPlayerService can be found in frameworks\av\include\media\IMediaPlayerService.h. Setting it aside for now, let's first look at BpServiceManager's addService function:

virtual status_t addService(const String16& name, const sp<IBinder>& service,
            bool allowIsolated)
    {
        Parcel data, reply; // data is the command packet
        // first write the interface name, i.e. android.os.IServiceManager,
        // set above by IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager")
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name); // then write the new service's name: media.player
        data.writeStrongBinder(service); // write the new service (MediaPlayerService) into the command
        data.writeInt32(allowIsolated ? 1 : 0);
        // call remote's transact function
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply); // remote() returns new BpBinder(0)
        return err == NO_ERROR ? reply.readExceptionCode() : err;
    }

remote() is not found in BpServiceManager. BpServiceManager inherits from IServiceManager and BpInterface, but remote() is not in those two classes either, so we must look further up:

frameworks\native\include\binder\IServiceManager.h
class IServiceManager : public IInterface
{
public:
    DECLARE_META_INTERFACE(ServiceManager);

    /**
     * Retrieve an existing service, blocking for a few seconds
     * if it doesn't yet exist.
     */
    virtual sp<IBinder>         getService( const String16& name) const = 0;

    /**
     * Retrieve an existing service, non-blocking.
     */
    virtual sp<IBinder>         checkService( const String16& name) const = 0;

    /**
     * Register a service.
     */
    virtual status_t            addService( const String16& name,
                                            const sp<IBinder>& service,
                                            bool allowIsolated = false) = 0;

    /**
     * Return list of all existing services.
     */
    virtual Vector<String16>    listServices() = 0;

    enum {
        GET_SERVICE_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION,
        CHECK_SERVICE_TRANSACTION,
        ADD_SERVICE_TRANSACTION,
        LIST_SERVICES_TRANSACTION,
    };
};


 frameworks\native\include\binder\IInterface.h
template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
                                BpInterface(const sp<IBinder>& remote);

protected:
    virtual IBinder*            onAsBinder();
};

Neither of these two classes has a remote() function, but note that IServiceManager inherits from IInterface, BpInterface inherits from BpRefBase, IInterface in turn inherits from RefBase, and BpRefBase also inherits from RefBase. The remote() function is in fact found in BpRefBase:

class IInterface : public virtual RefBase
{
public:
            IInterface();
            static sp<IBinder>  asBinder(const IInterface*);
            static sp<IBinder>  asBinder(const sp<IInterface>&);

protected:
    virtual                     ~IInterface();
    virtual IBinder*            onAsBinder() = 0;
};
class BpRefBase : public virtual RefBase
{
protected:
                            BpRefBase(const sp<IBinder>& o);
    virtual                 ~BpRefBase();
    virtual void            onFirstRef();
    virtual void            onLastStrongRef(const void* id);
    virtual bool            onIncStrongAttempted(uint32_t flags, const void* id);

    inline  IBinder*        remote()                { return mRemote; }
    inline  IBinder*        remote() const          { return mRemote; }

private:
                            BpRefBase(const BpRefBase& o);
    BpRefBase&              operator=(const BpRefBase& o);

    IBinder* const          mRemote;
    RefBase::weakref_type*  mRefs;
    volatile int32_t        mState;
};

Here we see the remote() function: it returns mRemote, which as noted above is new BpBinder(0).
Back in addService, status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply) therefore executes BpBinder's member function transact, effectively status_t err = (new BpBinder(0))->transact(ADD_SERVICE_TRANSACTION, data, &reply). On to BpBinder's transact function:

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
// note: mHandle here is 0, code is ADD_SERVICE_TRANSACTION, data is the
// command packet, reply is the reply packet, and flags = 0
    // Once a binder has died, it will never come back to life.
    if (mAlive) { // mAlive == 1
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

This in turn calls IPCThreadState's transact function, so back to the IPCThreadState class,
located in frameworks\native\libs\binder\IPCThreadState.cpp

IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) { // false on first entry
restart:
        const pthread_key_t k = gTLS; // TLS = Thread Local Storage
        // each thread has its own private copy of this storage, and threads do
        // not share it, so no synchronization mechanism is needed
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        // fetch the IPCThreadState object saved in thread-local storage
        if (st) return st;
        return new IPCThreadState; // new an IPCThreadState object
    }
    
    if (gShutdown) return NULL;
    
    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        if (pthread_key_create(&gTLS, threadDestructor) != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
.....................................................................
IPCThreadState::IPCThreadState() // constructor
    : mProcess(ProcessState::self()),
      mMyThreadId(gettid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this); // store this object via pthread_setspecific
    clearCaller();
    mIn.setDataCapacity(256); // set the parcels' data capacity
    mOut.setDataCapacity(256);
}
.....................................................................
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

  
    
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL); // send the data
    }
    
    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif
        
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
    
    return err;
}

.....................................................................

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr; // the data exchanged with the binder driver;
                                // tr packs the incoming data and parameters for the Parcel

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle; 
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }
    
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr)); // mOut is the command buffer, itself a Parcel
    // so far the data has only been written into the Parcel, and a Parcel has no
    // link to /dev/binder — the write to the binder device must happen elsewhere
    return NO_ERROR;
}

It is in waitForResponse that communication with the driver happens: talkWithDriver sends the data to the Binder driver.

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break; // blocks
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        // Notice that mIn is read from here: talkWithDriver sends mOut to
        // the driver, then fills mIn with the data read back from it.
        cmd = (uint32_t)mIn.readInt32();
        
        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        
        .........
        }  // end switch
    }      // end while

    return err;
}

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    
    binder_write_read bwr;
    
    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    
    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }
    
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) // a single ioctl performs both the write and the read
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
        // At this point the reply data is in bwr, and the receive buffer
        // that bwr filled is the one mIn provided.
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
     
        return NO_ERROR;
    }
    
    return err;
}

At this point the addService command is complete: BpServiceManager has sent the real service to the binder driver and received the reply. The call chain is: BpServiceManager::addService calls BpBinder::transact, which calls IPCThreadState::transact, which calls writeTransactionData and then waitForResponse, which in turn calls talkWithDriver to exchange data with the driver.
This also confirms the client/server picture. Put simply, when MediaPlayerService (MS) wants to register itself with ServiceManager (SM), MS acts as the client and SM as the server, and MS must first obtain SM's proxy, BpServiceManager, before it can call addService.

1.4 BnServiceManager

As noted above, defaultServiceManager returns a BpServiceManager, through which command requests can be sent to the binder device, with a handle value of 0. So at the other end of the system there must be something receiving those commands. What is it?

Strictly speaking, there is no BnServiceManager class, but there is a program that does exactly the job BnServiceManager would do: the servicemanager daemon.
Its source lives in frameworks\native\cmds\servicemanager\service_manager.c.

int main(int argc, char **argv) // does the job of BnServiceManager
{
    struct binder_state *bs;

    bs = binder_open(128*1024); // open the binder device
    if (binder_become_context_manager(bs)) { // become the context manager
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    selinux_enabled = is_selinux_enabled();
    sehandle = selinux_android_service_context_handle();
    selinux_status_open(true);

    union selinux_callback cb;
    cb.func_audit = audit_callback;
    selinux_set_callback(SELINUX_CB_AUDIT, cb);
    cb.func_log = selinux_log_callback;
    selinux_set_callback(SELINUX_CB_LOG, cb);

    binder_loop(bs, svcmgr_handler); // handle the commands sent over by BpServiceManager

    return 0;
}

struct binder_state *binder_open(size_t mapsize) // open the driver and mmap it into memory
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open device (%s)\n",
                strerror(errno));
        goto fail_open;
    }

    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr,
                "binder: kernel driver version (%d) differs from user space version (%d)\n",
                vers.protocol_version, BINDER_CURRENT_PROTOCOL_VERSION);
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}


int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0); // register ourselves as the context manager
}

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) { // an endless loop, as expected
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // block until a request arrives, then parse the commands

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

//svcmgr_handler dispatches the various commands
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;

    //ALOGI("target=%p code=%d pid=%d uid=%d\n",
    //      (void*) txn->target.ptr, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target.ptr != BINDER_SERVICE_MANAGER)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s, len));
        return -1;
    }

    if (sehandle && selinux_status_updated() > 0) {
        struct selabel_handle *tmp_sehandle = selinux_android_service_context_handle();
        if (tmp_sehandle) {
            selabel_close(sehandle);
            sehandle = tmp_sehandle;
        }
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid); // look up the requested service
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        if (do_add_service(bs, s, len, handle, txn->sender_euid, // this is where the service is actually added
            allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                    txn->sender_euid);
            return -1;
        }
        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}

int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    struct svcinfo *si;

    //ALOGI("add_service('%s',%x,%s) uid=%d\n", str8(s, len), handle,
    //        allow_isolated ? "allow_isolated" : "!allow_isolated", uid);

    if (!handle || (len == 0) || (len > 127))
        return -1;

    if (!svc_can_register(s, len, spid)) {
        ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
             str8(s, len), handle, uid);
        return -1;
    }

    si = find_svc(s, len);
    if (si) {
        if (si->handle) {
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                 str8(s, len), handle, uid);
            svcinfo_death(bs, si);
        }
        si->handle = handle;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                 str8(s, len), handle, uid);
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist; // svclist is the linked list holding every service currently registered with ServiceManager
        svclist = si;
    }

    binder_acquire(bs, handle); // take a reference; then ask to be notified when this service dies, so the memory malloc'd above can be freed
    binder_link_to_death(bs, handle, &si->death);
    return 0;
}

Now let's return to the main function and continue.

int main(int argc __unused, char** argv)
{
    .....
    .....

    sp<ProcessState> proc(ProcessState::self()); // obtain the ProcessState instance
    sp<IServiceManager> sm = defaultServiceManager(); // get the ServiceManager object
    // (actually returns a BpServiceManager)
    MediaPlayerService::instantiate(); // register the MediaPlayerService service

    ProcessState::self()->startThreadPool(); // start the process's thread pool
    IPCThreadState::self()->joinThreadPool(); // join the calling thread to that pool
}

Once MediaPlayerService::instantiate() returns, the service has been sent to the binder driver; at the other end of the driver, ServiceManager handles the add-service request and appends the service to the list it maintains. Next, let's see how MediaPlayerService actually runs.

1.5 startThreadPool()

void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);
        // create the pool thread and run it, much like Java's Thread
        t->run(name.string());
    }
}

PoolThread derives from Thread. Does constructing it spawn a thread right away? Look at the PoolThread and Thread constructors:

PoolThread::PoolThread(bool isMain)
    : mIsMain(isMain)
{
}

Thread::Thread(bool canCallJava) // canCallJava defaults to true
    : mCanCallJava(canCallJava),
      mThread(thread_id_t(-1)),
      mLock("Thread::mLock"),
      mStatus(NO_ERROR),
      mExitPending(false), mRunning(false)
{
}
So no thread exists yet. Then PoolThread::run is called, which actually invokes the base class Thread::run:

status_t Thread::run(const char* name, int32_t priority, size_t stack)
{
    bool res;
    if (mCanCallJava) {
        res = createThreadEtc(_threadLoop, // the thread entry point is _threadLoop
                this, name, priority, stack, &mThread);
    }
    ...
}

At last, the thread is created inside run. From this point the main thread carries on, while the newly spawned thread executes _threadLoop. Let's look at _threadLoop first:
int Thread::_threadLoop(void* user)
{
    Thread* const self = static_cast<Thread*>(user);
    sp<Thread> strong(self->mHoldSelf);
    wp<Thread> weak(strong);
    self->mHoldSelf.clear();

    do {
        ...
        if (result && !self->mExitPending) {
            result = self->threadLoop(); // calls the subclass's own threadLoop
        }
    } while (...);
}

Since our object is a PoolThread, its threadLoop is invoked:

virtual bool PoolThread::threadLoop()
{
    // mIsMain is true here. Also note this runs on the new thread, so a new
    // IPCThreadState object is necessarily created for it (remember
    // thread-local storage, TLS?), and then:
    IPCThreadState::self()->joinThreadPool(mIsMain);
    return false;
}

Both the main thread and the worker thread end up calling joinThreadPool. Let's see what it does!

1.6 joinThreadPool

frameworks\native\libs\binder\IPCThreadState.cpp

void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    
    // This thread may have been spawned by a thread that was in the background
    // scheduling group, so first we will make sure it is in the foreground
    // one to avoid performing an initial transaction in the background.
    set_sched_policy(mMyThreadId, SP_FOREGROUND);
        
    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();

        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }
        
        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if(result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
        (void*)pthread_self(), getpid(), (void*)result);
    
    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false); // write-only: flush the exit command to the driver without reading
}

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++;
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        result = executeCommand(cmd);

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        // After executing the command, ensure that the thread is returned to the
        // foreground cgroup before rejoining the pool.  The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace.  Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }

    return result;
}

Now let's look at executeCommand:

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    ...
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            // a command arrived and was parsed as BR_TRANSACTION;
            // now read the payload that follows it
            Parcel reply;
            if (tr.target.ptr) {
                // note that a BBinder is used here
                sp<BBinder> b((BBinder*)tr.cookie);
                const status_t error = b->transact(tr.code, buffer, &reply, 0);
            }
    ...
}

Let's see what BBinder's transact function does:

status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // it simply calls its own onTransact
    err = onTransact(code, data, reply, flags);
    return err;
}

BnMediaPlayerService derives from BBinder, so the call lands in its onTransact function.

The picture is finally complete; let's look at BnMediaPlayerService's onTransact.

status_t BnMediaPlayerService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // BnMediaPlayerService derives from both BBinder and IMediaPlayerService.
    // Notice the switch below: every function that IMediaPlayerService
    // provides is distinguished by its command code.
    switch (code) {
        case CREATE_URL: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            // create is a virtual function, implemented by MediaPlayerService!
            sp<IMediaPlayer> player = create(
                    pid, client, url, numHeaders > 0 ? &headers : NULL);

            reply->writeStrongBinder(player->asBinder());
            return NO_ERROR;
        } break;
    ...
}

Now it all makes sense: a BnXXX's onTransact receives the command and dispatches it to the derived class's functions, which do the actual work.
Notes:

One oddity: after startThreadPool and joinThreadPool there really are two threads, the main thread and a worker thread, both running the message loop, and both with isMain set to true. Why would Google do this? Perhaps so that a single thread isn't overloaded; splitting the work across two threads is a plausible explanation.

Some people have tested commenting out the last line and the code still works; but if the main thread exits, wouldn't the process exit too? Well, either way, just remember there are two threads handling requests.

At this point the whole registration flow is clear: MediaService first obtains a BpBinder, uses it to obtain the ServiceManager proxy BpServiceManager, and through that proxy calls the interface that adds the service to ServiceManager.

2. Obtaining a service

Let's use an example to see how a service is obtained:

IMediaDeathNotifier::getMediaPlayerService()
{
    ALOGV("getMediaPlayerService");
    Mutex::Autolock _l(sServiceLock);
    if (sMediaPlayerService == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        // obtain the BpServiceManager object
        sp<IBinder> binder;
        do { // query SM for the service's info; returns a binder
            binder = sm->getService(String16("media.player"));
            if (binder != 0) {
                break;
            }
            ALOGW("Media player service not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);

        if (sDeathNotifier == NULL) {
            sDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(sDeathNotifier);
        // interface_cast turns this binder into a BpMediaPlayerService.
        // Note that the binder itself is only used to talk to the binder
        // device; it has nothing to do with IMediaPlayerService's own
        // functionality. BpMediaPlayerService uses this binder to
        // communicate with BnMediaPlayerService.
        sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
    }
    ALOGE_IF(sMediaPlayerService == 0, "no media player service!?");
    return sMediaPlayerService;
}

We know that defaultServiceManager() returns a BpServiceManager object, so next let's look at getService,
located in frameworks\native\libs\binder\IServiceManager.cpp:

virtual sp<IBinder> getService(const String16& name) const
    {
        unsigned n;
        for (n = 0; n < 5; n++){
            sp<IBinder> svc = checkService(name);
            if (svc != NULL) return svc;
            ALOGI("Waiting for service %s...\n", String8(name).string());
            sleep(1);
        }
        return NULL;
    }

virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor()); // write the interface descriptor
        data.writeString16(name);
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();
    }

getService returns a BpBinder object, and once again we run into interface_cast:

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

Looking inside IMediaPlayerService:

class IMediaPlayerService: public IInterface
{
public:
    DECLARE_META_INTERFACE(MediaPlayerService);
};
#define DECLARE_META_INTERFACE(INTERFACE)                               \
    static const android::String16 descriptor;                          \
    static android::sp<I##INTERFACE> asInterface(                       \
            const android::sp<android::IBinder>& obj);                  \
    virtual const android::String16& getInterfaceDescriptor() const;    \
    I##INTERFACE();                                                     \
    virtual ~I##INTERFACE();                                            \

#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const android::String16 I##INTERFACE::descriptor(NAME);             \
    const android::String16&                                            \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    android::sp<I##INTERFACE> I##INTERFACE::asInterface(                \
            const android::sp<android::IBinder>& obj)                   \
    {                                                                   \
        android::sp<I##INTERFACE> intr;                                 \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }

Substituting MediaPlayerService for INTERFACE, the macros expand to:

// DECLARE_META_INTERFACE(MediaPlayerService) expands to:
static const android::String16 descriptor;
static android::sp<IMediaPlayerService> asInterface(
        const android::sp<android::IBinder>& obj);
virtual const android::String16& getInterfaceDescriptor() const;
IMediaPlayerService();
virtual ~IMediaPlayerService();

// IMPLEMENT_META_INTERFACE(MediaPlayerService, NAME) expands to:
const android::String16 IMediaPlayerService::descriptor(NAME);
const android::String16&
        IMediaPlayerService::getInterfaceDescriptor() const {
    return IMediaPlayerService::descriptor;
}
android::sp<IMediaPlayerService> IMediaPlayerService::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<IMediaPlayerService> intr;
    if (obj != NULL) {
        intr = static_cast<IMediaPlayerService*>(
            obj->queryLocalInterface(
                    IMediaPlayerService::descriptor).get());
        if (intr == NULL) {
            intr = new BpMediaPlayerService(obj);
        }
    }
    return intr;
}
IMediaPlayerService::IMediaPlayerService() { }
IMediaPlayerService::~IMediaPlayerService() { }

So a BpMediaPlayerService object is returned. Let's look at its constructor:

BpMediaPlayerService(const sp<IBinder>& impl)
        : BpInterface<IMediaPlayerService>(impl)
    {
    }

template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)
{
}

Here mRemote is assigned once again, and its value is the BpBinder.
So in the end IMediaDeathNotifier::getMediaPlayerService() returns a BpMediaPlayerService object,
and the service can then be used.

Hehe, DONE.
A painful few days!
