Binder Series (): addService Code Analysis

1 Overview

The addService flow involves three modules:

  1. the Service: it registers itself with the ServiceManager by calling SM's addService interface; in this exchange it is essentially a client;
  2. the SM (ServiceManager): it receives the service's registration request; it is essentially the server;
  3. the Binder Driver: it moves the data between client and server and builds the key data structures in the kernel.

2 Client(MediaPlayerService)

Most write-ups online use MediaPlayerService as the example, so this article analyzes it as well.

2.1 MediaPlayerService

frameworks/av/media/mediaserver/main_mediaserver.cpp

int main(int argc __unused, char **argv __unused)   
{                                                   
    signal(SIGPIPE, SIG_IGN);                       
                                                    
    sp<ProcessState> proc(ProcessState::self());      -------------1
    sp<IServiceManager> sm(defaultServiceManager());   -------------2
    ALOGI("ServiceManager: %p", sm.get());          
    InitializeIcuOrDie();                           
    MediaPlayerService::instantiate();              -------------3
    ResourceManagerService::instantiate();          
    registerExtensions();                           
    ProcessState::self()->startThreadPool();        
    IPCThreadState::self()->joinThreadPool();       
}                                                   
  1. Get this process's ProcessState, a process-level data structure with exactly one instance per process; see TODO for more on ProcessState;
  2. Get the BpServiceManager corresponding to ServiceManager; see TODO for more on defaultServiceManager;
  3. Call MediaPlayerService::instantiate, shown below:

frameworks/av/media/libmediaplayerservice/MediaPlayerService.cpp

void MediaPlayerService::instantiate() {                        
    defaultServiceManager()->addService(                        
            String16("media.player"), new MediaPlayerService());  // this one line carries a lot of information
}                                                               

BpServiceManager::addService is then called:

 virtual status_t addService(const String16& name, const sp<IBinder>& service,
         bool allowIsolated)                                                     
 {                                                                               
     Parcel data, reply;                                                         
     data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());    //"android.os.IServiceManager"   
     data.writeString16(name);           //"media.player"                                        
     data.writeStrongBinder(service);   //  MediaPlayerService                                         
     data.writeInt32(allowIsolated ? 1 : 0);                                     
     status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);   //code: ADD_SERVICE_TRANSACTION
     return err == NO_ERROR ? reply.readExceptionCode() : err;                   
 }                                                                                                                                                     

2.1.1 BpServiceManager

One point worth noting is BpServiceManager's inheritance hierarchy, shown below:

                                -->IServiceManager-->IInterface
 BpServiceManager-->BpInterface
                                -->BpRefBase: mRemote(BpBinder)

This inheritance scheme is quite clever: on one hand BpServiceManager works as the interface, so remote interface calls can be made through it; on the other hand it holds SM's remote binder, i.e. the BpBinder. RPC is built on these two points. So:

  1. BpServiceManager::addService overrides IServiceManager's addService;
  2. mRemote in BpRefBase holds the BpBinder that was new'ed inside defaultServiceManager(), so remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply) ends up calling BpBinder::transact(…). A trimmed skeleton of these classes follows this list.
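For orientation, a heavily trimmed skeleton of the proxy-side classes; this is a sketch based on the usual frameworks/native definitions, not a verbatim copy, and most members are omitted:

class IServiceManager : public IInterface {
public:
    // pure-virtual interface method that BpServiceManager overrides
    virtual status_t addService(const String16& name,
                                const sp<IBinder>& service,
                                bool allowIsolated = false) = 0;
};

class BpRefBase : public virtual RefBase {
protected:
    inline IBinder* remote() { return mRemote; }   // the BpBinder saved at construction
private:
    IBinder* const mRemote;
};

template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase { /* ... */ };

// interface + remote IBinder, which together make RPC possible
class BpServiceManager : public BpInterface<IServiceManager> {
    // addService() packs a Parcel and calls remote()->transact(...)
};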

2.1.2 new MediaPlayerService()

The MediaPlayerService created here is the second argument to addService (and is what eventually gets written into the Parcel handed to BpBinder::transact(…)). Its inheritance is just as clever:

                                                      -->IMediaPlayerService
MediaPlayerService-->BnMediaPlayerService->BnInterface
                                                      -->BBinder

MediaPlayerService combines the Binder data structure (BBinder) with the interface class it provides (IMediaPlayerService).
To sum up the pattern BBinder and BpBinder follow in these data structures:
Both BBinder and BpBinder inherit from IBinder; the former is the local object, the latter the remote one.

  1. For a Binder obtained from a remote process, the data structure carries local implementations of the remote service's methods plus the matching BpBinder; those overridden local methods are what actually talk to the remote service;
  2. When registering a service, the service's interface class is combined with a BBinder before transact is called;

In one sentence: remote or local, you always need both the IBinder data structure (to find it) and the interface (to use it).
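The native side mirrors the proxy side; a trimmed sketch, again based on the usual AOSP definitions rather than a verbatim copy:

class IMediaPlayerService : public IInterface { /* create(), ... */ };

template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder { /* ... */ };

class BnMediaPlayerService : public BnInterface<IMediaPlayerService> {
public:
    // BBinder::transact() calls onTransact(), which unpacks the Parcel and
    // dispatches on `code` to the real IMediaPlayerService implementation
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags = 0);
};

class MediaPlayerService : public BnMediaPlayerService { /* the real service */ };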

2.2 addService

Back to addService; it mainly does two things:

  1. prepare the Parcel data;
  2. call BpBinder::transact(…).

2.2.1 Packing data into a Parcel

For an analysis of Parcel itself, see binder_context_mgr_node. The data written here is:

data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());    //"android.os.IServiceManager"   
data.writeString16(name);           //"media.player"                                        
data.writeStrongBinder(service);   //  MediaPlayerService
data.writeInt32(allowIsolated ? 1 : 0);  // 0       

The data falls into two kinds:

  1. ordinary data, such as SM's interface token, MediaPlayerService's name, int values, and so on;
  2. an IBinder, namely the MediaPlayerService passed in; IBinder data has to be flattened, which is what writeStrongBinder does:
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{                                                           
    return flatten_binder(ProcessState::self(), val, this); 
}                                                           
 status_t flatten_binder(const sp<ProcessState>& /*proc*/,
     const sp<IBinder>& binder, Parcel* out)
 {                                                                                         
     flat_binder_object obj;      											------------------  1                                                   
                                                                                           
     if (IPCThreadState::self()->backgroundSchedulingDisabled()) {                         
         /* minimum priority for all nodes is nice 0 */                                    
         obj.flags = FLAT_BINDER_FLAG_ACCEPTS_FDS;                                         
     } else {                                                                              
         /* minimum priority for all nodes is MAX_NICE(19) */                              
         obj.flags = 0x13 | FLAT_BINDER_FLAG_ACCEPTS_FDS;                                  
     }                                                                                     
                                                                                           
     if (binder != NULL) {                                                                 
         IBinder *local = binder->localBinder();                     ------------------- 2                      
         if (!local) {                                                                     
......                                                  
         } else {                                                                          
             obj.type = BINDER_TYPE_BINDER;                  -------------------3                               
             obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
             obj.cookie = reinterpret_cast<uintptr_t>(local);
         }                                                                                 
     } else {                                                                              
......                                                           
     }                                                                                     
                                                                                           
     return finish_flatten_binder(binder, obj, out);     		--------------------4                                  
 }                                                                                         
  1. Define a flat_binder_object; the flattened IBinder is stored in it;
  2. The IBinder being registered is a BBinder, so localBinder() returns non-NULL;
  3. A local Binder gets type BINDER_TYPE_BINDER; the binder field stores the address of MediaPlayerService's weak-reference object, and cookie stores the address of the MediaPlayerService (BBinder) itself.
  4. finish_flatten_binder writes the flat_binder_object into the Parcel.
    Once everything is written, the data in the Parcel should look roughly as sketched below:
    (Figure 1: layout of the data in the Parcel)
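In place of the figure, a rough sketch of the Parcel contents after the four writes (offsets and sizes are illustrative, not exact):

// Parcel::mData
//   [ strict-mode policy + "android.os.IServiceManager" ]    <- writeInterfaceToken
//   [ "media.player" ]                                        <- writeString16(name)
//   [ flat_binder_object { type   = BINDER_TYPE_BINDER,       <- writeStrongBinder(service)
//                          binder = weak refs of the BBinder,
//                          cookie = address of the BBinder } ]
//   [ int32 0 ]                                               <- writeInt32(allowIsolated)
//
// Parcel::mObjects
//   [ offset of the flat_binder_object within mData ]         <- consumed later by the driver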

2.2.2 BpBinder::transact(…)

        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);

mHandle is 0; it was the constructor argument when this BpBinder was new'ed, because it is the BpBinder for ServiceManager, and SM's well-known handle is 0. flags defaults to 0 as well, i.e. this is not a ONE_WAY call.
Next, IPCThreadState::self()->transact is called.
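Where that handle-0 BpBinder comes from: a condensed sketch assuming the classic defaultServiceManager() in IServiceManager.cpp (retry loop and locking dropped):

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager == NULL) {
        // getContextObject(NULL) boils down to getStrongProxyForHandle(0),
        // which creates new BpBinder(0); handle 0 is reserved for ServiceManager
        gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
    }
    return gDefaultServiceManager;
}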

2.3 IPCThreadState::transact

status_t IPCThreadState::transact(int32_t handle,                                      
                                  uint32_t code, const Parcel& data,                   
                                  Parcel* reply, uint32_t flags)                       
{                                                                                      
    status_t err = data.errorCheck();                                                  
                                                                                       
    flags |= TF_ACCEPT_FDS;                                                            
                                 
    if (err == NO_ERROR) {                                                             
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),              
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");                     
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);   ----------------------1
    }                                                                                  
                                                                                       
    if (err != NO_ERROR) {                                                             
        if (reply) reply->setError(err);                                               
        return (mLastError = err);                                                     
    }                                                                                  
                                                                                       
    if ((flags & TF_ONE_WAY) == 0) {                                                                                                                         
        if (reply) {                                                                   
            err = waitForResponse(reply);               ------------------------2                               
        } else {                                                                       
            Parcel fakeReply; 
            err = waitForResponse(&fakeReply);                            
         }                                                                                                                                                                                    
     } else {                                                              
         err = waitForResponse(NULL, NULL);                                
     }                                                                     
                                                                           
     return err;                                                           
 }                                                                                                                                                       
  1. writeTransactionData wraps the data to be transferred one more time;
  2. waitForResponse does more than wait for the result; the actual communication with the binder driver also happens inside it.

2.3.1 writeTransactionData

IPCThreadState::transact is the last user-space step before entering the driver, so this is where the Binder Command (the BC_* macros), here BC_TRANSACTION, gets attached.
writeTransactionData builds a binder_transaction_data structure, initializing it with the transaction-related data, including the Parcel data prepared earlier; it then writes BC_TRANSACTION followed by the binder_transaction_data into IPCThreadState's mOut member, which is itself a Parcel that is initialized when the IPCThreadState is constructed. After writeTransactionData the data is wrapped as shown below:

(Figure 2: data layout after writeTransactionData)
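A condensed writeTransactionData, based on the usual IPCThreadState.cpp (error paths and the status-buffer branch dropped):

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0;
    tr.target.handle = handle;                 // 0, i.e. ServiceManager
    tr.code = code;                            // ADD_SERVICE_TRANSACTION
    tr.flags = binderFlags;
    tr.data_size = data.ipcDataSize();         // size of Parcel::mData
    tr.data.ptr.buffer = data.ipcData();       // -> Parcel::mData
    tr.offsets_size = data.ipcObjectsCount() * sizeof(binder_size_t);
    tr.data.ptr.offsets = data.ipcObjects();   // -> Parcel::mObjects

    mOut.writeInt32(cmd);                      // BC_TRANSACTION
    mOut.write(&tr, sizeof(tr));               // followed by the transaction data

    return NO_ERROR;
}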

2.3.2 waitForResponse

Understand it in two parts (a condensed waitForResponse is sketched right below):

  1. talkWithDriver communicates with the binder driver;
  2. act on whatever the binder driver returns.
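A condensed waitForResponse; the real function handles more BR_* codes and error cases, so treat this as a sketch:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    status_t err = NO_ERROR;

    while (1) {
        if ((err = talkWithDriver()) < NO_ERROR) break;   // 1. talk to the driver
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();                  // 2. handle what came back
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;    // oneway: nothing more to wait for
            break;
        case BR_REPLY:
            // copy the binder_transaction_data out of mIn and hand its buffer
            // to *reply (Parcel::ipcSetDataReference in the real code)
            goto finish;
        default:
            err = executeCommand(cmd);                    // BR_NOOP, BR_SPAWN_LOOPER, ...
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
finish:
    if (err != NO_ERROR && reply) reply->setError(err);
    return err;
}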

talkWithDriver

 status_t IPCThreadState::talkWithDriver(bool doReceive)                            
 {                                                                                                                                           
     binder_write_read bwr;                                                         
                                                                                    
     // Is the read buffer empty?                                                   
     const bool needRead = mIn.dataPosition() >= mIn.dataSize();                    
                                                                                    
     // We don't want to write anything if we are still reading                     
     // from data left in the input buffer and the caller                           
     // has requested to read the next data.                                        
     const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;        
                                                                                    
     bwr.write_size = outAvail;
     bwr.write_buffer = (uintptr_t)mOut.data();

     bwr.write_consumed = 0;
     bwr.read_consumed = 0;
     status_t err;
     do {                                                                                                                               
         if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)     --------------2           
             err = NO_ERROR;                                                          
         else                                                                         
             err = -errno;                                                                                                                  
     } while (err == -EINTR);    
           
	   if (err >= NO_ERROR) {                     ------------------3                        
	     if (bwr.write_consumed > 0) {                                  
	         if (bwr.write_consumed < mOut.dataSize())                  
	             mOut.remove(0, bwr.write_consumed);                    
	         else                                                       
	             mOut.setDataSize(0);                                   
	     }                                                              
	     if (bwr.read_consumed > 0) {                                   
	         mIn.setDataSize(bwr.read_consumed);                        
	         mIn.setDataPosition(0);                                    
	     }                                                                                                                    
	     return NO_ERROR;                                               
	 }                                                                  
                                                                    
 return err;   
 }                                                                                                            

Yet another structure, binder_write_read, is built here to wrap the data for this trip into the driver, as shown below:
(Figure 3: binder_write_read wrapping mOut)

  1. This call only writes, so the code that initializes the read side is omitted here. As a rule, if both the read and the write buffer contain data, the read is served first; my understanding is that this improves the client's responsiveness, since a client usually blocks waiting for data coming back from the driver;
  2. ioctl drops into the kernel with cmd BINDER_WRITE_READ, passing in the binder_write_read wrapped above (its layout is shown right after this list);
  3. the driver updates the consumed fields of binder_write_read, and the caller uses them to update mOut's and mIn's data size, position, and so on.
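For reference, the binder_write_read structure exchanged with the driver, as declared in the binder UAPI header (comments added here):

struct binder_write_read {
    binder_size_t    write_size;      // bytes available in write_buffer
    binder_size_t    write_consumed;  // bytes the driver has processed
    binder_uintptr_t write_buffer;    // -> mOut.data(): BC_* commands going down
    binder_size_t    read_size;       // bytes available in read_buffer
    binder_size_t    read_consumed;   // bytes the driver has filled in
    binder_uintptr_t read_buffer;     // -> mIn.data(): BR_* commands coming back
};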

3 Driver(Binder Driver)

  static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)                  
 {                                                                                                 
         int ret;                                                                                  
         struct binder_proc *proc = filp->private_data;                                            
         struct binder_thread *thread;                                                             
         unsigned int size = _IOC_SIZE(cmd);                                                       
         void __user *ubuf = (void __user *)arg;                                                   
                        
         thread = binder_get_thread(proc);          										---------------1                                               
                                                              
         switch (cmd) {                                                                            
         case BINDER_WRITE_READ:                                                                   
                 ret = binder_ioctl_write_read(filp, cmd, arg, &thread);                ----------------2           
                 if (ret)                                                                          
                         goto err;                                                                 
                 break;                                                                            
  1. A process may have several binder threads; a binder_thread in kernel space corresponds to one binder thread in user space. binder_get_thread looks up current->pid in the proc->threads red-black tree; if a matching binder_thread exists it is returned, otherwise a new binder_thread is created and inserted into the tree (a condensed sketch follows this list).
    About the PID lookup, some readers may wonder: shouldn't threads of the same process share one PID? Inside the kernel, current->pid is per task, i.e. per thread; the user-visible process id is the tgid. See: current->pid is the current process id, how can it be used for distinguishing threads?
  2. Among binder_ioctl_write_read's arguments, proc identifies the process and thread identifies the binder thread, which together pin down exactly which thread of which process initiated this communication.
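A condensed binder_get_thread, along the lines of the classic driver (locking, statistics and looper flags dropped; newer drivers split this into lookup/add helpers):

static struct binder_thread *binder_get_thread(struct binder_proc *proc)
{
    struct binder_thread *thread = NULL;
    struct rb_node *parent = NULL;
    struct rb_node **p = &proc->threads.rb_node;

    while (*p) {                                    /* search the rb-tree by current->pid */
        parent = *p;
        thread = rb_entry(parent, struct binder_thread, rb_node);
        if (current->pid < thread->pid)
            p = &(*p)->rb_left;
        else if (current->pid > thread->pid)
            p = &(*p)->rb_right;
        else
            return thread;                          /* this user thread already has one */
    }

    /* not found: first binder call from this thread, create and insert */
    thread = kzalloc(sizeof(*thread), GFP_KERNEL);
    if (thread == NULL)
        return NULL;
    thread->proc = proc;
    thread->pid = current->pid;                     /* kernel pid == user-space thread id */
    /* init of todo list, looper state, wait queue, etc. omitted */
    rb_link_node(&thread->rb_node, parent, p);
    rb_insert_color(&thread->rb_node, &proc->threads);
    return thread;
}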

3.1 binder_ioctl_write_read

This function handles the driver's write and read paths.

static int binder_ioctl_write_read(struct file *filp,                                      
                                unsigned int cmd, unsigned long arg,                       
                                struct binder_thread **threadp)                            
{                                                                                          
        int ret = 0;                                                                       
        int thread_pid = (*threadp)->pid;                                                  
        struct binder_proc *proc = filp->private_data;                                     
        unsigned int size = _IOC_SIZE(cmd);                                                
        void __user *ubuf = (void __user *)arg;              // arg is the user-space binder_write_read
        struct binder_write_read bwr;                                                      
                                                                                                                                
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {       // copy it into the kernel-space binder_write_read
                ret = -EFAULT;                                                             
                goto out;                                                                  
        }                                                                                  

        if (bwr.write_size > 0) {                                                          
                ret = binder_thread_write(proc, *threadp,                                  
                                          bwr.write_buffer,                                
                                          bwr.write_size,                                  
                                          &bwr.write_consumed);                            
                trace_binder_write_done(ret);                                                                                                              
        }  
        ...
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {          
                 ret = -EFAULT;                                
                 goto out;                                     
         }                                                     
 out:
         return ret;
 }

Here bwr.write_buffer points at user space's IPCThreadState::mOut. binder_thread_read is omitted here. The copy_to_user at the end writes the structure back; looking at the code, the only binder_write_read fields that change are the consumed members, so in effect it just updates the user-space consumed values. Straight on to binder_thread_write:

3.2 binder_thread_write

 static int binder_thread_write(struct binder_proc *proc,               
                         struct binder_thread *thread,                  
                         binder_uintptr_t binder_buffer, size_t size,   
                         binder_size_t *consumed)                       
 {                                                                      
         uint32_t cmd;                                                  
         struct binder_context *context = proc->context;                
          void __user *buffer = (void __user *)(uintptr_t)binder_buffer;  // corresponds to user space IPCThreadState::mOut
          void __user *ptr = buffer + *consumed;        // skip to the unprocessed data
          void __user *end = buffer + size;
           // mOut may hold several CMD + binder_transaction_data pairs, hence the while loop
          while (ptr < end && thread->return_error.cmd == BR_OK) {
                  if (get_user(cmd, (uint32_t __user *)ptr))             // cmd is BC_TRANSACTION here
                          return -EFAULT;
                  ptr += sizeof(uint32_t);        // advance the data pointer
                 trace_binder_command(cmd);                             
                 if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {      
                         atomic_inc(&binder_stats.bc[_IOC_NR(cmd)]);    
                         atomic_inc(&proc->stats.bc[_IOC_NR(cmd)]);     
                         atomic_inc(&thread->stats.bc[_IOC_NR(cmd)]);   
                 }                                                      
                 switch (cmd) {      
                 case BC_TRANSACTION:                              
                 case BC_REPLY: {                                  
                         struct binder_transaction_data tr;        
                                                                   
                         if (copy_from_user(&tr, ptr, sizeof(tr)))   // after the CMD, copy the binder_transaction_data
                                 return -EFAULT;
                         ptr += sizeof(tr);        // advance the data pointer again
                         binder_transaction(proc, thread, &tr,
                                            cmd == BC_REPLY, 0);
                         break;
                 }
                 }
                 *consumed = ptr - buffer;  // ptr points past the processed data; minus the start address = bytes consumed
         }
         return 0;
 }

3.3 binder_transaction

3.3.1 The binder "transaction" and the "one copy"

static void binder_transaction(struct binder_proc *proc,                     
                               struct binder_thread *thread,                 
                               struct binder_transaction_data *tr, int reply,
                               binder_size_t extra_buffers_size)             
{                                                                            
                                                                 
        struct binder_transaction *t;                                        
        struct binder_work *tcomplete;                                       
        binder_size_t *offp, *off_end, *off_start;                                                                  
        struct binder_proc *target_proc;                                     
        struct binder_thread *target_thread = NULL;                          
        struct binder_node *target_node = NULL;                              
        struct binder_ref *target_ref = NULL;                                                          
        struct binder_context *context = proc->context;                      
         if (reply) {
         ...
         } else {                                                                                       
                 if (tr->target.handle) {                                                               
					...                                          
                 } else {                                                                               
                         target_node = context->binder_context_mgr_node;                                                                                                      
                 }                                                                                                                                                  
                 target_proc = target_node->proc;  

addService talks to SM, so the handle is 0 and we go straight to binder_context_mgr_node; from that binder_node we then get the corresponding binder_proc.

                 binder_proc_lock(thread->proc, __LINE__);                                                                                   
                 if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {                                                               
                         struct binder_transaction *tmp;                                                                                                                                                                                                                 
                         tmp = thread->transaction_stack;                                                                                    
                         if (tmp->to_thread != thread) {                                                                                     
                                 binder_user_error(...);
                                 ...                                                                                                                                                      
                         }                                                                                                                   
                         while (tmp) {                                                                                                       
                                 if (tmp->from && tmp->from->proc == target_proc)                                                            
                                         target_thread = tmp->from;                                                                          
                                 tmp = tmp->from_parent;                                                                                     
                         }                                                                                                                   
                 }                                                                                                                           
                 binder_proc_unlock(thread->proc, __LINE__);                                                                                 
         }        

This block of code deserves study: it exists to reuse binder threads and avoid spawning unnecessary extra threads when binder calls recurse back and forth between the two sides. A detailed analysis is left as TODO.

         /* TODO: reuse incoming transaction for reply */                                                            
         t = kzalloc(sizeof(*t), GFP_KERNEL);                          ----------1                                                                                                                                   
         binder_stats_created(BINDER_STAT_TRANSACTION);                                                              
                                                                                                                     
         tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);         -----------2                                               
                                                                                              
         binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);                                                     
                                                                                                                     
         t->debug_id = ++binder_last_id;                                                                             
                                                                        
         if (!reply && !(tr->flags & TF_ONE_WAY))       
                 t->from = thread; 				-------------3                     
         else                                           
                 t->from = NULL;                                                                                                                                             
         t->sender_euid = task_euid(proc->tsk);      ----------------4                           
         t->to_proc = target_proc;                                              
         t->to_thread = target_thread;                                          
         t->code = tr->code;                                                    
         t->flags = tr->flags;                                                  
         if (!(t->flags & TF_ONE_WAY)) {                                        
                 t->priority.sched_policy = current->policy;                    
                 t->priority.prio = current->normal_prio;                       
         } else {                                                               
                 /* Oneway transactions run at default priority of the target */
                 t->priority = target_proc->default_priority;                   
         }                                                                      
         t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,   --------5
                 tr->offsets_size, extra_buffers_size,                          
                 !reply && (t->flags & TF_ONE_WAY));                                                                                  
         t->buffer->allow_user_free = 0;                                        
         t->buffer->debug_id = t->debug_id;                                     
         t->buffer->transaction = t;                                            
         t->buffer->target_node = target_node;                                  
         trace_binder_transaction_alloc_buf(t->buffer);                         
         if (target_node) {                                                     
                 binder_inc_node(target_node, 1, 0, NULL);            ------------------6          
                 if (target_ref)                                                
                         binder_put_ref(target_ref);                            
                 target_ref = NULL;                                             
         }   
  1. Allocate the binder_transaction structure;
  2. Allocate a binder_work structure, which will be sent back to the requesting side;
  3. If this is neither a reply nor a oneway call, record the requesting binder thread, i.e. the thread this binder_transaction originates from. The rest of the binder_transaction is then initialized; to_proc and to_thread are the fields worth noting;
  4. For a non-ONE_WAY transaction, the scheduling policy and priority are set to match the current thread;
  5. Allocate tr->data_size bytes (plus room for the offsets) from the target binder_proc's allocator; tr->data_size is essentially Parcel::mDataSize;
  6. Increment the binder_node's strong reference count.
         off_start = (binder_size_t *)(t->buffer->data +                 ----------1                               
                                       ALIGN(tr->data_size, sizeof(void *)));                   
         offp = off_start;                                                                      
                                                                                                
         if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)  -----------2                   
                            tr->data.ptr.buffer, tr->data_size)) {                              
			...                                                   
         }                                                                                      
         if (copy_from_user(offp, (const void __user *)(uintptr_t)     ---------3                         
                            tr->data.ptr.offsets, tr->offsets_size)) {                          
				...                                                   
         }                                                                                      
         off_end = (void *)off_start + tr->offsets_size;              ----------3                                                    
         sg_bufp = (u8 *)(PTR_ALIGN(off_end, sizeof(void *)));                                                                
         sg_buf_end = sg_bufp + extra_buffers_size;                                                                           
         off_min = 0;                                                                                                             
  1. Copy the user-space tr->data.ptr.buffer, i.e. Parcel::mData, into the kernel-space t->buffer->data (the memory just allocated from target_proc). Put more plainly: still in the requesting process's context, the requester's data is copied into the target (server-side) process's target_proc. This is the famous "one copy".
  2. Copy over the user-space binder_transaction_data's data.ptr.offsets. This is an array of offsets (in fact Parcel::mObjects); each entry is the offset of one flat_binder_object within Parcel::mData, which is simply the Parcel's mDataPos at the moment that flat_binder_object was written. How big is the array? binder_transaction_data's offsets_size records that, and tr was copied in earlier, so off_start + tr->offsets_size gives the end of the offsets array.

Now look back at marker 1: off_start is the start address of that array, which means the array sits right after t->buffer->data + tr->data_size, as sketched below:
(Figure 4: layout of t->buffer->data and the offsets array)
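In place of the figure, a rough sketch of the kernel buffer after the two copy_from_user calls (alignment details simplified):

//  t->buffer->data
//  +---------------------------------------------+
//  | copy of Parcel::mData (tr->data_size bytes) |
//  |   ... flat_binder_object at offset X ...    |
//  +---------------------------------------------+  <- off_start = data + ALIGN(data_size)
//  | copy of Parcel::mObjects                    |
//  |   [ X ]   (tr->offsets_size bytes total)    |
//  +---------------------------------------------+  <- off_end = off_start + offsets_size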

          for (; offp < off_end; offp++) {                                                                           
                 struct binder_object_header *hdr;                                                                            
                 size_t object_size = binder_validate_object(t->buffer, *offp);                                               
                                                                                                                              
                 if (object_size == 0 || *offp < off_min) {                                                                   
                         binder_user_error(...);                                                                                       
                 }                                                                                                            
                                                                                                                              
                 hdr = (struct binder_object_header *)(t->buffer->data + *offp);          -------------1                                    
                 off_min = *offp + object_size;                                                                               
                 switch (hdr->type) {
                 case BINDER_TYPE_BINDER:                                          
                 case BINDER_TYPE_WEAK_BINDER: {                                   
                         struct flat_binder_object *fp;                            
                                                                                   
                         fp = to_flat_binder_object(hdr);                    ---------------------1       
                         ret = binder_translate_binder(fp, t, thread);                            ---------------2                 
                 } break;                                                          
                 case BINDER_TYPE_HANDLE:                                          
                 case BINDER_TYPE_WEAK_HANDLE: {                                   
					...     
                 } break;                                                                                                                              
                 case BINDER_TYPE_FD: {                                            
					...                                  
                 } break;
                 ......
                 default:                                                                           
					...                                             
                 }                                                                                         
  1. At this point all the user-space flat_binder_objects are reachable; the loop processes each one in turn. addService carries a local Binder, so the type is BINDER_TYPE_BINDER;
  2. binder_translate_binder is called with the binder_transaction being transferred, the flat_binder_object, and the (requesting side's) thread.

3.3.2 binder_translate_binder

static int binder_translate_binder(struct flat_binder_object *fp,              
                                   struct binder_transaction *t,               
                                   struct binder_thread *thread)               
{                                                                              
        struct binder_node *node;                                              
        struct binder_ref *ref;                                                
        struct binder_proc *proc = thread->proc;                               
        struct binder_proc *target_proc = t->to_proc;                          
                                                                               
        node = binder_get_node(proc, fp->binder);             				 ------------1               
        if (!node) {                                                           
                s8 priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;      
                int sched_policy =                                             
                        (fp->flags & FLAT_BINDER_FLAG_SCHED_POLICY_MASK) >>    
                        FLAT_BINDER_FLAG_SCHED_POLICY_SHIFT;                   
                node = binder_new_node(proc, fp->binder, fp->cookie);          ------------2
                if (!node)                                                     
                        return -ENOMEM;                                        
                                                                               
                binder_proc_lock(node->proc, __LINE__);                        
                node->sched_policy = sched_policy;                             
                node->min_priority = to_kernel_prio(sched_policy, priority);   
                node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
                binder_proc_unlock(node->proc, __LINE__);                      
        }        
         ref = binder_get_ref_for_node(target_proc, node, &thread->todo);         ------------3  
         if (!ref) {                                                                
                 binder_put_node(node);                                             
                 return -EINVAL;                                                    
         }                                                                          
                                                                                    
         if (fp->hdr.type == BINDER_TYPE_BINDER)                                   -------------------4            
                 fp->hdr.type = BINDER_TYPE_HANDLE;                                 
         else                                                                       
                 fp->hdr.type = BINDER_TYPE_WEAK_HANDLE;                            
         fp->binder = 0;                                                            -----------------------5
         fp->handle = ref->desc;                                                    
         fp->cookie = 0;                                                            
         binder_inc_ref(ref, fp->hdr.type == BINDER_TYPE_HANDLE, &thread->todo);    
                                                                                    
         trace_binder_transaction_node_to_ref(t, node, ref);                                                         
         binder_put_ref(ref);                                                       
         binder_put_node(node);                                                     
         return 0;                                                                  
 }                                                                                                                                                
  1. binder_get_node searches the requesting binder_proc's nodes tree using the flat_binder_object's binder pointer; fp->binder got its value in Parcel's flatten_binder from obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
  2. If no node is found, a new binder_node is created and initialized;
  3. With that node in hand, target_proc's refs_by_node red-black tree is searched to see whether target_proc already has a binder_ref for it; if so it is returned, otherwise a new binder_ref is created (a condensed sketch follows this list).
    The interesting field is new_ref->desc: the binder_ref for binder_context_mgr_node gets desc 0 in every process, while all other binder_refs get desc values that increase by 1 in creation order. In other words, the same binder_node may well end up with different binder_ref->desc values in different binder_procs. Also, a binder_ref has two rb_nodes, one for each of binder_proc's refs_by_node and refs_by_desc trees, so a freshly created binder_ref must be inserted into both trees.
    To sum up: a local binder inside a process, i.e. a BBinder, maps to one binder_node in that process's binder_proc nodes tree. On the remote side it becomes a BpBinder, represented in the remote process's binder_proc by nodes in both the refs_by_node and refs_by_desc trees.
  4. Change the flat_binder_object's type to a HANDLE type: whether the requester's flat_binder_object was local or remote, it is remote from the target's point of view;
  5. fp->binder and fp->cookie are both cleared. This shows that fp->binder exists only to look up the matching binder_node in the current binder_proc, and fp->cookie is only used as a parameter when a missing binder_node has to be created. The flat_binder_object's handle now holds the target side's binder_ref->desc, which, as noted above, may differ from one target process to another.
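A condensed sketch of how binder_get_ref_for_node assigns desc and links both trees, modeled on the classic driver (this article's kernel adds locking and extra parameters, so treat this as an approximation):

static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
                                                  struct binder_node *node)
{
    struct rb_node *n;
    struct rb_node **p = &proc->refs_by_node.rb_node;
    struct rb_node *parent = NULL;
    struct binder_ref *ref, *new_ref;

    while (*p) {                                   /* does this proc already reference node? */
        parent = *p;
        ref = rb_entry(parent, struct binder_ref, rb_node_node);
        if (node < ref->node)
            p = &(*p)->rb_left;
        else if (node > ref->node)
            p = &(*p)->rb_right;
        else
            return ref;
    }

    new_ref = kzalloc(sizeof(*new_ref), GFP_KERNEL);
    if (new_ref == NULL)
        return NULL;
    new_ref->proc = proc;
    new_ref->node = node;
    rb_link_node(&new_ref->rb_node_node, parent, p);          /* insert into refs_by_node */
    rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);

    /* desc 0 is reserved for the context manager; everyone else counts upward */
    new_ref->desc = (node == binder_context_mgr_node) ? 0 : 1;
    for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
        ref = rb_entry(n, struct binder_ref, rb_node_desc);
        if (ref->desc > new_ref->desc)
            break;
        new_ref->desc = ref->desc + 1;
    }

    p = &proc->refs_by_desc.rb_node;                           /* insert into refs_by_desc */
    parent = NULL;
    while (*p) {
        parent = *p;
        ref = rb_entry(parent, struct binder_ref, rb_node_desc);
        if (new_ref->desc < ref->desc)
            p = &(*p)->rb_left;
        else
            p = &(*p)->rb_right;
    }
    rb_link_node(&new_ref->rb_node_desc, parent, p);
    rb_insert_color(&new_ref->rb_node_desc, &proc->refs_by_desc);
    return new_ref;
}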

3.3.3 Send the transaction to the target process and wake it up

         t->work.type = BINDER_WORK_TRANSACTION;              	----------1                      
         tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE; 		----------2                      
         binder_enqueue_work(tcomplete, &thread->todo, __LINE__);    --------3
         oneway = !!(t->flags & TF_ONE_WAY);                                        
                                                                                    
         if (reply) {                                                               
			...
         } else if (!(t->flags & TF_ONE_WAY)) {                                    
                 BUG_ON(t->buffer->async_transaction != 0);                         
                 binder_proc_lock(thread->proc, __LINE__);                          
                 t->need_reply = 1;                                                 ---------------4
                 t->from_parent = thread->transaction_stack;                        
                 thread->transaction_stack = t;                                     
                 binder_proc_unlock(thread->proc, __LINE__);                        
                 binder_proc_lock(target_proc, __LINE__);                           
                 binder_proc_transaction(t, target_proc, target_thread);            
                 binder_proc_unlock(target_proc, __LINE__);                         
         } else { 
                  BUG_ON(target_node == NULL);                                   
                 BUG_ON(t->buffer->async_transaction != 1);                     
                                                                                
                 binder_proc_lock(target_node->proc, __LINE__);                 
                 /*                                                             
                  * Test/set of has_async_transaction                           
                  * must be atomic with enqueue on                              
                  * async_todo                                                  
                  */                                                            
                 if (target_node->has_async_transaction) {                      
                         binder_enqueue_work(&t->work, &target_node->async_todo,
                                             __LINE__);                         
                 } else {                                                       
                         target_node->has_async_transaction = 1;                
                         binder_proc_transaction(t, target_proc, NULL);         
                 }                                                              
                 binder_proc_unlock(target_node->proc, __LINE__);
             }
             return;
 }                                                                                                                                                                                                                                                                                                                                                                                                                                       
  1. Set the binder_transaction's work.type; this is what gets sent to target_proc;
  2. tcomplete, also a binder_work, is what gets sent back to the requesting side, i.e. the current thread;
  3. Queue tcomplete on the current thread's binder_worklist (todo list);
  4. If thread->transaction_stack already holds a binder_transaction, this communication is not a simple single request / single reply exchange (the wording here is loose); it may involve several threads or a back-and-forth between requester and target. In that case the previous binder_transaction (the one sitting in transaction_stack) is saved into from_parent, and the current binder_transaction becomes the new transaction_stack.

binder_proc_transaction

static void binder_proc_transaction(struct binder_transaction *t,              
                                    struct binder_proc *proc,                  
                                    struct binder_thread *thread)              
{                                                                              
        struct binder_worklist *target_list = NULL;                            
        wait_queue_head_t *target_wait = NULL;                                 
        bool oneway = !!(t->flags & TF_ONE_WAY);                               
                                                                               
        if (!thread) {                                                                      
                thread = binder_select_thread(proc);              -----------------1             
        }                                                                      
                                                                               
        if (thread) {                                                 ------------2
                target_list = &thread->todo;                                   
                target_wait = &thread->wait;                                   
                binder_transaction_priority(thread->task, t,                   
                                            t->buffer->target_node);           
        } else {                                                               
                target_list = &proc->todo;                                     
        }                                                                      
                                                                               
        binder_enqueue_work(&t->work, target_list, __LINE__);  -------3                  
                                                                               
        binder_wakeup_thread(proc, thread, !oneway /* sync */);   --------4     
}                                                                              
  1. If no target_thread was found earlier (see TODO), the thread argument here is NULL, and binder_select_thread picks an idle thread from binder_proc's waiting_threads;
  2. If an idle binder_thread was found, use its binder_worklist (todo list); otherwise fall back to binder_proc's todo list as target_list;
  3. This effectively queues the binder_transaction onto target_list, which may belong to the target thread or to the target proc;
  4. If step 1 found an idle thread, wake it up; otherwise binder_wakeup_poll_threads goes through binder_proc's threads looking for one with no binder_transaction and an empty waitlist to wake.

4 ServiceManager

After SM starts, it enters binder_loop, which loops forever: it issues the BINDER_WRITE_READ ioctl, reads data back from the driver, and processes it with binder_parse (a condensed binder_loop is sketched below). The thread may sleep in the driver during this time and is woken up whenever a client has a request. Two parts:

  1. read data from the binder driver;
  2. SM parses the data.
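A condensed binder_loop from frameworks/native/cmds/servicemanager/binder.c (error handling trimmed):

void binder_loop(struct binder_state *bs, binder_handler func)
{
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;                 // tell the driver this thread loops
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        ioctl(bs->fd, BINDER_WRITE_READ, &bwr);   // may block in binder_thread_read
        binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
    }
}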

4.1 binder_thread_read

As before, binder_ioctl_write_read first copies the user-space binder_write_read into kernel space. The non_block parameter is false here; see the flags used when ProcessState::open_driver opens /dev/binder.

Trying to sleep

 static int binder_thread_read(struct binder_proc *proc,                                  
                               struct binder_thread **threadp,                            
                               binder_uintptr_t binder_buffer, size_t size,               
                               binder_size_t *consumed, int non_block)                    
 {                                                                                        
         struct binder_thread *thread = *threadp;                                         
         void __user *buffer = (void __user *)(uintptr_t)binder_buffer;                   
         void __user *ptr = buffer + *consumed;                                           
         void __user *end = buffer + size;                                                
         struct binder_worklist *wlist = NULL;                                            
         int ret = 0;                                                                     
         bool wait_for_proc_work;                                                         
                                                                                          
         if (*consumed == 0) {                    ----------------1                                         
                 if (put_user(BR_NOOP, (uint32_t __user *)ptr))                           
                         return -EFAULT;                                                  
                 ptr += sizeof(uint32_t);                                                 
         }                                                                                
                                                                                          
 retry:                                                                                   
         binder_proc_lock(proc, __LINE__);                                                
         wait_for_proc_work = binder_available_for_proc_work(thread); --------2                     
         binder_proc_unlock(proc, __LINE__);                                              
                                                                                          
         if (wait_for_proc_work)                                                          
                 atomic_inc(&proc->ready_threads);                                        
                                                                                          
         trace_binder_wait_for_work(wait_for_proc_work,                                   
                                    !!thread->transaction_stack,                          
                                    !binder_worklist_empty(&thread->todo));               
                                                                                          
         thread->looper |= BINDER_LOOPER_STATE_WAITING;                                   
                                                                                          
         if (wait_for_proc_work) {                                                        
                 BUG_ON(!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |              
                                            BINDER_LOOPER_STATE_ENTERED)));               
                 binder_set_priority(current, proc->default_priority,                     
                                     true /* restore */);                                 
         }                                                                                
                                                                                          
         if (non_block) {                                                                 
                 if (!binder_has_work(thread, wait_for_proc_work))                        
                         ret = -EAGAIN;                                                   
         } else {
                  ret = binder_wait_for_work(thread, wait_for_proc_work); --------------3
         }
         if (wait_for_proc_work)
                 atomic_dec(&proc->ready_threads);                                 
                                                                                   
         if (ret)                                                                  
                 return ret;                                                       
                                                                                   
         thread->looper &= ~BINDER_LOOPER_STATE_WAITING;                           
  1. On the first read, consumed is 0, so a BR_NOOP is written into binder_write_read.read_buffer first;
  2. binder_available_for_proc_work checks whether the current thread's transaction_stack and thread->todo are empty and checks the looper state; returning true means the thread is ready and idle, with no binder work of its own to handle;
  3. binder_wait_for_work: if the thread's todo list is not empty (it "has work"), execution continues; otherwise the thread sleeps until there is work. Along the way the ready_threads count and the looper state are updated.

Fetching the binder_work and binder_transaction

         while (1) {                                                             
                 uint32_t cmd;                                                   
                 struct binder_transaction_data tr;                              
                 struct binder_transaction *t = NULL;                            
                 struct binder_work *w = NULL;                                   
                 wlist = NULL;                                                   
                                                                                 
                 binder_proc_lock(thread->proc, __LINE__);                       
                 spin_lock(&thread->todo.lock);                                  
                 if (!_binder_worklist_empty(&thread->todo)) {                   -------------1
                         w = list_first_entry(&thread->todo.list,                
                                              struct binder_work,                
                                              entry);                            
                         wlist = &thread->todo;                                  
                         binder_freeze_worklist(wlist);                          
                 }                                                               
                 spin_unlock(&thread->todo.lock);                                
                 if (!w) {                                                       
                         spin_lock(&proc->todo.lock);                            
                         if (!_binder_worklist_empty(&proc->todo) &&             
                                         wait_for_proc_work) {                   
                                 w = list_first_entry(&proc->todo.list,          
                                                      struct binder_work,        
                                                      entry);                    
                                 wlist = &proc->todo;                            
                                 binder_freeze_worklist(wlist);                  
                         }                                                       
                         spin_unlock(&proc->todo.lock);                          
                         if (!w) {                                               
                                 binder_proc_unlock(thread->proc, __LINE__);     
                                 /* no data added */                             
                                 if (ptr - buffer == 4 &&                        
                                     !READ_ONCE(thread->looper_need_return))     
                                         goto retry;                             
                                 break;                                          
                         }                                                       
                 }                                                               
                 binder_proc_unlock(thread->proc, __LINE__);                     
                 if (end - ptr < sizeof(tr) + 4) {                               
                         if (wlist)                                              
                                 binder_unfreeze_worklist(wlist);                
                         break;                                                  
                 }                                                               
                                                                                 
                  switch (w->type) {                                                
                 case BINDER_WORK_TRANSACTION: {                                   -----------------2
                         t = container_of(w, struct binder_transaction, work);     
                 } break;                                                                                                     
  1. First try to take a binder_work (and its work list) from the binder_thread's todo list; if there is none, fall back to the binder_proc's todo list;
  2. From the binder_work, recover the enclosing binder_transaction. BINDER_WORK_TRANSACTION was set in binder_transaction() (see 3.3.3).
                 if (t->buffer->target_node) {                 -----------------------1                     
                         struct binder_node *target_node = t->buffer->target_node;
                                                                                  
                         tr.target.ptr = target_node->ptr;                        ------------------2
                         tr.cookie =  target_node->cookie;                        
                         /* Don't need a lock to check set_priority_called, since 
                          * the lock was held when pulling t of the workqueue,    
                          * and it hasn't changed since then                      
                          */                                                      
                         if (!t->set_priority_called)                             
                                 binder_transaction_priority(current, t,          
                                                             target_node);        
                         cmd = BR_TRANSACTION;                                    
                 } else {                                            
                         tr.target.ptr = 0;                                       
                         tr.cookie = 0;                                           
                         cmd = BR_REPLY;                                          
                 }                                                                
                 tr.code = t->code;                //ADD_SERVICE_TRANSACTION                               
                 tr.flags = t->flags;                                             
                 tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);   
                 if (t->from) {                                                              
                         struct task_struct *sender = t->from->proc->tsk;                    
                                                                                             
                         tr.sender_pid = task_tgid_nr_ns(sender,                             
                                                         task_active_pid_ns(current));       
                 } else {                                                                    
                         tr.sender_pid = 0;                                                  
                 }                                                                           
                                                                                             
                 tr.data_size = t->buffer->data_size;                                        
                 tr.offsets_size = t->buffer->offsets_size;                                  
                 tr.data.ptr.buffer = (binder_uintptr_t)                                     
                         ((uintptr_t)t->buffer->data +                                       
                         binder_alloc_get_user_buffer_offset(&proc->alloc));                 
                 tr.data.ptr.offsets = tr.data.ptr.buffer +                                  
                                         ALIGN(t->buffer->data_size,                         
                                             sizeof(void *));                                
                                                                                             
                 if (put_user(cmd, (uint32_t __user *)ptr)) {           ---------------3                     
                         binder_unfreeze_worklist(wlist);                                    
                         return -EFAULT;                                                     
                 }                                                                           
                 ptr += sizeof(uint32_t);                                                    
                 if (copy_to_user(ptr, &tr, sizeof(tr))) {                  ----------------4
                         binder_unfreeze_worklist(wlist);                                    
                         return -EFAULT;                                                     
                 }                                                                           
                 ptr += sizeof(tr);                                                             
  1. Here t->buffer->target_node is binder_context_mgr_node. In the binder_transaction function, a BC_TRANSACTION resolves a target_node, meaning a client is sending data to the server and the server has not handled it yet, so cmd is set to BR_TRANSACTION; for a BC_REPLY, target_node is NULL, meaning the server is returning data, so cmd is set to BR_REPLY.
  2. Build the binder_transaction_data. Here target_node is SM's binder_node: target_node->ptr corresponds to the weak ref of SM's IBinder and target_node->cookie to the pointer to SM's IBinder object. The operations that follow convert the binder_transaction back into a binder_transaction_data, i.e. the inverse of what the binder_transaction function did.
  3. Copy BR_TRANSACTION to user space;
  4. Copy the binder_transaction_data to user space at binder_write_read.read_buffer + read_consumed.
 done:                                                                           
         *consumed = ptr - buffer;                    // update consumed
         binder_proc_lock(thread->proc, __LINE__);                               
         if (proc->requested_threads +                        -----------------------1                   
                         atomic_read(&proc->ready_threads) == 0 &&               
                         proc->requested_threads_started < proc->max_threads &&  
                         (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |     
                                            BINDER_LOOPER_STATE_ENTERED))        
                     /* the user-space code fails to */                          
                     /* spawn a new thread if we leave this out */) {            
                 proc->requested_threads++;                                      
                 binder_proc_unlock(thread->proc, __LINE__);                     
                                                                                 
                 binder_debug(BINDER_DEBUG_THREADS,                              
                              "%d:%d BR_SPAWN_LOOPER\n",                         
                              proc->pid, thread->pid);                           
                 if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))  ------------2     
                         return -EFAULT;                                         
                 binder_stat_br(proc, thread, BR_SPAWN_LOOPER);                  
         } else                                                                  
                 binder_proc_unlock(thread->proc, __LINE__);                     
                                                                                 
         return 0;                                                               
 }                                                                               

Thread-related fields in binder_proc:

  • max_threads: the maximum number of non-main binder threads the process may start, set via BINDER_SET_MAX_THREADS;
  • requested_threads: the number of non-main threads requested but not yet started; incremented just before BR_SPAWN_LOOPER is sent, decremented in BC_REGISTER_LOOPER once user space has handled the request;
  • requested_threads_started: the number of non-main threads already started; incremented in BC_REGISTER_LOOPER after each BR_SPAWN_LOOPER;
  • ready_threads: the number of currently idle binder threads; incremented before a thread goes to sleep in binder_thread_read, decremented after it wakes up.
  1. If requested_threads and ready_threads are both 0, the process must be guaranteed a spare binder thread for incoming requests, so the driver sends BR_SPAWN_LOOPER to user space. (The follow-up flow of BR_SPAWN_LOOPER is left for TODO and is not analyzed in this article; the BC_REGISTER_LOOPER bookkeeping is sketched right below.)
  2. Note that BR_SPAWN_LOOPER is written with put_user to the very start of the read buffer (buffer, not ptr).
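As a reference for the counter bookkeeping above, the BC_REGISTER_LOOPER handler in binder_thread_write is the counterpart of BR_SPAWN_LOOPER. Roughly (a simplified sketch of the mainline kernel code; locking, debug output and exact error handling differ across versions):

 case BC_REGISTER_LOOPER:
         if (thread->looper & BINDER_LOOPER_STATE_ENTERED) {
                 /* a main looper thread must not also register as a spawned one */
                 thread->looper |= BINDER_LOOPER_STATE_INVALID;
         } else if (proc->requested_threads == 0) {
                 /* no BR_SPAWN_LOOPER request was outstanding */
                 thread->looper |= BINDER_LOOPER_STATE_INVALID;
         } else {
                 proc->requested_threads--;             /* the request has been satisfied */
                 proc->requested_threads_started++;     /* one more non-main thread is running */
         }
         thread->looper |= BINDER_LOOPER_STATE_REGISTERED;
         break;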

4.2 binder_parse

Parse the data read back from the binder driver.

 int binder_parse(struct binder_state *bs, struct binder_io *bio,   
                  uintptr_t ptr, size_t size, binder_handler func)     // func == svcmgr_handler
 {                                                                  
     int r = 1;                                                     
     uintptr_t end = ptr + (uintptr_t) size;                        
                                                                    
     while (ptr < end) {                        ------------1                    
         uint32_t cmd = *(uint32_t *) ptr;                          
         ptr += sizeof(uint32_t);      
         switch(cmd) {
         ...
         case BR_TRANSACTION: {                                                           
             struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;                                                 
             if (func) {                                                                  
                 unsigned rdata[256/4];                                                   
                 struct binder_io msg;                                                    
                 struct binder_io reply;                                                  
                 int res;                                                                 
                                                                                          
                 bio_init(&reply, rdata, sizeof(rdata), 4);                               
                 bio_init_from_txn(&msg, txn);                        ----------2                    
                 res = func(bs, txn, &msg, &reply);                  ----------3     
                 if (txn->flags & TF_ONE_WAY) {                                           
                     binder_free_buffer(bs, txn->data.ptr.buffer);                        
                 } else {                                                                 
                     binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);          ----------4
                 }                                                                        
             }                                                                            
             ptr += sizeof(*txn);                                                         
             break;                                                                       
         } 
         ...
         }
     }
     return r;
 }
  1. The data read back is a contiguous sequence of CMD + binder_transaction_data pairs, so it is parsed in a loop.
  2. Convert the binder_transaction_data into a binder_io:
void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{                                                                                 
    bio->data = bio->data0 = (char *)(intptr_t)txn->data.ptr.buffer;              // address of the Parcel data
    bio->offs = bio->offs0 = (binder_size_t *)(intptr_t)txn->data.ptr.offsets;    // offsets of the flat_binder_objects inside the Parcel data
    bio->data_avail = txn->data_size;                                             // size of the Parcel data
    bio->offs_avail = txn->offsets_size / sizeof(size_t);                         // offsets_size is in bytes; dividing by the entry size gives the number of flat_binder_objects
    bio->flags = BIO_F_SHARED;                                                    
}                                                                                 
  3. Call svcmgr_handler (see 4.3).
  4. Send the reply; the flow largely mirrors the send path, so it is not analyzed here (a reference sketch of binder_send_reply follows).
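For reference only, binder_send_reply (same frameworks/native/cmds/servicemanager/binder.c) batches a BC_FREE_BUFFER for the buffer that was just consumed together with a BC_REPLY carrying the reply binder_io, roughly as follows (sketch; details may vary across AOSP versions):

 void binder_send_reply(struct binder_state *bs,
                        struct binder_io *reply,
                        binder_uintptr_t buffer_to_free,
                        int status)
 {
     struct {
         uint32_t cmd_free;
         binder_uintptr_t buffer;
         uint32_t cmd_reply;
         struct binder_transaction_data txn;
     } __attribute__((packed)) data;

     data.cmd_free = BC_FREE_BUFFER;        // let the driver reclaim the transaction buffer
     data.buffer = buffer_to_free;
     data.cmd_reply = BC_REPLY;
     data.txn.target.ptr = 0;
     data.txn.cookie = 0;
     data.txn.code = 0;
     if (status) {
         data.txn.flags = TF_STATUS_CODE;   // report an error code instead of a payload
         data.txn.data_size = sizeof(int);
         data.txn.offsets_size = 0;
         data.txn.data.ptr.buffer = (uintptr_t)&status;
         data.txn.data.ptr.offsets = 0;
     } else {
         data.txn.flags = 0;
         data.txn.data_size = reply->data - reply->data0;                  // bytes actually written to the reply
         data.txn.offsets_size = ((char *)reply->offs) - ((char *)reply->offs0);
         data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
         data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
     }
     binder_write(bs, &data, sizeof(data));  // one BINDER_WRITE_READ carrying both commands
 }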

4.3 svcmgr_handler

     s = bio_get_string16(msg, &len);   //"android.os.IServiceManager"
     switch(txn->code) {                                                           
     case SVC_MGR_GET_SERVICE:                                                     
     case SVC_MGR_CHECK_SERVICE:                                                   
...                                                        
     case SVC_MGR_ADD_SERVICE:                   // corresponds to code ADD_SERVICE_TRANSACTION
         s = bio_get_string16(msg, &len);       //"media.player"                                   
         if (s == NULL) {                                                          
             return -1;                                                            
         }                                                                         
         handle = bio_get_ref(msg);                // the handle the driver created for MediaPlayerService's binder_node
         allow_isolated = bio_get_uint32(msg) ? 1 : 0;                             
         if (do_add_service(bs, s, len, handle, txn->sender_euid,                  
             allow_isolated, txn->sender_pid))                                     
             return -1;                                                            
         break;                                                                    
                                                                                   
     case SVC_MGR_LIST_SERVICES: {                                                 
...
     bio_put_uint32(reply, 0);
     return 0;                
 }                            
                                                  
  • Pull "android.os.IServiceManager" and "media.player" out of the Parcel data; the former is only checked as the interface token, while the latter is used to look up whether the service has already been added. bio_get_ref then extracts the handle the driver created for MediaPlayerService (see the sketch below);
  • do_add_service does the actual registration.
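For the handle extraction, bio_get_ref (same binder.c) walks the offsets array that bio_init_from_txn set up, pops the flat_binder_object sitting at the current read position and returns its handle; roughly (sketch based on the AOSP servicemanager code):

 static struct flat_binder_object *_bio_get_obj(struct binder_io *bio)
 {
     size_t n;
     size_t off = bio->data - bio->data0;   // current read offset inside the Parcel data

     for (n = 0; n < bio->offs_avail; n++) {
         if (bio->offs[n] == off)
             /* bio_get (not shown) returns the current data pointer and advances it */
             return bio_get(bio, sizeof(struct flat_binder_object));
     }

     bio->data_avail = 0;
     bio->flags |= BIO_F_OVERFLOW;
     return NULL;
 }

 uint32_t bio_get_ref(struct binder_io *bio)
 {
     struct flat_binder_object *obj = _bio_get_obj(bio);

     if (obj && obj->type == BINDER_TYPE_HANDLE)
         return obj->handle;                // handle referring to MediaPlayerService's binder_node
     return 0;
 }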

4.4 do_add_service

int do_add_service(struct binder_state *bs,                                       
                   const uint16_t *s, size_t len,                                 
                   uint32_t handle, uid_t uid, int allow_isolated,                
                   pid_t spid)                                                    
{                                                                                 
    struct svcinfo *si;                                                           
                                              
    if (!svc_can_register(s, len, spid, uid)) {                      -------------------1             
        ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",                
             str8(s, len), handle, uid);                                          
        return -1;                                                                
    }                                                                             
                                                                                  
    si = find_svc(s, len);                      -----------------2                                  
    if (si) {                                                                     
        if (si->handle) {                                                         
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n", 
                 str8(s, len), handle, uid);                                      
            svcinfo_death(bs, si);                                                
        }                                                                         
        si->handle = handle;                                                      
    } else {                                                                      
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));                  
        if (!si) {                                                                
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",                
                 str8(s, len), handle, uid);                                      
            return -1;                                                            
        }                                                                         
        si->handle = handle;                                                      
        si->len = len;                                                            
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));                        
        si->name[len] = '\0';                                                     
        si->death.func = (void*) svcinfo_death;                                   
        si->death.ptr = si;                                                       
        si->allow_isolated = allow_isolated;                                      
        si->next = svclist;                                                       
        svclist = si;                                                             
    }     
                                                     
     binder_acquire(bs, handle);                   
     binder_link_to_death(bs, handle, &si->death);  ----------------3
     return 0;                                     
 }                                                                                                                         
  1. Check the sender's pid and uid; ordinary apps are not allowed to register services;
  2. servicemanager maintains an svclist with one svcinfo node per registered service. Walk svclist and match the name against "media.player"; if no entry is found, the service still needs to be registered, so allocate and initialize a new svcinfo node and insert it at the head of svclist (see the sketch after this list);
  3. This presumably installs the binder death-notification callback for the service; a detailed analysis is left for TODO.
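For reference, svclist is a simple singly linked list of svcinfo nodes declared in service_manager.c, and find_svc is a linear name match over it; roughly (sketch based on the AOSP code):

 struct svcinfo
 {
     struct svcinfo *next;
     uint32_t handle;              // handle referring to the service's binder_node
     struct binder_death death;    // death-notification callback and cookie
     int allow_isolated;
     size_t len;
     uint16_t name[0];             // UTF-16 service name, e.g. "media.player"
 };

 struct svcinfo *svclist = NULL;

 struct svcinfo *find_svc(const uint16_t *s16, size_t len)
 {
     struct svcinfo *si;

     for (si = svclist; si; si = si->next) {
         if ((len == si->len) &&
             !memcmp(s16, si->name, len * sizeof(uint16_t)))
             return si;
     }
     return NULL;
 }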

Summary

This concludes the code analysis; for an end-to-end recap of addService, see TODO.
