How a native-layer service is registered with ServiceManager.
frameworks/native/libs/binder/
- Binder.cpp
- BpBinder.cpp
- IPCThreadState.cpp
- ProcessState.cpp
- IServiceManager.cpp
- IInterface.cpp
- Parcel.cpp
frameworks/native/include/binder/
- IInterface.h (includes BnInterface, BpInterface)
The entry point of the media server is the main() function in main_mediaserver.cpp:
int main(int argc __unused, char** argv)
{
    ...
    InitializeIcuOrDie();
    // Obtain the ProcessState instance [see Section 2.1]
    sp<ProcessState> proc(ProcessState::self());
    // Obtain the BpServiceManager object
    sp<IServiceManager> sm = defaultServiceManager();
    AudioFlinger::instantiate();
    // Register the multimedia service [see Section 3.1]
    MediaPlayerService::instantiate();
    ResourceManagerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    SoundTriggerHwService::instantiate();
    RadioService::instantiate();
    registerExtensions();
    // Start the binder thread pool
    ProcessState::self()->startThreadPool();
    // Join the current thread to the thread pool
    IPCThreadState::self()->joinThreadPool();
}
Process overview:
This article focuses on how a native-layer service is registered. Taking the media service as an example, let's walk through the registration process; the class diagram of the media services shows how the pieces relate.
Diagram:
How the media server registers its services with ServiceManager during startup.
[-> ProcessState.cpp]
sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    // Instantiate ProcessState [see Section 2.2]
    gProcess = new ProcessState;
    return gProcess;
}
Obtaining the ProcessState object: this is a singleton, which guarantees that each process has exactly one ProcessState object. The globals gProcess and gProcessMutex are defined in Static.cpp.
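For reference, a minimal sketch of what those globals look like (simplified; the exact contents of Static.cpp vary between Android versions):

// Static.cpp (simplified sketch; the real file contains many more globals)
#include <binder/ProcessState.h>
#include <utils/Mutex.h>

namespace android {
Mutex gProcessMutex;        // guards creation of the singleton
sp<ProcessState> gProcess;  // the single per-process ProcessState instance
}  // namespace android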
[-> ProcessState.cpp ]
ProcessState::ProcessState()
    : mDriverFD(open_driver())  // open the Binder driver [see Section 2.3]
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // Map a block of virtual address space for binder via mmap [see Section 2.4]
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            close(mDriverFD);  // no address space available for /dev/binder: close the driver
            mDriverFD = -1;
        }
    }
}
[-> ProcessState.cpp ]
static int open_driver()
{
    // Open the /dev/binder device, establishing the channel to the kernel Binder driver
    int fd = open("/dev/binder", O_RDWR);
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            close(fd);
            fd = -1;
        }
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        // Tell the binder driver, via ioctl, the maximum number of binder threads it may request
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}
open_driver() opens the /dev/binder device and tells the driver the maximum number of binder threads it may use.
Because ProcessState is a singleton, each process opens the Binder driver only once.
// Prototype
void* mmap(void* addr, size_t size, int prot, int flags, int fd, off_t offset)
// The call here: mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
Parameters: addr = 0 lets the kernel choose the start address; the mapping is BINDER_VM_SIZE bytes long; PROT_READ makes it read-only from user space (only the driver fills it); MAP_PRIVATE | MAP_NORESERVE requests a private mapping without swap reservation; the region is backed by the binder driver fd mDriverFD at offset 0.
mmap() goes through a system call and is handled by the driver's binder_mmap().
[-> MediaPlayerService.cpp]
void MediaPlayerService::instantiate() {
    // Register the service [see Section 3.2]
    defaultServiceManager()->addService(String16("media.player"),
            new MediaPlayerService());
}
Registering MediaPlayerService: defaultServiceManager() returns a BpServiceManager (creating the ProcessState and BpBinder objects along the way), so this call is equivalent to BpServiceManager->addService(). MediaPlayerService itself lives in the libmediaplayerservice library.
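The same pattern works for any native service. A minimal, self-contained sketch (the service name "my.service" and the bare BBinder are hypothetical, purely for illustration; real services derive from BnInterface<...>):

#include <binder/Binder.h>
#include <binder/IPCThreadState.h>
#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>

using namespace android;

int main() {
    sp<ProcessState> proc(ProcessState::self());   // opens /dev/binder and mmaps once per process
    // Any BBinder-derived object can be registered with servicemanager.
    defaultServiceManager()->addService(String16("my.service"), new BBinder());
    ProcessState::self()->startThreadPool();        // spawn binder worker threads
    IPCThreadState::self()->joinThreadPool();        // the main thread also serves binder calls
    return 0;
}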
[->IServiceManager.cpp :: BpServiceManager ]
virtual status_t addService(const String16& name, const sp<IBinder>& service, bool allowIsolated) {
    Parcel data, reply;  // Parcel is the data packet used for the transaction
    // Write the interface token "android.os.IServiceManager"
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);                // name is "media.player"
    data.writeStrongBinder(service);         // the MediaPlayerService object [see Section 3.2.1]
    data.writeInt32(allowIsolated ? 1 : 0);  // allowIsolated = false
    // remote() points to the BpBinder object [see Section 3.3]
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
Service registration: the MediaPlayerService object is registered with ServiceManager under the name "media.player".
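For completeness, once registration succeeds any other process can look the service up again. A hedged sketch (helper name findMediaPlayer is hypothetical):

#include <binder/IServiceManager.h>

static android::sp<android::IBinder> findMediaPlayer() {
    using namespace android;
    sp<IServiceManager> sm = defaultServiceManager();
    // getService() retries for a short while until the service appears.
    sp<IBinder> binder = sm->getService(String16("media.player"));
    // In a remote process this is a BpBinder wrapping the handle that the
    // binder driver allocated for the registered binder_node.
    return binder;
}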
[-> Parcel.cpp]
status_t Parcel::writeStrongBinder(const sp<IBinder>& val) {
    return flatten_binder(ProcessState::self(), val, this);
}

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;
    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();  // the local Binder is non-NULL here
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.binder = 0;
            obj.handle = handle;
            obj.cookie = 0;
        } else {  // this branch is taken
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        ...
    }
    // [see Section 3.2.3]
    return finish_flatten_binder(binder, obj, out);
}
This flattens the Binder object into a flat_binder_object.
For localBinder(), see Binder.cpp:
BBinder* BBinder::localBinder() {
    return this;
}

BBinder* IBinder::localBinder() {
    return NULL;
}
inline static status_t finish_flatten_binder(
    const sp<IBinder>& /*binder*/, const flat_binder_object& flat, Parcel* out)
{
    return out->writeObject(flat, false);
}
This writes the flat_binder_object into out.
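For reference, a simplified view of flat_binder_object from the binder UAPI header (layout simplified, shown here only to explain the fields set above):

struct flat_binder_object {
    __u32 type;     // BINDER_TYPE_BINDER for a local object, BINDER_TYPE_HANDLE for a remote reference
    __u32 flags;
    union {
        binder_uintptr_t binder;   // local object: address of its weak-ref object
        __u32 handle;              // remote object: the reference handle
    };
    binder_uintptr_t cookie;       // local object: address of the BBinder itself
};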
[-> BpBinder.cpp ]
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags) {
    if (mAlive) {
        // code = ADD_SERVICE_TRANSACTION [see Section 3.4]
        status_t status = IPCThreadState::self()->transact(mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
The Binder proxy's transact() simply delegates the real work to IPCThreadState. Let's first look at IPCThreadState::self().
[-> IPCThreadState.cpp ]
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;  // construct an IPCThreadState [see Section 3.3.2]
    }
    if (gShutdown) return NULL;
    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {  // on first entry gHaveTLS is false
        if (pthread_key_create(&gTLS, threadDestructor) != 0) {  // create the thread's TLS key
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
TLS means thread-local storage: every thread owns its own private TLS area, which is not shared with other threads. pthread_getspecific()/pthread_setspecific() read and write its contents; here they fetch the IPCThreadState object stored for the current thread.
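As an illustration of the same pattern outside Binder, a minimal sketch of pthread thread-local storage (the key, counter and helper are hypothetical, not libbinder code):

#include <pthread.h>

static pthread_key_t gKey;

static void destructor(void* value) {
    delete static_cast<int*>(value);   // called when a thread exits with a non-NULL value
}

static int* currentCounter() {
    int* v = static_cast<int*>(pthread_getspecific(gKey));
    if (v == NULL) {                   // first use on this thread: create a private object
        v = new int(0);
        pthread_setspecific(gKey, v);
    }
    return v;
}

int main() {
    pthread_key_create(&gKey, destructor);  // one key, a distinct value per thread
    ++*currentCounter();
    return 0;
}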
[-> IPCThreadState.cpp ]
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mMyThreadId(gettid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
Each thread has its own IPCThreadState, and each IPCThreadState owns one mIn and one mOut Parcel. The mProcess member holds the ProcessState (of which there is only one per process).
[-> IPCThreadState.cpp ]
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();  // sanity-check the data
    flags |= TF_ACCEPT_FDS;
    ....
    if (err == NO_ERROR) {  // package the transaction data [see Section 3.5]
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    ...
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            // wait for the response [see Section 3.6]
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        // oneway transaction: no reply is expected
        err = waitForResponse(NULL, NULL);
    }
    return err;
}
IPCThreadState::transact() handles a transaction in three steps: errorCheck() validates the data, writeTransactionData() packages the BC_TRANSACTION command into mOut, and waitForResponse() talks to the driver and waits for the reply.
[-> IPCThreadState.cpp ]
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;
    tr.target.ptr = 0;
    tr.target.handle = handle;  // handle = 0
    tr.code = code;             // code = ADD_SERVICE_TRANSACTION
    tr.flags = binderFlags;     // binderFlags = 0
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    // data is the Parcel carrying the Media service information
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();                               // mDataSize
        tr.data.ptr.buffer = data.ipcData();                             // mData
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);  // mObjectsSize
        tr.data.ptr.offsets = data.ipcObjects();                         // mObjects
    } else if (statusBuffer) {
        ...
    } else {
        return (mLastError = err);
    }
    mOut.writeInt32(cmd);        // cmd = BC_TRANSACTION
    mOut.write(&tr, sizeof(tr)); // write the binder_transaction_data
    return NO_ERROR;
}
The handle identifies the destination. For service registration the destination is servicemanager; handle = 0 corresponds to binder_context_mgr_node, the binder entity representing servicemanager. binder_transaction_data is the structure used to communicate with the binder driver, and this step ends with the Binder request code BC_TRANSACTION plus the binder_transaction_data structure being written into mOut.
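For reference, a simplified view of binder_transaction_data from the binder UAPI header (layout simplified):

struct binder_transaction_data {
    union {
        __u32 handle;             // target reference (0 = servicemanager) when sending to a remote node
        binder_uintptr_t ptr;     // target object pointer when the target is local
    } target;
    binder_uintptr_t cookie;
    __u32 code;                   // RPC code, here ADD_SERVICE_TRANSACTION
    __u32 flags;                  // e.g. TF_ONE_WAY, TF_ACCEPT_FDS
    pid_t sender_pid;
    uid_t sender_euid;
    binder_size_t data_size;      // size of the payload buffer
    binder_size_t offsets_size;   // size of the flat-object offsets array
    union {
        struct {
            binder_uintptr_t buffer;   // Parcel::mData
            binder_uintptr_t offsets;  // Parcel::mObjects
        } ptr;
        __u8 buf[8];
    } data;
};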
During transact(), the binder_transaction_data is filled in first; the relevant Parcel members are mData/mDataSize (the payload) and mObjects/mObjectsSize (the offsets of the flattened binder objects).
Next, waitForResponse() is executed.
[-> IPCThreadState.cpp]
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;
    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;  // [see Section 3.7]
        ...
        if (mIn.dataAvail() == 0) continue;
        cmd = mIn.readInt32();
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE: ...
        case BR_DEAD_REPLY: ...
        case BR_FAILED_REPLY: ...
        case BR_ACQUIRE_RESULT: ...
        case BR_REPLY: ...
            goto finish;
        default:
            err = executeCommand(cmd);  // [see Section 3.x]
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
    ...
    return err;
}
In waitForResponse(), the current thread first receives BR_TRANSACTION_COMPLETE; meanwhile the target process receives and handles the BR_TRANSACTION work, then sends its reply back to the current process, which finally processes the BR_REPLY command.
[-> IPCThreadState.cpp ]
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ...
    binder_write_read bwr;
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();
    if (doReceive && needRead) {
        // Fill in the receive buffer; any data read later lands directly in mIn.
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    // If both the read and the write buffer are empty, return immediately
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        // Communicate with the Binder driver through repeated ioctl read/write calls
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        ...
    } while (err == -EINTR);  // retry if interrupted
    ...
    return err;
}
binder_write_read is the structure used to exchange data with the Binder device. The ioctl on mDriverFD is where user space really reads from and writes to the Binder driver, operating mainly on the mOut and mIn Parcels.
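A simplified view of binder_write_read from the UAPI header, annotated with how talkWithDriver() fills it:

struct binder_write_read {
    binder_size_t    write_size;      // bytes available in write_buffer (commands to the driver)
    binder_size_t    write_consumed;  // bytes the driver actually consumed
    binder_uintptr_t write_buffer;    // here: mOut.data()
    binder_size_t    read_size;       // capacity of read_buffer (replies from the driver)
    binder_size_t    read_consumed;   // bytes the driver actually wrote back
    binder_uintptr_t read_buffer;     // here: mIn.data()
};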
ioctl() enters the Binder driver via a system call:
ioctl -> binder_ioctl -> binder_ioctl_write_read
[->binder.c ]
static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    struct binder_proc *proc = filp->private_data;
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;
    // Copy the bwr structure from user space into kernel space
    copy_from_user(&bwr, ubuf, sizeof(bwr));
    ...
    if (bwr.write_size > 0) {
        // Deliver the data to the target process [see Section 4.2]
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer,
                      bwr.write_size,
                      &bwr.write_consumed);
        ...
    }
    if (bwr.read_size > 0) {
        // Read work from this thread's own queue [see section]
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                     bwr.read_size,
                     &bwr.read_consumed,
                     filp->f_flags & O_NONBLOCK);
        if (!list_empty(&proc->todo))
            wake_up_interruptible(&proc->wait);
        ...
    }
    // Copy the bwr structure from kernel space back to user space
    copy_to_user(ubuf, &bwr, sizeof(bwr));
    ...
}
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    while (ptr < end && thread->return_error == BR_OK) {
        // Copy the command from user space; here it is BC_TRANSACTION
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        switch (cmd) {
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;
            // Copy the binder_transaction_data from user space
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            // [see Section 4.3]
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        ...
        }
        *consumed = ptr - buffer;
    }
    return 0;
}
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    ...
    if (reply) {
        ...
    } else {
        if (tr->target.handle) {
            ...
        } else {
            // handle = 0: the target is the servicemanager entity
            target_node = binder_context_mgr_node;
        }
        // target_proc is the servicemanager process
        target_proc = target_node->proc;
    }
    if (target_thread) {
        ...
    } else {
        // use the todo queue of the servicemanager process
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    // For a non-oneway transaction, record the current thread in the transaction's from field
    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;
    else
        t->from = NULL;
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc;     // the target of this transaction is the servicemanager process
    t->to_thread = target_thread;
    t->code = tr->code;           // here code = ADD_SERVICE_TRANSACTION
    t->flags = tr->flags;         // here flags = 0
    t->priority = task_nice(current);
    // Allocate a buffer from the servicemanager process
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    t->buffer->allow_user_free = 0;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    if (target_node)
        binder_inc_node(target_node, 1, 0, NULL);  // bump the node's reference count
    offp = (binder_size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));
    // Copy ptr.buffer and ptr.offsets of the user-space binder_transaction_data into the kernel
    copy_from_user(t->buffer->data,
        (const void __user *)(uintptr_t)tr->data.ptr.buffer, tr->data_size);
    copy_from_user(offp,
        (const void __user *)(uintptr_t)tr->data.ptr.offsets, tr->offsets_size);
    off_end = (void *)offp + tr->offsets_size;
    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;
        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        off_min = *offp + sizeof(struct flat_binder_object);
        switch (fp->type) {
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            struct binder_ref *ref;
            // [see Section 4.3.1]
            struct binder_node *node = binder_get_node(proc, fp->binder);
            if (node == NULL) {
                // Create the binder_node entity in the process that hosts the service [see Section 4.3.2]
                node = binder_new_node(proc, fp->binder, fp->cookie);
                ...
            }
            // Create (or find) the binder_ref in the servicemanager process [see Section 4.3.3]
            ref = binder_get_ref_for_node(target_proc, node);
            ...
            // Convert the type to a HANDLE type
            if (fp->type == BINDER_TYPE_BINDER)
                fp->type = BINDER_TYPE_HANDLE;
            else
                fp->type = BINDER_TYPE_WEAK_HANDLE;
            fp->binder = 0;
            fp->handle = ref->desc;  // set the handle value
            fp->cookie = 0;
            binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
                       &thread->todo);
        } break;
        ...  // other flat_binder_object types elided
        }
    }
    if (reply) {
        ..
    } else if (!(t->flags & TF_ONE_WAY)) {
        // BC_TRANSACTION and not oneway: record the transaction stack
        t->need_reply = 1;
        t->from_parent = thread->transaction_stack;
        thread->transaction_stack = t;
    } else {
        ...
    }
    // Add BINDER_WORK_TRANSACTION to the target queue, here target_proc->todo
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    // Add BINDER_WORK_TRANSACTION_COMPLETE to the current thread's todo queue
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    // Wake up the waiting queue, here target_proc->wait
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;
}
During service registration a BBinder object is passed, so in the writeStrongBinder() path of [Section 3.2.1] localBinder() is non-NULL and flat_binder_object.type is BINDER_TYPE_BINDER.
Service registration therefore creates a binder_node in the process that hosts the service and a binder_ref in the servicemanager process. For a given binder_node, each process creates at most one binder_ref.
A BINDER_WORK_TRANSACTION work item is appended to servicemanager's binder_proc->todo queue, and execution then moves into the ServiceManager process.
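A hedged, simplified sketch of the two kernel structures involved (only the fields discussed here; the real definitions in binder.c contain many more):

struct binder_node {            // one per local Binder object, owned by the service's process
    struct binder_proc *proc;   // process that hosts the service (here: mediaserver)
    binder_uintptr_t ptr;       // weak-ref address from flat_binder_object.binder
    binder_uintptr_t cookie;    // BBinder address from flat_binder_object.cookie
    struct hlist_head refs;     // all binder_refs that point at this node
    /* ... */
};

struct binder_ref {             // one per (process, node) pair
    struct binder_proc *proc;   // process holding the reference (here: servicemanager)
    struct binder_node *node;   // the node being referenced
    uint32_t desc;              // the handle value handed back to user space
    /* ... */
};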
static struct binder_node *binder_get_node(struct binder_proc *proc,
                       binder_uintptr_t ptr)
{
    struct rb_node *n = proc->nodes.rb_node;
    struct binder_node *node;
    while (n) {
        node = rb_entry(n, struct binder_node, rb_node);
        if (ptr < node->ptr)
            n = n->rb_left;
        else if (ptr > node->ptr)
            n = n->rb_right;
        else
            return node;
    }
    return NULL;
}
binder_get_node() looks up the corresponding binder_node in binder_proc's nodes red-black tree using the binder pointer value ptr.
static struct binder_node *binder_new_node(struct binder_proc *proc,
                       binder_uintptr_t ptr,
                       binder_uintptr_t cookie)
{
    struct rb_node **p = &proc->nodes.rb_node;
    struct rb_node *parent = NULL;
    struct binder_node *node;
    ...  // locate the insertion point in the red-black tree
    // Allocate kernel memory for the new binder_node
    node = kzalloc(sizeof(*node), GFP_KERNEL);
    // Link the new node into the proc's red-black tree
    rb_link_node(&node->rb_node, parent, p);
    rb_insert_color(&node->rb_node, &proc->nodes);
    node->debug_id = ++binder_last_id;
    node->proc = proc;
    node->ptr = ptr;
    node->cookie = cookie;
    node->work.type = BINDER_WORK_NODE;  // set the binder_work type
    INIT_LIST_HEAD(&node->work.entry);
    INIT_LIST_HEAD(&node->async_todo);
    return node;
}
static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
                          struct binder_node *node)
{
    struct rb_node *n;
    struct rb_node **p = &proc->refs_by_node.rb_node;
    struct rb_node *parent = NULL;
    struct binder_ref *ref, *new_ref;
    // If a binder_ref for this node already exists in the refs_by_node tree, return it directly
    while (*p) {
        parent = *p;
        ref = rb_entry(parent, struct binder_ref, rb_node_node);
        if (node < ref->node)
            p = &(*p)->rb_left;
        else if (node > ref->node)
            p = &(*p)->rb_right;
        else
            return ref;
    }
    // Otherwise create a new binder_ref
    new_ref = kzalloc_preempt_disabled(sizeof(*ref));
    new_ref->debug_id = ++binder_last_id;
    new_ref->proc = proc;  // record the process
    new_ref->node = node;  // record the binder node
    rb_link_node(&new_ref->rb_node_node, parent, p);
    rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);
    // Compute the handle value of the reference; this value is returned to target_proc
    new_ref->desc = (node == binder_context_mgr_node) ? 0 : 1;
    // Walk refs_by_desc from the smallest handle upward, incrementing until a free value is found
    // or the tree is exhausted
    for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
        // rb_entry recovers the binder_ref from the address n of its rb_node_desc member
        ref = rb_entry(n, struct binder_ref, rb_node_desc);
        if (ref->desc > new_ref->desc)
            break;
        new_ref->desc = ref->desc + 1;
    }
    // Insert the new ref into the proc->refs_by_desc red-black tree
    p = &proc->refs_by_desc.rb_node;
    while (*p) {
        parent = *p;
        ref = rb_entry(parent, struct binder_ref, rb_node_desc);
        if (new_ref->desc < ref->desc)
            p = &(*p)->rb_left;
        else if (new_ref->desc > ref->desc)
            p = &(*p)->rb_right;
        else
            BUG();
    }
    rb_link_node(&new_ref->rb_node_desc, parent, p);
    rb_insert_color(&new_ref->rb_node_desc, &proc->refs_by_desc);
    if (node) {
        hlist_add_head(&new_ref->node_entry, &node->refs);
    }
    return new_ref;
}
The rule for computing the handle (desc) value: desc 0 is reserved for the reference to the servicemanager node; every other reference starts at 1, and the loop above walks the process's existing references in ascending desc order and takes the smallest value not yet in use. Handle values are therefore unique within a process and generally grow as services are added.
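A hedged user-space illustration of that desc-assignment loop (hypothetical helper next_desc, not kernel code; the std::set stands in for the refs_by_desc tree):

#include <set>
#include <stdint.h>

uint32_t next_desc(const std::set<uint32_t>& existing, bool is_context_mgr) {
    uint32_t desc = is_context_mgr ? 0 : 1;  // 0 is reserved for the servicemanager node
    for (uint32_t d : existing) {            // the set iterates in ascending order
        if (d > desc) break;                 // found a gap: reuse the smallest free value
        desc = d + 1;
    }
    return desc;
}

// next_desc({0, 1, 3}, false) == 2, next_desc({0, 1, 2}, false) == 3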
After ServiceManager starts, it loops inside binder_loop(), which calls binder_parse() for each incoming command.
[-> servicemanager/binder.c]
int binder_parse(struct binder_state *bs,
                 struct binder_io *bio,
                 uintptr_t ptr,
                 size_t size,
                 binder_handler func) {
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;
    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
        switch(cmd) {
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            ...
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;
                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn);  // parse the binder_io information out of txn
                // Handle the received Binder transaction [see Section 5.2]
                res = func(bs, txn, &msg, &reply);
                // Send the reply [see Section 5.4]
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
        ...  // other commands elided
        }
    }
    return r;
}
[-> service_manager.c ]
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply) {
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;
    ...
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    ...
    switch(txn->code) {
    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        ...
        handle = bio_get_ref(msg);  // extract the handle
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        // Register the service [see Section 5.3]
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;
    ...  // other codes elided
    }
    bio_put_uint32(reply, 0);
    return 0;
}
[-> service_manager.c ]
int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    struct svcinfo *si;
    if (!handle || (len == 0) || (len > 127))
        return -1;
    // Permission check
    if (!svc_can_register(s, len, spid)) {
        return -1;
    }
    // Look the service up
    si = find_svc(s, len);
    if (si) {
        if (si->handle) {
            svcinfo_death(bs, si);  // the service is already registered: release the old one
        }
        si->handle = handle;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {  // out of memory
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));  // copy the service name
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist;  // svclist holds all registered services
        svclist = si;
    }
    // Send a BC_ACQUIRE command targeting this handle to the binder driver via ioctl
    binder_acquire(bs, handle);
    // Send a BC_REQUEST_DEATH_NOTIFICATION command via ioctl, mainly for cleanup when the service dies
    binder_link_to_death(bs, handle, &si->death);
    return 0;
}
svcinfo records the service name and its handle; all entries are kept in the svclist linked list.
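For reference, svcinfo is roughly defined as follows in service_manager.c (simplified):

struct svcinfo {
    struct svcinfo *next;        // singly linked list of registered services (svclist)
    uint32_t handle;             // binder reference handle returned by the driver
    struct binder_death death;   // death-notification callback
    int allow_isolated;
    size_t len;                  // length of the service name
    uint16_t name[0];            // UTF-16 service name, allocated inline
};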
[-> servicemanager/binder.c ]
void binder_send_reply(struct binder_state *bs, struct binder_io *reply,
                       binder_uintptr_t buffer_to_free, int status) {
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
        uint32_t cmd_reply;
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;
    data.cmd_free = BC_FREE_BUFFER;  // free-buffer command
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;       // reply command
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        ...
    } else {
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    }
    // Communicate with the Binder driver
    binder_write(bs, &data, sizeof(data));
}
binder_write() enters the binder driver and sends the BC_FREE_BUFFER and BC_REPLY command protocols, delivering the reply back to the client.
The core of service registration (addService): create a binder_node in the process that hosts the service and a binder_ref in the servicemanager process; within a single process, each binder_ref's desc is unique.
Registering the Media service involves MediaPlayerService (as the client process) and Service Manager (as the server process); the communication flow is shown below:
Analysis:
1. The MediaPlayerService process calls ioctl() to send the IPC data to the Binder driver. This can be viewed as one transaction binder_transaction (call it T1) executed by the current thread's binder_thread (call it thread1), so T1 -> from_parent = NULL, T1 -> from = thread1, thread1 -> transaction_stack = T1. The IPC data contains the BC_TRANSACTION command, target handle 0, code ADD_SERVICE_TRANSACTION, and the Parcel carrying the name "media.player" plus the flattened MediaPlayerService binder object.
2. The Binder driver receives the request, generates the BR_TRANSACTION command and picks the thread that will handle it, i.e. a ServiceManager binder thread (call it thread2), so T1 -> to_parent = NULL, T1 -> to_thread = thread2, and the whole binder_transaction (T1) is inserted into the target thread's todo queue.
3. ServiceManager's thread2 receives the transaction and calls the registration function to add "media.player" to the service list. Once registration completes, it generates the IPC reply data (BC_REPLY) as a new transaction (call it T2), so T2 -> from_parent = T1, T2 -> from = thread2, thread2 -> transaction_stack = T2.
4. The Binder driver receives the reply, generates the BR_REPLY command, and T2 -> to_parent = T1, T2 -> to_thread = thread1, thread1 -> transaction_stack = T2. When MediaPlayerService receives this command, it knows the registration has completed and the service can be used normally.
Throughout the process, BC_TRANSACTION and BR_TRANSACTION form one complete transaction, and BC_REPLY and BR_REPLY form another. From this point on, other processes can obtain the service and call the methods it provides.
Reference: http://gityuan.com/2015/11/14/binder-add-service/