For background, see the rest of this series: Android跨进程通信IPC系列.
- The Framework is an intermediate layer: it sits on top of the underlying implementation, encapsulates the complex internal logic, and exposes an external API.
- The Binder Framework layer is split into a C++ part and a Java part; to reuse functionality between them, the two are bridged by JNI.
- The C++ part of the Binder Framework has its headers under /frameworks/native/include/binder/ and its implementation under /frameworks/native/libs/binder/.
- The binder library is ultimately compiled into a shared library, libbinder.so, which other processes link against.
1 ServiceManager Startup Overview
- ServiceManager (SM for short) is the daemon of the Binder mechanism, and it is itself a Binder service.
- It talks to the Binder driver directly through its own binder.c, and contains a loop (binder_loop) that keeps reading and handling transactions.
SM's job is simple; it does just two things (the transaction codes it understands are sketched below):
- 1. Register services
- 2. Look up services
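These two jobs map onto a small set of transaction codes declared in frameworks/native/cmds/servicemanager/binder.h; the sketch below is simplified, and the values have to stay in sync with IServiceManager.h on the framework side:
// Simplified sketch of the transaction codes svcmgr_handler understands,
// as declared in frameworks/native/cmds/servicemanager/binder.h.
enum {
    SVC_MGR_GET_SERVICE = 1,  // look up a service by name (query)
    SVC_MGR_CHECK_SERVICE,    // handled by the same lookup case in service_manager.c
    SVC_MGR_ADD_SERVICE,      // register a service
    SVC_MGR_LIST_SERVICES,    // enumerate all registered services
};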
Source code locations:
framework/native/cmds/servicemanager/
- service_manager.c
- binder.c
system/core/rootdir
- /init.rc
kernel/drivers/ (the exact path varies between Linux branches)
- android/binder.c
Note that the kernel-side binder.c is no longer part of the Android source tree; it lives in the Linux kernel source.
To emphasize: there are two files named binder.c here. One is framework/native/cmds/servicemanager/binder.c and the other is kernel/drivers/android/binder.c. They are absolutely not the same thing, so do not confuse them.
2 Startup Process
Any process that uses the Binder mechanism must open and mmap the /dev/binder device before it can use Binder. This logic is common to every Binder-using process, and SM is no exception.
The startup flow, in outline:
ServiceManager is created by the init process when it parses init.rc. Its executable is /system/bin/servicemanager, its source file is service_manager.c, and its process name is /system/bin/servicemanager.
The corresponding init.rc entry is:
// init.rc, line 602
service servicemanager /system/bin/servicemanager
class core
user system
group system
critical
onrestart restart healthd
onrestart restart zygote
onrestart restart media
onrestart restart surfaceflinger
onrestart restart drm
2.1 service_manager.c
The entry point that starts Service Manager is the main() function in service_manager.c:
// service_manager.c, line 347
int main(int argc, char **argv)
{
    struct binder_state *bs;
    // Open the binder driver and request a 128 KB mapping
    bs = binder_open(128*1024);

    // ... code omitted ...

    // Become the context manager
    if (binder_become_context_manager(bs)) {
        return -1;
    }

    selinux_enabled = is_selinux_enabled(); // Is SELinux enabled?
    sehandle = selinux_android_service_context_handle();
    selinux_status_open(true);
    if (selinux_enabled > 0) {
        if (sehandle == NULL) {
            abort(); // Cannot obtain sehandle
        }
        if (getcon(&service_manager_context) != 0) {
            abort(); // Cannot obtain the service_manager context
        }
    }

    union selinux_callback cb;
    cb.func_audit = audit_callback;
    selinux_set_callback(SELINUX_CB_AUDIT, cb);
    cb.func_log = selinux_log_callback;
    selinux_set_callback(SELINUX_CB_LOG, cb);

    // Enter an infinite loop, acting as the server and handling client requests
    binder_loop(bs, svcmgr_handler);
    return 0;
}
PS: svcmgr_handler is a function pointer; in effect, every iteration of binder_loop ends up calling svcmgr_handler().
This code breaks down into three main steps:
- bs = binder_open(128*1024): open the binder driver and request a 128 KB mapping
- binder_become_context_manager(bs): become the context manager
- binder_loop(bs, svcmgr_handler): enter the loop and handle requests coming from clients
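To make the function-pointer remark above concrete: binder_loop() takes a binder_handler callback (declared in servicemanager's binder.h), and svcmgr_handler is simply one implementation of it. The toy handler below is a hypothetical example of my own, not AOSP code; it only illustrates the shape of the contract.
#include "binder.h"  /* servicemanager's binder.h: binder_state, binder_io, binder_handler, ... */

/* binder.h declares the callback type that binder_loop() expects:
 *   typedef int (*binder_handler)(struct binder_state *bs,
 *                                 struct binder_transaction_data *txn,
 *                                 struct binder_io *msg,
 *                                 struct binder_io *reply);
 * svcmgr_handler matches this signature. */

/* Hypothetical toy handler: read nothing, reply with a single status word.
 * Returning 0 lets binder_send_reply() send the reply as a success. */
static int my_handler(struct binder_state *bs,
                      struct binder_transaction_data *txn,
                      struct binder_io *msg,
                      struct binder_io *reply)
{
    (void)bs;
    (void)msg;
    if (txn->code != 1)        /* accept only a made-up transaction code 1 */
        return -1;
    bio_put_uint32(reply, 0);  /* trivial "OK" payload */
    return 0;
}

/* Wired up the same way service_manager.c wires svcmgr_handler:
 *     binder_loop(bs, my_handler);
 */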
2.2 binder_open(128*1024)
This code lives in framework/native/cmds/servicemanager/binder.c:
// framework/native/cmds/servicemanager/binder.c, line 96
struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    // System call into the kernel: open the Binder driver device
    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        // Failed to open the binder device
        goto fail_open;
    }

    // System call: query the binder version via ioctl
    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        // Kernel-space and user-space binder versions do not match
        goto fail_open;
    }

    bs->mapsize = mapsize;
    // System call: mmap the memory mapping; the size must be a multiple of the page size
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        // Memory mapping of the binder device failed
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}
- The open step: open() on the binder device goes through a system call into the Binder driver and lands in the driver's binder_open(). That function creates a binder_proc object in the driver, stores it in filp->private_data, and links it into the global binder_procs list (a simplified sketch follows below).
- ioctl(BINDER_VERSION) then checks that the user-space binder protocol version matches the driver's version.
- mmap() likewise goes through a system call into the driver's binder_mmap(), which creates a binder_buffer object and adds it to the current binder_proc's proc->buffers list.
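For orientation, the driver side of that open() call looks roughly like the simplified sketch below (condensed from kernel/drivers/android/binder.c of this era; stats, priorities and debugfs setup are omitted): a binder_proc is allocated, stored in filp->private_data and linked into the global binder_procs list.
// Simplified sketch of the driver's open handler; details omitted.
static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;

    proc = kzalloc(sizeof(*proc), GFP_KERNEL);   // one binder_proc per process
    if (proc == NULL)
        return -ENOMEM;
    get_task_struct(current);
    proc->tsk = current;                         // remember the opening task
    INIT_LIST_HEAD(&proc->todo);                 // per-process work queue
    init_waitqueue_head(&proc->wait);

    binder_lock(__func__);
    hlist_add_head(&proc->proc_node, &binder_procs);  // global binder_proc list
    proc->pid = current->group_leader->pid;
    INIT_LIST_HEAD(&proc->delivered_death);
    filp->private_data = proc;                   // later ioctl/mmap calls find proc here
    binder_unlock(__func__);

    return 0;
}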
Let us look more closely at binder_state here:
// framework/native/cmds/servicemanager/binder.c, line 89
struct binder_state
{
    int fd;         // file descriptor of /dev/binder
    void *mapped;   // start address of the mmap-ed memory
    size_t mapsize; // size of the mapped memory, 128 KB by default
};
With that, binder_open is complete.
2.3 binder_become_context_manager()
The code is very simple:
// framework/native/cmds/servicemanager/binder.c, line 146
int binder_become_context_manager(struct binder_state *bs)
{
    // Send the BINDER_SET_CONTEXT_MGR command to the driver via ioctl
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
This makes the caller the context manager; there is exactly one such manager in the whole system. The ioctl() goes through a system call and lands in the Binder driver's binder_ioctl() method.
2.3.1 binder_ioctl
The Binder driver lives in the Linux kernel; the code is in the kernel tree:
// kernel/drivers/android/binder.c, line 3134
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    // ... code omitted ...

    switch (cmd) {
    // ... code omitted ...

    // line 3279
    case BINDER_SET_CONTEXT_MGR:
        ret = binder_ioctl_set_ctx_mgr(filp);
        if (ret)
            goto err;
        break;
    }

    // ... code omitted ...
}
For the BINDER_SET_CONTEXT_MGR command, this ultimately calls binder_ioctl_set_ctx_mgr(); this path runs holding binder_main_lock.
2.3.2 binder_ioctl_set_ctx_mgr(), also part of the Linux kernel:
// kernel/drivers/android/binder.c, line 3198
static int binder_ioctl_set_ctx_mgr(struct file *filp)
{
int ret = 0;
struct binder_proc *proc = filp->private_data;
struct binder_context *context = proc->context;
kuid_t curr_euid = current_euid();
// Ensure the binder_context_mgr_node object is created only once
if (context->binder_context_mgr_node) {
pr_err("BINDER_SET_CONTEXT_MGR already set\n");
ret = -EBUSY;
goto out;
}
ret = security_binder_set_context_mgr(proc->tsk);
if (ret < 0)
goto out;
if (uid_valid(context->binder_context_mgr_uid)) {
if (!uid_eq(context->binder_context_mgr_uid, curr_euid)) {
pr_err("BINDER_SET_CONTEXT_MGR bad uid %d != %d\n",
from_kuid(&init_user_ns, curr_euid),
from_kuid(&init_user_ns,
context->binder_context_mgr_uid));
ret = -EPERM;
goto out;
}
} else {
// Record the current thread's euid as Service Manager's uid
context->binder_context_mgr_uid = curr_euid;
}
// Create the binder_node entity for ServiceManager
context->binder_context_mgr_node = binder_new_node(proc, 0, 0);
if (!context->binder_context_mgr_node) {
ret = -ENOMEM;
goto out;
}
context->binder_context_mgr_node->local_weak_refs++;
context->binder_context_mgr_node->local_strong_refs++;
context->binder_context_mgr_node->has_strong_ref = 1;
context->binder_context_mgr_node->has_weak_ref = 1;
out:
return ret;
}
This takes us into the Binder driver; the context-manager state is kept in the binder_context structure defined there.
2.3.3 The binder_context structure
// kernel/drivers/android/binder.c, line 228
struct binder_context {
    // the binder_node corresponding to service manager
    struct binder_node *binder_context_mgr_node;
    // uid of the thread running service manager
    kuid_t binder_context_mgr_uid;
    const char *name;
};
binder_ioctl_set_ctx_mgr() creates the global binder_node object binder_context_mgr_node and increments its strong and weak reference counts by one each.
2.3.4 binder_new_node()
//kernel/drivers/android/binder.c
static struct binder_node *binder_new_node(struct binder_proc *proc,
binder_uintptr_t ptr,
binder_uintptr_t cookie)
{
struct rb_node **p = &proc->nodes.rb_node;
struct rb_node *parent = NULL;
struct binder_node *node;
// Empty on the first call
while (*p) {
parent = *p;
node = rb_entry(parent, struct binder_node, rb_node);
if (ptr < node->ptr)
p = &(*p)->rb_left;
else if (ptr > node->ptr)
p = &(*p)->rb_right;
else
return NULL;
}
// Allocate memory for the new binder_node
node = kzalloc(sizeof(*node), GFP_KERNEL);
if (node == NULL)
return NULL;
binder_stats_created(BINDER_STAT_NODE);
// Link the new node into proc's red-black tree of nodes
rb_link_node(&node->rb_node, parent, p);
rb_insert_color(&node->rb_node, &proc->nodes);
node->debug_id = ++binder_last_id;
node->proc = proc;
node->ptr = ptr;
node->cookie = cookie;
// Set the binder_work type
node->work.type = BINDER_WORK_NODE;
INIT_LIST_HEAD(&node->work.entry);
INIT_LIST_HEAD(&node->async_todo);
binder_debug(BINDER_DEBUG_INTERNAL_REFS,
"%d:%d node %d u%016llx c%016llx created\n",
proc->pid, current->pid, node->debug_id,
(u64)node->ptr, (u64)node->cookie);
return node;
}
This creates a binder_node structure in the Binder driver layer, records the current binder_proc in node->proc, and initializes the node's work entry and async_todo lists.
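The proc->nodes red-black tree that binder_new_node() inserts into is keyed by ptr; later lookups walk the same tree. The matching lookup looks roughly like this (simplified from the same kernel binder.c):
// Simplified sketch of the lookup counterpart: walk proc->nodes, keyed by ptr.
static struct binder_node *binder_get_node(struct binder_proc *proc,
                                           binder_uintptr_t ptr)
{
    struct rb_node *n = proc->nodes.rb_node;
    struct binder_node *node;

    while (n) {
        node = rb_entry(n, struct binder_node, rb_node);
        if (ptr < node->ptr)
            n = n->rb_left;
        else if (ptr > node->ptr)
            n = n->rb_right;
        else
            return node;   // found the node created earlier by binder_new_node
    }
    return NULL;
}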
2.4 binder_loop()
// framework/native/cmds/servicemanager/binder.c, line 372
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    // Send the BC_ENTER_LOOPER command to the Binder driver so that
    // ServiceManager enters looper mode
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        // Loop forever, repeatedly performing binder reads and writes
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        // Parse the binder messages that were read
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
This enters the read/write loop. The func parameter passed down from main() points to svcmgr_handler. binder_write uses ioctl() to send the BC_ENTER_LOOPER command to the binder driver; at that point only bwr.write_buffer carries data, so the driver enters binder_thread_write(). The code then enters the for loop and calls ioctl() again; now only bwr.read_buffer carries data, so the driver enters binder_thread_read().
- The loop is essentially a read/write cycle; three pieces matter here:
- the binder_thread_write function
- the binder_write function
- the binder_parse function
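Before diving in, it helps to see the binder_write_read structure that user space and the driver exchange through ioctl(BINDER_WRITE_READ); it is declared roughly as follows in the kernel UAPI binder header:
// Roughly as declared in the kernel UAPI binder header: one write buffer and
// one read buffer, each with a total size and a consumed-so-far cursor that
// the driver updates.
struct binder_write_read {
    binder_size_t    write_size;      // bytes available in write_buffer
    binder_size_t    write_consumed;  // bytes consumed by the driver
    binder_uintptr_t write_buffer;
    binder_size_t    read_size;       // bytes available in read_buffer
    binder_size_t    read_consumed;   // bytes filled in by the driver
    binder_uintptr_t read_buffer;
};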
2.4.1 binder_thread_write
// kernel/drivers/android/binder.c, line 2248
static int binder_thread_write(struct binder_proc *proc,
                               struct binder_thread *thread,
                               binder_uintptr_t binder_buffer, size_t size,
                               binder_size_t *consumed)
{
    uint32_t cmd;
    struct binder_context *context = proc->context;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        // Read the next command
        get_user(cmd, (uint32_t __user *)ptr);
        switch (cmd) {
        // **** code omitted ****
        case BC_ENTER_LOOPER:
            // Mark this thread as having entered the looper
            thread->looper |= BINDER_LOOPER_STATE_ENTERED;
            break;
        // **** code omitted ****
        }
        // **** code omitted ****
    }
    return 0;
}
This pulls commands out of bwr.write_buffer; here the command is BC_ENTER_LOOPER. So the earlier binder_write() call simply sets the current thread's looper state to BINDER_LOOPER_STATE_ENTERED.
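The looper state is a bitmask kept on the binder_thread; the flags are defined in the driver roughly as follows (names as in kernel/drivers/android/binder.c of this era):
// Thread looper states (bitmask in binder_thread->looper), roughly as
// defined in kernel/drivers/android/binder.c of this era.
enum {
    BINDER_LOOPER_STATE_REGISTERED  = 0x01,  // spawned thread, set by BC_REGISTER_LOOPER
    BINDER_LOOPER_STATE_ENTERED     = 0x02,  // main thread, set by BC_ENTER_LOOPER
    BINDER_LOOPER_STATE_EXITED      = 0x04,
    BINDER_LOOPER_STATE_INVALID     = 0x08,
    BINDER_LOOPER_STATE_WAITING     = 0x10,  // blocked in binder_thread_read
    BINDER_LOOPER_STATE_NEED_RETURN = 0x20,
};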
2.4.2 binder_write
// framework/native/cmds/servicemanager/binder.c, line 151
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    // Here data holds the BC_ENTER_LOOPER command
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;

    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr, "binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}
binder_write initializes bwr from its arguments: write_size is 4 bytes, write_buffer points to the buffer whose content is the BC_ENTER_LOOPER protocol code, and the read fields are all zero. It then hands bwr to the Binder driver via ioctl, which invokes binder_ioctl.
2.4.3 binder_ioctl
// kernel/drivers/android/binder.c, line 3239
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    // **** code omitted ****

    // Get the binder_thread for the calling thread
    thread = binder_get_thread(proc);

    switch (cmd) {
    case BINDER_WRITE_READ:
        // Perform the binder read/write
        ret = binder_ioctl_write_read(filp, cmd, arg, thread);
        if (ret)
            goto err;
        break;
    // **** code omitted ****
    }
}
For the BINDER_WRITE_READ command, binder_ioctl dispatches to binder_ioctl_write_read() to carry out the actual read/write, again while holding binder_main_lock.
binder_ioctl_write_read()
// kernel/drivers/android/binder.c, line 3134
static int binder_ioctl_write_read(struct file *filp,
unsigned int cmd, unsigned long arg,
struct binder_thread *thread)
{
int ret = 0;
struct binder_proc *proc = filp->private_data;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
struct binder_write_read bwr;
if (size != sizeof(struct binder_write_read)) {
ret = -EINVAL;
goto out;
}
// Copy the user-space data ubuf into the kernel-side bwr
if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
binder_debug(BINDER_DEBUG_READ_WRITE,
"%d:%d write %lld at %016llx, read %lld at %016llx\n",
proc->pid, thread->pid,
(u64)bwr.write_size, (u64)bwr.write_buffer,
(u64)bwr.read_size, (u64)bwr.read_buffer);
// The write buffer has data
if (bwr.write_size > 0) {
ret = binder_thread_write(proc, thread,
bwr.write_buffer,
bwr.write_size,
&bwr.write_consumed);
trace_binder_write_done(ret);
if (ret < 0) {
bwr.read_consumed = 0;
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
// The read buffer has data
if (bwr.read_size > 0) {
ret = binder_thread_read(proc, thread, bwr.read_buffer,
bwr.read_size,
&bwr.read_consumed,
filp->f_flags & O_NONBLOCK);
trace_binder_read_done(ret);
if (!list_empty(&proc->todo))
wake_up_interruptible(&proc->wait);
if (ret < 0) {
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
binder_debug(BINDER_DEBUG_READ_WRITE,
"%d:%d wrote %lld of %lld, read return %lld of %lld\n",
proc->pid, thread->pid,
(u64)bwr.write_consumed, (u64)bwr.write_size,
(u64)bwr.read_consumed, (u64)bwr.read_size);
// Copy the kernel-side bwr back to the user-space buffer ubuf
if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
out:
return ret;
}
This function copies the user-space binder_write_read structure into the kernel, dispatches to binder_thread_write() and/or binder_thread_read() depending on which buffer carries data, and finally copies the updated bwr back to user space.
2.4.4 binder_parse
binder_parse lives in framework/native/cmds/servicemanager/binder.c:
// framework/native/cmds/servicemanager/binder.c, line 204
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
#if TRACE
        fprintf(stderr, "%s:\n", cmd_name(cmd));
#endif
        switch (cmd) {
        case BR_NOOP:
            // No-op: nothing to do
            break;
        case BR_TRANSACTION_COMPLETE:
            break;
        case BR_INCREFS:
        case BR_ACQUIRE:
        case BR_RELEASE:
        case BR_DECREFS:
#if TRACE
            fprintf(stderr, "  %p, %p\n", (void *)ptr, (void *)(ptr + sizeof(void *)));
#endif
            ptr += sizeof(struct binder_ptr_cookie);
            break;
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn);
                res = func(bs, txn, &msg, &reply);
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
        case BR_REPLY: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: reply too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (bio) {
                bio_init_from_txn(bio, txn);
                bio = 0;
            } else {
                /* todo FREE BUFFER */
            }
            ptr += sizeof(*txn);
            r = 0;
            break;
        }
        case BR_DEAD_BINDER: {
            struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *) ptr;
            ptr += sizeof(binder_uintptr_t);
            // Binder death notification: invoke the registered callback
            death->func(bs, death->ptr);
            break;
        }
        case BR_FAILED_REPLY:
            r = -1;
            break;
        case BR_DEAD_REPLY:
            r = -1;
            break;
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }
    return r;
}
binder_parse's job is to parse the binder messages that were read back. Here the ptr argument points at readbuf (the buffer that was initialized with BC_ENTER_LOOPER) and func points to svcmgr_handler, so whenever a request arrives it is handed to svcmgr_handler.
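Both binder_parse and svcmgr_handler read fields straight out of binder_transaction_data; for reference, that structure is declared roughly as follows in the kernel UAPI binder header:
// Roughly as declared in the kernel UAPI binder header.
struct binder_transaction_data {
    union {
        __u32            handle;  // target handle (outgoing transactions)
        binder_uintptr_t ptr;     // target node cookie (incoming transactions)
    } target;
    binder_uintptr_t cookie;      // target object cookie
    __u32  code;                  // transaction code, e.g. SVC_MGR_ADD_SERVICE
    __u32  flags;
    pid_t  sender_pid;            // filled in by the driver
    uid_t  sender_euid;           // filled in by the driver
    binder_size_t data_size;      // bytes of payload data
    binder_size_t offsets_size;   // bytes of object offsets
    union {
        struct {
            binder_uintptr_t buffer;   // payload; used by bio_init_from_txn
            binder_uintptr_t offsets;  // offsets of flat_binder_object entries
        } ptr;
        __u8 buf[8];
    } data;
};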
Within the BR_TRANSACTION case, two functions deserve a closer look:
- the bio_init() function
- the bio_init_from_txn() function
The bio_init() function
// framework/native/cmds/servicemanager/binder.c
void bio_init(struct binder_io *bio, void *data,
              size_t maxdata, size_t maxoffs)
{
    size_t n = maxoffs * sizeof(size_t);

    if (n > maxdata) {
        // Not enough room for the offsets area: mark the binder_io as overflowed
        bio->flags = BIO_F_OVERFLOW;
        bio->data_avail = 0;
        bio->offs_avail = 0;
        return;
    }

    // Reserve the first n bytes of the buffer for offsets, the rest for data
    bio->data = bio->data0 = (char *) data + n;
    bio->offs = bio->offs0 = data;
    bio->data_avail = maxdata - n;
    bio->offs_avail = maxoffs;
    bio->flags = 0;
}
bio_init() carves the caller-supplied buffer (the rdata array in binder_parse) into an offsets area and a data area; it is used to prepare the empty reply binder_io.
The binder_io structure is defined in /frameworks/native/cmds/servicemanager/binder.h:
binder.h
//frameworks/native/cmds/servicemanager/binder.h, line 12
struct binder_io
{
char *data; /* pointer to read/write from */
binder_size_t *offs; /* array of offsets */
size_t data_avail; /* bytes available in data buffer */
size_t offs_avail; /* entries available in offsets array */
char *data0; /* start of the data buffer */
binder_size_t *offs0; /* start of the offsets buffer */
uint32_t flags;
uint32_t unused;
};
The bio_init_from_txn() function
// framework/native/cmds/servicemanager/binder.c, line 409
void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{
bio->data = bio->data0 = (char *)(intptr_t)txn->data.ptr.buffer;
bio->offs = bio->offs0 = (binder_size_t *)(intptr_t)txn->data.ptr.offsets;
bio->data_avail = txn->data_size;
bio->offs_avail = txn->offsets_size / sizeof(size_t);
bio->flags = BIO_F_SHARED;
}
It is quite simple: bio_init_from_txn points the binder_io's data and offsets fields at the buffers carried in the transaction (txn->data.ptr.buffer and txn->data.ptr.offsets), i.e. at the payload that was just read into readbuf.
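Once the binder_io has been initialized this way, svcmgr_handler consumes the payload with the bio_get_* helpers, which simply advance data and shrink data_avail. bio_get_uint32(), for example, looks roughly like this (simplified from the same binder.c):
// Simplified from the same binder.c: consume 'size' bytes (4-byte aligned)
// from the binder_io and return a pointer to them, or NULL on overflow.
static void *bio_get(struct binder_io *bio, size_t size)
{
    size = (size + 3) & (~3);   // keep 4-byte alignment

    if (bio->data_avail < size) {
        bio->data_avail = 0;
        bio->flags |= BIO_F_OVERFLOW;
        return NULL;
    } else {
        void *ptr = bio->data;
        bio->data += size;
        bio->data_avail -= size;
        return ptr;
    }
}

uint32_t bio_get_uint32(struct binder_io *bio)
{
    uint32_t *ptr = bio_get(bio, sizeof(*ptr));
    return ptr ? *ptr : 0;
}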
2.4.5 svcmgr_handler
// service_manager.c, line 244
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;

    if (txn->target.ptr != BINDER_SERVICE_MANAGER)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr, "invalid id %s\n", str8(s, len));
        return -1;
    }

    if (sehandle && selinux_status_updated() > 0) {
        struct selabel_handle *tmp_sehandle = selinux_android_service_context_handle();
        if (tmp_sehandle) {
            selabel_close(sehandle);
            sehandle = tmp_sehandle;
        }
    }

    switch (txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        // Get the service name
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        // Look up the service by name
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        // Get the service name
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        // Register the service
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
                           allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                  txn->sender_euid);
            return -1;
        }
        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
The code looks long, but ServiceManager essentially provides just three operations: looking up a service, registering a service, and listing all services.
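For the other side of the picture, a raw (non-libbinder) client talks to ServiceManager using the same bio helpers. The sketch below loosely follows the svcmgr_lookup() pattern from frameworks/native/cmds/servicemanager/bctest.c; helper names such as binder_call, bio_put_string16_x and SVC_MGR_NAME come from servicemanager's binder.{c,h}, and the exact signatures should be treated as approximate.
// Sketch of a raw client-side lookup, loosely following bctest.c.
// 'target' is the handle of ServiceManager itself, i.e. 0.
uint32_t lookup_service(struct binder_state *bs, uint32_t target, const char *name)
{
    uint32_t handle;
    unsigned iodata[512 / 4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);                 // strict-mode policy header
    bio_put_string16_x(&msg, SVC_MGR_NAME);  // "android.os.IServiceManager"
    bio_put_string16_x(&msg, name);          // the service being looked up

    // One BC_TRANSACTION / BR_REPLY round trip with code SVC_MGR_CHECK_SERVICE;
    // on the other end this lands in svcmgr_handler's CHECK_SERVICE case.
    if (binder_call(bs, &msg, &reply, target, SVC_MGR_CHECK_SERVICE))
        return 0;

    handle = bio_get_ref(&reply);            // handle from the flat_binder_object
    if (handle)
        binder_acquire(bs, handle);          // take a strong reference on it

    binder_done(bs, &msg, &reply);           // let the driver free its buffer
    return handle;
}
Back on the ServiceManager side, each registered service is tracked in an svcinfo record: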
// service_manager.c, line 128
struct svcinfo
{
    struct svcinfo *next;
    uint32_t handle;
    struct binder_death death;
    int allow_isolated;
    size_t len;
    uint16_t name[0];
};
Each service is represented by an svcinfo structure; its handle value is determined during registration by the process that hosts the service.
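To tie this back to the lookup path above: do_find_service() ultimately walks the global svclist of these svcinfo records by name, roughly as in find_svc() from the same service_manager.c:
// Simplified from service_manager.c: linear search of the global svclist
// by UTF-16 service name.
struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) {
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return NULL;
}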
3 Summary
- ServiceManager centrally manages every service in the system. It uses permission checks to control which processes may register services, and it looks services up by their string names.
- ServiceManager registers a death notification for every service registered with it, so when the process hosting a service dies, only ServiceManager needs to be told.
- Each client can learn about a service's process by querying ServiceManager, which avoids the load that would result from every client probing services directly.
ServiceManager startup flow:
- Open the binder driver and mmap a 128 KB memory mapping: binder_open();
- Notify the binder driver to make this process the Binder context manager (its daemon): binder_become_context_manager();
- Check SELinux permissions to decide whether a process may register or look up a given service;
- Enter the loop and wait for requests from clients;
- During service registration the service is identified by its name; if a service with the same name is already registered, the old binder node is released (binder_node_release), and that release fires the death-notification callback.
References
Android跨进程通信IPC之9——Binder之Framework层C++篇1