This article analyzes how getService is implemented at the Java layer, what getService ultimately returns, and how the returned object is used.
We start from the onRun function in Am.java:
@Override
public void onRun() throws Exception {
mAm = ActivityManager.getService();
mPm = IPackageManager.Stub.asInterface(ServiceManager.getService("package"));
...
}
There are actually two getService calls here: one is ActivityManager.getService, and the other fetches the PackageManagerService, which quite plainly goes straight through ServiceManager.getService("package").
Let's look at ActivityManager.getService first:
public static IActivityManager getService() {
return IActivityManagerSingleton.get();
}
private static final Singleton<IActivityManager> IActivityManagerSingleton =
new Singleton<IActivityManager>() {
@Override
protected IActivityManager create() {
final IBinder b = ServiceManager.getService(Context.ACTIVITY_SERVICE);
final IActivityManager am = IActivityManager.Stub.asInterface(b);
return am;
}
};
public static final String ACTIVITY_SERVICE = "activity";
As you can see, this too ultimately goes through ServiceManager.getService("activity"), just wrapped in a singleton.
So next we look at ServiceManager.getService:
public static IBinder getService(String name) {
try {
IBinder service = sCache.get(name);
if (service != null) {
return service;
} else {
return Binder.allowBlocking(rawGetService(name));
}
} catch (RemoteException e) {
Log.e(TAG, "error in getService", e);
}
return null;
}
If the cache hits, the service is returned from the cache; otherwise it is fetched via rawGetService(name):
private static IBinder rawGetService(String name) throws RemoteException {
..
final IBinder binder = getIServiceManager().getService(name);
...
return binder;
}
So the implementation boils down to getIServiceManager().getService(name). From the end of the previous article, Binder study [2]: user processes talking to ServiceManager: the addService implementation, we know that:
getIServiceManager() returns new ServiceManagerProxy(new BinderProxy());
where the BinderProxy object holds a nativeData, and nativeData's member mObject corresponds to BpBinder(0).
So ServiceManager.getService ultimately amounts to new ServiceManagerProxy(new BinderProxy()).getService.
So let's look at ServiceManagerProxy.getService(name):
public IBinder getService(String name) throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
IBinder binder = reply.readStrongBinder();
reply.recycle();
data.recycle();
return binder;
}
We know that a Java-layer Parcel is backed by a native-layer Parcel, so all of this data ends up in the native Parcel's mData. Take writeInterfaceToken, for example:
public final void writeInterfaceToken(String interfaceName) {
nativeWriteInterfaceToken(mNativePtr, interfaceName);
}
Its implementation is:
{"nativeWriteInterfaceToken", "(JLjava/lang/String;)V", (void*)android_os_Parcel_writeInterfaceToken},
static void android_os_Parcel_writeInterfaceToken(JNIEnv* env, jclass clazz, jlong nativePtr,
jstring name)
{
Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
if (parcel != NULL) {
// In the current implementation, the token is just the serialized interface name that
// the caller expects to be invoking
const jchar* str = env->GetStringCritical(name, 0);
if (str != NULL) {
parcel->writeInterfaceToken(String16(
reinterpret_cast<const char16_t*>(str),
env->GetStringLength(name)));
env->ReleaseStringCritical(name, str);
}
}
}
The native Parcel::writeInterfaceToken looks like this:
status_t Parcel::writeInterfaceToken(const String16& interface)
{
writeInt32(IPCThreadState::self()->getStrictModePolicy() |
STRICT_MODE_PENALTY_GATHER);
// currently the interface identification token is just its name as a string
return writeString16(interface);
}
The functions it calls were analyzed in the previous article. Once this function finishes, the buffer pointed to by the native Parcel's mData looks like:
policy | len0 | "android.os.IServiceManager" |
Next comes data.writeString(name);, which writes the name of the Service we want, e.g. "package". After that write the data looks like:
policy | len0 | "android.os.IServiceManager" | len1 | "package" |
And then comes:
mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
IBinder binder = reply.readStrongBinder();
This carries out the actual transaction to obtain the reply, then reads the corresponding Service out of it with reply.readStrongBinder() and returns it to the caller.
Let's look at mRemote.transact first. From ServiceManagerProxy's constructor:
public ServiceManagerProxy(IBinder remote) {
mRemote = remote;
}
we know that mRemote is a BinderProxy object, so this calls BinderProxy.transact, with GET_SERVICE_TRANSACTION as the first argument:
public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
...
return transactNative(code, data, reply, flags);
}
The actual transact work then moves down into the native layer:
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
Parcel* data = parcelForJavaObject(env, dataObj); // fetch the native data Parcel from the Java Parcel
Parcel* reply = parcelForJavaObject(env, replyObj); // fetch the native reply Parcel from the Java Parcel
IBinder* target = getBPNativeData(env, obj)->mObject.get(); // get BpBinder(0) from the BinderProxy
status_t err = target->transact(code, *data, reply, flags); // call BpBinder::transact
if (err == NO_ERROR) {
return JNI_TRUE;
} else if (err == UNKNOWN_TRANSACTION) {
return JNI_FALSE;
}
}
The call chain continues as follows:
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
...
// mHandle is 0 here, which means the peer is the ServiceManager process
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
}
status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
...
// the data to transmit is packed into a binder_transaction_data structure
// a BC_TRANSACTION command is placed at the head of the data in mOut
// handle == 0 is written into the binder_transaction_data
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
...
if ((flags & TF_ONE_WAY) == 0) {
// note that as long as the call is not oneway, a reply Parcel is always passed down, whether or not the caller supplied one
if (reply) {
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
} else {
err = waitForResponse(NULL, NULL);
}
return err;
}
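For reference, the structure that writeTransactionData fills in is sketched below; it is a simplified rendering of the UAPI struct binder_transaction_data, with the binder_uintptr_t / binder_size_t typedefs written as fixed-width integers for brevity. The handle, code, data pointer and offsets pointer mentioned in the comments above all live in this structure:
#include <stdint.h>

// Simplified sketch of struct binder_transaction_data from the binder UAPI header.
struct binder_transaction_data_sketch {
    union {
        uint32_t handle;        // target handle; 0 means ServiceManager
        uint64_t ptr;           // target binder node pointer (used on the BR_TRANSACTION side)
    } target;
    uint64_t cookie;            // target cookie (the BBinder address on the server side)
    uint32_t code;              // e.g. GET_SERVICE_TRANSACTION
    uint32_t flags;             // e.g. TF_ONE_WAY, TF_STATUS_CODE
    int32_t  sender_pid;
    uint32_t sender_euid;
    uint64_t data_size;         // size of the serialized Parcel data
    uint64_t offsets_size;      // size of the object-offset array
    struct {
        uint64_t buffer;        // points at the Parcel's mData
        uint64_t offsets;       // points at the flat_binder_object offsets
    } data;
};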
We also know that the actual conversation with the binder driver happens in waitForResponse:
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
...
while (1) {
//
if ((err=talkWithDriver()) < NO_ERROR) break;
//
cmd = (uint32_t)mIn.readInt32();
switch (cmd) {
...
case BR_REPLY: // following the previous article's reasoning, when this transaction returns, the cmd the binder driver writes back to us is BR_REPLY
{
binder_transaction_data tr;
err = mIn.read(&tr, sizeof(tr));
if (err != NO_ERROR) goto finish;
if (reply) {
if ((tr.flags & TF_STATUS_CODE) == 0) { // the binder transaction succeeded
reply->ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t),
freeBuffer, this);
} else { // an error occurred: read the error code and free the buffer
err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
}
} else { // for a oneway call the buffer is freed here; this actually just queues a BC_FREE_BUFFER command plus the buffer address into mOut, and it is delivered to the binder driver along with the next binder transaction. Because the command and buffer sit at the front of mOut, they are processed before the next regular binder command.
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
continue;
}
}
goto finish;
default:
err = executeCommand(cmd); // other cmds are handled here; BR_SPAWN_LOOPER, for example, is executed via this path
if (err != NO_ERROR) goto finish;
break;
}
}
finish:
if (err != NO_ERROR) {
if (acquireResult) *acquireResult = err;
if (reply) reply->setError(err);
mLastError = err;
}
return err;
}
Before getting into talkWithDriver, recall from the previous article that binder_thread_read in the binder driver has this piece of code right before it returns:
if (proc->requested_threads == 0 &&
list_empty(&thread->proc->waiting_threads) &&
proc->requested_threads_started < proc->max_threads &&
(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
/*spawn a new thread if we leave this out */) {
proc->requested_threads++;
binder_inner_proc_unlock(proc);
if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
return -EFAULT;
binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
}
This says: if the current proc has no idle binder thread (and no spawn request outstanding) and the number of started binder threads has not reached max_threads, a BR_SPAWN_LOOPER command is written into the buffer, to be executed by user space after we return there. When the condition is not met, the first cmd in the buffer stays BR_NOOP, which does nothing; it is effectively a placeholder whose main purpose is to be overwritten by BR_SPAWN_LOOPER when needed.
This BR_SPAWN_LOOPER command is executed in executeCommand, as follows:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
...
case BR_SPAWN_LOOPER:
mProcess->spawnPooledThread(false);
break;
return result;
}
void ProcessState::spawnPooledThread(bool isMain)
{
if (mThreadPoolStarted) {
String8 name = makeBinderThreadName();
ALOGV("Spawning new pooled thread, name=%s\n", name.string());
sp<Thread> t = new PoolThread(isMain);
t->run(name.string());
}
}
As you can see, ProcessState spawns a new binder thread here.
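For context, the PoolThread created here is roughly the small Thread subclass sketched below; it simply parks the new thread in IPCThreadState::joinThreadPool, where it loops talking to the driver waiting for work (a simplified sketch based on ProcessState.cpp, not the verbatim source):
#include <utils/Thread.h>
#include <binder/IPCThreadState.h>

using namespace android;

// Simplified sketch of ProcessState's PoolThread; isMain is false for threads
// spawned in response to BR_SPAWN_LOOPER.
class PoolThread : public Thread
{
public:
    explicit PoolThread(bool isMain) : mIsMain(isMain) { }

protected:
    virtual bool threadLoop()
    {
        // Registers with the binder driver (BC_REGISTER_LOOPER for spawned
        // threads) and then blocks in talkWithDriver waiting for work.
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;   // run once; joinThreadPool only returns on teardown
    }

    const bool mIsMain;
};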
Now back to the main topic: talkWithDriver sends the GET_SERVICE_TRANSACTION out, and the Service is then retrieved from the reply.
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
...
bwr.write_buffer = (uintptr_t)mOut.data();
bwr.read_buffer = (uintptr_t)mIn.data();
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
...
if (err >= NO_ERROR) {
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < mOut.dataSize())
mOut.remove(0, bwr.write_consumed);
else {
mOut.setDataSize(0);
processPostWriteDerefs();
}
}
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed); // read_consumed is the size of the data the driver wrote into the read buffer
mIn.setDataPosition(0);
}
return NO_ERROR;
}
return err;
}
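The bwr filled in above is the binder_write_read structure handed to ioctl(BINDER_WRITE_READ). For reference, a simplified sketch of it (with binder_size_t / binder_uintptr_t again written as fixed-width integers):
#include <stdint.h>

// Simplified sketch of struct binder_write_read from the binder UAPI header.
// One ioctl(BINDER_WRITE_READ) can both send commands (write_*) and receive
// returns (read_*); the driver reports how much of each it consumed/produced.
struct binder_write_read_sketch {
    uint64_t write_size;      // bytes available in write_buffer (mOut.dataSize())
    uint64_t write_consumed;  // bytes of write_buffer the driver actually processed
    uint64_t write_buffer;    // user-space address of mOut.data()
    uint64_t read_size;       // capacity of read_buffer (mIn)
    uint64_t read_consumed;   // bytes the driver wrote back into read_buffer
    uint64_t read_buffer;     // user-space address of mIn.data()
};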
Next, ioctl(fd, BINDER_WRITE_READ, &bwr) traps into the kernel and runs binder_ioctl. Much of what follows is essentially the same as in the previous article, so let's skim through it:
1. In binder_ioctl, since the cmd is BINDER_WRITE_READ, binder_ioctl_write_read is executed; once it finishes we return to user space.
2. In binder_ioctl_write_read, since both write_size and read_size are greater than 0, binder_thread_write runs first and then binder_thread_read waits for the reply.
3. Since the command in bwr's write buffer is BC_TRANSACTION, binder_thread_write calls the binder_transaction function.
4. In binder_transaction, handle == 0 is read out of the binder_transaction_data, which identifies the target_proc as the servicemanager process. A suitably sized buffer is then allocated from svcmgr's binder buffer and the transaction data is copied into it; the data carries the code GET_SERVICE_TRANSACTION. Since this transaction carries no binder object, binder_translate_binder is not executed this time. A binder_transaction and an associated binder_work of type BINDER_WORK_TRANSACTION are then created, and binder_proc_transaction picks a suitable thread in the target proc, queues the binder work on its todo list, and wakes it up.
5. The target proc's (i.e. svcmgr's) main thread wakes up, finds the binder work item, processes it, and when done sends a reply back to the other end.
6. The thread that called getService wakes up in binder_thread_read and reads the reply.
Here we mainly look at steps 5 and 6.
Step 5:
The servicemanager process normally sits in binder_thread_read, which you can confirm from its stack:
××××:/ # ps -e |grep servicemanager
system 569 1 10248 1816 binder_thread_read 7c98f30e04 S servicemanager
××××:/ # cat /proc/569/task/569/stack
[<0000000000000000>] __switch_to+0x88/0x94
[<0000000000000000>] binder_thread_read+0x328/0xe60
[<0000000000000000>] binder_ioctl_write_read+0x18c/0x2d0
[<0000000000000000>] binder_ioctl+0x1c0/0x5fc
[<0000000000000000>] do_vfs_ioctl+0x48c/0x564
[<0000000000000000>] SyS_ioctl+0x60/0x88
[<0000000000000000>] el0_svc_naked+0x24/0x28
[<0000000000000000>] 0xffffffffffffffff
When it is woken up, it continues executing binder_thread_read:
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
{
ret = binder_wait_for_work(thread, wait_for_proc_work); // this is normally where the thread waits; after being woken up it continues from here
thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
while (1) {
uint32_t cmd;
// pick whichever todo list is non-empty
if (!binder_worklist_empty_ilocked(&thread->todo))
list = &thread->todo;
else if (!binder_worklist_empty_ilocked(&proc->todo) &&
wait_for_proc_work)
list = &proc->todo;
w = binder_dequeue_work_head_ilocked(list); // dequeue one binder work item
switch (w->type) {
case BINDER_WORK_TRANSACTION: { // we already know the work type is this one
binder_inner_proc_unlock(proc);
t = container_of(w, struct binder_transaction, work);
} break;
if (t->buffer->target_node) {
struct binder_node *target_node = t->buffer->target_node;
struct binder_priority node_prio;
tr.target.ptr = target_node->ptr;
tr.cookie = target_node->cookie;
node_prio.sched_policy = target_node->sched_policy;
node_prio.prio = target_node->min_priority;
binder_transaction_priority(current, t, node_prio,
target_node->inherit_rt);
cmd = BR_TRANSACTION;
} else { }
tr.code = t->code;
tr.flags = t->flags;
tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
t_from = binder_get_txn_from(t);
if (t_from) {
struct task_struct *sender = t_from->proc->tsk;
tr.sender_pid = task_tgid_nr_ns(sender,
task_active_pid_ns(current));
} else {
tr.sender_pid = 0;
}
tr.data_size = t->buffer->data_size;
tr.offsets_size = t->buffer->offsets_size;
tr.data.ptr.buffer = (binder_uintptr_t)
((uintptr_t)t->buffer->data +
binder_alloc_get_user_buffer_offset(&proc->alloc));
tr.data.ptr.offsets = tr.data.ptr.buffer +
ALIGN(t->buffer->data_size,
sizeof(void *));
if (put_user(cmd, (uint32_t __user *)ptr)) {
if (t_from)
binder_thread_dec_tmpref(t_from);
binder_cleanup_transaction(t, "put_user failed",
BR_FAILED_REPLY);
return -EFAULT;
}
ptr += sizeof(uint32_t);
if (copy_to_user(ptr, &tr, sizeof(tr))) {
if (t_from)
binder_thread_dec_tmpref(t_from);
binder_cleanup_transaction(t, "copy_to_user failed",
BR_FAILED_REPLY);
return -EFAULT;
}
ptr += sizeof(tr);
...
}
After binder_thread_read obtains a binder work item of type BINDER_WORK_TRANSACTION, it builds a binder_transaction_data whose data pointers refer to the binder_transaction's data buffer, then writes a BR_TRANSACTION cmd followed by this binder_transaction_data back into the read buffer. It then returns from binder_thread_read to binder_ioctl_write_read, back to binder_ioctl, and finally back to svcmgr's binder_loop function:
void binder_loop(struct binder_state *bs, binder_handler func)
{
...
for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (uintptr_t) readbuf;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
}
}
Next, binder_parse parses the data in the read buffer:
int binder_parse(struct binder_state *bs, struct binder_io *bio,
uintptr_t ptr, size_t size, binder_handler func)
{
int r = 1;
uintptr_t end = ptr + (uintptr_t) size;
while (ptr < end) {
uint32_t cmd = *(uint32_t *) ptr;
ptr += sizeof(uint32_t);
switch(cmd) {
case BR_NOOP: // as noted earlier, the first cmd is BR_NOOP; it just breaks so the second cmd gets handled
break;
case BR_TRANSACTION: { // the second cmd in the read buffer is BR_TRANSACTION
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
if (func) {
unsigned rdata[256/4];
struct binder_io msg;
struct binder_io reply;
int res;
//initialize reply so it can be sent back later
bio_init(&reply, rdata, sizeof(rdata), 4);
// initialize msg so that its data points at the binder_transaction_data's data
bio_init_from_txn(&msg, txn);
// call svcmgr_handler to process the data in msg and fill in reply
res = func(bs, txn, &msg, &reply);
if (txn->flags & TF_ONE_WAY) {
binder_free_buffer(bs, txn->data.ptr.buffer);
} else { // not oneway, so a reply must be sent
binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
}
}
ptr += sizeof(*txn);
break;
}
Next, svcmgr_handler processes the data carried in the binder_transaction_data:
int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)
{
uint32_t handle;
...
strict_policy = bio_get_uint32(msg); // read the strict-mode policy
s = bio_get_string16(msg, &len); // read the interface name
//check that it is "android.os.IServiceManager"
if ((len != (sizeof(svcmgr_id) / 2)) ||
memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
return -1;
}
//read the command recorded in the binder_transaction_data
switch(txn->code) {
case SVC_MGR_GET_SERVICE: // this is the get-service command
case SVC_MGR_CHECK_SERVICE:
s = bio_get_string16(msg, &len); // the name of the service to look up
if (s == NULL) {
return -1;
}
// look up the service and get the corresponding handle
handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid);
if (!handle)
break;
bio_put_ref(reply, handle); // write the handle that was found into reply
return 0;
case SVC_MGR_ADD_SERVICE:
...
}
So what the lookup actually produces is a uint32_t handle, which is wrapped into a flat_binder_object, written into reply, and finally sent back with send_reply.
The write looks like this:
void bio_put_ref(struct binder_io *bio, uint32_t handle)
{
struct flat_binder_object *obj;
if (handle)
obj = bio_alloc_obj(bio);
else
obj = bio_alloc(bio, sizeof(*obj));
if (!obj)
return;
obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
obj->hdr.type = BINDER_TYPE_HANDLE;
obj->handle = handle;
obj->cookie = 0;
}
Note that for getService, what gets written into the reply is a flat_binder_object of type BINDER_TYPE_HANDLE.
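For reference, the flat_binder_object that bio_put_ref fills in has roughly the layout sketched below, again with the binder_* typedefs written as fixed-width integers; bio_put_ref sets hdr.type = BINDER_TYPE_HANDLE and handle to the service's handle:
#include <stdint.h>

// Simplified sketch of struct flat_binder_object from the binder UAPI header.
struct flat_binder_object_sketch {
    struct {
        uint32_t type;      // BINDER_TYPE_BINDER / BINDER_TYPE_HANDLE / BINDER_TYPE_FD / ...
    } hdr;
    uint32_t flags;         // e.g. 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS
    union {
        uint64_t binder;    // for BINDER_TYPE_BINDER: pointer to the local object's weak-ref table
        uint32_t handle;    // for BINDER_TYPE_HANDLE: a reference known to the target proc
    };
    uint64_t cookie;        // for BINDER_TYPE_BINDER: the BBinder (Service) address itself
};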
Next, do_find_service:
uint32_t do_find_service(const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
struct svcinfo *si = find_svc(s, len);
...
return si->handle;
}
struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
struct svcinfo *si;
for (si = svclist; si; si = si->next) {
if ((len == si->len) &&
!memcmp(s16, si->name, len * sizeof(uint16_t))) {
return si;
}
}
return NULL;
}
This simply walks svclist looking for the Service with the given name and returns its handle. The handle corresponds to a ref in svcmgr's binder_proc (kept in refs_by_node and refs_by_desc), and that ref points to the binder node that was created when the Service was added via ServiceManager addService.
From the addService analysis in the previous article, we know that every Service added through ServiceManager gets its own handle in svcmgr's binder_proc; getService writes this handle into the reply and sends it to the requesting process.
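For reference, each entry on svclist is a struct svcinfo, which (simplified, leaving out the death-notification bookkeeping) looks roughly like this:
#include <stdint.h>
#include <stddef.h>

// Simplified sketch of struct svcinfo from service_manager.c
// (the real struct also carries a binder_death, allow_isolated, etc.).
struct svcinfo {
    struct svcinfo *next;   // singly linked list rooted at svclist
    uint32_t handle;        // handle recorded when the service was added
    size_t len;             // length of name, in 16-bit code units
    uint16_t name[0];       // UTF-16 service name stored inline after the struct
};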
Next, binder_send_reply pushes the reply to the requesting process through binder_ioctl. The detailed send path was covered in the previous article; in short, it again goes through binder_transaction, finds the thread that initiated the binder call, allocates a binder buffer in that thread's binder_proc, copies the reply into it, creates a binder work item, inserts it into the todo list of the initiating binder thread, and wakes that thread up. With that, svcmgr's side of the work is done.
Because the transaction data contains a flat_binder_object of type BINDER_TYPE_HANDLE, binder_transaction handles it like this:
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {
struct flat_binder_object *fp;
fp = to_flat_binder_object(hdr);
ret = binder_translate_handle(fp, t, thread);
if (ret < 0) {
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
goto err_translate_failed;
}
} break;
The key piece in there is binder_translate_handle:
static int binder_translate_handle(struct flat_binder_object *fp,
struct binder_transaction *t,
struct binder_thread *thread)
{
struct binder_proc *proc = thread->proc; // svcmgr's binder_proc
struct binder_proc *target_proc = t->to_proc;
struct binder_node *node;
struct binder_ref_data src_rdata;
int ret = 0;
// in svcmgr's binder_proc, find the binder ref for this handle, then the binder node that ref points to
node = binder_get_node_from_ref(proc, fp->handle,
fp->hdr.type == BINDER_TYPE_HANDLE, &src_rdata);
// If the binder node lives in the very process that called getService, convert it into a local binder object:
// node->cookie is effectively the Service's address, which can be used directly within the same process.
// If they are in different processes, the Service address in node->cookie cannot be touched directly.
if (node->proc == target_proc) {
if (fp->hdr.type == BINDER_TYPE_HANDLE)
fp->hdr.type = BINDER_TYPE_BINDER;
else
fp->hdr.type = BINDER_TYPE_WEAK_BINDER;
fp->binder = node->ptr; // points to Service->mRefs
fp->cookie = node->cookie; // points to the Service object itself
if (node->proc)
binder_inner_proc_lock(node->proc);
// node->local_strong_refs++
binder_inc_node_nilocked(node,
fp->hdr.type == BINDER_TYPE_BINDER,
0, NULL);
if (node->proc)
binder_inner_proc_unlock(node->proc);
binder_node_unlock(node);
} else { // the getService caller and the Service are in different processes; only a reference can be returned
struct binder_ref_data dest_rdata;
binder_node_unlock(node);
// look up a binder_ref in target_proc for the Service's binder node
// if none exists, create one pointing at the Service's node, with a handle that belongs to the target proc
// if ref->data.strong == 0, node->local_strong_refs++
// ref->data.strong++
ret = binder_inc_ref_for_node(target_proc, node,
fp->hdr.type == BINDER_TYPE_HANDLE,
NULL, &dest_rdata);
if (ret)
goto done;
fp->binder = 0;
fp->handle = dest_rdata.desc; // this handle now belongs to the process that called getService
fp->cookie = 0;
}
done:
binder_put_node(node);
return ret;
}
In short, the Service is handled differently depending on whether it lives in the same process as the getService caller.
Let's note the key data at send_reply time:
void binder_send_reply(struct binder_state *bs,
struct binder_io *reply,
binder_uintptr_t buffer_to_free,
int status)
{
struct {
uint32_t cmd_free;
binder_uintptr_t buffer;
uint32_t cmd_reply;
struct binder_transaction_data txn;
} __attribute__((packed)) data;
data.cmd_free = BC_FREE_BUFFER;
data.buffer = buffer_to_free;
data.cmd_reply = BC_REPLY;
data.txn.target.ptr = 0;
data.txn.cookie = 0;
data.txn.code = 0;
if (status) {
data.txn.flags = TF_STATUS_CODE;
data.txn.data_size = sizeof(int);
data.txn.offsets_size = 0;
data.txn.data.ptr.buffer = (uintptr_t)&status;
data.txn.data.ptr.offsets = 0;
} else {
data.txn.flags = 0;
data.txn.data_size = reply->data - reply->data0;
data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
}
binder_write(bs, &data, sizeof(data));
}
int binder_write(struct binder_state *bs, void *data, size_t len)
{
struct binder_write_read bwr;
int res;
bwr.write_size = len;
bwr.write_consumed = 0;
bwr.write_buffer = (uintptr_t) data;
bwr.read_size = 0;
bwr.read_consumed = 0;
bwr.read_buffer = 0;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
if (res < 0) {
fprintf(stderr,"binder_write: ioctl failed (%s)\n",
strerror(errno));
}
return res;
}
There are two CMDs in total, a BC_FREE_BUFFER and a BC_REPLY, plus data.txn.code = 0; in binder_transaction this code is assigned to binder_transaction->code.
Now we return to the process that initiated getService. As mentioned earlier, it entered binder_thread_read from binder_ioctl_write_read (because bwr.read_size > 0) and is waiting there. We have analyzed binder_thread_read several times already; the only difference this time is that the reply carries the data we actually care about.
In binder_thread_read, the following is written into the read buffer:
BR_NOOP | BR_REPLY | binder_transaction_data tr; |
where
tr.data.ptr.buffer = (binder_uintptr_t)
((uintptr_t)t->buffer->data +
binder_alloc_get_user_buffer_offset(&proc->alloc));
points at the data recorded in the binder buffer (which in this case contains just a single flat_binder_object).
Execution then returns up through binder_ioctl_write_read, then binder_ioctl, back to user space in IPCThreadState::talkWithDriver, and then to IPCThreadState::waitForResponse, which reads the reply:
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
...
while (1) {
//
if ((err=talkWithDriver()) < NO_ERROR) break;
//
cmd = (uint32_t)mIn.readInt32();
switch (cmd) {
...
case BR_REPLY: // following the previous article's reasoning, when this transaction returns, the cmd the binder driver writes back to us is BR_REPLY
{
binder_transaction_data tr;
// read the binder_transaction_data out of mIn
err = mIn.read(&tr, sizeof(tr));
if (reply) {
if ((tr.flags & TF_STATUS_CODE) == 0) { // the binder transaction succeeded
reply->ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t),
freeBuffer, this);
} else { // an error occurred: read the error code and free the buffer
err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
}
} else { // for a oneway call the buffer is freed here; this actually just queues a BC_FREE_BUFFER command plus the buffer address into mOut, and it is delivered to the binder driver along with the next binder transaction. Because the command and buffer sit at the front of mOut, they are processed before the next regular binder command.
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
continue;
}
}
goto finish;
default:
err = executeCommand(cmd); // other cmds are handled here; BR_SPAWN_LOOPER, for example, is executed via this path
if (err != NO_ERROR) goto finish;
break;
}
}
finish:
if (err != NO_ERROR) {
if (acquireResult) *acquireResult = err;
if (reply) reply->setError(err);
mLastError = err;
}
return err;
}
We know the reply is a single flat_binder_object stored in tr.data.ptr.buffer; the main thing to look at is how reply->ipcSetDataReference picks the reply up from that buffer:
void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,
const binder_size_t* objects, size_t objectsCount, release_func relFunc, void* relCookie)
{
binder_size_t minOffset = 0;
freeDataNoInit();
mError = NO_ERROR;
mData = const_cast<uint8_t*>(data);
mDataSize = mDataCapacity = dataSize;
mDataPos = 0;
mObjects = const_cast<binder_size_t*>(objects);
mObjectsSize = mObjectsCapacity = objectsCount;
mNextObjectHint = 0;
mObjectsSorted = false;
mOwner = relFunc;
mOwnerCookie = relCookie;
scanForFds();
}
The function sets the reply's data buffer, data size and objects array, and finally calls scanForFds to check whether any of the objects passed along is of FD type:
void Parcel::scanForFds() const
{
bool hasFds = false;
for (size_t i = 0; i < mObjectsSize; i++) {
const flat_binder_object* flat
= reinterpret_cast<const flat_binder_object*>(mData + mObjects[i]);
if (flat->hdr.type == BINDER_TYPE_FD) {
hasFds = true;
break;
}
}
mHasFds = hasFds;
mFdsKnown = true;
}
OK, at this point the reply data is fully ready, and we can return from IPCThreadState::waitForResponse, back to IPCThreadState::transact, to BpBinder::transact, to android_os_BinderProxy_transact, to BinderProxy.transactNative, to BinderProxy.transact, and finally back up to the Java-layer ServiceManagerProxy.getService(name) where we started:
public IBinder getService(String name) throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
// transact is done and reply is ready; now read it
IBinder binder = reply.readStrongBinder();
reply.recycle();
data.recycle();
return binder;
}
Since reply already holds the data, let's see how the IBinder for the Service is read out of it:
public final IBinder readStrongBinder() {
return nativeReadStrongBinder(mNativePtr);
}
{"nativeReadStrongBinder", "(J)Landroid/os/IBinder;", (void*)android_os_Parcel_readStrongBinder},
static jobject android_os_Parcel_readStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr)
{
Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
if (parcel != NULL) {
return javaObjectForIBinder(env, parcel->readStrongBinder());
}
return NULL;
}
As you can see, it returns javaObjectForIBinder(...). Let's look at the argument, parcel->readStrongBinder(), first:
sp<IBinder> Parcel::readStrongBinder() const
{
sp<IBinder> val;
readNullableStrongBinder(&val);
return val;
}
status_t Parcel::readNullableStrongBinder(sp<IBinder>* val) const
{
return unflatten_binder(ProcessState::self(), *this, val);
}
unflatten_binder is what actually interprets the data in the Parcel:
status_t unflatten_binder(const sp<ProcessState>& proc,
const Parcel& in, sp<IBinder>* out)
{
const flat_binder_object* flat = in.readObject(false);
if (flat) {
switch (flat->hdr.type) {
case BINDER_TYPE_BINDER:
*out = reinterpret_cast<IBinder*>(flat->cookie);
return finish_unflatten_binder(NULL, *flat, in);
case BINDER_TYPE_HANDLE:
*out = proc->getStrongProxyForHandle(flat->handle);
return finish_unflatten_binder(
static_cast<BpBinder*>(out->get()), *flat, in);
}
}
return BAD_TYPE;
}
From the binder_translate_handle function above, we know that:
1. If the getService caller and the Service are in the same process, flat->hdr.type is BINDER_TYPE_BINDER, and *out = reinterpret_cast<IBinder*>(flat->cookie) yields the Service (a BBinder) itself.
2. Otherwise, flat->hdr.type is BINDER_TYPE_HANDLE, and *out = proc->getStrongProxyForHandle(flat->handle); creates a BpBinder(flat->handle) and converts it into an IBinder.
So android_os_Parcel_readStrongBinder returns one of two things:
1. javaObjectForIBinder(env, Service);
2. javaObjectForIBinder(env, BpBinder(flat->handle));
A note about the Service here: if the Service was added from the native layer, in the style of defaultServiceManager()->addService("drm", new DrmManagerService()), then it is a BBinder but not a JavaBBinder;
whereas a Service added from the Java layer via ServiceManager.addService(name, service) is a BBinder that is also a JavaBBinder.
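For reference, the JavaBBinder mentioned here is roughly the wrapper sketched below (a simplified sketch based on android_util_Binder.cpp, not the verbatim source): a BBinder that holds a global reference to the Java-layer Binder (Service) object, which is exactly what javaObjectForIBinder hands back in the same-process case.
#include <jni.h>
#include <binder/Binder.h>

// Simplified sketch of JavaBBinder: the native BBinder created when a Java
// Binder object is flattened into a Parcel.
class JavaBBinderSketch : public android::BBinder {
public:
    JavaBBinderSketch(JNIEnv* env, jobject javaBinder)
        : mObject(env->NewGlobalRef(javaBinder)) {
        env->GetJavaVM(&mVM);
    }

    // The Java-layer Binder (the Service itself); javaObjectForIBinder()
    // returns this object directly when the IBinder is local.
    jobject object() const { return mObject; }

    // The real JavaBBinder also overrides onTransact() to call back into
    // Java (Binder.execTransact) via JNI; omitted here.

private:
    JavaVM* mVM = nullptr;
    jobject mObject;
};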
With that in mind, let's look at javaObjectForIBinder:
jobject javaObjectForIBinder(JNIEnv* env, const sp& val)
{
if (val == NULL) return NULL;
if (val->checkSubclass(&gBinderOffsets)) { // val is a JavaBBinder object
// It's a JavaBBinder created by ibinderForJavaObject. Already has Java object.
// the JavaBBinder holds the Java-layer Service object
jobject object = static_cast<JavaBBinder*>(val.get())->object();
return object; // return the Java Service object directly
}
BinderProxyNativeData* nativeData = gNativeDataCache;
if (nativeData == nullptr) {
nativeData = new BinderProxyNativeData();
}
// gNativeDataCache is now logically empty.
jobject object = env->CallStaticObjectMethod(gBinderProxyOffsets.mClass,
gBinderProxyOffsets.mGetInstance, (jlong) nativeData, (jlong) val.get());
BinderProxyNativeData* actualNativeData = getBPNativeData(env, object);
if (actualNativeData == nativeData) {
// New BinderProxy; we still have exclusive access.
nativeData->mOrgue = new DeathRecipientList;
nativeData->mObject = val;
gNativeDataCache = nullptr;
++gNumProxies;
if (gNumProxies >= gProxiesWarned + PROXY_WARN_INTERVAL) {
ALOGW("Unexpectedly many live BinderProxies: %d\n", gNumProxies);
gProxiesWarned = gNumProxies;
}
} else {
// nativeData wasn't used. Reuse it the next time.
gNativeDataCache = nativeData;
}
return object;
}
So, for a Service added from the Java layer:
1. When the getService caller and the Service are in the same process, getService returns the Java-layer Service object itself.
2. When they are in different processes, getService returns a Java BinderProxy object; the BinderProxy records a BinderProxyNativeData, which in turn holds the BpBinder(handle) that references the Service.
Before the object obtained from getService can be used, it still has to be converted with XXXInterface.Stub.asInterface(obj):
public static android.content.pm.IPackageManager asInterface(android.os.IBinder obj)
{
if ((obj==null)) {
return null;
}
android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
if (((iin!=null)&&(iin instanceof android.content.pm.IPackageManager))) {
return ((android.content.pm.IPackageManager)iin);
}
return new android.content.pm.IPackageManager.Stub.Proxy(obj);
}
For example, if obj is the Service object itself, queryLocalInterface finds the local interface and asInterface returns the Service itself; method calls on it then go straight to the implementation, without involving the binder driver.
If obj is instead a Java-layer BinderProxy, asInterface returns an XXXInterface.Stub.Proxy(BinderProxy), and method calls go through the binder driver, for example:
private static class Proxy implements android.content.pm.IPackageManager
{
private android.os.IBinder mRemote;
Proxy(android.os.IBinder remote)
{
mRemote = remote;
}
@Override public void checkPackageStartable(java.lang.String packageName, int userId) throws android.os.RemoteException
{
android.os.Parcel _data = android.os.Parcel.obtain();
android.os.Parcel _reply = android.os.Parcel.obtain();
try {
_data.writeInterfaceToken(DESCRIPTOR);
_data.writeString(packageName);
_data.writeInt(userId);
mRemote.transact(Stub.TRANSACTION_checkPackageStartable, _data, _reply, 0);
_reply.readException();
}
finally {
_reply.recycle();
_data.recycle();
}
}
As you can see, calling checkPackageStartable through XXXInterface.Stub.Proxy(BinderProxy) really just calls the BinderProxy's transact and then reads the result out of the reply.
One remaining case to analyze:
if the Service was added from the native layer, in the style of defaultServiceManager()->addService("drm", new DrmManagerService()), then it is a BBinder but not a JavaBBinder;
so what object does the native defaultServiceManager()->getService return?
From the previous article we know that:
gDefaultServiceManager = new BpServiceManager(new BpBinder(0));
so defaultServiceManager()->getService is implemented by BpServiceManager::getService:
virtual sp<IBinder> getService(const String16& name) const
{
sp<IBinder> svc = checkService(name);
if (svc != NULL) return svc;
}
virtual sp<IBinder> checkService( const String16& name) const
{
Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name);
remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
return reply.readStrongBinder();
}
Given the Parcel::readStrongBinder implementation above, for a native-layer getService:
1. If the caller and the Service are in the same process, it returns the Service itself, which is a BBinder instance;
2. If they are in different processes, it returns getStrongProxyForHandle(handle), which is effectively a BpBinder(handle).
Before use, similarly to the Java side, the result first goes through an interface_cast:
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
return INTERFACE::asInterface(obj);
}
which again comes down to an asInterface:
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME) \
const ::android::String16 I##INTERFACE::descriptor(NAME); \
const ::android::String16& \
I##INTERFACE::getInterfaceDescriptor() const { \
return I##INTERFACE::descriptor; \
} \
::android::sp<I##INTERFACE> I##INTERFACE::asInterface( \
const ::android::sp<::android::IBinder>& obj) \
{ \
::android::sp<I##INTERFACE> intr; \
if (obj != NULL) { \
// if we are in the same process, queryLocalInterface succeeds and returns the Service object itself
intr = static_cast<I##INTERFACE*>( \
obj->queryLocalInterface( \
I##INTERFACE::descriptor).get()); \
if (intr == NULL) { \
// if we are in a different process, the query returns NULL and obj is a BpBinder;
// a Bp##INTERFACE(BpBinder(handle)) is returned instead
intr = new Bp##INTERFACE(obj); \
} \
} \
return intr; \
} \
I##INTERFACE::I##INTERFACE() { } \
I##INTERFACE::~I##INTERFACE() { } \
Just as on the Java side, if the result is the Service object itself, queryLocalInterface finds it, asInterface returns the Service itself, and subsequent calls go directly to it without the binder driver;
whereas if the result is a native BpBinder, a Bp##INTERFACE(BpBinder(handle)) is returned, and subsequent calls go through the binder driver via BpBinder::transact.
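To illustrate that last point, a typical native proxy method looks roughly like the hypothetical sketch below; ISomeService, DO_SOMETHING and doSomething are made up for illustration, but the pattern is the same one BpServiceManager::checkService uses above: marshal the arguments into a Parcel, call remote()->transact() (i.e. BpBinder(handle)->transact()), and read the result out of the reply.
#include <binder/IInterface.h>
#include <binder/Parcel.h>

using namespace android;

// Hypothetical interface, for illustration only.
class ISomeService : public IInterface {
public:
    DECLARE_META_INTERFACE(SomeService);
    enum { DO_SOMETHING = IBinder::FIRST_CALL_TRANSACTION };
    virtual int32_t doSomething(int32_t value) = 0;
};

// The Bp##INTERFACE side: every call goes through remote()->transact().
class BpSomeService : public BpInterface<ISomeService> {
public:
    explicit BpSomeService(const sp<IBinder>& impl) : BpInterface<ISomeService>(impl) {}

    virtual int32_t doSomething(int32_t value) {
        Parcel data, reply;
        data.writeInterfaceToken(ISomeService::getInterfaceDescriptor());
        data.writeInt32(value);
        // remote() is the BpBinder(handle) obtained from getService()
        remote()->transact(DO_SOMETHING, data, &reply);
        return reply.readInt32();
    }
};

IMPLEMENT_META_INTERFACE(SomeService, "com.example.ISomeService");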