Binder (5): Service Registration Flow - Sending the Registration Request

This article is based on the Android 9.0 and kernel 3.18 source code.

Introduction

servicemanager provides service registration, service lookup, and related functionality. Take AMS (ActivityManagerService) as an example:
First, AMS registers itself with servicemanager via binder;
Then, other processes obtain the AMS service from servicemanager via binder (what they actually get is a proxy);
Finally, through that AMS proxy object they can call AMS's methods (a short sketch follows below).
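
To make the picture concrete, here is a rough sketch of steps 2 and 3 from a framework-internal caller's point of view. Note that ServiceManager and IActivityManager are hidden framework APIs, so this is illustrative only; ordinary app code goes through Context.getSystemService() instead:

IBinder b = ServiceManager.getService(Context.ACTIVITY_SERVICE); // a BinderProxy pointing at AMS
IActivityManager am = IActivityManager.Stub.asInterface(b);      // wrap it in an AIDL proxy
// Every call made on am is now marshalled through binder into system_server.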

This article focuses on the service registration flow.

The flow of calling SystemServer.java->main()

[Figure: Binder (5) - calling the main method]

frameworks/base/core/java/com/android/internal/os/ZygoteInit.java
frameworks/base/core/java/com/android/internal/os/RuntimeInit.java

1. Recap of the previous article

From the previous article, Binder (4): binder initialization in system_server, we already know that after the device boots, the system_server process is started via ZygoteInit.forkSystemServer().

private static Runnable forkSystemServer(String abiList, String socketName,
        ZygoteServer zygoteServer) {
    ........
    // Arguments used to start system_server
    String args[] = {
        "--setuid=1000",
        "--setgid=1000",
        "--setgroups=1001,1002,1003,1004,1005,1006,1007,1008,1009,1010,1018,1021,1023,1024,1032,1065,3001,3002,3003,3006,3007,3009,3010",
        "--capabilities=" + capabilities + "," + capabilities,
        "--nice-name=system_server",
        "--runtime-args",
        "--target-sdk-version=" + VMRuntime.SDK_VERSION_CUR_DEVELOPMENT,
        "com.android.server.SystemServer",
    };
    ZygoteConnection.Arguments parsedArgs = null;
    int pid;
    try {
        parsedArgs = new ZygoteConnection.Arguments(args);
        // Fork the system_server process
        pid = Zygote.forkSystemServer(
                parsedArgs.uid, parsedArgs.gid,
                parsedArgs.gids,
                parsedArgs.runtimeFlags,
                null,
                parsedArgs.permittedCapabilities,
                parsedArgs.effectiveCapabilities);
    } catch (IllegalArgumentException ex) {
        throw new RuntimeException(ex);
    }
    if (pid == 0) {
        // Remaining setup in the child process
        return handleSystemServerProcess(parsedArgs);
    }
    return null;
}

After the process is forked, handleSystemServerProcess() leads to ZygoteInit.zygoteInit(), which calls ZygoteInit.nativeZygoteInit() to initialize Binder.

public static final Runnable zygoteInit(int targetSdkVersion, String[] argv, ClassLoader classLoader) {
    if (RuntimeInit.DEBUG) {
        Slog.d(RuntimeInit.TAG, "RuntimeInit: Starting application from zygote");
    }
    Trace.traceBegin(Trace.TRACE_TAG_ACTIVITY_MANAGER, "ZygoteInit");
    RuntimeInit.redirectLogStreams();
    RuntimeInit.commonInit();
    ZygoteInit.nativeZygoteInit();
    return RuntimeInit.applicationInit(targetSdkVersion, argv, classLoader);
}

Besides ZygoteInit.nativeZygoteInit(), zygoteInit() performs one more operation, RuntimeInit.applicationInit(), whose job is to find the class to start and call its main() method.

2. Analysis of RuntimeInit.applicationInit()

protected static Runnable applicationInit(int targetSdkVersion, String[] argv,
        ClassLoader classLoader) {
    ........  
    final Arguments args = new Arguments(argv);
    ........  
    // Remaining arguments are passed to the start class's static main
    return findStaticMain(args.startClass, args.startArgs, classLoader);
}

static class Arguments {
    String startClass;
    String[] startArgs;

    Arguments(String args[]) throws IllegalArgumentException {
        parseArgs(args);
    }
    // Parse the arguments and extract startClass
    private void parseArgs(String args[])
            throws IllegalArgumentException {
        int curArg = 0;
        for (; curArg < args.length; curArg++) {
            String arg = args[curArg];
            if (arg.equals("--")) {
                curArg++;
                break;
            } else if (!arg.startsWith("--")) {
                break;
            }
        }
        if (curArg == args.length) {
            throw new IllegalArgumentException("Missing classname argument to RuntimeInit!");
        }
        startClass = args[curArg++];
        startArgs = new String[args.length - curArg];
        System.arraycopy(args, curArg, startArgs, 0, startArgs.length);
    }
}

RuntimeInit.applicationInit() is straightforward:
First, new Arguments(argv) parses the incoming arguments and sets args.startClass, i.e. the "com.android.server.SystemServer" argument built in ZygoteInit.forkSystemServer();
Then, findStaticMain() is used to locate and invoke that class's main method. (A small runnable sketch of the parsing rule follows.)
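
As a sanity check, the parsing rule can be reproduced in a few lines of plain Java. The argument values below are illustrative, not the exact array Zygote passes:

public class ParseArgsDemo {
    public static void main(String[] unused) {
        String[] argv = {
                "--nice-name=system_server",
                "--runtime-args",
                "com.android.server.SystemServer", // first non "--" token -> startClass
                "extraArg"                         // everything after it -> startArgs
        };
        int cur = 0;
        for (; cur < argv.length; cur++) {
            if (argv[cur].equals("--")) { cur++; break; }
            if (!argv[cur].startsWith("--")) break;
        }
        String startClass = argv[cur++];
        String[] startArgs = new String[argv.length - cur];
        System.arraycopy(argv, cur, startArgs, 0, startArgs.length);
        System.out.println(startClass);       // com.android.server.SystemServer
        System.out.println(startArgs.length); // 1
    }
}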

// Invokes a static "main(argv[]) method on class "className".
protected static Runnable findStaticMain(String className, String[] argv,
        ClassLoader classLoader) {

    // 加载class
    Class<?> cl;
    try {
        cl = Class.forName(className, true, classLoader);
    } catch (ClassNotFoundException ex) { /* ... */ }
    ........

    // 获取main方法
    Method m;
    try {
        m = cl.getMethod("main", new Class[] { String[].class });
    } catch (NoSuchMethodException ex) { /* ... */ }
    ........

    // 方法校验
    int modifiers = m.getModifiers();
    if (! (Modifier.isStatic(modifiers) && Modifier.isPublic(modifiers))) {
        throw new RuntimeException(
                "Main method is not public and static on " + className);
    }
    return new MethodAndArgsCaller(m, argv);
}

In findStaticMain():
First, the Class object is obtained via Class.forName();
Then, the main() method is looked up via reflection;
Finally, a MethodAndArgsCaller object is created and returned.

static class MethodAndArgsCaller implements Runnable {
    private final Method mMethod;
    private final String[] mArgs;
    public MethodAndArgsCaller(Method method, String[] args) {
        mMethod = method;
        mArgs = args;
    }
    public void run() {
        try {
            mMethod.invoke(null, new Object[] { mArgs });
        } catch (Exception ex) { /* ... */ }
    }
}


public static void main(String argv[]) {
    ........
    if (startSystemServer) {
        Runnable r = forkSystemServer(abiList, socketName, zygoteServer);
        if (r != null) {
            r.run();
            return;
        }
    }
    ........
}

MethodAndArgsCaller implements the Runnable interface; its run() method simply invokes the target method via method.invoke(). Looking back at ZygoteInit.java->main() above: once forkSystemServer() returns the Runnable, its run() method is executed directly.
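
The whole mechanism reduces to a few lines of plain, runnable Java; the class and arguments below are made up for the demo, but the reflection calls mirror findStaticMain() and MethodAndArgsCaller:

import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class MainCallerDemo {
    // A stand-in for com.android.server.SystemServer.
    public static class FakeServer {
        public static void main(String[] args) {
            System.out.println("FakeServer.main called with " + args.length + " args");
        }
    }

    public static void main(String[] args) throws Exception {
        Runnable r = findStaticMain(FakeServer.class.getName(), new String[] {"hello"});
        r.run(); // ZygoteInit.main() does the same with the Runnable it gets back
    }

    static Runnable findStaticMain(String className, String[] argv) throws Exception {
        Class<?> cl = Class.forName(className);
        Method m = cl.getMethod("main", String[].class);
        int mod = m.getModifiers();
        if (!(Modifier.isStatic(mod) && Modifier.isPublic(mod))) {
            throw new RuntimeException("Main method is not public and static on " + className);
        }
        return () -> {
            try {
                m.invoke(null, (Object) argv); // same as MethodAndArgsCaller.run()
            } catch (ReflectiveOperationException e) {
                throw new RuntimeException(e);
            }
        };
    }
}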

At this point we know the whole path from boot -> Zygote process startup -> system_server process startup -> SystemServer's main() method. Next, let's look in detail at what SystemServer does.

Service registration flow

frameworks/base/services/java/com/android/server/SystemServer.java
frameworks/base/services/core/java/com/android/server/am/ActivityManagerService.java
frameworks/base/services/core/java/com/android/server/SystemServiceManager.java
frameworks/base/core/java/android/os/ServiceManager.java

frameworks/base/core/java/com/android/internal/os/BinderInternal.java
frameworks/base/core/jni/android_util_Binder.cpp
frameworks/native/libs/binder/ProcessState.cpp
frameworks/native/include/binder/ProcessState.h
frameworks/native/libs/binder/BpBinder.cpp
frameworks/native/libs/binder/IPCThreadState.cpp

frameworks/base/core/java/android/os/Binder.java
frameworks/base/core/java/android/os/ServiceManagerNative.java

Analysis of SystemServer.main

public static void main(String[] args) {
    new SystemServer().run();
}

private void run() {
     ...
     try {
         ...
         mSystemServiceManager = new SystemServiceManager(mSystemContext);
         ...
     } finally {
         traceEnd();  // InitBeforeStartServices
     }
     ...
     // Start services.
     try {
         traceBeginAndSlog("StartServices");
         startBootstrapServices();
         startCoreServices();
         startOtherServices();
         SystemServerInitThreadPool.shutdown();
     } catch (Throwable ex) {
         Slog.e("System", "******************************************");
         Slog.e("System", "************ Failure starting system services", ex);
         throw ex;
     } finally {
         traceEnd();
     }
     ...
}

In main(), a SystemServer object is created and its run() method is executed. run() starts all kinds of services, such as AMS, PMS, and WMS. Below we use AMS as the example.

startBootstrapServices()
private void startBootstrapServices() {
    ...
    mActivityManagerService = mSystemServiceManager.startService(
            ActivityManagerService.Lifecycle.class).getService();
    mActivityManagerService.setSystemServiceManager(mSystemServiceManager);
    mActivityManagerService.setInstaller(installer);
    ...
    mActivityManagerService.setSystemProcess();
    ...
}

In startBootstrapServices(), AMS is created via mSystemServiceManager.startService(...).getService(), and then mActivityManagerService.setSystemProcess() is called to register it.

Creating AMS

public <T extends SystemService> T startService(Class<T> serviceClass) {
    try {
        // Create the service.
        ...
        final T service;
        try {
            Constructor<T> constructor = serviceClass.getConstructor(Context.class);
            service = constructor.newInstance(mContext);
        } catch (Exception ex) { /* ... */ }
        startService(service);
        return service;
    }
}

Looking at SystemServiceManager.startService(): it uses reflection to construct an instance of the class passed in, i.e. ActivityManagerService$Lifecycle, and returns it. So the getService() call above ends up calling ActivityManagerService$Lifecycle's getService().
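
The reflective construction itself can likewise be reduced to a small runnable sketch, with FakeContext/FakeService standing in for Context and ActivityManagerService$Lifecycle:

import java.lang.reflect.Constructor;

public class StartServiceDemo {
    public static class FakeContext {}
    public static class FakeService {
        public FakeService(FakeContext context) { System.out.println("constructed"); }
        public void onStart() { System.out.println("started"); }
    }

    public static void main(String[] args) throws Exception {
        Constructor<FakeService> c = FakeService.class.getConstructor(FakeContext.class);
        FakeService service = c.newInstance(new FakeContext()); // what startService() does
        service.onStart(); // SystemServiceManager then calls onStart() on the new service
    }
}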

public static final class Lifecycle extends SystemService {
    private final ActivityManagerService mService;
    public Lifecycle(Context context) {
        super(context);
        mService = new ActivityManagerService(context);
    }
    @Override
    public void onStart() {
        mService.start();
    }
    @Override
    public void onBootPhase(int phase) {
        mService.mBootPhase = phase;
        if (phase == PHASE_SYSTEM_SERVICES_READY) {
            mService.mBatteryStatsService.systemServicesReady();
            mService.mServices.systemServicesReady();
        }
    }
    @Override
    public void onCleanupUser(int userId) {
        mService.mBatteryStatsService.onCleanupUser(userId);
    }
    public ActivityManagerService getService() {
        return mService;
    }
}

As we can see, ActivityManagerService$Lifecycle is simply a wrapper around AMS: its constructor creates the AMS instance, which getService() then returns.

Registering AMS

1. Creating the IServiceManager instance

public void setSystemProcess() {
    try {
        ServiceManager.addService(Context.ACTIVITY_SERVICE, this, /* allowIsolated= */ true,
                DUMP_FLAG_PRIORITY_CRITICAL | DUMP_FLAG_PRIORITY_NORMAL | DUMP_FLAG_PROTO);
        ......
    } catch (Exception e) { /* ... */ }
}

setSystemProcess() performs the registration by calling ServiceManager.addService().

/**
 * Place a new @a service called @a name into the service
 * manager.
 *
 * @param name the name of the new service
 * @param service the service object
 * @param allowIsolated set to true to allow isolated sandboxed processes
 * @param dumpPriority supported dump priority levels as a bitmask
 * to access this service
 */
public static void addService(String name, IBinder service, boolean allowIsolated,
        int dumpPriority) {
    try {
        getIServiceManager().addService(name, service, allowIsolated, dumpPriority);
    } catch (RemoteException e) {
        Log.e(TAG, "error in addService", e);
    }
}

addService() obtains the IServiceManager instance via getIServiceManager() and then calls its addService() to do the registration.

private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }
    // Find the service manager
    sServiceManager = ServiceManagerNative
            .asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
    return sServiceManager;
}

getIServiceManager() creates the IServiceManager instance in three steps:
1. BinderInternal.getContextObject()
2. Binder.allowBlocking()
3. ServiceManagerNative.asInterface()

[Figure: Binder (5) - creating the IServiceManager]

Let's go through them one by one:

1.1 BinderInternal.getContextObject()

BinderInternal.getContextObject() calls a native method, implemented in android_util_Binder.cpp.

static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}

This method calls ProcessState::getContextObject(), which returns an sp<IBinder> via getStrongProxyForHandle(0); handle 0 is servicemanager's handle.

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}


sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    // Look up the handle_entry for this handle; if it does not exist, a new one is created
    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // If the handle_entry has no BpBinder yet, create one
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                Parcel data;
                // Issue a dummy transaction to check that the context manager has been registered
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }

            // Create the BpBinder
            b = BpBinder::create(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}

getStrongProxyForHandle() returns the IBinder proxy for the given handle; here that is the IBinder proxy for Service Manager.
First, lookupHandleLocked() is used to obtain the handle_entry;
Then, if handle == 0, a dummy PING_TRANSACTION is sent to check whether the context manager has been registered (a Java-level analogue is sketched after this list);
Finally, BpBinder::create(handle) creates the BpBinder and stores it in the handle_entry.
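
A Java-level analogue of that dummy transaction: any IBinder can be pinged with the generic PING_TRANSACTION code, which is essentially what IBinder.pingBinder() does. A hedged sketch:

// Checks whether the remote side of a binder is still alive.
static boolean ping(IBinder binder) throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    try {
        return binder.transact(IBinder.PING_TRANSACTION, data, reply, 0);
    } finally {
        reply.recycle();
        data.recycle();
    }
}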

1.1.1 ProcessState::lookupHandleLocked()

Vector<handle_entry> mHandleToObject;

ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N=mHandleToObject.size();
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}

If no handle_entry exists for the handle yet, new entries are created and inserted into mHandleToObject, which is declared in ProcessState.h as a Vector<handle_entry>.
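
The idea is simply a proxy table indexed by handle that grows with empty slots on demand. A minimal sketch of the same pattern (not AOSP code):

import java.util.ArrayList;

class HandleTable {
    // One slot per handle; a null slot means "no proxy created yet".
    private final ArrayList<Object> entries = new ArrayList<>();

    Object lookupHandle(int handle) {
        while (entries.size() <= handle) {
            entries.add(null); // reserve empty slots up to and including this handle
        }
        return entries.get(handle); // caller creates and stores the proxy if this is null
    }
}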

1.1.2 BpBinder::create(handle)

BpBinder* BpBinder::create(int32_t handle) {
    int32_t trackedUid = -1;
    ...
    return new BpBinder(handle, trackedUid);
}

BpBinder::BpBinder(int32_t handle, int32_t trackedUid)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
    , mTrackedUid(trackedUid)
{
    ALOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle, this);
}

BpBinder::create(handle) constructs the BpBinder via new BpBinder(handle, trackedUid). The BpBinder constructor then calls IPCThreadState::incWeakHandle().

1.1.3 IPCThreadState::incWeakHandle()
From the previous article we already know that IPCThreadState was created during the process's binder initialization, along with its mOut and mIn buffers.

void IPCThreadState::incWeakHandle(int32_t handle, BpBinder *proxy)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    // Buffer the command that increments the binder reference into mOut
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle);
    // Create a temp reference until the driver has handled this command.
    proxy->getWeakRefs()->incWeak(mProcess.get());
    mPostWriteWeakDerefs.push(proxy->getWeakRefs());
}

incWeakHandle() bumps the reference count and buffers a BC_INCREFS command in mOut; the next time ioctl is called to talk to the binder driver, the driver is told to increment the weak reference count of the corresponding binder_ref.

Reference counting here is tied to the smart pointer classes; readers who want the details can look them up on their own.

1.1.4 javaObjectForIBinder()
Through the flow above we obtained the sp<IBinder>, i.e. the BpBinder object. Before it is handed to Java, it is wrapped one more time.

struct BinderProxyNativeData {
    sp<IBinder> mObject;
    sp<DeathRecipientList> mOrgue;
};

jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    ...
    // Obtain the wrapper BinderProxyNativeData
    BinderProxyNativeData* nativeData = gNativeDataCache;
    if (nativeData == nullptr) {
        nativeData = new BinderProxyNativeData();
    }
    ...
    // Call the Java method to create the Java object
    jobject object = env->CallStaticObjectMethod(gBinderProxyOffsets.mClass,
            gBinderProxyOffsets.mGetInstance, (jlong) nativeData, (jlong) val.get());
    ...
    BinderProxyNativeData* actualNativeData = getBPNativeData(env, object);
    if (actualNativeData == nativeData) {
        // New BinderProxy; we still have exclusive access.
        nativeData->mOrgue = new DeathRecipientList;
        // Store the BpBinder into nativeData
        nativeData->mObject = val;
        gNativeDataCache = nullptr;
        ++gNumProxies;
        if (gNumProxies >= gProxiesWarned + PROXY_WARN_INTERVAL) {
            ALOGW("Unexpectedly many live BinderProxies: %d\n", gNumProxies);
            gProxiesWarned = gNumProxies;
        }
    } else {
        // nativeData wasn't used. Reuse it the next time.
        gNativeDataCache = nativeData;
    }
    return object;
}

First, the wrapper BinderProxyNativeData is obtained;
Then, the Java object is created via CallStaticObjectMethod();
Finally, if the BinderProxyNativeData is newly created, the BpBinder is stored in it.

Notice the gBinderProxyOffsets variable; let's see where it is initialized:

const char* const kBinderProxyPathName = "android/os/BinderProxy";

// Load android/os/BinderProxy
static int int_register_android_os_BinderProxy(JNIEnv* env){
    ...
    clazz = FindClassOrDie(env, kBinderProxyPathName);
    gBinderProxyOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderProxyOffsets.mGetInstance = GetStaticMethodIDOrDie(env, clazz, "getInstance",
            "(JJ)Landroid/os/BinderProxy;");
    ...
}

int register_android_os_Binder(JNIEnv* env){
    ...
    if (int_register_android_os_BinderProxy(env) < 0)
        return -1;
    ...
}

As we can see, int_register_android_os_BinderProxy() fills in gBinderProxyOffsets: mClass is android/os/BinderProxy and mGetInstance is its getInstance method.

int_register_android_os_BinderProxy() is called from register_android_os_Binder(). Recall that the native side starts the VM and brings up ZygoteInit through AndroidRuntime::start(); start() also registers the android JNI methods:

void AndroidRuntime::start(const char* className, const Vector<String8>& options, bool zygote){
    // Register the android native methods
    if (startReg(env) < 0) {
        ALOGE("Unable to register all android natives\n");
        return;
    }
}

// Register the android native methods
int AndroidRuntime::startReg(JNIEnv* env){
    ...  
    if (register_jni_procs(gRegJNI, NELEM(gRegJNI), env) < 0) {
        env->PopLocalFrame(NULL);
        return -1;
    }
    ...  
}

// Call each entry's mProc directly
static int register_jni_procs(const RegJNIRec array[], size_t count, JNIEnv* env){
    for (size_t i = 0; i < count; i++) {
        if (array[i].mProc(env) < 0) {
            return -1;
        }
    }
    return 0;
}

AndroidRuntime::start() registers the android methods via startReg(). gRegJNI is an array of RegJNIRec entries, and register_android_os_Binder is one of them. Its definition:

#define REG_JNI(name)      { name }
struct RegJNIRec {
    int (*mProc)(JNIEnv*);
};

static const RegJNIRec gRegJNI[] = {
    ...
    REG_JNI(register_android_os_Binder),
    ...
};

So in javaObjectForIBinder(), CallStaticObjectMethod ends up calling the Java-level BinderProxy.getInstance() method, passing nativeData and the BpBinder as arguments.

BinderProxy

final class BinderProxy implements IBinder {
    // Pointer to the BinderProxyNativeData
    private final long mNativeData;

    private BinderProxy(long nativeData) {
        mNativeData = nativeData;
    }

    private static BinderProxy getInstance(long nativeData, long iBinder) {
        BinderProxy result;
        try {
            // Look in the cache first
            result = sProxyMap.get(iBinder);
            if (result != null) {
                return result;
            }
            // Not cached: create a new BinderProxy
            result = new BinderProxy(nativeData);
        } catch (Throwable e) {}
        ...
        // Add it to the cache
        sProxyMap.set(iBinder, result);
        return result;
    }
}

BinderProxy.getInstance() creates the BinderProxy object and caches it in sProxyMap.

At this point we know that BinderInternal.getContextObject() ultimately yields a BinderProxy object, with the ownership chain BinderProxy -> BinderProxyNativeData -> BpBinder -> handle = 0, so we can talk to servicemanager through the BinderProxy.

1.2 Binder.allowBlocking()
public static IBinder allowBlocking(IBinder binder) {
    try {
        if (binder instanceof BinderProxy) {
            ((BinderProxy) binder).mWarnOnBlocking = false;
        } else if (binder != null && binder.getInterfaceDescriptor() != null
                && binder.queryLocalInterface(binder.getInterfaceDescriptor()) == null) {
            Log.w(TAG, "Unable to allow blocking on interface " + binder);
        }
    } catch (RemoteException ignored) {
    }
    return binder;
}

This merely sets a flag on the binder and does not matter much for our analysis; after setting it, the same binder is returned, i.e. the BinderProxy created in the previous step.

1.3 ServiceManagerNative.asInterface()
static public IServiceManager asInterface(IBinder obj){
    if (obj == null) {
        return null;
    }
    IServiceManager in =
       (IServiceManager)obj.queryLocalInterface(descriptor);
    if (in != null) {
        return in;
    }
    return new ServiceManagerProxy(obj);
}

final class BinderProxy implements IBinder {
    public IInterface queryLocalInterface(String descriptor) {
        return null;
    }
}

asInterface() first calls queryLocalInterface(); if that returns null it creates a ServiceManagerProxy. Since BinderProxy.queryLocalInterface() always returns null, a ServiceManagerProxy is what gets returned here.
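
Every AIDL-generated Stub.asInterface() follows the same query-local-then-proxy pattern; here is a sketch with a made-up interface name IFooService (not real framework code):

// Sketch of AIDL-generated code, names hypothetical.
public static IFooService asInterface(IBinder obj) {
    if (obj == null) return null;
    IInterface iin = obj.queryLocalInterface("com.example.IFooService");
    if (iin instanceof IFooService) {
        return (IFooService) iin;               // same process: call the implementation directly
    }
    return new IFooService.Stub.Proxy(obj);     // other process: proxy that forwards via transact()
}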

class ServiceManagerProxy implements IServiceManager {
    public ServiceManagerProxy(IBinder remote) {
        mRemote = remote;
    }

    public IBinder asBinder() {
        return mRemote;
    }
}

ServiceManagerProxy is essentially a wrapper around the BinderProxy.

That completes the ServiceManager.getIServiceManager() flow: it returns a ServiceManagerProxy, a wrapper that holds the BinderProxy object internally.

2. Registering via getIServiceManager().addService()

[Figure: Binder (5) - sending the registration request]
2.1 Sending the data to the binder driver
public void addService(String name, IBinder service, boolean allowIsolated, int dumpPriority)
        throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);
    data.writeStrongBinder(service);
    data.writeInt(allowIsolated ? 1 : 0);
    data.writeInt(dumpPriority);
    mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
    reply.recycle();
    data.recycle();
}

Looking at ServiceManagerProxy.addService(): it packages the service data into a Parcel and then sends it via mRemote (the BinderProxy) by calling transact().

public final void writeStrongBinder(IBinder val) {
    nativeWriteStrongBinder(mNativePtr, val);
}

static const JNINativeMethod gParcelMethods[] = {
    {"nativeWriteStrongBinder",   "(JLandroid/os/IBinder;)V", (void*)android_os_Parcel_writeStrongBinder},
};

static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr, jobject object)
{
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
        const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));
        if (err != NO_ERROR) {
            signalExceptionForError(env, clazz, err);
        }
    }
}

writeStrongBinder() calls into the native android_os_Parcel_writeStrongBinder(), which in turn calls the native Parcel's writeStrongBinder().

status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out){
    flat_binder_object obj;
    ...
    obj.hdr.type = BINDER_TYPE_BINDER;
    obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
    ...
    return finish_flatten_binder(binder, obj, out);
}

The flat_binder_object's type here is BINDER_TYPE_BINDER. Next, look at the transact() method:

public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
    // Check the size of the data
    Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
    ...
    try {
        return transactNative(code, data, reply, flags);
    } finally {}
}

// Size check: warn when a parcel exceeds 800KB
static void checkParcel(IBinder obj, int code, Parcel parcel, String msg) {
    if (CHECK_PARCEL_SIZE && parcel.dataSize() >= 800*1024) {
        // Trying to send > 800k, this is way too much
        StringBuilder sb = new StringBuilder();
        ...
        Slog.wtfStack(TAG, sb.toString());
    }
}

In BinderProxy.transact(), Binder.checkParcel() first checks the data size, then transactNative() sends the data.

static const JNINativeMethod gBinderProxyMethods[] = {
    {"transactNative",      "(ILandroid/os/Parcel;Landroid/os/Parcel;I)Z", (void*)android_os_BinderProxy_transact},
};

static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
    ...
    // Get the Parcel holding the outgoing data
    Parcel* data = parcelForJavaObject(env, dataObj);
    ...
    // Get the Parcel that will hold the reply
    Parcel* reply = parcelForJavaObject(env, replyObj);
    ...
    // First get the BinderProxyNativeData, then the BpBinder from it
    IBinder* target = getBPNativeData(env, obj)->mObject.get();
    ...
    // Call BpBinder::transact
    status_t err = target->transact(code, *data, reply, flags);
    ...
}

transactNative() lands in android_os_BinderProxy_transact() in android_util_Binder.cpp. It first converts the Java objects into the native Parcels for the outgoing data and the reply, then obtains the BpBinder and calls BpBinder::transact() to send the data.

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

BpBinder then hands the data off to IPCThreadState::self()->transact().

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags){
    status_t err;

    flags |= TF_ACCEPT_FDS;

    ...
    // Package the data: handle is BpBinder's mHandle = 0 (servicemanager)
    // code = ADD_SERVICE_TRANSACTION, set in getIServiceManager().addService()
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        ...
        if (reply) {
            ...
            // Send the data to the binder driver
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}

IPCThreadState::transact() does two things:
First, it packages the data via writeTransactionData(); note that here code = ADD_SERVICE_TRANSACTION and handle = 0;
Then, it sends the data to the binder driver via waitForResponse().

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {} else {}

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}

writeTransactionData() packs the data into a binder_transaction_data and appends BC_TRANSACTION followed by that binder_transaction_data to the mOut buffer.

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        // Talk to the binder driver via talkWithDriver()
        if ((err=talkWithDriver()) < NO_ERROR) break;
        ...
        // Read the returned result
        cmd = (uint32_t)mIn.readInt32();

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            ...
        case BR_DEAD_REPLY:
            ...
        case BR_FAILED_REPLY:
            ...
        case BR_ACQUIRE_RESULT:
            ...
        case BR_REPLY:
            ...
        default:
            ...
        }
    }

finish:
    ...
    return err;
}

In waitForResponse(), talkWithDriver() exchanges data with the binder driver, and the returned command is then handled accordingly.

status_t IPCThreadState::talkWithDriver(bool doReceive){
    ... 
    binder_write_read bwr;

    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    ... 

    do {
        ...
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        ...
    } while (err == -EINTR);

    return err;
}

talkWithDriver() was already analyzed in Binder (4): binder initialization in system_server: it first wraps the data in a binder_write_read and then calls ioctl to communicate with the binder driver. Since nothing has been written into mIn yet, needRead is true, so both write_size and read_size in bwr are greater than 0.

After all these layers of wrapping, the data now looks like this:

[Figure: Binder (5) - the data structure after all the wrapping]

2.2 How the binder driver handles the received data

ioctl was analyzed in Binder (3): servicemanager initialization: the BINDER_WRITE_READ command ends up in binder_ioctl_write_read().

static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread){
    ...
    // If write_size > 0, perform the write
    if (bwr.write_size > 0) {
        ret = binder_thread_write(proc, thread,bwr.write_buffer,bwr.write_size,&bwr.write_consumed);
        ...
    }

    // If read_size > 0, perform the read
    if (bwr.read_size > 0) {
        ret = binder_thread_read(proc, thread, bwr.read_buffer,bwr.read_size,&bwr.read_consumed,filp->f_flags & O_NONBLOCK);
        ...
    }
    ...
}

In binder_ioctl_write_read(), binder_thread_write() consumes the data that was sent, and binder_thread_read() produces the data to return.

static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        ...
        switch (cmd) {
        ...
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        ...
    }
    return 0;
}

In binder_thread_write(), the BC_TRANSACTION command is handed to binder_transaction().

static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply){
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    binder_size_t *offp, *off_end;
    binder_size_t off_min;
    struct binder_proc *target_proc;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;
    struct list_head *target_list;
    wait_queue_head_t *target_wait;
    struct binder_transaction *in_reply_to = NULL;
    struct binder_transaction_log_entry *e;
    uint32_t return_error;
    ...
    if (reply) {
        ...     
    } else {
        // handle = 0, so this branch is not taken
        if (tr->target.handle) {
            struct binder_ref *ref;
            // Look up the binder_ref by handle
            ref = binder_get_ref(proc, tr->target.handle);
            if (ref == NULL) {
                ...
                // Not found means the target was never registered
                goto err_invalid_target_handle;
            }
            target_node = ref->node;
        } else {
            // handle == 0 means the target is servicemanager
            target_node = binder_context_mgr_node;
            ...
        }
        e->to_node = target_node->debug_id;
        // Set the target process that will handle the transaction
        target_proc = target_node->proc;
        ...
    }
    if (target_thread) {
        ...
    } else {
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    e->to_proc = target_proc->pid;

    // Allocate a pending transaction (binder_transaction)
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    ...
    // Allocate a to-be-completed work item (binder_work)
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    ...
    if (!reply && !(tr->flags & TF_ONE_WAY))
        // Record which thread initiated the transaction
        t->from = thread;
    else
        t->from = NULL;

    // Initialize the transaction
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc; // target process
    t->to_thread = target_thread; // target thread
    t->code = tr->code; // transaction code = ADD_SERVICE_TRANSACTION
    t->flags = tr->flags;
    t->priority = task_nice(current);
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,tr->offsets_size, !reply && (t->flags & TF_ONE_WAY)); // allocate buffer space
    t->buffer->transaction = t; // remember the transaction
    t->buffer->target_node = target_node; // remember the target binder_node
    ...
    // Copy the data from user space
    if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)tr->data.ptr.buffer, tr->data_size)) {}
    if (copy_from_user(offp, (const void __user *)(uintptr_t)tr->data.ptr.offsets, tr->offsets_size)) {}
    ...
    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;
        ... 
        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        off_min = *offp + sizeof(struct flat_binder_object);
        switch (fp->type) {
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            struct binder_ref *ref;
            // Find the binder_node in proc for the binder carried in the parcel, i.e. AMS's binder_node
            struct binder_node *node = binder_get_node(proc, fp->binder);
            // If it does not exist yet, create a new one
            if (node == NULL) {
                node = binder_new_node(proc, fp->binder, fp->cookie);
                ...
                node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
                node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
            }

            ...
            // Check whether target_proc (servicemanager) already has a reference to this node;
            // if not, add one to target_proc->refs_by_node so servicemanager can manage it
            ref = binder_get_ref_for_node(target_proc, node);
            if (ref == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }

            // Rewrite the type
            if (fp->type == BINDER_TYPE_BINDER)
                fp->type = BINDER_TYPE_HANDLE;
            else
                fp->type = BINDER_TYPE_WEAK_HANDLE;

            // Set the handle; through it servicemanager can find the binder_ref and thus the binder_node
            fp->handle = ref->desc;
            // Increment the reference count
            binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
                       &thread->todo);
            ...
        } break;
        ...
        }
    }
    if (reply) {
        ...
    } else if (!(t->flags & TF_ONE_WAY)) {
        BUG_ON(t->buffer->async_transaction != 0);
        t->need_reply = 1;
        t->from_parent = thread->transaction_stack;
        // Push the transaction onto the current thread's transaction stack
        thread->transaction_stack = t;
    } else {
        ...
    }
    // Set the work type and enqueue it on the target list
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    // Add the completion work item to thread->todo
    list_add_tail(&tcomplete->entry, &thread->todo);
    // Wake up the target process
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;
    ...
}

First, using handle = 0, the binder_node that will handle this transaction is found, i.e. servicemanager's node, and the target fields are filled in: target_node, target_proc, target_list, target_wait;
Then, a new binder_transaction is allocated and its fields are initialized;
Next, the flat_binder_object data, which carries AMS's binder, is parsed: the binder_node for AMS is looked up or created, and binder_get_ref_for_node() looks up or creates the corresponding binder_ref in the target process; once created, the binder_ref is added to servicemanager's red-black tree so that servicemanager can manage AMS;
Finally, the pending transaction is given type BINDER_WORK_TRANSACTION and added to the target process's todo queue, the completion work is given type BINDER_WORK_TRANSACTION_COMPLETE and added to the current thread's todo queue, and the wait queue is woken up.

Note that everything so far runs in the AMS (system_server) process; waking the wait queue will wake the servicemanager process to handle the transaction. Let's first finish the flow on the AMS side.

static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block){
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    int ret = 0;
    int wait_for_proc_work;

    // If *consumed == 0, write BR_NOOP into the user-supplied bwr.read_buffer
    if (*consumed == 0) {
        if (put_user(BR_NOOP, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
    }
    ...
retry:
    // This flag is true when the thread's transaction stack is empty and its todo list is empty.
    // binder_transaction() just queued a completion work item, so thread->todo is not empty
    wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);

    if (wait_for_proc_work) {
        ...
    } else {
        if (non_block) {
            ...
        } else
            // todo is not empty, so this does not block
            ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
    }
    ...
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        // Take the pending work item
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work,
                         entry);
        } else if () {} else {}
        ...
        switch (w->type) {
        case BINDER_WORK_TRANSACTION_COMPLETE: {
            // Write BR_TRANSACTION_COMPLETE into the user-space buffer
            cmd = BR_TRANSACTION_COMPLETE;
            if (put_user(cmd, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);
            binder_stat_br(proc, thread, cmd);
            list_del(&w->entry);
            kfree(w);
            binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
        } break;
        }
        ...
    }
    ...
    return 0;
}

First, since consumed == 0, BR_NOOP is copied from kernel space into user space;
Then, because binder_transaction() queued a completion work item, thread->todo is not empty, so execution reaches the while loop and BR_TRANSACTION_COMPLETE is written into user space.

status_t IPCThreadState::talkWithDriver(bool doReceive){
    ...
    do {
        ...
        // return value = 0
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
            ...
    } while (err == -EINTR);
    ...
    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else {
                mOut.setDataSize(0);
                processPostWriteDerefs();
            }
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        return NO_ERROR;
    }

    return err;
}

Back in talkWithDriver(): ioctl returns 0, so err = NO_ERROR, and we return to waitForResponse(), which then handles things according to cmd.

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult){
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
       ...
        cmd = (uint32_t)mIn.readInt32();
        ...
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        case BR_DEAD_REPLY:
           ...
        case BR_FAILED_REPLY:
            ...
        case BR_ACQUIRE_RESULT:
           ...
        case BR_REPLY:
            ...
        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
    ...
    return err;
}

status_t IPCThreadState::executeCommand(int32_t cmd){
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
    ...
    case BR_NOOP:
        break;
    }
    ...
    return result;
}

First, cmd = BR_NOOP is read; it falls into the default branch and is handled by executeCommand(), which does nothing for it;
Then, the loop enters talkWithDriver() again; since the data in mIn has not been fully consumed, it returns early via if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
Finally, cmd = BR_TRANSACTION_COMPLETE is read; since reply != NULL, the loop goes around once more into talkWithDriver().

Entering talkWithDriver() again: write_size = 0 and read_size != 0 (needRead = true), so ioctl goes into binder_thread_read(). This time the thread's todo list is empty (the transaction it sent is still sitting on its transaction stack waiting for a reply), so the thread blocks in the wait_event call, waiting for servicemanager's response.
