Speculations on the AIDL Implementation

http://hi.baidu.com/yubing1015/blog/item/24a0fa02beb5e31e4bfb5118.html


This article lays out my speculations about how AIDL is implemented under the hood.

As before, let's use the AIDL sample as our example:

First, in onServiceConnected, the incoming service parameter is an IBinder, and the real service is obtained via myService = IMyService.Stub.asInterface(service);. Let's look at the implementation of asInterface(android.os.IBinder obj) in the generated IMyService.java:

...

package com.yubing;

public interface IMyService extends android.os.IInterface
{
    /** Local-side IPC implementation stub class. */
    public static abstract class Stub extends android.os.Binder implements com.yubing.IMyService
    {
        private static final java.lang.String DESCRIPTOR = "com.yubing.IMyService";
        ...

        public static com.yubing.IMyService asInterface(android.os.IBinder obj)
        {
            if ((obj==null)) {
                return null;
            }
            android.os.IInterface iin = (android.os.IInterface)obj.queryLocalInterface(DESCRIPTOR);
            if (((iin!=null)&&(iin instanceof com.yubing.IMyService))) {
                return ((com.yubing.IMyService)iin);
            }
            return new com.yubing.IMyService.Stub.Proxy(obj);
        }
        ...

        @Override public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException
        {
            switch (code)
            {
                case INTERFACE_TRANSACTION:
                {
                    reply.writeString(DESCRIPTOR);
                    return true;
                }
                case TRANSACTION_getValue:
                {
                    data.enforceInterface(DESCRIPTOR);
                    java.lang.String _result = this.getValue();
                    reply.writeNoException();
                    reply.writeString(_result);
                    return true;
                }
            }
            return super.onTransact(code, data, reply, flags);
        }

        private static class Proxy implements com.yubing.IMyService
        {
            private android.os.IBinder mRemote;

            Proxy(android.os.IBinder remote)
            {
                mRemote = remote;
            }
            ...

            public java.lang.String getValue() throws android.os.RemoteException
            {
                android.os.Parcel _data = android.os.Parcel.obtain();
                android.os.Parcel _reply = android.os.Parcel.obtain();
                java.lang.String _result;
                try {
                    _data.writeInterfaceToken(DESCRIPTOR);
                    mRemote.transact(Stub.TRANSACTION_getValue, _data, _reply, 0);
                    _reply.readException();
                    _result = _reply.readString();
                }
                finally {
                    _reply.recycle();
                    _data.recycle();
                }
                return _result;
            }
        }
    }
    ...
}

My first speculation: for a local (same-process) call, asInterface returns at return ((com.yubing.IMyService)iin);, since queryLocalInterface can only find the Stub when it lives in the caller's own process; for a remote call, it returns at return new com.yubing.IMyService.Stub.Proxy(obj);.

Thus, for a remote service, myService in onServiceConnected becomes an instance of com.yubing.IMyService$Stub$Proxy.

Next, my second speculation: myService.getValue() uses transact to write the call's parameters to the /dev/binder driver, then waits for the result. On the service side there is a loop that keeps polling the driver; as soon as a service request shows up, the service invokes onTransact. That is how the real service method, getValue() in MyServiceImpl, gets called.

package com.yubing;

...

public class MyService extends Service
{
    // IMyService.Stub is generated from the IMyService.aidl file and declares the interface method (getValue)
    public class MyServiceImpl extends IMyService.Stub
    {
        @Override
        public String getValue() throws RemoteException
        {
            return "从AIDL服务获得的值:OK.";
        }
    }
...

}

Finally, the result of the service call is written back to the /dev/binder driver, transact returns, and the call completes.

That is the basic flow, but two questions remain. First: after the client calls transact, how does the wait actually happen, and how does onTransact get invoked? Second: once the server has the result, how does it notify the client, and how does the client-side call return? Let's tackle these next.

Continuing with my third speculation: the transact being invoked is the one in BpBinder.cpp.

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

BpBinder then calls IPCThreadState::transact:

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    ...    
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    ...    
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
       
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
   
    return err;
}

waitForResponse is implemented in the same file; it reads from and writes to the binder kernel driver via talkWithDriver:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    ...

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        ...

        switch (cmd) {
        ...

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;

            ...

        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
   
    return err;
}
Inside IPCThreadState::transact, writeTransactionData packages the transaction as a BC_TRANSACTION command into the mOut buffer; the subsequent waitForResponse/talkWithDriver is what actually pushes it to the /dev/binder driver. Everything so far happens on the client side.
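For reference, writeTransactionData looks roughly like this as I read it in IPCThreadState.cpp (abridged by me; details differ across Android versions):

// Abridged from IPCThreadState.cpp (my reading; version-dependent).
// Note: this only queues the command into mOut; the actual write to
// /dev/binder happens later, inside talkWithDriver().
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle;   // which remote object to call
    tr.code = code;              // e.g. TRANSACTION_getValue
    tr.flags = binderFlags;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        // Point tr at the Parcel's payload; nothing is copied here.
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        // Ship an error status instead of a payload.
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = statusBuffer;
        tr.offsets_size = 0;
        tr.data.ptr.offsets = NULL;
    } else {
        return (mLastError = err);
    }

    // Queue the command; talkWithDriver() later flushes mOut to the driver.
    mOut.writeInt32(cmd);        // BC_TRANSACTION or BC_REPLY
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}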

On the server side, when the service starts, IPCThreadState::joinThreadPool sits in a loop calling talkWithDriver and then executeCommand(cmd).

void IPCThreadState::joinThreadPool(bool isMain)
{

     ...

    do {
        ...

        // now get the next command to be processed, waiting if necessary
        result = talkWithDriver();
        if (result >= NO_ERROR) {
            size_t IN = mIn.dataAvail();
            if (IN < sizeof(int32_t)) continue;
            cmd = mIn.readInt32();
            IF_LOG_COMMANDS() {
                alog << "Processing top-level Command: "
                    << getReturnString(cmd) << endl;
            }


            result = executeCommand(cmd);
        }
        ...

    } while (result != -ECONNREFUSED && result != -EBADF);

    ...

}
In our case, cmd is BR_TRANSACTION, which executeCommand handles like this:

    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
             ...

            Parcel reply;
            ...

            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie);
                const status_t error = b->transact(tr.code, buffer, &reply, 0);
                if (error < NO_ERROR) reply.setError(error);
               
            } else {
                const status_t error = the_context_object->transact(tr.code, buffer, &reply, 0);
                if (error < NO_ERROR) reply.setError(error);
            }
            ...

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }
            ...
        }
        break;

Here b->transact invokes BBinder::transact, and sendReply writes the result back to the binder kernel driver.

status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}
This is where onTransact gets called. That answers the first question.

status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
    status_t err;
    status_t statusBuffer;
    err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
    if (err < NO_ERROR) return err;
   
    return waitForResponse(NULL, NULL);
}

In sendReply, writeTransactionData queues a BC_REPLY command, and waitForResponse then flushes it to the driver (again via talkWithDriver). That answers the second question.
But this raises a few new questions. Third: on the client side, why does transact end up in BpBinder.cpp's transact? Fourth: on the server side, in const status_t error = b->transact(tr.code, buffer, &reply, 0);, where does b come from? Fifth: how exactly do the client and the server talk to the driver?

Digging further, my fourth speculation (addressing the third question): what the client ultimately gets back from getStrongProxyForHandle is a BpBinder.

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one. See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
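That explains where the native BpBinder comes from. As for how the Java-level myService.getValue() reaches it, my guess is that the Java Proxy's mRemote is a BinderProxy whose transact crosses into native code through JNI and lands on that BpBinder. A heavily abridged sketch of this entry point, from my reading of android_util_Binder.cpp (version-dependent, and the field access below is from memory):

// Abridged from frameworks/base/core/jni/android_util_Binder.cpp (my reading).
// BinderProxy.transact() on the Java side lands here; "target" is the native
// BpBinder pointer stashed in the BinderProxy's mObject field.
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags)
{
    Parcel* data = parcelForJavaObject(env, dataObj);
    Parcel* reply = parcelForJavaObject(env, replyObj);
    ...
    IBinder* target = (IBinder*) env->GetIntField(obj, gBinderProxyOffsets.mObject);
    // This is where the call enters BpBinder::transact().
    status_t err = target->transact(code, *data, reply, flags);
    ...
}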

My fifth speculation (addressing the fourth question): the native service inherits from BnInterface, and BnInterface inherits from BBinder. When the server first hands out its binder, that BBinder pointer gets stored away; when a call comes in, it is read back out of the cookie.

class BnInterface : public INTERFACE, public BBinder
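
So where does the cookie get populated with the BBinder pointer? My guess: when a process writes its own local binder into a Parcel, flatten_binder in Parcel.cpp stores the raw BBinder pointer in the flat_binder_object's cookie field; the kernel driver keeps it on the binder node and echoes it back in every binder_transaction_data aimed at that object. Abridged from my reading of Parcel.cpp (version-dependent):

// Abridged from frameworks/base/libs/binder/Parcel.cpp (my reading).
status_t flatten_binder(const sp<ProcessState>& proc,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;
    ...
    if (binder != NULL) {
        IBinder* local = binder->localBinder();
        if (!local) {
            // Forwarding someone else's binder: send its handle instead.
            BpBinder* proxy = binder->remoteBinder();
            obj.type = BINDER_TYPE_HANDLE;
            obj.handle = proxy ? proxy->handle() : 0;
            obj.cookie = NULL;
        } else {
            // Our own BBinder: the raw pointer travels in the cookie, and the
            // driver echoes it back in binder_transaction_data.cookie -- this
            // is the "b" that executeCommand() casts back to a BBinder*.
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = local->getWeakRefs();
            obj.cookie = local;
        }
    }
    ...
    return finish_flatten_binder(binder, obj, out);
}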



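Finally, the fifth question. Both client and server funnel through IPCThreadState::talkWithDriver, which as far as I can tell packs the queued mOut commands plus a read buffer into a single binder_write_read struct and issues one BINDER_WRITE_READ ioctl on the /dev/binder file descriptor. That same ioctl blocks until the driver has data to return, which is exactly how the client waits for its reply and how a server thread waits for incoming work. An abridged sketch (my reading; version-dependent):

// Abridged from IPCThreadState.cpp (my reading; version-dependent).
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    binder_write_read bwr;

    // Outbound: whatever commands have been queued in mOut.
    bwr.write_size = mOut.dataSize();
    bwr.write_buffer = (long unsigned int)mOut.data();

    // Inbound: offer mIn's buffer if the caller wants to receive.
    if (doReceive) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    status_t err;
    do {
        // One syscall covers both directions; it blocks here until the
        // driver has a reply (client) or a new transaction (server).
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
    } while (err == -EINTR);
    ...
    return err;
}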