In the previous posts we covered how to use traditional Binder at both the C++ layer and the Java layer. In this post we focus on the relevant methods of the ProcessState and IPCThreadState classes.
In a typical native daemon, the code to use Binder communication looks roughly like this:
int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    MediaPlayerService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
The code above first calls ProcessState::self():
sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;
    return gProcess;
}
This is a typical singleton. Let's look at the ProcessState constructor:
ProcessState::ProcessState()
    : mDriverFD(open_driver())
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // availabla).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened. Terminating.");
}
In the initializer list, mDriverFD is initialized by calling open_driver():
static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR); // open the binder driver
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS; // set the binder thread count, 15 by default
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}
This function opens the binder driver node and then sets the maximum number of binder threads. The resulting file descriptor is saved in mDriverFD.
For details on how binder threads are shut down, see the post "Binder通信过程中的用户空间线程池的管理" (user-space thread pool management during Binder communication).
Back in the constructor, mmap is then used to map the device file /dev/binder into the process's address space, providing the buffer through which incoming transactions are received.
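For reference, in this generation of ProcessState.cpp the size of the mapped region, BINDER_VM_SIZE, is about 1 MB minus two pages; treat the exact expression as version-specific, since later releases compute the page size at runtime instead of hard-coding it:

// Definition from ProcessState.cpp of this era (version-specific).
#define BINDER_VM_SIZE ((1 * 1024 * 1024) - (4096 * 2))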
The instantiate function typically registers the service with ServiceManager:
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
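For comparison, here is a minimal sketch of the client side, which looks the service up by the same name. It assumes the standard IMediaPlayerService headers are available; getService retries for a short while if the service has not been registered yet:

// Client-side sketch: fetch the "media.player" binder from ServiceManager
// and cast the proxy to the interface type.
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("media.player"));
sp<IMediaPlayerService> player = interface_cast<IMediaPlayerService>(binder);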
Next, let's look at startThreadPool:
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

In spawnPooledThread, a new PoolThread is created and started:
void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}

Looking at the PoolThread class, its threadLoop ultimately just calls IPCThreadState::self()->joinThreadPool(mIsMain):
class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

    const bool mIsMain;
};
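As an aside, the makeBinderThreadName call above is what produces the familiar thread names such as Binder_1 and Binder_2 that show up in ps output. In this era of the source it looks roughly like this:

// Roughly the implementation in ProcessState.cpp of this era: an atomically
// incremented sequence number formatted into the thread name.
String8 ProcessState::makeBinderThreadName() {
    int32_t s = android_atomic_add(1, &mThreadPoolSeq);
    String8 name;
    name.appendFormat("Binder_%X", s);
    return name;
}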
Now let's turn to IPCThreadState::joinThreadPool. Its declaration shows that the parameter defaults to true:
void joinThreadPool(bool isMain = true);

So the direct call in main and the call inside PoolThread::threadLoop both invoke joinThreadPool with isMain set to true; the only difference is that startThreadPool runs it on a newly spawned thread, while main runs it on the main thread.
Here is joinThreadPool itself:
void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    // This thread may have been spawned by a thread that was in the background
    // scheduling group, so first we will make sure it is in the foreground
    // one to avoid performing an initial transaction in the background.
    set_sched_policy(mMyThreadId, SP_FOREGROUND);

    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();

        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if (result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
        (void*)pthread_self(), getpid(), (void*)result);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
This function is essentially an endless loop that keeps pulling work from the driver: it exits only when the driver returns -ECONNREFUSED or -EBADF, or when a non-main thread times out and is allowed to leave the pool. Let's look at getAndExecuteCommand:
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver(); // read data from the binder driver
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++;
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        result = executeCommand(cmd);

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        // After executing the command, ensure that the thread is returned to the
        // foreground cgroup before rejoining the pool. The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace. Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }

    return result;
}

getAndExecuteCommand first calls talkWithDriver to read data from the binder driver, then calls executeCommand to execute the command:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
    case BR_ERROR:
        result = mIn.readInt32();
        break;

    case BR_OK:
        break;

    ......

    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            const int32_t origStrictModePolicy = mStrictModePolicy;
            const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            mLastTransactionBinderFlags = tr.flags;

            int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
            if (gDisableBackgroundScheduling) {
                if (curPrio > ANDROID_PRIORITY_NORMAL) {
                    // We have inherited a reduced priority from the caller, but do not
                    // want to run in that state in this process. The driver set our
                    // priority already (though not our scheduling class), so bounce
                    // it back to the default before invoking the transaction.
                    setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
                }
            } else {
                if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                    // We want to use the inherited priority from the caller.
                    // Ensure this thread is in the background scheduling class,
                    // since the driver won't modify scheduling classes for us.
                    // The scheduling group is reset to default by the caller
                    // once this method returns after the transaction is complete.
                    set_sched_policy(mMyThreadId, SP_BACKGROUND);
                }
            }

            //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            status_t error;
            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BR_TRANSACTION thr " << (void*)pthread_self()
                    << " / obj " << tr.target.ptr << " / code "
                    << TypeCode(tr.code) << ": " << indent << buffer
                    << dedent << endl
                    << "Data addr = "
                    << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
                    << ", offsets addr="
                    << reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
            }

            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie);
                error = b->transact(tr.code, buffer, &reply, tr.flags); // this eventually reaches the service's onTransact
            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }

            //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                if (error < NO_ERROR) reply.setError(error);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;
            mStrictModePolicy = origStrictModePolicy;
            mLastTransactionBinderFlags = origTransactionBinderFlags;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                    << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }
        }
        break;

    ......
}
executeCommand dispatches on the command it receives. For BR_TRANSACTION it calls BBinder::transact, which in turn ends up in the service's onTransact:
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags); // call onTransact
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}
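To make the dispatch concrete, here is a hypothetical service-side override of onTransact. MyService, GET_VALUE, and the reply payload are invented for illustration; real services usually inherit from a generated BnInterface<> class rather than raw BBinder:

// Hypothetical service: handle one custom transaction code and fall back
// to BBinder for everything else (PING_TRANSACTION, dump, etc.).
class MyService : public BBinder {
protected:
    enum { GET_VALUE = IBinder::FIRST_CALL_TRANSACTION };

    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags) {
        switch (code) {
        case GET_VALUE:
            reply->writeInt32(42); // unmarshal args from data, marshal results into reply
            return NO_ERROR;
        default:
            return BBinder::onTransact(code, data, reply, flags);
        }
    }
};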
Recall that healthd does not use dedicated binder threads. Let's see how its code is written:
static void binder_event(uint32_t /*epevents*/) {
    IPCThreadState::self()->handlePolledCommands();
}

void healthd_mode_android_init(struct healthd_config* /*config*/) {
    ProcessState::self()->setThreadPoolMaxThreadCount(0);
    IPCThreadState::self()->disableBackgroundScheduling(true);
    IPCThreadState::self()->setupPolling(&gBinderFd);

    if (gBinderFd >= 0) {
        if (healthd_register_event(gBinderFd, binder_event))
            KLOG_ERROR(LOG_TAG, "Register for binder events failed\n");
    }

    gBatteryPropertiesRegistrar = new BatteryPropertiesRegistrar();
    gBatteryPropertiesRegistrar->publish();
}
healthd_mode_android_init first configures Binder through these calls, then registers the binder fd with epoll, with binder_event as the handler.
First, the function that sets the maximum number of binder threads:
status_t ProcessState::setThreadPoolMaxThreadCount(size_t maxThreads) {
    status_t result = NO_ERROR;
    if (ioctl(mDriverFD, BINDER_SET_MAX_THREADS, &maxThreads) != -1) {
        mMaxThreads = maxThreads;
    } else {
        result = -errno;
        ALOGE("Binder ioctl to set max threads failed: %s", strerror(-result));
    }
    return result;
}
Under the hood this is just a BINDER_SET_MAX_THREADS ioctl on the driver fd.
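As a usage sketch, a daemon that wants a smaller pool than the default 15 threads could configure the limit before starting the pool (the value 4 here is arbitrary):

// Illustrative daemon setup: cap the driver-spawned binder threads at 4.
// The ioctl must happen before startThreadPool() for the limit to matter.
int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    ProcessState::self()->setThreadPoolMaxThreadCount(4);
    // ... register services with ServiceManager here ...
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}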
Next, disableBackgroundScheduling, which, as the name suggests, prevents binder threads from being moved into the background scheduling class:
void IPCThreadState::disableBackgroundScheduling(bool disable)
{
    gDisableBackgroundScheduling = disable;
}
Where is gDisableBackgroundScheduling used? Again in executeCommand, in the BR_TRANSACTION handler we saw earlier:
case BR_TRANSACTION:
    {
        binder_transaction_data tr;
        result = mIn.read(&tr, sizeof(tr));
        ALOG_ASSERT(result == NO_ERROR,
            "Not enough command data for brTRANSACTION");
        if (result != NO_ERROR) break;

        Parcel buffer;
        buffer.ipcSetDataReference(
            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
            tr.data_size,
            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
            tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);

        const pid_t origPid = mCallingPid;
        const uid_t origUid = mCallingUid;
        const int32_t origStrictModePolicy = mStrictModePolicy;
        const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;

        mCallingPid = tr.sender_pid;
        mCallingUid = tr.sender_euid;
        mLastTransactionBinderFlags = tr.flags;

        int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
        if (gDisableBackgroundScheduling) {
            if (curPrio > ANDROID_PRIORITY_NORMAL) {
                // We have inherited a reduced priority from the caller, but do not
                // want to run in that state in this process. The driver set our
                // priority already (though not our scheduling class), so bounce
                // it back to the default before invoking the transaction.
                setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
            }
        } else {
            if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                // We want to use the inherited priority from the caller.
                // Ensure this thread is in the background scheduling class,
                // since the driver won't modify scheduling classes for us.
                // The scheduling group is reset to default by the caller
                // once this method returns after the transaction is complete.
                set_sched_policy(mMyThreadId, SP_BACKGROUND);
            }
        }
Now let's look at setupPolling:
int IPCThreadState::setupPolling(int* fd)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    mOut.writeInt32(BC_ENTER_LOOPER);
    *fd = mProcess->mDriverFD;
    return 0;
}
This function queues a BC_ENTER_LOOPER command, so the driver treats the calling thread as a looper, and hands back the binder driver fd.
healthd then adds this fd to its main thread's epoll set, so that whenever the binder driver has data to deliver, binder_event is invoked.
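healthd_register_event hides the epoll plumbing inside healthd's own main loop; a bare-bones equivalent using raw epoll calls might look like the following sketch (illustrative only, error handling omitted):

#include <sys/epoll.h>

// Illustrative: watch the binder fd with epoll on the current thread and
// drain commands whenever it becomes readable. healthd does the equivalent
// via healthd_register_event and its central epoll loop.
void poll_binder_fd(int binderFd) {
    int epfd = epoll_create1(EPOLL_CLOEXEC);
    struct epoll_event ev = {};
    ev.events = EPOLLIN;
    ev.data.fd = binderFd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, binderFd, &ev);

    while (true) {
        struct epoll_event out;
        int n = epoll_wait(epfd, &out, 1, -1 /* block indefinitely */);
        if (n > 0 && (out.events & EPOLLIN)) {
            // Same work binder_event does in healthd.
            IPCThreadState::self()->handlePolledCommands();
        }
    }
}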
static void binder_event(uint32_t /*epevents*/) {
    IPCThreadState::self()->handlePolledCommands();
}

binder_event simply delegates to handlePolledCommands:
status_t IPCThreadState::handlePolledCommands()
{
    status_t result;

    do {
        result = getAndExecuteCommand();
    } while (mIn.dataPosition() < mIn.dataSize());

    processPendingDerefs();
    flushCommands();
    return result;
}

We analyzed getAndExecuteCommand earlier; here it is again:
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver(); // read data from the binder driver
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++;
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        result = executeCommand(cmd); // execute the command

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        // After executing the command, ensure that the thread is returned to the
        // foreground cgroup before rejoining the pool. The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace. Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }

    return result;
}
It first reads data from the binder driver and then runs executeCommand; exactly as before, a BR_TRANSACTION command goes through BBinder::transact and finally lands in the service's onTransact.
Note that all of this processing happens on healthd's main thread, triggered by epoll whenever the binder driver has data.
Let's return to the rest of handlePolledCommands:
status_t IPCThreadState::handlePolledCommands()
{
    status_t result;

    do {
        result = getAndExecuteCommand();
    } while (mIn.dataPosition() < mIn.dataSize());

    processPendingDerefs();
    flushCommands();
    return result;
}
After the loop it performs some cleanup: processPendingDerefs handles pending reference-count decrements, and flushCommands pushes any commands still queued in mOut down to the driver. The argument false to talkWithDriver means doReceive = false, i.e. write out pending commands without blocking to read new ones:
void IPCThreadState::flushCommands()
{
    if (mProcess->mDriverFD <= 0)
        return;
    talkWithDriver(false);
}
In this post we walked through the binder call flow of a typical daemon that uses binder threads, and then, using healthd as an example, how a process can handle binder traffic without dedicated binder threads.