Continuing Android Handler Demystified (Part 2), this post analyzes android_os_MessageQueue, the native-layer counterpart of MessageQueue.java. The source files involved:
/frameworks/base/core/jni/android_os_MessageQueue.cpp
/frameworks/base/core/java/android/os/MessageQueue.java
/system/core/libutils/Looper.cpp
First, the JNI registration code of android_os_MessageQueue:
frameworks/base/core/jni/android_os_MessageQueue.cpp
// Corresponds to the native methods declared in MessageQueue.java
static const JNINativeMethod gMessageQueueMethods[] = {
    /* name, signature, funcPtr */
    { "nativeInit", "()J", (void*)android_os_MessageQueue_nativeInit },
    { "nativeDestroy", "(J)V", (void*)android_os_MessageQueue_nativeDestroy },
    { "nativePollOnce", "(JI)V", (void*)android_os_MessageQueue_nativePollOnce },
    { "nativeWake", "(J)V", (void*)android_os_MessageQueue_nativeWake },
    { "nativeIsPolling", "(J)Z", (void*)android_os_MessageQueue_nativeIsPolling },
    { "nativeSetFileDescriptorEvents", "(JII)V",
            (void*)android_os_MessageQueue_nativeSetFileDescriptorEvents },
};

int register_android_os_MessageQueue(JNIEnv* env) {
    int res = RegisterMethodsOrDie(env, "android/os/MessageQueue", gMessageQueueMethods,
            NELEM(gMessageQueueMethods));

    jclass clazz = FindClassOrDie(env, "android/os/MessageQueue");
    gMessageQueueClassInfo.mPtr = GetFieldIDOrDie(env, clazz, "mPtr", "J"); // field holding the native pointer on the Java side
    gMessageQueueClassInfo.dispatchEvents = GetMethodIDOrDie(env, clazz,
            "dispatchEvents", "(II)I"); // used to call back into Java's dispatchEvents method

    return res;
}
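Note that registration caches a jfieldID and a jmethodID in gMessageQueueClassInfo, so later native code can reach back into the Java object cheaply. A hedged sketch of how such cached IDs are typically used (the helper names below are illustrative, not necessarily the ones in the real file):
// Illustrative helper: recover the C++ object from the long mPtr field
// that the Java MessageQueue carries.
static NativeMessageQueue* toNativeQueue(JNIEnv* env, jobject messageQueueObj) {
    jlong ptr = env->GetLongField(messageQueueObj, gMessageQueueClassInfo.mPtr);
    return reinterpret_cast<NativeMessageQueue*>(ptr);
}

// Illustrative call back into Java: invoke MessageQueue.dispatchEvents(int, int)
// through the cached jmethodID.
static int callDispatchEvents(JNIEnv* env, jobject messageQueueObj, int fd, int events) {
    return env->CallIntMethod(messageQueueObj, gMessageQueueClassInfo.dispatchEvents,
            fd, events);
}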
The MessageQueue.java constructor is what calls nativeInit:
MessageQueue(boolean quitAllowed) {
    mQuitAllowed = quitAllowed;
    mPtr = nativeInit(); // saves the NativeMessageQueue pointer
}
This corresponds to android_os_MessageQueue_nativeInit:
static jlong android_os_MessageQueue_nativeInit(JNIEnv* env, jclass clazz) {
    NativeMessageQueue* nativeMessageQueue = new NativeMessageQueue();
    if (!nativeMessageQueue) {
        jniThrowRuntimeException(env, "Unable to allocate native queue");
        return 0;
    }

    nativeMessageQueue->incStrong(env); // take a strong reference
    return reinterpret_cast<jlong>(nativeMessageQueue); // return the pointer as a jlong
}
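For symmetry, nativeDestroy (registered in the table above) undoes this: it casts the jlong back to the object and drops the strong reference, which may free the queue. A sketch that closely follows the actual implementation:
static void android_os_MessageQueue_nativeDestroy(JNIEnv* env, jclass clazz, jlong ptr) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    nativeMessageQueue->decStrong(env); // balances the incStrong taken in nativeInit
}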
Next, the NativeMessageQueue constructor:
NativeMessageQueue::NativeMessageQueue() :
        mPollEnv(NULL), mPollObj(NULL), mExceptionObj(NULL) {
    mLooper = Looper::getForThread(); // ensures at most one Looper per thread
    if (mLooper == NULL) { // no Looper bound to this thread yet, so create one
        mLooper = new Looper(false);
        Looper::setForThread(mLooper);
    }
}
Here is another Looper. As the previous article noted, the Java layer also has a Looper, with exactly one instance per thread. Does this native Looper work the same way? Look at Looper::getForThread():
system/core/libutils/Looper.cpp
sp<Looper> Looper::getForThread() {
    int result = pthread_once(&gTLSOnce, initTLSKey);
    LOG_ALWAYS_FATAL_IF(result != 0, "pthread_once failed");

    return (Looper*)pthread_getspecific(gTLSKey);
}
This is another application of TLS (thread-local storage): it guarantees one Looper object per thread. Put another way, it is a singleton pattern scoped to the thread rather than the process.
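To make the idiom concrete, here is a minimal, self-contained sketch of the same per-thread-singleton pattern (all names are illustrative, not from AOSP; it also folds the get-and-create steps, which NativeMessageQueue's constructor performs separately, into one function):
#include <pthread.h>
#include <cstdio>

struct PerThreadLooper { pthread_t owner; };

static pthread_once_t gOnce = PTHREAD_ONCE_INIT;
static pthread_key_t gKey;

static void initKey() {
    pthread_key_create(&gKey, nullptr); // an optional destructor would run at thread exit
}

static PerThreadLooper* getForThread() {
    pthread_once(&gOnce, initKey);              // create the TLS key exactly once, process-wide
    void* existing = pthread_getspecific(gKey); // this thread's private slot (NULL if unset)
    if (existing == nullptr) {
        existing = new PerThreadLooper{pthread_self()};
        pthread_setspecific(gKey, existing);    // bind the new instance to this thread only
    }
    return static_cast<PerThreadLooper*>(existing);
}

int main() {
    // Same thread, same instance; a different thread would get its own.
    printf("same instance: %s\n", getForThread() == getForThread() ? "yes" : "no");
    return 0;
}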
In the previous article, enqueueMessage called nativeWake(mPtr); let's now trace the nativeWake flow.
nativeWake ultimately lands in the wake function in Looper.cpp:
/system/core/libutils/Looper.cpp
void Looper::wake() {
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ wake", this);
#endif

    uint64_t inc = 1;
    ssize_t nWrite = TEMP_FAILURE_RETRY(write(mWakeEventFd, &inc, sizeof(uint64_t)));
    if (nWrite != sizeof(uint64_t)) {
        if (errno != EAGAIN) {
            LOG_ALWAYS_FATAL("Could not write wake signal to fd %d: %s",
                    mWakeEventFd, strerror(errno));
        }
    }
}
It's that simple: wake() just writes the value 1 into mWakeEventFd (TEMP_FAILURE_RETRY merely retries the write if it is interrupted by a signal). So what is mWakeEventFd, and who listens on it? Let's trace both where it is assigned and where it is used. First, the assignment:
/system/core/libutils/Looper.cpp
Looper::Looper(bool allowNonCallbacks) :
        mAllowNonCallbacks(allowNonCallbacks), mSendingMessage(false),
        mPolling(false), mEpollFd(-1), mEpollRebuildRequired(false),
        mNextRequestSeq(0), mResponseIndex(0), mNextMessageUptime(LLONG_MAX) {
    mWakeEventFd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); // create the eventfd
    LOG_ALWAYS_FATAL_IF(mWakeEventFd < 0, "Could not make wake event fd: %s",
            strerror(errno));

    AutoMutex _l(mLock);
    rebuildEpollLocked(); // the interesting part
}
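An eventfd is essentially a 64-bit counter managed by the kernel: every write() adds to it, and a single read() drains it back to zero, which is why multiple wake() calls coalesce into one wakeup. A tiny standalone demo of these semantics (assumes Linux; not AOSP code):
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    int fd = eventfd(0 /* initial counter */, EFD_NONBLOCK | EFD_CLOEXEC);
    uint64_t inc = 1;
    write(fd, &inc, sizeof inc);    // wake #1: kernel counter becomes 1
    write(fd, &inc, sizeof inc);    // wake #2: counter becomes 2 (wakes coalesce)
    uint64_t value = 0;
    read(fd, &value, sizeof value); // one read drains the whole counter
    printf("drained %llu coalesced wake(s)\n", (unsigned long long)value);
    close(fd);
    return 0;
}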
So the assignment is straightforward: the Looper constructor creates an eventfd. Now for where mWakeEventFd is consumed:
void Looper::rebuildEpollLocked() {
    // Close old epoll instance if we have one.
    if (mEpollFd >= 0) {
#if DEBUG_CALLBACKS
        ALOGD("%p ~ rebuildEpollLocked - rebuilding epoll set", this);
#endif
        close(mEpollFd);
    }

    // Allocate the new epoll instance and register the wake pipe.
    mEpollFd = epoll_create(EPOLL_SIZE_HINT);
    LOG_ALWAYS_FATAL_IF(mEpollFd < 0, "Could not create epoll instance: %s", strerror(errno));

    // Wrap mWakeEventFd in an epoll_event and add it to mEpollFd.
    struct epoll_event eventItem;
    memset(&eventItem, 0, sizeof(epoll_event)); // zero out unused members of data field union
    eventItem.events = EPOLLIN;
    eventItem.data.fd = mWakeEventFd;
    int result = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, mWakeEventFd, &eventItem);
    LOG_ALWAYS_FATAL_IF(result != 0, "Could not add wake event fd to epoll instance: %s",
            strerror(errno));

    // Wrap each entry of mRequests in a struct epoll_event and add it to mEpollFd as well.
    for (size_t i = 0; i < mRequests.size(); i++) {
        const Request& request = mRequests.valueAt(i);
        struct epoll_event eventItem;
        request.initEventItem(&eventItem);

        int epollResult = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, request.fd, &eventItem);
        if (epollResult < 0) {
            ALOGE("Error adding epoll events for fd %d while rebuilding epoll set: %s",
                    request.fd, strerror(errno));
        }
    }
}
Once mWakeEventFd exists, it is wrapped in a struct epoll_event and registered with mEpollFd via EPOLL_CTL_ADD, so mEpollFd learns whenever mWakeEventFd has data. Every entry of mRequests is wrapped and registered with mEpollFd for monitoring in the same way.
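Where do those mRequests come from? They are populated by Looper::addFd(), which is also where nativeSetFileDescriptorEvents from the JNI table above eventually arrives. A hedged usage sketch (the descriptor and callback names are illustrative; the addFd signature is from libutils' Looper.h):
// Callback invoked on the Looper's thread when the watched fd becomes readable.
static int handleInput(int fd, int events, void* data) {
    // ... consume the data available on fd ...
    return 1; // 1 = keep watching this fd, 0 = unregister the callback
}

// Registering a descriptor (sketch): the Looper wraps this into a Request,
// stores it in mRequests, and epoll_ctl-ADDs it, which is exactly what
// rebuildEpollLocked() replays above.
// looper->addFd(myFd, Looper::POLL_CALLBACK, Looper::EVENT_INPUT, handleInput, nullptr);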
And with that, epoll finally shows up. A quick introduction: epoll is Linux's high-performance I/O multiplexing mechanism; it lets a single thread monitor I/O events on many file descriptors from one place.
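To close, here is a minimal standalone sketch of the exact pattern Looper builds on: register an eventfd with an epoll instance, block in epoll_wait, and get woken up when someone writes to the eventfd (assumes Linux; not AOSP code):
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    int wakeFd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    int epollFd = epoll_create1(EPOLL_CLOEXEC); // modern form of epoll_create

    struct epoll_event item = {};
    item.events = EPOLLIN; // report when wakeFd becomes readable
    item.data.fd = wakeFd;
    epoll_ctl(epollFd, EPOLL_CTL_ADD, wakeFd, &item); // mirrors rebuildEpollLocked

    uint64_t inc = 1;
    write(wakeFd, &inc, sizeof inc); // stands in for Looper::wake() from another thread

    struct epoll_event events[8];
    int n = epoll_wait(epollFd, events, 8, -1 /* block indefinitely */);
    printf("epoll_wait reported %d ready descriptor(s)\n", n); // prints 1

    close(epollFd);
    close(wakeFd);
    return 0;
}
This single epoll_wait is where nativePollOnce ultimately blocks, which is why writing one value to the eventfd is enough to wake an idle message queue.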