How synchronized Is Implemented on Android

  The synchronized keyword can be used in two places: (1) a synchronized block, which locks an arbitrary object, or a Class object; (2) a synchronized method, where an ordinary synchronized method locks the instance and a static synchronized method locks the class. On Android, both are implemented with monitors. The overall flow is: monitor-enter (acquire the lock) -> execute the synchronized block or method -> monitor-exit (release the lock).
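  The three forms above can be sketched as follows (the class and field names here are hypothetical, not from the platform sources):

```java
// Sketch of the three forms of synchronized (hypothetical names).
class Counter {
    private static final Object LOCK = new Object();
    private int value;

    // Instance synchronized method: locks `this`.
    synchronized void increment() {
        value++;
    }

    // Static synchronized method: locks Counter.class.
    static synchronized void reset() { /* ... */ }

    // Synchronized block: locks an arbitrary object (here, LOCK).
    int get() {
        synchronized (LOCK) {
            return value;
        }
    }
}
```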
  Here is an example from the framework sources.

/frameworks/base/services/core/java/com/android/server/UiThread.java

    public static Handler getHandler() {
        synchronized (UiThread.class) {
            ensureThreadLocked();
            return sHandler;
        }
    }

  The Dalvik bytecode for the synchronized block above is shown below. Note how the code inside synchronized is bracketed by monitor-enter and monitor-exit.

  3: android.os.Handler com.android.server.UiThread.getHandler() (dex_method_idx=6892)
    DEX CODE:
      0x0000: const-class v1, com.android.server.UiThread // type@1398
      0x0002: monitor-enter v1
      0x0003: invoke-static {}, void com.android.server.UiThread.ensureThreadLocked() // method@6890
      0x0006: sget-object  v0, Landroid/os/Handler; com.android.server.UiThread.sHandler // field@2629
      0x0008: monitor-exit v1
      0x0009: return-object v0
      0x000a: move-exception v0
      0x000b: monitor-exit v1
      0x000c: throw v0
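  Note the tail of the listing: the move-exception / monitor-exit / throw sequence is a compiler-generated catch-all handler that releases the monitor before rethrowing, so the lock is released even on the exception path. This is observable from Java; a minimal sketch (hypothetical class name):

```java
// Demonstrates that the monitor is released even when the synchronized
// body throws: the implicit catch-all handler runs monitor-exit before
// rethrowing the exception.
class MonitorExitDemo {
    static final Object LOCK = new Object();

    static boolean run() {
        try {
            synchronized (LOCK) {
                throw new RuntimeException("boom");
            }
        } catch (RuntimeException expected) {
            // The exception escaped the block; monitor-exit already ran.
        }
        // holdsLock is false here, proving the exception path unlocked it.
        return !Thread.holdsLock(LOCK);
    }
}
```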

  When the Android runtime initializes, it creates a monitor pool.

/platform/android/art/runtime/runtime.cc

  monitor_pool_ = MonitorPool::Create();

/android/art/runtime/monitor_pool.h

  static MonitorPool* Create() {
#ifndef __LP64__
    return nullptr;
#else
    return new MonitorPool();
#endif
  }

  The monitor pool allocates monitors in chunks, each chunk carved into a fixed number (kChunkCapacity) of monitor slots. num_chunks_ records the current number of chunks, capacity_ the capacity of the chunk array, and first_free_ the address of the first free monitor slot. When the pool is initialized, AllocateChunk is called to allocate the first chunk; later, whenever a new monitor is needed and no free slot remains, AllocateChunk allocates another chunk.

/art/runtime/monitor_pool.cc

MonitorPool::MonitorPool()
    : num_chunks_(0), capacity_(0), first_free_(nullptr) {
  AllocateChunk();  // Get our first chunk.
}

// Assumes locks are held appropriately when necessary.
// We do not need a lock in the constructor, but we need one when in CreateMonitorInPool.
void MonitorPool::AllocateChunk() {
  DCHECK(first_free_ == nullptr);

  // Do we need to resize?
  if (num_chunks_ == capacity_) {
    if (capacity_ == 0U) {
      // Initialization.
      capacity_ = kInitialChunkStorage;
      uintptr_t* new_backing = new uintptr_t[capacity_];
      monitor_chunks_.StoreRelaxed(new_backing);
    } else {
      size_t new_capacity = 2 * capacity_;
      uintptr_t* new_backing = new uintptr_t[new_capacity];
      uintptr_t* old_backing = monitor_chunks_.LoadRelaxed();
      memcpy(new_backing, old_backing, sizeof(uintptr_t) * capacity_);
      monitor_chunks_.StoreRelaxed(new_backing);
      capacity_ = new_capacity;
      old_chunk_arrays_.push_back(old_backing);
      VLOG(monitor) << "Resizing to capacity " << capacity_;
    }
  }

  // Allocate the chunk.
  void* chunk = allocator_.allocate(kChunkSize);
  // Check we allocated memory.
  CHECK_NE(reinterpret_cast<uintptr_t>(nullptr), reinterpret_cast<uintptr_t>(chunk));
  // Check it is aligned as we need it.
  CHECK_EQ(0U, reinterpret_cast<uintptr_t>(chunk) % kMonitorAlignment);

  // Add the chunk.
  *(monitor_chunks_.LoadRelaxed() + num_chunks_) = reinterpret_cast<uintptr_t>(chunk);
  num_chunks_++;

  // Set up the free list
  Monitor* last = reinterpret_cast<Monitor*>(reinterpret_cast<uintptr_t>(chunk) +
                                             (kChunkCapacity - 1) * kAlignedMonitorSize);
  last->next_free_ = nullptr;
  // Eagerly compute id.
  last->monitor_id_ = OffsetToMonitorId((num_chunks_ - 1) * kChunkSize +
                                        (kChunkCapacity - 1) * kAlignedMonitorSize);
  for (size_t i = 0; i < kChunkCapacity - 1; ++i) {
    Monitor* before = reinterpret_cast<Monitor*>(reinterpret_cast<uintptr_t>(last) -
                                                 kAlignedMonitorSize);
    before->next_free_ = last;
    // Derive monitor_id from last.
    before->monitor_id_ = OffsetToMonitorId(MonitorIdToOffset(last->monitor_id_) -
                                            kAlignedMonitorSize);

    last = before;
  }
  DCHECK(last == reinterpret_cast<Monitor*>(chunk));
  first_free_ = last;
}
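  The chunk and free-list bookkeeping above can be sketched in simplified form (names and sizes are illustrative, not the real ART constants):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of MonitorPool's chunked free list. Each chunk is
// carved into fixed-size slots threaded onto a singly linked free list;
// allocation pops the head, release pushes the slot back.
class SlotPool {
    static final int CHUNK_CAPACITY = 4;  // slots per chunk (cf. kChunkCapacity)

    static class Slot {
        Slot nextFree;  // analogous to Monitor::next_free_
        int id;         // analogous to Monitor::monitor_id_
    }

    private final List<Slot[]> chunks = new ArrayList<>();
    private Slot firstFree;

    private void allocateChunk() {
        Slot[] chunk = new Slot[CHUNK_CAPACITY];
        int base = chunks.size() * CHUNK_CAPACITY;
        // Link the new slots back-to-front, eagerly computing ids,
        // mirroring the loop in MonitorPool::AllocateChunk.
        Slot last = null;
        for (int i = CHUNK_CAPACITY - 1; i >= 0; i--) {
            chunk[i] = new Slot();
            chunk[i].id = base + i;
            chunk[i].nextFree = last;
            last = chunk[i];
        }
        chunks.add(chunk);
        firstFree = last;
    }

    Slot acquire() {
        if (firstFree == null) {
            allocateChunk();  // grow on demand
        }
        Slot s = firstFree;
        firstFree = s.nextFree;
        s.nextFree = null;
        return s;
    }

    void release(Slot s) {
        s.nextFree = firstFree;
        firstFree = s;
    }
}
```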

  In ART's implementation of Object there is a uint32_t member, monitor_, which stores the object's LockWord. Google documents the LockWord layout in lock_word.h.
  For a thin lock, the top two bits of the LockWord are 00, the next 14 bits are the lock count, and the low bits are the owning thread id. For a fat lock, the top two bits are 01 and the remaining bits are the id of the corresponding monitor. A LockWord of 0 means the object is unlocked; this is the initial state of every object's monitor.
  Thin-lock acquisition: entering MonitorEnter means the thread is about to lock the object. The LockWord starts out as 0, so a new LockWord is built from the thread id and a lock count of 0 (first acquisition), and CAS (compare-and-swap) installs it. That is the entire thin-lock acquisition path.
  Thin-lock contention and recursion: if the thread accessing the object is the thin lock's owner, the lock count is incremented and the LockWord updated. The count is bounded: when it reaches 2^14, InflateThinLocked inflates the thin lock into a fat lock (in practice this hardly ever happens). If a different thread accesses the object, it calls sched_yield to give up the processor and let the scheduler pick another thread. contention_count records how many times the thread has tried and failed to acquire the object; once contention_count exceeds a threshold, InflateThinLocked inflates the thin lock into a fat lock. The threshold defaults to 50 and can be changed with "-XX:MaxSpinsBeforeThinLockInflation=".
  As this shows, a thin lock is a spin lock. While waiting for the lock to be released the thread does not sleep; it merely yields the processor, then re-executes the loop via continue and checks whether the LockWord state has become kUnlocked. When a lock is only held briefly, a spin lock is a good choice; but a large contention_count means the lock is held for a long time and spinning costs extra (CAS operations and busy-waiting), so the thin lock is inflated into a fat lock.
  Unlike a thin lock, a fat lock synchronizes non-owner threads through the condition variable monitor_contenders_. A fat lock is a heavyweight lock: a thread that does not hold it blocks until the lock is released and it is woken up. Strictly speaking, a thin lock does not use a monitor at all; only fat locks do.
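  The thin-lock fast path described above can be sketched roughly as follows (illustrative only: ART performs the CAS on the object's LockWord and also tracks a recursion count, which this sketch omits):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the thin-lock spin loop. 0 means unlocked; a nonzero value
// stands in for the owner's thread id. After too many failed spins the
// lock would be inflated to a fat lock (here just a flag).
class ThinLockSketch {
    static final int MAX_SPINS = 50;  // cf. -XX:MaxSpinsBeforeThinLockInflation
    final AtomicInteger word = new AtomicInteger(0);
    boolean inflated = false;

    void lock(int threadId) {
        int spins = 0;
        while (!word.compareAndSet(0, threadId)) {  // CAS: 0 -> owner id
            if (++spins > MAX_SPINS) {
                inflated = true;  // real code: InflateThinLocked(...)
                spins = 0;
            }
            Thread.yield();       // cf. sched_yield(): give up the processor
        }
    }

    void unlock(int threadId) {
        word.compareAndSet(threadId, 0);
    }
}
```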

/art/runtime/lock_word.h

/* The lock value itself as stored in mirror::Object::monitor_.  The two most significant bits of
 * the state. The three possible states are fat locked, thin/unlocked, and hash code.
 * When the lock word is in the "thin" state and its bits are formatted as follows:
 *
 *  |33|22222222221111|1111110000000000|
 *  |10|98765432109876|5432109876543210|
 *  |00| lock count   |thread id owner |
 *
 * When the lock word is in the "fat" state and its bits are formatted as follows:
 *
 *  |33|222222222211111111110000000000|
 *  |10|987654321098765432109876543210|
 *  |01| MonitorId                    |
 *
 * When the lock word is in hash state and its bits are formatted as follows:
 *
 *  |33|222222222211111111110000000000|
 *  |10|987654321098765432109876543210|
 *  |10| HashCode                     |
 */
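  The bit layouts above can be decoded with plain bit operations; a sketch (the constant names are made up, ART's LockWord class defines its own):

```java
// Decoding the 32-bit lock word layouts documented in lock_word.h.
final class LockWordSketch {
    static final int STATE_SHIFT = 30;
    static final int STATE_THIN_OR_UNLOCKED = 0;  // 0b00
    static final int STATE_FAT = 1;               // 0b01
    static final int STATE_HASH = 2;              // 0b10

    // The two most significant bits hold the state.
    static int state(int word) { return word >>> STATE_SHIFT; }

    // Thin state: low 16 bits = owner thread id, next 14 bits = lock count.
    static int thinOwner(int word) { return word & 0xFFFF; }
    static int thinCount(int word) { return (word >>> 16) & 0x3FFF; }

    static int makeThin(int threadId, int count) {
        return (count << 16) | threadId;  // state bits stay 00
    }

    // Fat state: low 30 bits = monitor id.
    static int makeFat(int monitorId) {
        return (STATE_FAT << STATE_SHIFT) | (monitorId & 0x3FFFFFFF);
    }
    static int fatMonitorId(int word) { return word & 0x3FFFFFFF; }
}
```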

/art/runtime/monitor.cc

mirror::Object* Monitor::MonitorEnter(Thread* self, mirror::Object* obj) {
  DCHECK(self != NULL);
  DCHECK(obj != NULL);
  obj = FakeLock(obj);
  uint32_t thread_id = self->GetThreadId();
  size_t contention_count = 0;
  StackHandleScope<1> hs(self);
  Handle<mirror::Object> h_obj(hs.NewHandle(obj));
  while (true) {
    LockWord lock_word = h_obj->GetLockWord(true);
    switch (lock_word.GetState()) {
      case LockWord::kUnlocked: {
        LockWord thin_locked(LockWord::FromThinLockId(thread_id, 0));
        if (h_obj->CasLockWordWeakSequentiallyConsistent(lock_word, thin_locked)) {
          // CasLockWord enforces more than the acquire ordering we need here.
          return h_obj.Get();  // Success!
        }
        continue;  // Go again.
      }
      case LockWord::kThinLocked: {
        uint32_t owner_thread_id = lock_word.ThinLockOwner();
        if (owner_thread_id == thread_id) {
          // We own the lock, increase the recursion count.
          uint32_t new_count = lock_word.ThinLockCount() + 1;
          if (LIKELY(new_count <= LockWord::kThinLockMaxCount)) {
            LockWord thin_locked(LockWord::FromThinLockId(thread_id, new_count));
            h_obj->SetLockWord(thin_locked, true);
            return h_obj.Get();  // Success!
          } else {
            // We'd overflow the recursion count, so inflate the monitor.
            InflateThinLocked(self, h_obj, lock_word, 0);
          }
        } else {
          // Contention.
          contention_count++;
          Runtime* runtime = Runtime::Current();
          if (contention_count <= runtime->GetMaxSpinsBeforeThinkLockInflation()) {
            // TODO: Consider switching the thread state to kBlocked when we are yielding.
            // Use sched_yield instead of NanoSleep since NanoSleep can wait much longer than the
            // parameter you pass in. This can cause thread suspension to take excessively long
            // and make long pauses. See b/16307460.
            sched_yield();
          } else {
            contention_count = 0;
            InflateThinLocked(self, h_obj, lock_word, 0);
          }
        }
        continue;  // Start from the beginning.
      }
      case LockWord::kFatLocked: {
        Monitor* mon = lock_word.FatLockMonitor();
        mon->Lock(self);
        return h_obj.Get();  // Success!
      }
      case LockWord::kHashCode:
        // Inflate with the existing hashcode.
        Inflate(self, nullptr, h_obj.Get(), lock_word.GetHashCode());
        continue;  // Start from the beginning.
      default: {
        LOG(FATAL) << "Invalid monitor state " << lock_word.GetState();
        return h_obj.Get();
      }
    }
  }
}

  Lock inflation: if the current thread is the lock's owner, it performs the inflation directly. If it is not the owner, it must first suspend the owning thread and then inflate the lock.

/art/runtime/monitor.cc

void Monitor::InflateThinLocked(Thread* self, Handle<mirror::Object> obj, LockWord lock_word,
                                uint32_t hash_code) {
  DCHECK_EQ(lock_word.GetState(), LockWord::kThinLocked);
  uint32_t owner_thread_id = lock_word.ThinLockOwner();
  if (owner_thread_id == self->GetThreadId()) {
    // We own the monitor, we can easily inflate it.
    Inflate(self, self, obj.Get(), hash_code);
  } else {
    ThreadList* thread_list = Runtime::Current()->GetThreadList();
    // Suspend the owner, inflate. First change to blocked and give up mutator_lock_.
    self->SetMonitorEnterObject(obj.Get());
    bool timed_out;
    Thread* owner;
    {
      ScopedThreadStateChange tsc(self, kBlocked);
      // Take suspend thread lock to avoid races with threads trying to suspend this one.
      MutexLock mu(self, *Locks::thread_list_suspend_thread_lock_);
      owner = thread_list->SuspendThreadByThreadId(owner_thread_id, false, &timed_out);
    }
    if (owner != nullptr) {
      // We succeeded in suspending the thread, check the lock's status didn't change.
      lock_word = obj->GetLockWord(true);
      if (lock_word.GetState() == LockWord::kThinLocked &&
          lock_word.ThinLockOwner() == owner_thread_id) {
        // Go ahead and inflate the lock.
        Inflate(self, owner, obj.Get(), hash_code);
      }
      thread_list->Resume(owner, false);
    }
    self->SetMonitorEnterObject(nullptr);
  }
}

  Lock inflation, continued: MonitorPool::CreateMonitor creates a new monitor. Monitor::Install then uses CAS to rewrite the locked object's LockWord into the fat-lock form, i.e. the "01" state bits combined with the id of the newly created monitor. From then on, any thread reading this object's lock sees a fat lock and enters the Monitor::Lock path.

/art/runtime/monitor.cc

void Monitor::Inflate(Thread* self, Thread* owner, mirror::Object* obj, int32_t hash_code) {
  DCHECK(self != nullptr);
  DCHECK(obj != nullptr);
  // Allocate and acquire a new monitor.
  Monitor* m = MonitorPool::CreateMonitor(self, owner, obj, hash_code);
  DCHECK(m != nullptr);
  if (m->Install(self)) {
    if (owner != nullptr) {
      VLOG(monitor) << "monitor: thread" << owner->GetThreadId()
          << " created monitor " << m << " for object " << obj;
    } else {
      VLOG(monitor) << "monitor: Inflate with hashcode " << hash_code
          << " created monitor " << m << " for object " << obj;
    }
    Runtime::Current()->GetMonitorList()->Add(m);
    CHECK_EQ(obj->GetLockWord(true).GetState(), LockWord::kFatLocked);
  } else {
    MonitorPool::ReleaseMonitor(self, m);
  }
}

  Monitor::MonitorExit: for a thin lock, if the lock count recorded in the LockWord is non-zero, it is decremented by one; if it is zero, the LockWord is cleared, so the next thread that tries to lock the object finds it in the kUnlocked state and can take the lock directly. For a fat lock, the condition variable monitor_contenders_ is signalled to wake one thread blocked on the lock.
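  The recursion count has a directly observable effect in Java: a nested synchronized on the same object does not release the monitor until the outermost exit. A small sketch:

```java
// Nested synchronized on one object: the inner exit only decrements the
// lock count; the monitor is released at the outermost exit.
class ReentrancyDemo {
    static final Object LOCK = new Object();
    static boolean heldInside;

    static boolean run() {
        synchronized (LOCK) {        // count 0 in the lock word
            synchronized (LOCK) {    // count 1
                heldInside = Thread.holdsLock(LOCK);
            }                        // inner exit: count back to 0, still owned
            if (!Thread.holdsLock(LOCK)) return false;
        }                            // outer exit: lock word cleared
        return heldInside && !Thread.holdsLock(LOCK);
    }
}
```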

/art/runtime/monitor.cc

bool Monitor::MonitorExit(Thread* self, mirror::Object* obj) {
  DCHECK(self != NULL);
  DCHECK(obj != NULL);
  obj = FakeUnlock(obj);
  LockWord lock_word = obj->GetLockWord(true);
  StackHandleScope<1> hs(self);
  Handle<mirror::Object> h_obj(hs.NewHandle(obj));
  switch (lock_word.GetState()) {
    case LockWord::kHashCode:
      // Fall-through.
    case LockWord::kUnlocked:
      FailedUnlock(h_obj.Get(), self, nullptr, nullptr);
      return false;  // Failure.
    case LockWord::kThinLocked: {
      uint32_t thread_id = self->GetThreadId();
      uint32_t owner_thread_id = lock_word.ThinLockOwner();
      if (owner_thread_id != thread_id) {
        // TODO: there's a race here with the owner dying while we unlock.
        Thread* owner =
            Runtime::Current()->GetThreadList()->FindThreadByThreadId(lock_word.ThinLockOwner());
        FailedUnlock(h_obj.Get(), self, owner, nullptr);
        return false;  // Failure.
      } else {
        // We own the lock, decrease the recursion count.
        if (lock_word.ThinLockCount() != 0) {
          uint32_t new_count = lock_word.ThinLockCount() - 1;
          LockWord thin_locked(LockWord::FromThinLockId(thread_id, new_count));
          h_obj->SetLockWord(thin_locked, true);
        } else {
          h_obj->SetLockWord(LockWord(), true);
        }
        return true;  // Success!
      }
    }
    case LockWord::kFatLocked: {
      Monitor* mon = lock_word.FatLockMonitor();
      return mon->Unlock(self);
    }
    default: {
      LOG(FATAL) << "Invalid monitor state " << lock_word.GetState();
      return false;
    }
  }
}

  Normally, lock inflation is one-way: a thin lock can be inflated into a fat lock, but a fat lock is not deflated back into a thin lock. The exception is heap trimming for background processes, during which all fat locks are deflated back into thin locks.
  In addition, we routinely use Object.wait() and Object.notify() for thread synchronization. Both must be called inside a synchronized block that locks the same object, and both are native methods. The code analysis below shows why.
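  A minimal Java sketch of both rules: calling wait() without holding the monitor fails, while calling it inside synchronized on the same object succeeds (hypothetical class name):

```java
// wait()/notify() must run while the caller holds the object's monitor;
// otherwise the runtime throws IllegalMonitorStateException.
class WaitRulesDemo {
    static boolean run() {
        Object lock = new Object();
        boolean threw = false;
        try {
            lock.wait(1);              // not synchronized: no monitor held
        } catch (IllegalMonitorStateException e) {
            threw = true;              // "object not locked by thread before wait()"
        } catch (InterruptedException unreachable) {
        }
        try {
            synchronized (lock) {
                lock.wait(1);          // legal: monitor held; times out after 1 ms
            }
        } catch (InterruptedException ignored) {
        }
        return threw;
    }
}
```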

/art/runtime/native/java_lang_Object.cc

static void Object_wait(JNIEnv* env, jobject java_this) {
  ScopedFastNativeObjectAccess soa(env);
  mirror::Object* o = soa.Decode<mirror::Object*>(java_this);
  o->Wait(soa.Self());
}

/art/runtime/mirror/object-inl.h

inline void Object::Wait(Thread* self) {
  Monitor::Wait(self, this, 0, 0, true, kWaiting);
}

inline void Object::Wait(Thread* self, int64_t ms, int32_t ns) {
  Monitor::Wait(self, this, ms, ns, true, kTimedWaiting);
}

  As the code shows, wait() internally checks the LockWord of the object's monitor. If the object is not locked, it throws "object not locked by thread before wait()" immediately; wait() therefore requires a prior synchronized lock, which is why Object.wait() always appears inside a synchronized block. If the LockWord indicates a thin lock owned by some other thread, the same exception is thrown. If the thin lock is owned by the current thread, it is inflated into a fat lock. Because inflation uses CAS, it may "fail spuriously", in which case the while loop re-reads the LockWord and tries again. Once inflation succeeds, the overloaded Monitor::Wait is called.

/art/runtime/monitor.cc

/*
 * Object.wait().  Also called for class init.
 */
void Monitor::Wait(Thread* self, mirror::Object *obj, int64_t ms, int32_t ns,
                   bool interruptShouldThrow, ThreadState why) {
  DCHECK(self != nullptr);
  DCHECK(obj != nullptr);
  LockWord lock_word = obj->GetLockWord(true);
  while (lock_word.GetState() != LockWord::kFatLocked) {
    switch (lock_word.GetState()) {
      case LockWord::kHashCode:
        // Fall-through.
      case LockWord::kUnlocked:
        ThrowIllegalMonitorStateExceptionF("object not locked by thread before wait()");
        return;  // Failure.
      case LockWord::kThinLocked: {
        uint32_t thread_id = self->GetThreadId();
        uint32_t owner_thread_id = lock_word.ThinLockOwner();
        if (owner_thread_id != thread_id) {
          ThrowIllegalMonitorStateExceptionF("object not locked by thread before wait()");
          return;  // Failure.
        } else {
          // We own the lock, inflate to enqueue ourself on the Monitor. May fail spuriously so
          // re-load.
          Inflate(self, self, obj, 0);
          lock_word = obj->GetLockWord(true);
        }
        break;
      }
      case LockWord::kFatLocked:  // Unreachable given the loop condition above. Fall-through.
      default: {
        LOG(FATAL) << "Invalid monitor state " << lock_word.GetState();
        return;
      }
    }
  }
  Monitor* mon = lock_word.FatLockMonitor();
  mon->Wait(self, ms, ns, interruptShouldThrow, why);
}

  Since Object.wait() passes no timeout, both ms and ns are 0.

/art/runtime/monitor.cc

void Monitor::Wait(Thread* self, int64_t ms, int32_t ns,
                   bool interruptShouldThrow, ThreadState why) {
  DCHECK(self != NULL);
  DCHECK(why == kTimedWaiting || why == kWaiting || why == kSleeping);

  // monitor_lock_ is a mutex; Lock()/Unlock() bracket the critical section.
  monitor_lock_.Lock(self);

  // Make sure that we hold the lock.
  if (owner_ != self) {
    monitor_lock_.Unlock(self);
    ThrowIllegalMonitorStateExceptionF("object not locked by thread before wait()");
    return;
  }

  // We need to turn a zero-length timed wait into a regular wait because
  // Object.wait(0, 0) is defined as Object.wait(0), which is defined as Object.wait().
  // The thread state becomes kWaiting, i.e. an indefinite wait.
  if (why == kTimedWaiting && (ms == 0 && ns == 0)) {
    why = kWaiting;
  }

  // Enforce the timeout range.
  if (ms < 0 || ns < 0 || ns > 999999) {
    monitor_lock_.Unlock(self);
    ThrowLocation throw_location = self->GetCurrentLocationForThrow();
    self->ThrowNewExceptionF(throw_location, "Ljava/lang/IllegalArgumentException;",
                             "timeout arguments out of range: ms=%" PRId64 " ns=%d", ms, ns);
    return;
  }

  /*
   * Add ourselves to the set of threads waiting on this monitor, and
   * release our hold.  We need to let it go even if we're a few levels
   * deep in a recursive lock, and we need to restore that later.
   *
   * We append to the wait set ahead of clearing the count and owner
   * fields so the subroutine can check that the calling thread owns
   * the monitor.  Aside from that, the order of member updates is
   * not order sensitive as we hold the pthread mutex.
   */
  // Append the current thread to the tail of the wait_set_ list.
  AppendToWaitSet(self);
  // The thread is about to block, so increment the waiter count.
  ++num_waiters_;
  // Save the lock state, then clear it.
  int prev_lock_count = lock_count_;
  lock_count_ = 0;
  owner_ = NULL;
  mirror::ArtMethod* saved_method = locking_method_;
  locking_method_ = NULL;
  uintptr_t saved_dex_pc = locking_dex_pc_;
  locking_dex_pc_ = 0;

  /*
   * Update thread state. If the GC wakes up, it'll ignore us, knowing
   * that we won't touch any references in this state, and we'll check
   * our suspend mode before we transition out.
   */
  // Transition the thread's state.
  self->TransitionFromRunnableToSuspended(why);

  bool was_interrupted = false;
  {
    // Pseudo-atomically wait on self's wait_cond_ and release the monitor lock.
    MutexLock mu(self, *self->GetWaitMutex());

    // Set wait_monitor_ to the monitor object we will be waiting on. When wait_monitor_ is
    // non-NULL a notifying or interrupting thread must signal the thread's wait_cond_ to wake it
    // up.
    DCHECK(self->GetWaitMonitor() == nullptr);
    // Set the thread's wait_monitor_ to this monitor, recording what it is blocked on.
    self->SetWaitMonitor(this);

    // Release the monitor lock.
    // Wake one thread blocked on monitor_contenders_; as described above,
    // threads contending for an owned fat lock block on that condition variable.
    monitor_contenders_.Signal(self);
    monitor_lock_.Unlock(self);

    // Handle the case where the thread was interrupted before we called wait().
    if (self->IsInterruptedLocked()) {
      was_interrupted = true;
    } else {
      // Wait for a notification or a timeout to occur.
      if (why == kWaiting) {
        // The actual blocking, on the thread's own condition variable.
        self->GetWaitConditionVariable()->Wait(self);
      } else {
        DCHECK(why == kTimedWaiting || why == kSleeping) << why;
        self->GetWaitConditionVariable()->TimedWait(self, ms, ns);
      }
      if (self->IsInterruptedLocked()) {
        was_interrupted = true;
      }
      self->SetInterruptedLocked(false);
    }
  }
  // The thread resumes here. If kSuspendRequest is set it self-suspends;
  // otherwise (the common case) it transitions back to kRunnable.
  // Set self->status back to kRunnable, and self-suspend if needed.
  self->TransitionFromSuspendedToRunnable();

  {
    // We reset the thread's wait_monitor_ field after transitioning back to runnable so
    // that a thread in a waiting/sleeping state has a non-null wait_monitor_ for debugging
    // and diagnostic purposes. (If you reset this earlier, stack dumps will claim that threads
    // are waiting on "null".)
    MutexLock mu(self, *self->GetWaitMutex());
    DCHECK(self->GetWaitMonitor() != nullptr);
    // Clear the thread's wait_monitor_.
    self->SetWaitMonitor(nullptr);
  }

  // Re-acquire the monitor and lock.
  // Re-acquire the lock.
  Lock(self);
  monitor_lock_.Lock(self);
  self->GetWaitMutex()->AssertNotHeld(self);

  /*
   * We remove our thread from wait set after restoring the count
   * and owner fields so the subroutine can check that the calling
   * thread owns the monitor. Aside from that, the order of member
   * updates is not order sensitive as we hold the pthread mutex.
   */
  // Restore the saved state, as if nothing had happened.
  owner_ = self;
  lock_count_ = prev_lock_count;
  locking_method_ = saved_method;
  locking_dex_pc_ = saved_dex_pc;
  --num_waiters_;
  RemoveFromWaitSet(self);

  monitor_lock_.Unlock(self);

  if (was_interrupted) {
    /*
     * We were interrupted while waiting, or somebody interrupted an
     * un-interruptible thread earlier and we're bailing out immediately.
     *
     * The doc sayeth: "The interrupted status of the current thread is
     * cleared when this exception is thrown."
     */
    {
      MutexLock mu(self, *self->GetWaitMutex());
      self->SetInterruptedLocked(false);
    }
    if (interruptShouldThrow) {
      ThrowLocation throw_location = self->GetCurrentLocationForThrow();
      self->ThrowNewException(throw_location, "Ljava/lang/InterruptedException;", NULL);
    }
  }
}

  As noted above, Monitor::Wait releases the fat lock so that contending threads can take it. Notify takes the thread at the head of wait_set_ and, if that thread is blocked in Object.wait() (or one of its overloads), wakes it. So Object.notify() wakes threads in the order they began waiting: first blocked, first woken. However, notify() does not hand over the lock: the woken thread can only re-acquire the monitor after the synchronized block containing the notify() completes; otherwise it blocks again in Monitor::Lock. In my experience, notify() is usually the last statement of its synchronized block.
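  The usage pattern described above, with notify() as the last statement of its synchronized block and a guard against spurious wakeups, looks like this (a sketch with hypothetical names):

```java
// Guarded wait/notify pairing. notify() is the last statement in its
// synchronized block, so the woken thread can re-acquire the monitor as
// soon as the notifier's monitor-exit runs.
class NotifyDemo {
    static final Object LOCK = new Object();
    static boolean signaled = false;          // guarded by LOCK
    static volatile boolean woken = false;

    static void runOnce() {
        Thread waiter = new Thread(() -> {
            synchronized (LOCK) {
                while (!signaled) {           // guard against spurious wakeups
                    try {
                        LOCK.wait();          // releases LOCK while waiting
                    } catch (InterruptedException ignored) {
                    }
                }
                woken = true;
            }
        });
        waiter.start();
        synchronized (LOCK) {
            signaled = true;
            LOCK.notify();                    // last statement in the block
        }
        try {
            waiter.join();
        } catch (InterruptedException ignored) {
        }
    }
}
```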

/art/runtime/monitor.cc

void Monitor::Notify(Thread* self) {
  DCHECK(self != NULL);
  MutexLock mu(self, monitor_lock_);
  // Make sure that we hold the lock.
  if (owner_ != self) {
    ThrowIllegalMonitorStateExceptionF("object not locked by thread before notify()");
    return;
  }
  // Signal the first waiting thread in the wait set.
  while (wait_set_ != NULL) {
    Thread* thread = wait_set_;
    wait_set_ = thread->GetWaitNext();
    thread->SetWaitNext(nullptr);

    // Check to see if the thread is still waiting.
    MutexLock mu(self, *thread->GetWaitMutex());
    if (thread->GetWaitMonitor() != nullptr) {
      thread->GetWaitConditionVariable()->Signal(self);
      return;
    }
  }
}
