The previous article covered how a Guava cache is constructed. Next come the cache's two most important methods, put and get. Let's start with put:
@Override
public void put(K key, V value) {
  localCache.put(key, value);
}
/**
 * Delegates to the Segment's put method.
 * @param key
 * @param value
 * @return
 */
@Override
public V put(K key, V value) {
  checkNotNull(key);
  checkNotNull(value);
  int hash = hash(key);
  return segmentFor(hash).put(key, hash, value, false);
}
@Nullable
V put(K key, int hash, V value, boolean onlyIfAbsent) {
  // take the lock to guarantee thread safety
  lock();
  try {
    // read the current time
    long now = map.ticker.read();
    // drain the reference queues and expire stale entries
    preWriteCleanup(now);
    // the segment's entry count + 1
    int newCount = this.count + 1;
    // resize if necessary
    if (newCount > this.threshold) { // ensure capacity
      expand();
      newCount = this.count + 1;
    }
    // the segment's hash table (an array of entry chains)
    AtomicReferenceArray<ReferenceEntry<K, V>> table = this.table;
    // locate the bucket
    int index = hash & (table.length() - 1);
    // head of the chain in that bucket
    ReferenceEntry<K, V> first = table.get(index);
    // walk the whole entry chain
    // Look for an existing entry.
    for (ReferenceEntry<K, V> e = first; e != null; e = e.getNext()) {
      K entryKey = e.getKey();
      if (e.getHash() == hash
          && entryKey != null
          && map.keyEquivalence.equivalent(key, entryKey)) {
        // We found an existing entry.
        ValueReference<K, V> valueReference = e.getValueReference();
        // read the current value
        V entryValue = valueReference.get();
        // if the entry's value is null, it may have been garbage collected
        if (entryValue == null) {
          ++modCount;
          if (valueReference.isActive()) {
            enqueueNotification( // queued rather than invoked here, to keep time under the lock short
                key, hash, entryValue, valueReference.getWeight(), RemovalCause.COLLECTED);
            // keep the existing key and set the new value
            setValue(e, key, value, now); // store the value and add the entry to the write and access queues
            newCount = this.count; // count remains unchanged
          } else {
            setValue(e, key, value, now); // store the value and add the entry to the write and access queues
            newCount = this.count + 1;
          }
          this.count = newCount; // write-volatile, guarantees memory visibility
          // evict entries if over capacity
          evictEntries(e);
          return null;
        } else if (onlyIfAbsent) { // the key is already present, so this behaves as a read, and reads must update the access queue
          // Mimic
          // "if (!map.containsKey(key)) ...
          // else return map.get(key);
          recordLockedRead(e, now);
          return entryValue;
        } else {
          // the existing value is non-null, so replace it
          // clobber existing entry, count remains unchanged
          ++modCount;
          // enqueue a REPLACED removal notification
          enqueueNotification(
              key, hash, entryValue, valueReference.getWeight(), RemovalCause.REPLACED);
          setValue(e, key, value, now); // store the value and add the entry to the write and access queues
          // evict entries if over capacity
          evictEntries(e);
          return entryValue;
        }
      }
    }
    // no existing entry was found, so create a new one
    // Create a new entry.
    ++modCount;
    ReferenceEntry<K, V> newEntry = newEntry(key, hash, first);
    setValue(newEntry, key, value, now);
    table.set(index, newEntry);
    newCount = this.count + 1;
    this.count = newCount; // write-volatile
    // evict entries if over capacity
    evictEntries(newEntry);
    return null;
  } finally {
    // release the lock
    unlock();
    // process the removal notifications queued above
    postWriteCleanup();
  }
}
Notes
- put acquires the lock right away to guarantee thread safety, just like ConcurrentHashMap.
- preWriteCleanup performs a cleanup pass before every put. What exactly does it clean up?
@GuardedBy("this")
void preWriteCleanup(long now) {
runLockedCleanup(now);
}
void runLockedCleanup(long now) {
if (tryLock()) {
try {
drainReferenceQueues();
expireEntries(now); // calls drainRecencyQueue
readCount.set(0);
} finally {
unlock();
}
}
}
@GuardedBy("this")
void drainReferenceQueues() {
if (map.usesKeyReferences()) {
drainKeyReferenceQueue();
}
if (map.usesValueReferences()) {
drainValueReferenceQueue();
}
}
@GuardedBy("this")
void drainKeyReferenceQueue() {
Reference extends K> ref;
int i = 0;
while ((ref = keyReferenceQueue.poll()) != null) {
@SuppressWarnings("unchecked")
ReferenceEntry entry = (ReferenceEntry) ref;
map.reclaimKey(entry);
if (++i == DRAIN_MAX) {
break;
}
}
}
It drains the keyReferenceQueue and valueReferenceQueue. These are reference queues: to support weak and soft references, Guava Cache registers its references with clearing queues and wraps keys and values as key references and ValueReference objects. As seen in the article on building the cache, entries are created via:
@GuardedBy("this")
ReferenceEntry newEntry(K key, int hash, @Nullable ReferenceEntry next) {
return map.entryFactory.newEntry(this, checkNotNull(key), hash, next);
}
Entries are created through map.entryFactory. The factory itself is initialized with
entryFactory = EntryFactory.getFactory(keyStrength, usesAccessEntries(), usesWriteEntries());
and the available factories are:
static final EntryFactory[] factories = {
  STRONG,
  STRONG_ACCESS,
  STRONG_WRITE,
  STRONG_ACCESS_WRITE,
  WEAK,
  WEAK_ACCESS,
  WEAK_WRITE,
  WEAK_ACCESS_WRITE,
};
The matching factory creates the corresponding entry type. The interesting case here is WEAK / WeakEntry:
WEAK {
  @Override
  <K, V> ReferenceEntry<K, V> newEntry(
      Segment<K, V> segment, K key, int hash, @Nullable ReferenceEntry<K, V> next) {
    return new WeakEntry<K, V>(segment.keyReferenceQueue, key, hash, next);
  }
},
/**
 * Used for weakly-referenced keys.
 */
static class WeakEntry<K, V> extends WeakReference<K> implements ReferenceEntry<K, V> {
  WeakEntry(ReferenceQueue<K> queue, K key, int hash, @Nullable ReferenceEntry<K, V> next) {
    super(key, queue);
    this.hash = hash;
    this.next = next;
  }
  @Override
  public K getKey() {
    return get();
  }
  /*
   * It'd be nice to get these for free from AbstractReferenceEntry, but we're already extending
   * WeakReference<K>.
   */
  // null access
  @Override
  public long getAccessTime() {
    throw new UnsupportedOperationException();
  }
WeakEntry extends WeakReference and implements ReferenceEntry, i.e. the key it holds is a weak reference and can be reclaimed by the GC at any time (the same applies to values when weak values are configured). Its constructor takes a ReferenceQueue; once the key is collected, the entry lands on that queue, and the drain shown again below hands it to reclaimKey:
@GuardedBy("this")
void drainKeyReferenceQueue() {
Reference extends K> ref;
int i = 0;
while ((ref = keyReferenceQueue.poll()) != null) {
@SuppressWarnings("unchecked")
ReferenceEntry entry = (ReferenceEntry) ref;
map.reclaimKey(entry);
if (++i == DRAIN_MAX) {
break;
}
}
}
void reclaimKey(ReferenceEntry<K, V> entry) {
  int hash = entry.getHash();
  segmentFor(hash).reclaimKey(entry, hash);
}
/**
 * Removes an entry whose key has been garbage collected.
 */
boolean reclaimKey(ReferenceEntry<K, V> entry, int hash) {
  lock();
  try {
    int newCount = count - 1;
    AtomicReferenceArray<ReferenceEntry<K, V>> table = this.table;
    int index = hash & (table.length() - 1);
    ReferenceEntry<K, V> first = table.get(index);
    for (ReferenceEntry<K, V> e = first; e != null; e = e.getNext()) {
      if (e == entry) {
        ++modCount;
        ReferenceEntry<K, V> newFirst =
            removeValueFromChain(
                first,
                e,
                e.getKey(),
                hash,
                e.getValueReference().get(),
                e.getValueReference(),
                RemovalCause.COLLECTED);
        newCount = this.count - 1;
        table.set(index, newFirst);
        this.count = newCount; // write-volatile
        return true;
      }
    }
    return false;
  } finally {
    unlock();
    postWriteCleanup();
  }
}
That is the cleanup path: if a key or value has been garbage collected, the put triggers its removal.
Next is setValue, whose job is to write the value into the entry.
/**
 * Sets a new value of an entry. Adds newly created entries at the end of the access queue.
 */
@GuardedBy("Segment.this")
void setValue(ReferenceEntry<K, V> entry, K key, V value, long now) {
  ValueReference<K, V> previous = entry.getValueReference();
  int weight = map.weigher.weigh(key, value);
  checkState(weight >= 0, "Weights must be non-negative");
  ValueReference<K, V> valueReference =
      map.valueStrength.referenceValue(this, entry, value, weight);
  entry.setValueReference(valueReference);
  recordWrite(entry, weight, now);
  previous.notifyNewValue(value);
}
/**
 * Updates eviction metadata that {@code entry} was just written. This currently amounts to
 * adding {@code entry} to relevant eviction lists.
 */
@GuardedBy("Segment.this")
void recordWrite(ReferenceEntry<K, V> entry, int weight, long now) {
  // we are already under lock, so drain the recency queue immediately
  drainRecencyQueue();
  totalWeight += weight;
  if (map.recordsAccess()) {
    entry.setAccessTime(now);
  }
  if (map.recordsWrite()) {
    entry.setWriteTime(now);
  }
  accessQueue.add(entry);
  writeQueue.add(entry);
}
Guava Cache maintains two queues, a write queue and an access queue, and uses them to evict entries based on the most recent write and the most recent read. Here is the AccessQueue implementation:
/**
 * A custom queue for managing access order. Note that this is tightly integrated with
 * {@code ReferenceEntry}, upon which it relies to perform its linking.
 *
 * <p>Note that this entire implementation makes the assumption that all elements which are in
 * the map are also in this queue, and that all elements not in the queue are not in the map.
 *
 * <p>The benefits of creating our own queue are that (1) we can replace elements in the middle
 * of the queue as part of copyWriteEntry, and (2) the contains method is highly optimized
 * for the current model.
 */
static final class AccessQueue<K, V> extends AbstractQueue<ReferenceEntry<K, V>> {
  final ReferenceEntry<K, V> head = new AbstractReferenceEntry<K, V>() {
    @Override
    public long getAccessTime() {
      return Long.MAX_VALUE;
    }
    @Override
    public void setAccessTime(long time) {}
    ReferenceEntry<K, V> nextAccess = this;
    @Override
    public ReferenceEntry<K, V> getNextInAccessQueue() {
      return nextAccess;
    }
    @Override
    public void setNextInAccessQueue(ReferenceEntry<K, V> next) {
      this.nextAccess = next;
    }
    ReferenceEntry<K, V> previousAccess = this;
    @Override
    public ReferenceEntry<K, V> getPreviousInAccessQueue() {
      return previousAccess;
    }
    @Override
    public void setPreviousInAccessQueue(ReferenceEntry<K, V> previous) {
      this.previousAccess = previous;
    }
  };
  // implements Queue
  @Override
  public boolean offer(ReferenceEntry<K, V> entry) {
    // unlink
    connectAccessOrder(entry.getPreviousInAccessQueue(), entry.getNextInAccessQueue());
    // add to tail
    connectAccessOrder(head.getPreviousInAccessQueue(), entry);
    connectAccessOrder(entry, head);
    return true;
  }
  @Override
  public ReferenceEntry<K, V> peek() {
    ReferenceEntry<K, V> next = head.getNextInAccessQueue();
    return (next == head) ? null : next;
  }
  @Override
  public ReferenceEntry<K, V> poll() {
    ReferenceEntry<K, V> next = head.getNextInAccessQueue();
    if (next == head) {
      return null;
    }
    remove(next);
    return next;
  }
  @Override
  @SuppressWarnings("unchecked")
  public boolean remove(Object o) {
    ReferenceEntry<K, V> e = (ReferenceEntry<K, V>) o;
    ReferenceEntry<K, V> previous = e.getPreviousInAccessQueue();
    ReferenceEntry<K, V> next = e.getNextInAccessQueue();
    connectAccessOrder(previous, next);
    nullifyAccessOrder(e);
    return next != NullEntry.INSTANCE;
  }
Focus on what offer does:
1. It unlinks the entry from its previous and next nodes, which is why every entry maintains both a forward and a backward pointer.
2. It appends the entry to the tail of the queue; the tail is located via head.getPreviousInAccessQueue(), which shows that the queue is circular.
3. The newly added (or re-positioned) entry becomes the tail node.
So the most recently used entry always sits at the tail, the entries just after head are the least recently used, and each expiration pass removes the timed-out entries starting right after head.
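To make the access order observable from the outside, here is a small sketch (hedged: it pins the cache to a single segment via concurrencyLevel(1) so the per-segment LRU order shows through; the keys and sizes are arbitrary):
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class AccessOrderDemo {
  public static void main(String[] args) {
    // One segment, capacity 2, so the segment-local access queue decides who gets evicted.
    Cache<String, Integer> cache = CacheBuilder.newBuilder()
        .concurrencyLevel(1)
        .maximumSize(2)
        .build();
    cache.put("a", 1);
    cache.put("b", 2);
    cache.getIfPresent("a"); // moves "a" to the tail of the access queue (most recently used)
    cache.put("c", 3);       // over capacity: the entry closest to head ("b") is evicted
    System.out.println(cache.getIfPresent("b")); // expected: null
    System.out.println(cache.getIfPresent("a")); // expected: 1
  }
}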
Next comes evictEntries, which can be triggered when the cache has been configured with a maximum size (or weight):
/**
 * Performs eviction if the segment is full. This should only be called prior to adding a new
 * entry and increasing {@code count}.
 */
@GuardedBy("Segment.this")
void evictEntries() {
  if (!map.evictsBySize()) {
    return;
  }
  drainRecencyQueue();
  while (totalWeight > maxSegmentWeight) {
    ReferenceEntry<K, V> e = getNextEvictable();
    if (!removeEntry(e, e.getHash(), RemovalCause.SIZE)) {
      throw new AssertionError();
    }
  }
}
It first checks whether size-based eviction is enabled at all, then drains the recency queue, and while the total weight still exceeds the segment's maximum weight it keeps removing the least recently used entry.
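At the API level this corresponds to maximumWeight plus a Weigher; a minimal sketch (the per-character weigher and the limit of 10 are arbitrary choices for illustration):
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

public class WeightEvictionDemo {
  public static void main(String[] args) {
    // Each value weighs its own length; the cache may hold at most 10 units in total.
    Cache<String, String> cache = CacheBuilder.newBuilder()
        .maximumWeight(10)
        .weigher(new Weigher<String, String>() {
          @Override
          public int weigh(String key, String value) {
            return value.length();
          }
        })
        .build();
    cache.put("k1", "aaaa");   // weight 4
    cache.put("k2", "bbbbbb"); // weight 6, total 10: still within the limit
    cache.put("k3", "cc");     // total would exceed 10, so evictEntries removes least recently used entries
    System.out.println(cache.size()); // fewer than 3 entries remain
  }
}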
Finally there is postWriteCleanup, executed in the finally block, which calls back the registered removal listeners:
void processPendingNotifications() {
  RemovalNotification<K, V> notification;
  while ((notification = removalNotificationQueue.poll()) != null) {
    try {
      removalListener.onRemoval(notification);
    } catch (Throwable e) {
      logger.log(Level.WARNING, "Exception thrown by removal listener", e);
    }
  }
}
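On the caller's side these notifications end up in the RemovalListener registered on the CacheBuilder; a minimal sketch (the listener body is arbitrary):
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

public class RemovalListenerDemo {
  public static void main(String[] args) {
    Cache<String, String> cache = CacheBuilder.newBuilder()
        .maximumSize(1)
        .removalListener(new RemovalListener<String, String>() {
          @Override
          public void onRemoval(RemovalNotification<String, String> notification) {
            // Invoked from processPendingNotifications(), i.e. on a caller thread during cleanup.
            System.out.println(notification.getKey() + " removed: " + notification.getCause());
          }
        })
        .build();
    cache.put("a", "1");
    cache.put("b", "2"); // evicts "a" (cause SIZE); the listener fires during this call's cleanup phase
  }
}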
The get method
The get flow:
- Fetch the entry's value reference (the reference may not be alive, e.g. it may be due to expire or still loading).
- Decide whether the reference is alive (an entry that is invalid, partially collected, still loading, or due to expire is not considered alive).
- If the value is alive and refresh is configured, the value is refreshed and the freshest available value is returned.
- If the value is not alive but is currently loading, block and wait for the load to finish.
- If no value has been obtained at this point, call the loader to load it (blocking).
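Before walking through the source, a minimal LoadingCache sketch for orientation (the loader body stands in for whatever expensive lookup the cache fronts):
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.ExecutionException;

public class GetDemo {
  public static void main(String[] args) throws ExecutionException {
    LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .maximumSize(100)
        .build(new CacheLoader<String, String>() {
          @Override
          public String load(String key) {
            // stand-in for a database or RPC lookup
            return "value-for-" + key;
          }
        });
    String v1 = cache.get("k"); // miss: invokes load() and blocks until the value is ready
    String v2 = cache.get("k"); // hit: served straight from the segment, no load
    System.out.println(v1 + " / " + v2);
  }
}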
// LoadingCache methods
// delegates to the local cache
@Override
public V get(K key) throws ExecutionException {
  return localCache.getOrLoad(key);
}
/**
 * Returns the value for the key, loading it if it is absent.
 * @param key
 * @return
 * @throws ExecutionException
 */
V getOrLoad(K key) throws ExecutionException {
  return get(key, defaultLoader);
}
V get(K key, CacheLoader<? super K, V> loader) throws ExecutionException {
  int hash = hash(checkNotNull(key)); // hash -> rehash
  return segmentFor(hash).get(key, hash, loader);
}
// loading
// fetches the value for the given key; the read path takes no lock
V get(K key, int hash, CacheLoader<? super K, V> loader) throws ExecutionException {
  // key and loader must not be null
  checkNotNull(key);
  checkNotNull(loader);
  try {
    if (count != 0) { // read-volatile: the volatile read gives visibility; if the segment is empty, go straight to the load path
      // don't call getLiveEntry, which would ignore loading values
      ReferenceEntry<K, V> e = getEntry(key, hash);
      // a non-null entry means the mapping may still be there
      if (e != null) {
        long now = map.ticker.read(); // read the current time, used to decide whether the entry is still live
        V value = getLiveValue(e, now); // checks liveness; expiration is lazy, only checked on each get
        // a non-null value can be returned without reloading
        if (value != null) {
          recordRead(e, now); // update the entry's accessTime and add it to the recencyQueue
          statsCounter.recordHits(1); // record a hit, used for the hit-rate statistics
          // if refreshAfterWrite is configured, try to refresh the value
          return scheduleRefresh(e, key, hash, value, now, loader);
        }
        // the value is null; if a load/refresh is in progress, wait for its result
        ValueReference<K, V> valueReference = e.getValueReference();
        if (valueReference.isLoading()) {
          // a load is in progress: block (a future get) and return the loaded value
          return waitForLoadingValue(e, key, valueReference);
        }
      }
    }
    // nothing usable: fall back to the locked get-or-load path
    // at this point e is either null or expired
    return lockedGetOrLoad(key, hash, loader);
  } catch (ExecutionException ee) {
    Throwable cause = ee.getCause();
    if (cause instanceof Error) {
      throw new ExecutionError((Error) cause);
    } else if (cause instanceof RuntimeException) {
      throw new UncheckedExecutionException(cause);
    }
    throw ee;
  } finally {
    postReadCleanup(); // a cleanup pass runs after every put and get
  }
}
The get implementation follows the same idea as ConcurrentHashMap in JDK 1.6: put takes the lock, while get relies on volatile reads. The main steps:
- First look up the entry. If it exists, read its value; a non-null value means the mapping is still live, so decide whether a refresh is needed and return it. Otherwise, if the value reference is currently loading, wait for the load to finish.
- If no entry is found, or the value is null and nothing is loading, fall back to lockedGetOrLoad(), which is the heavyweight path.
V lockedGetOrLoad(K key, int hash, CacheLoader<? super K, V> loader) throws ExecutionException {
  ReferenceEntry<K, V> e;
  ValueReference<K, V> valueReference = null;
  LoadingValueReference<K, V> loadingValueReference = null;
  boolean createNewEntry = true;
  lock(); // take the lock, because the data structure will be modified
  try {
    // re-read ticker once inside the lock
    long now = map.ticker.read();
    preWriteCleanup(now); // drain the reference queues and expired entries in the access/write queues; this counts as a write
    int newCount = this.count - 1;
    AtomicReferenceArray<ReferenceEntry<K, V>> table = this.table;
    int index = hash & (table.length() - 1);
    ReferenceEntry<K, V> first = table.get(index);
    // locate the target entry
    for (e = first; e != null; e = e.getNext()) {
      K entryKey = e.getKey();
      if (e.getHash() == hash
          && entryKey != null
          && map.keyEquivalence.equivalent(key, entryKey)) {
        valueReference = e.getValueReference();
        // if a load is already in progress, don't create a new entry
        if (valueReference.isLoading()) {
          createNewEntry = false;
        } else {
          V value = valueReference.get();
          if (value == null) { // the value may have been garbage collected; queue a COLLECTED notification
            enqueueNotification(
                entryKey, hash, value, valueReference.getWeight(), RemovalCause.COLLECTED);
          } else if (map.isExpired(e, now)) { // or it may have expired
            // This is a duplicate check, as preWriteCleanup already purged expired
            // entries, but let's accommodate an incorrect expiration queue.
            enqueueNotification(
                entryKey, hash, value, valueReference.getWeight(), RemovalCause.EXPIRED);
          } else { // the value has already been loaded; return it
            recordLockedRead(e, now);
            statsCounter.recordHits(1);
            // we were concurrent with loading; don't consider refresh
            return value;
          }
          // remove the stale entry from both queues, since a new value will be created
          // immediately reuse invalid entries
          writeQueue.remove(e);
          accessQueue.remove(e);
          this.count = newCount; // write-volatile
        }
        break;
      }
    }
    // create a new entry; it has no value yet
    if (createNewEntry) {
      loadingValueReference = new LoadingValueReference<K, V>();
      if (e == null) {
        e = newEntry(key, hash, first);
        e.setValueReference(loadingValueReference);
        table.set(index, e);
      } else {
        // e was found but is invalid (its data was removed); install the new loading reference on it
        e.setValueReference(loadingValueReference);
      }
    }
  } finally {
    unlock();
    postWriteCleanup();
  }
  // The locked section above created the new entry and set its value reference (isAlive false,
  // isLoading true). The lock is now released, so other threads can observe a loading reference,
  // which is exactly what the lock-free get path relies on when it blocks waiting for the load.
  if (createNewEntry) {
    try {
      // Synchronizes on the entry to allow failing fast when a recursive load is
      // detected. This may be circumvented when an entry is copied, but will fail fast most
      // of the time.
      // only the entry is locked here, not the whole segment, so reads can still proceed
      synchronized (e) {
        return loadSync(key, hash, loadingValueReference, loader);
      }
    } finally {
      statsCounter.recordMisses(1);
    }
  } else {
    // The entry already exists. Wait for loading.
    return waitForLoadingValue(e, key, valueReference);
  }
}
Notes:
- The load counts as a write operation and therefore needs the lock.
- It also protects against cache breakdown: if every thread performed its own load, the database behind the cache could easily be overwhelmed.
Because this is a write, preWriteCleanup runs first. The entry is then located by key. If it is found and currently loading, no new entry is created and the caller simply waits for the load to finish. If the value is null or expired, a new loading reference is installed. If the value is still valid, the access queue is updated and the value is returned directly.
// at most one of loadSync/loadAsync may be called for any given LoadingValueReference
// synchronous load
V loadSync(
    K key,
    int hash,
    LoadingValueReference<K, V> loadingValueReference,
    CacheLoader<? super K, V> loader)
    throws ExecutionException {
  ListenableFuture<V> loadingFuture = loadingValueReference.loadFuture(key, loader);
  return getAndRecordStats(key, hash, loadingValueReference, loadingFuture);
}
Here the load future is created on the LoadingValueReference, which is exactly what the earlier isLoading() checks detect: a loading state means some thread is already updating that cache entry, so other threads only need to wait.
To summarise: every get updates the access queue; concurrent requests for the same key cause only one thread to load the data; a get for a missing key also triggers a load; and the synchronization around loading is what prevents cache breakdown.
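A hedged sketch of that single-load behaviour (assuming a deliberately slow loader and an AtomicInteger to count loads; the sleep time and key are arbitrary):
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.atomic.AtomicInteger;

public class SingleLoadDemo {
  public static void main(String[] args) throws Exception {
    final AtomicInteger loads = new AtomicInteger();
    final LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, String>() {
          @Override
          public String load(String key) throws Exception {
            loads.incrementAndGet();
            Thread.sleep(200); // simulate a slow backend
            return "v-" + key;
          }
        });
    Runnable task = new Runnable() {
      @Override
      public void run() {
        cache.getUnchecked("k"); // one thread loads; the other blocks on the loading reference
      }
    };
    Thread t1 = new Thread(task);
    Thread t2 = new Thread(task);
    t1.start();
    t2.start();
    t1.join();
    t2.join();
    System.out.println("loads = " + loads.get()); // expected: 1
  }
}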
A note on reference queues (ReferenceQueue):
- A reference queue makes it easy to track references whose referents are no longer needed.
- Once a weak reference starts returning null, the object it pointed to has been marked as garbage.
- If a ReferenceQueue is passed when constructing a WeakReference, the reference object is automatically appended to that queue once its referent has been marked as garbage.
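A plain-JDK sketch of that mechanism (hedged: System.gc() is only a hint, so in practice the poll may need to be retried):
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class ReferenceQueueDemo {
  public static void main(String[] args) throws InterruptedException {
    ReferenceQueue<Object> queue = new ReferenceQueue<Object>();
    Object referent = new Object();
    WeakReference<Object> ref = new WeakReference<Object>(referent, queue);
    referent = null; // drop the only strong reference
    System.gc();     // a hint; collection is not guaranteed to happen immediately
    Thread.sleep(100);
    Reference<?> polled = queue.poll(); // once collected, the reference object appears here
    System.out.println("cleared: " + (ref.get() == null) + ", enqueued: " + (polled == ref));
  }
}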
ReferenceEntry
- Every entry in the cache is built on the ReferenceEntry interface.
- It carries the entry's own hash, its write time and access time, its links in the write and access queues, and the next pointer of its bucket chain.
- Each entry holds its value as a ValueReference.
The ValueReference class hierarchy
ValueReference has three plain implementations: StrongValueReference, SoftValueReference and WeakValueReference. To support loading there is also a LoadingValueReference: when a key's value needs to be loaded, the value is first wrapped in a LoadingValueReference to signal that the value is being loaded; any other thread querying that key obtains this reference and waits for the load to finish, which guarantees the value is loaded only once (it can of course be loaded again after eviction). When loading completes, the LoadingValueReference is replaced with one of the other ValueReference types.
Every ValueReference also records a weight, literally the "weight" of the value, computed by the Weigher interface.
There is also a Strength enum that acts as a factory for ValueReference. Its three values, STRONG, SOFT and WEAK, create the corresponding ValueReference kinds.
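Which strength (and hence which entry and ValueReference type) is used is decided on the CacheBuilder; a minimal sketch:
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class StrengthDemo {
  public static void main(String[] args) {
    // weakKeys() selects the WEAK entry factory (WeakEntry); softValues() selects soft value references.
    Cache<Object, byte[]> cache = CacheBuilder.newBuilder()
        .weakKeys()    // keys are compared by identity and become collectable once unreachable
        .softValues()  // values can be dropped by the GC under memory pressure
        .build();
    Object key = new Object();
    cache.put(key, new byte[1024]);
    System.out.println(cache.getIfPresent(key) != null); // true while key and value are still reachable
  }
}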
WeakEntry as an example
Entries are created via newEntry both on the put path and on the loading path; depending on the CacheBuilder configuration, newEntry produces references of different strengths. Taking put as an example:
// Create a new entry.
++modCount;
// create a new entry
ReferenceEntry<K, V> newEntry = newEntry(key, hash, first);
// set its value, i.e. install its ValueReference
setValue(newEntry, key, value, now);
The newEntry method
Depending on the configuration the cache was built with (the entry factory generated from it), a different kind of entry is created.
ReferenceEntry<K, V> newEntry(K key, int hash, @Nullable ReferenceEntry<K, V> next) {
  return map.entryFactory.newEntry(this, checkNotNull(key), hash, next);
}
With weak keys this dispatches to WEAK's newEntry, where segment.keyReferenceQueue is the key reference queue. There is also a value reference queue, valueReferenceQueue, which appears shortly.
WEAK {
  @Override
  <K, V> ReferenceEntry<K, V> newEntry(
      Segment<K, V> segment, K key, int hash, @Nullable ReferenceEntry<K, V> next) {
    return new WeakEntry<K, V>(segment.keyReferenceQueue, key, hash, next);
  }
},
The setValue method
It first creates a ValueReference and then installs it on the entry.
ValueReference<K, V> valueReference =
    map.valueStrength.referenceValue(this, entry, value, weight);
entry.setValueReference(valueReference);
The WEAK value strength looks much like the WEAK key strength, except that it also carries the weight (a Weigher supplied to the CacheBuilder can assign different weights to different key/value pairs) and a value equivalence:
WEAK {
  @Override
  <K, V> ValueReference<K, V> referenceValue(
      Segment<K, V> segment, ReferenceEntry<K, V> entry, V value, int weight) {
    return (weight == 1)
        ? new WeakValueReference<K, V>(segment.valueReferenceQueue, value, entry)
        : new WeightedWeakValueReference<K, V>(
            segment.valueReferenceQueue, value, entry, weight);
  }
  @Override
  Equivalence<Object> defaultEquivalence() {
    return Equivalence.identity();
  }
};
How the cache cleans up based on references
If the key or value reference is not of the STRONG strength, it can be reclaimed by the GC. After reclamation the reference object itself still exists, only its referent is now null, which is why the ValueReference obtained from an entry must be null-checked, as mentioned above.
/**
 * Drain the key and value reference queues, cleaning up internal entries containing garbage
 * collected keys or values.
 */
@GuardedBy("this")
void drainReferenceQueues() {
  if (map.usesKeyReferences()) {
    drainKeyReferenceQueue();
  }
  if (map.usesValueReferences()) {
    drainValueReferenceQueue();
  }
}
How does the draining work? The key and value paths are essentially the same, so drainValueReferenceQueue serves as the example. (The caller tryLocks first, so the drain always runs while holding the lock.)
void drainValueReferenceQueue() {
  Reference<? extends V> ref;
  int i = 0;
  while ((ref = valueReferenceQueue.poll()) != null) {
    @SuppressWarnings("unchecked")
    ValueReference<K, V> valueReference = (ValueReference<K, V>) ref;
    // reclaim the entry that held this collected value
    map.reclaimValue(valueReference);
    if (++i == DRAIN_MAX) {
      break;
    }
  }
}
How is the value actually reclaimed?
- map is the cache that owns the segments; the hash locates the segment again.
- Inside the segment, take the lock, locate the bucket in the entry table via the hash, and walk the chain looking for the entry with that key.
- If the entry is found and its value reference is == to the one polled from the queue, call removeValueFromChain.
- If no entry is found, return false.
- If the entry is found but its reference differs, also return false.
removeValueFromChain
ReferenceEntry removeValueFromChain(ReferenceEntry first,
ReferenceEntry entry, @Nullable K key, int hash, ValueReference valueReference,
RemovalCause cause) {
enqueueNotification(key, hash, valueReference, cause);
writeQueue.remove(entry);
accessQueue.remove(entry);
if (valueReference.isLoading()) {
valueReference.notifyNewValue(null);
return first;
} else {
return removeEntryFromChain(first, entry);
}
}
- A removal notification has to be issued, so it is put on the queue.
- For a LoadingValueReference, the chain is returned unchanged (the loading reference is merely told about the null value).
- Otherwise the entry is actually removed from the chain.
How is the removal performed? removeEntryFromChain:
ReferenceEntry<K, V> removeEntryFromChain(
    ReferenceEntry<K, V> first, ReferenceEntry<K, V> entry) {
  int newCount = count;
  ReferenceEntry<K, V> newFirst = entry.getNext();
  for (ReferenceEntry<K, V> e = first; e != entry; e = e.getNext()) {
    // copy the current node (e) so that it points at newFirst, and return the copy;
    // if e itself needs to be reclaimed, copyEntry returns null
    ReferenceEntry<K, V> next = copyEntry(e, newFirst);
    if (next != null) {
      newFirst = next;
    } else {
      // the key or value of e has already been collected, so enqueue its notification instead
      removeCollectedEntry(e);
      newCount--;
    }
  }
  this.count = newCount;
  return newFirst;
}
The logic is: starting from first and walking up to (but not including) the entry being removed, each node is copied so that it points at newFirst, and the copy then becomes the new newFirst. The head of the resulting chain is therefore the last node copied, i.e. the node that originally preceded the removed entry, and the removed entry is no longer reachable.
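An illustrative sketch of that copy-based removal (a hypothetical Node class, not Guava's code; it only shows the pointer manipulation described above):
final class ChainRemovalSketch {
  static final class Node {
    final String key;
    final Node next;
    Node(String key, Node next) { this.key = key; this.next = next; }
  }
  /** Rebuilds the chain without {@code entry} by copying the nodes in front of it. */
  static Node removeByCopy(Node first, Node entry) {
    Node newFirst = entry.next; // everything after the removed node is reused as-is
    for (Node e = first; e != entry; e = e.next) {
      newFirst = new Node(e.key, newFirst); // copy e so that it points at the rebuilt chain
    }
    return newFirst; // the nodes that preceded the removed entry now appear in reverse order
  }
}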
Cache expiration and callbacks
Expiration based on read/write time
The expiration logic and flow:
- After each read or write, the entry's accessTime and writeTime are stamped.
if (map.recordsAccess()) {
  entry.setAccessTime(now);
}
if (map.recordsWrite()) {
  entry.setWriteTime(now);
}
- The entry is added to accessQueue and writeQueue at the appropriate points of the read and write paths.
accessQueue.add(entry);
writeQueue.add(entry);
- accessQueue and writeQueue are then traversed:
void expireEntries(long now) {
  drainRecencyQueue();
  ReferenceEntry<K, V> e;
  // peek at the head entry and check whether it has expired
  while ((e = writeQueue.peek()) != null && map.isExpired(e, now)) {
    if (!removeEntry(e, e.getHash(), RemovalCause.EXPIRED)) {
      throw new AssertionError();
    }
  }
  while ((e = accessQueue.peek()) != null && map.isExpired(e, now)) {
    if (!removeEntry(e, e.getHash(), RemovalCause.EXPIRED)) {
      throw new AssertionError();
    }
  }
}
- Whether an entry has expired is decided like this:
if (expiresAfterAccess()
    && (now - entry.getAccessTime() >= expireAfterAccessNanos)) {
  return true;
}
- removeEntry then goes through the removeValueFromChain shown earlier.
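A small sketch of this lazy, time-based expiration as seen by a caller (assuming a hand-rolled Ticker so the clock can be advanced deterministically; the one-second timeout is arbitrary):
import com.google.common.base.Ticker;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

public class ExpireDemo {
  // a ticker we can advance manually instead of relying on the wall clock
  static class FakeTicker extends Ticker {
    long nanos = 0;
    @Override
    public long read() {
      return nanos;
    }
  }

  public static void main(String[] args) {
    FakeTicker ticker = new FakeTicker();
    Cache<String, String> cache = CacheBuilder.newBuilder()
        .expireAfterWrite(1, TimeUnit.SECONDS)
        .ticker(ticker)
        .build();
    cache.put("k", "v");
    System.out.println(cache.getIfPresent("k")); // "v": not expired yet
    ticker.nanos += TimeUnit.SECONDS.toNanos(2); // advance past the timeout
    // expiration is lazy: this read finds the entry expired and treats it as absent
    System.out.println(cache.getIfPresent("k")); // null
  }
}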
The write chain (writeQueue) and the access chain (accessQueue)
Both chains are doubly-linked lists, formed by the previousInWriteQueue/nextInWriteQueue and previousInAccessQueue/nextInAccessQueue pointers of ReferenceEntry, but exposed through a Queue interface.
WriteQueue and AccessQueue both define their own offer, add (which simply calls offer), remove and poll logic:
- offer (add): a new node is linked in just before head, i.e. at the tail of the ring; an existing node is first unlinked and then re-linked at the tail. head itself never moves, so logically the oldest node always sits right after head.
- remove: the node is simply unlinked from the chain.
- poll: the node right after head is removed and returned.
@Override
public boolean offer(ReferenceEntry<K, V> entry) {
  // unlink
  connectAccessOrder(entry.getPreviousInAccessQueue(), entry.getNextInAccessQueue());
  // add to tail
  connectAccessOrder(head.getPreviousInAccessQueue(), entry);
  connectAccessOrder(entry, head);
  return true;
}
Removal notification callbacks
void enqueueNotification(
    @Nullable K key, int hash, ValueReference<K, V> valueReference, RemovalCause cause) {
  totalWeight -= valueReference.getWeight();
  if (cause.wasEvicted()) {
    statsCounter.recordEviction();
  }
  if (map.removalNotificationQueue != DISCARDING_QUEUE) {
    V value = valueReference.get();
    RemovalNotification<K, V> notification = new RemovalNotification<K, V>(key, value, cause);
    map.removalNotificationQueue.offer(notification);
  }
}
First look at the RemovalCause: EXPLICIT (user-initiated operations such as remove or clear), REPLACED (put or replace), COLLECTED, EXPIRED and SIZE. RemovalCause has a wasEvicted method indicating whether the removal was an eviction; it is false for the first two causes and true for the last three.
A notification object is then created and enqueued.
When is removalNotificationQueue drained?
In the finally stage of both reads and writes:
void processPendingNotifications() {
  RemovalNotification<K, V> notification;
  while ((notification = removalNotificationQueue.poll()) != null) {
    try {
      // this is the callback into the listener registered when the cache was built
      removalListener.onRemoval(notification);
    } catch (Throwable e) {
      logger.log(Level.WARNING, "Exception thrown by removal listener", e);
    }
  }
}
That concludes the walkthrough of Guava Cache's core source. Honestly it is pretty gnarly and complex, and it will take a few more passes to truly master it.