LruCache is a frequent interview topic. You may know what it does, but not how it does it, and knowing the "how" behind the "what" is what counts. So let's walk through the source code together; once you have seen it, you will be able to explain its working principle with confidence in an interview. There is not much code: even with comments it is only a little over 300 lines.
Open the source of LruCache.java; the first line of code declares a field:
private final LinkedHashMap<K, V> map;
That's right: LinkedHashMap is the heart of the LruCache class, because the LRU algorithm in LruCache is built on top of it. LinkedHashMap extends HashMap and uses a doubly linked list to record the order of the entries in the map. There are two possible orders, access (LRU) order and insertion order, chosen through the constructor
public LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder)
via the accessOrder parameter: when accessOrder is true the map maintains access (LRU) order; when it is false it maintains insertion order. So for operations such as get, put and remove, LinkedHashMap does everything HashMap does and additionally adjusts the linked list that records entry order. LruCache configures its LinkedHashMap for access order to implement the LRU cache: every call to get (that is, fetching an item such as an image from the memory cache) moves the entry to the tail of the list, and put also places new entries at the tail, so when the cache reaches its configured maximum size the entry at the head of the list (the least recently used one) is removed first. That is exactly the LRU policy: the least recently used entries sit at the head and are the first to be evicted.
Let's write a small program to verify this conclusion:
import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedHashMapDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration goes from least recently to most recently accessed
        LinkedHashMap<Integer, Integer> map = new LinkedHashMap<>(0, 0.75f, true);
        map.put(0, 0);
        map.put(1, 1);
        map.put(2, 2);
        map.put(3, 3);
        map.put(4, 4);
        map.put(5, 5);
        map.put(6, 6);
        map.get(3);
        map.get(4);
        for (Map.Entry<Integer, Integer> entry : map.entrySet()) {
            System.out.println(entry.getKey() + ":" + entry.getValue());
        }
    }
}
The three constructor parameters are:
int initialCapacity: the initial capacity of the map; here it starts at 0.
float loadFactor: the load factor; 0.75f means the underlying hash table is resized once the number of entries exceeds 75% of its current capacity.
boolean accessOrder: when true the entries are kept in access (LRU) order; when false they are kept in insertion order.
First, let's run the program with accessOrder set to true:
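On a standard JDK the snippet above should print the entries from least recently to most recently accessed, so the output should look like this:
0:0
1:1
2:2
5:5
6:6
3:3
4:4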
Because we called map.get(3) and map.get(4) at the end, the entries for 3 and 4 were moved to the tail of the list. When the cache is full and entries have to be evicted, the elements we used recently are therefore the least likely to be removed. That is the LRU principle in action.
Now let's look at the case where accessOrder is false:
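If the third constructor argument in the snippet is changed from true to false and the program is run again, the entries should come back in plain insertion order:
0:0
1:1
2:2
3:3
4:4
5:5
6:6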
As you can see, even though we called map.get(3) and map.get(4), the order of the list did not change, which cannot satisfy the least-recently-used policy. This is exactly why the LruCache constructor creates its LinkedHashMap with accessOrder set to true.
Now let's go through the complete source of the LruCache class. The code is pasted below with the original Chinese-free comments largely replaced by comments based on my own understanding, so it should not be hard to follow as you read.
public class LruCache<K, V> {
    // the LinkedHashMap that backs the cache
    private final LinkedHashMap<K, V> map;
    private int size;          // current total size, in the units returned by sizeOf
    private int maxSize;       // maximum size of the cache
    private int putCount;      // number of puts
    private int createCount;   // number of values created by create()
    private int evictionCount; // number of evicted entries
    private int hitCount;      // number of times a key was found
    private int missCount;     // number of times a key was not found

    // constructor: takes the maximum size of the cache
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        // initialize the fields; note that accessOrder is set to true
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }
    // adjust the maximum size of the cache
    public void resize(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        synchronized (this) {
            this.maxSize = maxSize;
        }
        trimToSize(maxSize);
    }
    // Returns the value for key if it exists in the cache, or creates one via create() and returns that.
    // When a value is returned, it is moved to the most recently used end of the queue.
    // Returns null if the value is not cached and cannot be created.
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }
        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            // value found: count a hit and return it directly
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            // otherwise count a miss
            missCount++;
        }
        // If the key was not found, try to create a value. This may take a long time,
        // and the map may have changed by the time create() returns. If a conflicting
        // value was added to the map while create() was running, the created value is
        // released and the value already in the map is kept.
        V createdValue = create(key);
        // Looking at the create() method further down, it simply returns null.
        // Why does the default implementation return null?
        // Because LruCache usually serves as a memory cache: when a key has no value here,
        // the data should be fetched from somewhere else, such as a disk cache or the network,
        // rather than fabricating an arbitrary value to return, so returning null is reasonable.
        // If you really need on-miss creation, override create() to build and return a value.
        if (createdValue == null) {
            return null;
        }
        // Reaching this point means a non-null value was created.
        synchronized (this) {
            createCount++; // one more created value
            // Put the created value into the map under this key,
            // saving whatever value was previously mapped to the key into mapValue.
            mapValue = map.put(key, createdValue);
            if (mapValue != null) {
                // mapValue is not null, so the key already had a value: undo the put above.
                map.put(key, mapValue);
            } else {
                // A new entry was added, so the cache size must be recalculated.
                size += safeSizeOf(key, createdValue);
            }
        }
        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            // Every time a new entry is added, call trimToSize to see whether eviction is needed.
            trimToSize(maxSize);
            return createdValue;
        }
    }
    // Caches value for key, placing it at the most recently used end of the queue.
    // Returns the previous value mapped to key, if any.
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }
        V previous;
        synchronized (this) {
            putCount++;                     // one more put
            size += safeSizeOf(key, value); // recalculate the cache size
            // previous holds the old value for key, or null if the key had no value before
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }
        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }
        trimToSize(maxSize);
        // return the old value previously mapped to key
        return previous;
    }
    // Evicts entries until the cache size is at most maxSize.
    // Passing -1 evicts every entry in the cache.
    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }
                // stop once the current size no longer exceeds maxSize
                if (size <= maxSize) {
                    break;
                }
                // the eldest entry (the least recently used one) is the eviction candidate
                Map.Entry<K, V> toEvict = map.eldest();
                if (toEvict == null) {
                    break;
                }
                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++; // one more eviction
            }
            entryRemoved(true, key, value, null);
        }
    }
    // Removes the entry for key from the memory cache, if present, and returns its value.
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }
        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }
        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }
        return previous;
    }
    /**
     * Called for entries that have been evicted or removed. This method is
     * invoked when a value is evicted to make space, removed by a call to
     * {@link #remove}, or replaced by a call to {@link #put}. The default
     * implementation does nothing.
     *
     * The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * @param evicted true if the entry is being removed to make space, false
     *     if the removal was caused by a {@link #put} or {@link #remove}.
     * @param newValue the new value for {@code key}, if it exists. If non-null,
     *     this removal was caused by a {@link #put}. Otherwise it was caused by
     *     an eviction or a {@link #remove}.
     */
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}
    /**
     * Called after a cache miss to compute a value for the corresponding key.
     * Returns the computed value or null if no value can be computed. The
     * default implementation returns null.
     *
     * The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * If a value for {@code key} exists in the cache when this method
     * returns, the created value will be released with {@link #entryRemoved}
     * and discarded. This can occur when multiple threads request the same key
     * at the same time (causing multiple values to be created), or when one
     * thread calls {@link #put} while another is creating a value for the same
     * key.
     */
    protected V create(K key) {
        return null;
    }
    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }
    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units. The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     *
     * An entry's size must not change while it is in the cache.
     */
    // usually overridden to compute the actual size of a cached object
    protected int sizeOf(K key, V value) {
        return 1;
    }
    /**
     * Clear the cache, calling {@link #entryRemoved} on each removed entry.
     */
    // evicts every entry in the cache
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }
    /**
     * For caches that do not override {@link #sizeOf}, this returns the number
     * of entries in the cache. For all other caches, this returns the sum of
     * the sizes of the entries in this cache.
     */
    public synchronized final int size() {
        return size;
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the maximum
     * number of entries in the cache. For all other caches, this returns the
     * maximum sum of the sizes of the entries in this cache.
     */
    public synchronized final int maxSize() {
        return maxSize;
    }
    /**
     * Returns the number of times {@link #get} returned a value that was
     * already present in the cache.
     */
    public synchronized final int hitCount() {
        return hitCount;
    }

    /**
     * Returns the number of times {@link #get} returned null or required a new
     * value to be created.
     */
    public synchronized final int missCount() {
        return missCount;
    }

    /**
     * Returns the number of times {@link #create(Object)} returned a value.
     */
    public synchronized final int createCount() {
        return createCount;
    }

    /**
     * Returns the number of times {@link #put} was called.
     */
    public synchronized final int putCount() {
        return putCount;
    }

    /**
     * Returns the number of values that have been evicted.
     */
    public synchronized final int evictionCount() {
        return evictionCount;
    }
    /**
     * Returns a copy of the current contents of the cache, ordered from least
     * recently accessed to most recently accessed.
     */
    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }

    @Override public synchronized final String toString() {
        int accesses = hitCount + missCount;
        int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
        return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",
                maxSize, hitCount, missCount, hitPercent);
    }
}
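To round off the walkthrough, here is a minimal usage sketch, not part of the original source, of the pattern hinted at by the sizeOf comment above: subclass (here anonymously) android.util.LruCache and override sizeOf so the cache is measured in real memory units rather than in entry counts. The class name BitmapMemoryCache, the key strings, and the choice of one eighth of the app's maximum memory are illustrative assumptions, not anything prescribed by the source.

import android.graphics.Bitmap;
import android.util.LruCache;

public class BitmapMemoryCache {
    private final LruCache<String, Bitmap> cache;

    public BitmapMemoryCache() {
        // Budget roughly one eighth of the app's available memory for this cache
        // (a common convention, assumed here); sizes are tracked in kilobytes.
        int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
        int cacheSizeKb = maxMemoryKb / 8;
        cache = new LruCache<String, Bitmap>(cacheSizeKb) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                // report each entry's size in KB instead of the default "1 per entry"
                return value.getByteCount() / 1024;
            }
        };
    }

    public void put(String key, Bitmap bitmap) {
        if (cache.get(key) == null) {
            cache.put(key, bitmap);
        }
    }

    public Bitmap get(String key) {
        return cache.get(key);
    }
}

With this setup, every get refreshes an image's position in the LRU order, and once the total bitmap size exceeds cacheSizeKb, trimToSize evicts the least recently used bitmaps first.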
Having browsed and analyzed the LruCache source, we can see that the core of it is the LinkedHashMap: entries are added and removed through its put, get and remove, and recently accessed entries are moved to the tail of the list so they are not the first to be evicted. Looking back at it now, isn't the implementation of LruCache actually quite simple?