Caching with the LRU (Least Recently Used) algorithm can greatly improve a program's performance.
The principle:
1. New data is inserted at the head of the list;
2. Whenever the cache is hit (i.e. a cached entry is accessed), that entry is moved to the head of the list;
3. When the cache grows beyond the specified size, entries at the tail of the list are discarded (a minimal sketch of these three rules follows below).
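To make the rules concrete, here is a minimal, self-contained sketch built on java.util.LinkedHashMap. It is not the Android implementation; the class name MiniLruCache and the fixed entry count are made up for illustration.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU sketch: with accessOrder=true, LinkedHashMap keeps the most
// recently used entry at the tail, so the eldest (head) entry is the least
// recently used one and is the first to go.
class MiniLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    MiniLruCache(int maxEntries) {
        super(16, 0.75f, true); // true = order by access, not by insertion
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Rule 3: once the cache exceeds its limit, drop the LRU entry.
        return size() > maxEntries;
    }
}

Android's LruCache is built on the same idea, except that it measures "size" with a user-defined sizeOf() instead of a plain entry count.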
With the principle clear, let's look at how LruCache is implemented in Android's source.
public LruCache(int maxSize) {
    if (maxSize <= 0) {
        throw new IllegalArgumentException("maxSize <= 0");
    }
    this.maxSize = maxSize;
    this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
}
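Before moving on to put and get, a quick usage sketch. The String keys, byte[] values and the 4 MB budget are assumptions for illustration; the point is that sizeOf() must be overridden whenever an entry's cost is something other than "one entry = 1".

// A hypothetical cache of downloaded byte[] payloads, measured in bytes.
LruCache<String, byte[]> cache = new LruCache<String, byte[]>(4 * 1024 * 1024) {
    @Override
    protected int sizeOf(String key, byte[] value) {
        return value.length; // cost of one entry, in the same unit as maxSize
    }
};

cache.put("avatar", new byte[256 * 1024]);
byte[] hit = cache.get("avatar"); // a hit also refreshes this entry's recency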
Next, the put method:
public final V put(K key, V value) {
    if (key == null || value == null) {
        throw new NullPointerException("key == null || value == null");
    }

    V previous;
    synchronized (this) {
        putCount++;
        size += safeSizeOf(key, value);
        previous = map.put(key, value);
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }

    if (previous != null) {
        entryRemoved(false, key, previous, value);
    }

    trimToSize(maxSize);
    return previous;
}
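A small sketch of the bookkeeping above (the counts are illustrative): when an existing key is overwritten, the new value's size is added, the old value's size is subtracted, and entryRemoved is invoked with evicted == false, because the removal was caused by put rather than by trimming.

LruCache<String, String> names = new LruCache<String, String>(10) {
    @Override
    protected void entryRemoved(boolean evicted, String key,
            String oldValue, String newValue) {
        // evicted == false here: the entry was replaced by put(), not trimmed.
        System.out.println("removed " + key + "=" + oldValue
                + ", evicted=" + evicted + ", replacedBy=" + newValue);
    }
};
names.put("id", "first");
names.put("id", "second"); // entryRemoved(false, "id", "first", "second")

put ends by calling trimToSize, which is where eviction actually happens: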
public void trimToSize(int maxSize) {
    while (true) {
        K key;
        V value;
        synchronized (this) {
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(getClass().getName()
                        + ".sizeOf() is reporting inconsistent results!");
            }

            if (size <= maxSize || map.isEmpty()) {
                break;
            }

            Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
            key = toEvict.getKey();
            value = toEvict.getValue();
            map.remove(key);
            size -= safeSizeOf(key, value);
            evictionCount++;
        }

        entryRemoved(true, key, value, null);
    }
}
This method is straightforward: a while loop that, as long as the cache is over the specified size, keeps discarding the eldest entry of the map, i.e. the least recently used one.
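A quick sketch of that loop's effect, using the default sizeOf (each entry counts as 1) and a capacity of 2 chosen purely for illustration:

LruCache<Integer, String> tiny = new LruCache<Integer, String>(2);
tiny.put(1, "a");
tiny.put(2, "b");
tiny.get(1);      // key 1 is now the most recently used entry
tiny.put(3, "c"); // over capacity: trimToSize evicts key 2, the LRU entry
// tiny.get(2) == null, tiny.get(1) == "a", tiny.get(3) == "c"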
Then there is the other important method, get:
public final V get(K key) {
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    V mapValue;
    synchronized (this) {
        mapValue = map.get(key);
        if (mapValue != null) {
            hitCount++;
            return mapValue;
        }
        missCount++;
    }

    /*
     * Attempt to create a value. This may take a long time, and the map
     * may be different when create() returns. If a conflicting value was
     * added to the map while create() was working, we leave that value in
     * the map and release the created value.
     */
    V createdValue = create(key);
    if (createdValue == null) {
        return null;
    }

    synchronized (this) {
        createCount++;
        mapValue = map.put(key, createdValue);
        if (mapValue != null) {
            // There was a conflict so undo that last put
            map.put(key, mapValue);
        } else {
            size += safeSizeOf(key, createdValue);
        }
    }

    if (mapValue != null) {
        entryRemoved(false, key, createdValue, mapValue);
        return mapValue;
    } else {
        trimToSize(maxSize);
        return createdValue;
    }
}
The create method can be seen as a second-level cache hook: on a miss, it can, for example, load the value from a local file.
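A sketch of that idea; the cacheDir field and the readFully helper below are hypothetical (and java.io.File / java.io.IOException are assumed to be imported), but they show how create() lets get() fall back to a slower source on a memory miss.

LruCache<String, byte[]> twoLevel = new LruCache<String, byte[]>(4 * 1024 * 1024) {
    @Override
    protected int sizeOf(String key, byte[] value) {
        return value.length;
    }

    @Override
    protected byte[] create(String key) {
        // Second-level cache: on a memory miss, try loading from disk.
        File file = new File(cacheDir, key); // cacheDir: assumed directory
        if (!file.exists()) {
            return null;                     // get() will then return null
        }
        try {
            return readFully(file);          // readFully: assumed helper
        } catch (IOException e) {
            return null;
        }
    }
};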
The code above is all fairly simple, arguably too simple. For a while I was puzzled: the code does not seem to implement one of the LRU rules, namely moving a hit entry to the head of the list. Yet the javadoc of get claims exactly that:

/**
 * Returns the value for {@code key} if it exists in the cache or can be
 * created by {@code #create}. If a value was returned, it is moved to the
 * head of the queue. This returns null if a value is not cached and cannot
 * be created.
 */

So where is this logic? After some digging, I finally found it.
It is hidden in LruCache's constructor:

this.map = new LinkedHashMap<K, V>(0, 0.75f, true);

The third argument is true, and it tells the LinkedHashMap to order its entries by access instead of by insertion.
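A small sketch of what that third argument changes (the keys are arbitrary): with access ordering, every successful get moves the entry to the end of the iteration order, so the least recently used entry always comes first.

LinkedHashMap<String, String> byAccess =
        new LinkedHashMap<String, String>(0, 0.75f, true); // accessOrder = true
byAccess.put("a", "1");
byAccess.put("b", "2");
byAccess.put("c", "3");
byAccess.get("a"); // touch "a"

// Iteration order is now b, c, a: the least recently used entry comes first,
// which is exactly the entry LruCache.trimToSize() removes.
System.out.println(byAccess.keySet()); // [b, c, a]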
Looking at how get is implemented inside LinkedHashMap:
public V get(Object key) {
    /*
     * This method is overridden to eliminate the need for a polymorphic
     * invocation in superclass at the expense of code duplication.
     */
    if (key == null) {
        HashMapEntry<K, V> e = entryForNullKey;
        if (e == null)
            return null;
        if (accessOrder)
            makeTail((LinkedEntry<K, V>) e);
        return e.value;
    }

    int hash = Collections.secondaryHash(key);
    HashMapEntry<K, V>[] tab = table;
    for (HashMapEntry<K, V> e = tab[hash & (tab.length - 1)];
            e != null; e = e.next) {
        K eKey = e.key;
        if (eKey == key || (e.hash == hash && key.equals(eKey))) {
            if (accessOrder)
                makeTail((LinkedEntry<K, V>) e);
            return e.value;
        }
    }
    return null;
}
Note the makeTail calls guarded by accessOrder: this is where a cache hit moves the entry to the tail of the internal linked list, i.e. to the most-recently-used end.
Now the put method, which is implemented in HashMap:

public V put(K key, V value) {
    if (key == null) {
        return putValueForNullKey(value);
    }

    int hash = Collections.secondaryHash(key);
    HashMapEntry<K, V>[] tab = table;
    int index = hash & (tab.length - 1);
    for (HashMapEntry<K, V> e = tab[index]; e != null; e = e.next) {
        if (e.hash == hash && key.equals(e.key)) {
            preModify(e);
            V oldValue = e.value;
            e.value = value;
            return oldValue;
        }
    }

    // No entry for (non-null) key is present; create one
    modCount++;
    if (size++ > threshold) {
        tab = doubleCapacity();
        index = hash & (tab.length - 1);
    }
    addNewEntry(key, value, hash, index);
    return null;
}

LinkedHashMap then overrides the preModify hook:
void preModify(HashMapEntry<K, V> e) {
    if (accessOrder) {
        makeTail((LinkedEntry<K, V>) e);
    }
}
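So updating an existing key through put also counts as a "use": preModify calls makeTail just like get does. A tiny sketch (keys arbitrary):

LinkedHashMap<String, String> m = new LinkedHashMap<String, String>(0, 0.75f, true);
m.put("a", "1");
m.put("b", "2");
m.put("a", "updated"); // existing key: preModify -> makeTail moves "a" to the tail
System.out.println(m.keySet()); // [b, a]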
==========================================================
LruCache handles optimization of the in-memory cache, but our file cache (disk cache) needs the same treatment; otherwise users will watch our app eat more and more of their storage, and they will have every reason to complain.
For this, Google provides a disk-cache counterpart: DiskLruCache (not written by Google itself, but officially endorsed). Unfortunately, the Android documentation does not describe DiskLruCache's usage in detail; http://www.mobile-open.com/2014/3104.html gives a good introduction that is worth studying.
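For reference, a hedged usage sketch based on the commonly distributed AOSP/Jake Wharton DiskLruCache (method names can differ slightly between versions; the directory name, version number, sizes and the context/data variables are placeholders, and exception handling is omitted):

File dir = new File(context.getCacheDir(), "bitmaps");  // context: assumed
DiskLruCache disk = DiskLruCache.open(dir, 1 /* app version */,
        1 /* values per entry */, 10 * 1024 * 1024 /* 10 MB */);

// Write: obtain an editor for a key, stream the data, then commit.
DiskLruCache.Editor editor = disk.edit("imagekey");     // keys: [a-z0-9_-]
if (editor != null) {
    OutputStream out = editor.newOutputStream(0);
    out.write(data);                                     // data: assumed byte[]
    out.close();
    editor.commit();
}

// Read: a snapshot exposes one InputStream per value index.
DiskLruCache.Snapshot snapshot = disk.get("imagekey");
if (snapshot != null) {
    InputStream in = snapshot.getInputStream(0);
    // ... decode / consume ...
    in.close();
    snapshot.close();
}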