A Brief Analysis of How LruCache Works

Something interesting happened recently: after interviewing quite a few candidates, the conversation kept turning to the LruCache caching mechanism.

Some of them were not very familiar with it, so today I will go through the source code and briefly share the core principle behind this caching mechanism.

Concept

The caching algorithm is LRU (Least Recently Used). Its core idea is that when the cache is full, the cached objects that have been used least recently are evicted first.

LruCache

LruCache lives under android.util, and there is also a copy in the support library.
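
Before diving into the source, here is a minimal usage sketch. It assumes an Android project (android.util.LruCache is not available in plain Java); with the default sizeOf() every entry counts as 1, so maxSize here simply means "at most 3 entries":

import android.util.LruCache;

public class LruCacheDemo {
    public static void main(String[] args) {
        // With the default sizeOf(), each entry has size 1, so this holds at most 3 entries.
        LruCache<String, String> cache = new LruCache<>(3);

        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");

        cache.get("a");      // "a" becomes the most recently used entry
        cache.put("d", "4"); // over capacity: the least recently used entry ("b") is evicted

        System.out.println(cache.get("b"));            // null
        System.out.println(cache.snapshot().keySet()); // [c, a, d], least recently used first
    }
}

With that usage in mind, here is the (abridged) source: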


package android.util;

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> {

    // This is the core of it
    private final LinkedHashMap<K, V> map;

    private int size;
    private int maxSize;

    private int putCount;
    private int createCount;
    private int evictionCount;
    private int hitCount;
    private int missCount;

    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

    /**
     * Sets the size of the cache.
     *
     * @param maxSize The new maximum size.
     */
    public void resize(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }

        synchronized (this) {
            this.maxSize = maxSize;
        }
        trimToSize(maxSize);
    }

    /**
     * Returns the value for {@code key} if it exists in the cache or can be
     * created by {@code #create}. If a value was returned, it is moved to the
     * head of the queue. This returns null if a value is not cached and cannot
     * be created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */

        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);

            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }

 //...
}

The core is the LinkedHashMap it maintains internally, and the constructor call it uses to build that map is

 this.map = new LinkedHashMap<K, V>(0, 0.75f, true);

LinkedHashMap

Two points about LinkedHashMap up front:

1. LinkedHashMap can be thought of as HashMap + LinkedList: it uses HashMap mechanics to operate on the data structure, while a linked list maintains the order in which elements were inserted.

2. The basic implementation idea behind LinkedHashMap is polymorphism. Understanding polymorphism makes it much easier to understand how LinkedHashMap works; conversely, studying how LinkedHashMap works also reinforces and deepens one's understanding of polymorphism.

We can find LinkedHashMap's three-argument constructor:

  /**
     * Constructs an empty LinkedHashMap instance with the
     * specified initial capacity, load factor and ordering mode.
     *
     * @param  initialCapacity the initial capacity
     * @param  loadFactor      the load factor
     * @param  accessOrder     the ordering mode - true for
     *         access-order, false for insertion-order
     * @throws IllegalArgumentException if the initial capacity is negative
     *         or the load factor is nonpositive
     */
    public LinkedHashMap(int initialCapacity,
                         float loadFactor,
                         boolean accessOrder) {
        super(initialCapacity, loadFactor);
        this.accessOrder = accessOrder;
    }

Note that accessOrder = true is passed here. What exactly does this parameter do?
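
Before answering that with the source, a quick standalone sketch with plain java.util.LinkedHashMap (nothing Android-specific) shows the effect: with accessOrder = true, every successful get() moves the accessed entry to the tail of the iteration order.

import java.util.LinkedHashMap;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // Same constructor arguments LruCache uses: access-order mode enabled.
        LinkedHashMap<String, String> map = new LinkedHashMap<>(0, 0.75f, true);

        map.put("a", "1");
        map.put("b", "2");
        map.put("c", "3");
        System.out.println(map.keySet()); // [a, b, c]

        map.get("a"); // accessing "a" moves it to the tail of the ordering
        System.out.println(map.keySet()); // [b, c, a], least recently used first
    }
}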

Core Principle

LinkedHashMap overrides the get method of its parent class HashMap. After calling the parent's getEntry() method to fetch the element being looked up, it checks whether the ordering mode accessOrder is true (i.e. ordering by access); if so, it first removes the current node from the linked list and then re-inserts it at the tail of the list. Since insertions into and removals from a linked list are constant-time operations, this brings no performance penalty.

/**
 * Gets a value by key. The difference from HashMap is that when the
 * LinkedHashMap is ordered by access, the accessed node is moved to the
 * tail of the list (the node right before the header).
 */
public V get(Object key) {
    // Call the parent HashMap's getEntry() method to find the element.
    Entry<K,V> e = (Entry<K,V>)getEntry(key);
    if (e == null)
        return null;
    // Record the access order.
    e.recordAccess(this);
    return e.value;
}

/**
 * Called from HashMap's put and get methods; in HashMap this method is empty.
 * In LinkedHashMap, when ordering by access, it moves the current node to the
 * tail of the list (the node right before the header); otherwise it does nothing.
 */
void recordAccess(HashMap<K,V> m) {
    LinkedHashMap<K,V> lm = (LinkedHashMap<K,V>)m;
    // When the LinkedHashMap is ordered by access
    if (lm.accessOrder) {
        lm.modCount++;
        // Unlink the current node
        remove();
        // Re-insert the current node right before the header (i.e. at the tail)
        addBefore(lm.header);
    }
}

/**
 * Unlinks this node and fixes up the references of its neighbours.
 */
private void remove() {
    before.after = after;
    after.before = before;
}

private void addBefore(Entry<K,V> existingEntry) {
    after  = existingEntry;
    before = existingEntry.before;
    before.after = this;
    after.before = this;
}

We can see that every recordAccess does two things (illustrated by the sketch after this list):

  • Link the Entry before and the Entry after the one being moved directly to each other
  • Move that Entry to the tail of the list
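
The following is not the JDK code, just a minimal, self-contained sketch of that pointer surgery: a circular doubly linked list with a sentinel header node, where "accessing" a node means unlinking it and re-inserting it right before the header (i.e. at the tail).

public class MoveToTailDemo {
    static class Node {
        String key;
        Node before, after;
        Node(String key) { this.key = key; }
    }

    public static void main(String[] args) {
        // A circular doubly linked list with a sentinel header, like the old LinkedHashMap.
        Node header = new Node("header");
        header.before = header.after = header;

        Node a = new Node("a"), b = new Node("b"), c = new Node("c");
        for (Node n : new Node[]{a, b, c}) addBefore(n, header); // insert at tail: a, b, c

        // "Access" a: unlink it, then re-insert it right before the header (the tail).
        remove(a);
        addBefore(a, header);

        // Iterating from header.after up to header now prints: b c a
        for (Node n = header.after; n != header; n = n.after) System.out.print(n.key + " ");
    }

    static void remove(Node e) {                   // link e's neighbours to each other
        e.before.after = e.after;
        e.after.before = e.before;
    }

    static void addBefore(Node e, Node existing) { // insert e right before existing
        e.after = existing;
        e.before = existing.before;
        e.before.after = e;
        e.after.before = e;
    }
}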

Summary

LruCache internally maintains an ordered LinkedHashMap. Every time one of the cached objects is used, that object is unlinked from its current position and moved to the tail of the list. trimToSize(maxSize) is then called on every put(), and also on get() when a value had to be created via create().

  /**
     * Remove the eldest entries until the total of remaining entries is at or
     * below the requested size.
     *
     * @param maxSize the maximum size of the cache before returning. May be -1
     *            to evict even 0-sized elements.
     */
    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize) {
                    break;
                }

                Map.Entry<K, V> toEvict = map.eldest();
                if (toEvict == null) {
                    break;
                }

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

Inside it there is a while (true) loop: whenever the cache is over its size limit, the eldest entry at the head of the queue is removed, and the loop only breaks once the current cache size is at or below maxSize.

That is how the LRU algorithm is realized in LruCache.
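
To close, a hedged usage sketch of the two hooks that appear in the source above (again this assumes an Android project, and StringCache is just an illustrative name): overriding sizeOf() changes what maxSize measures, and entryRemoved() is the callback trimToSize() fires when it evicts an entry.

import android.util.LruCache;

// A cache capped by the total length of the cached strings rather than by entry count.
public class StringCache extends LruCache<String, String> {

    public StringCache(int maxChars) {
        super(maxChars); // maxSize is now interpreted as a character budget
    }

    @Override
    protected int sizeOf(String key, String value) {
        // The size of one entry; the sum over all entries is compared against maxSize.
        return value.length();
    }

    @Override
    protected void entryRemoved(boolean evicted, String key,
                                String oldValue, String newValue) {
        // evicted == true means trimToSize() pushed this entry out to make room.
        if (evicted) {
            // release any resources tied to oldValue here if needed
        }
    }
}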
