How LruCache Works

Contents

0. Related articles:

1. Source code analysis:

2. Why LinkedHashMap?


0. Related articles:

Glide--LruCache source code analysis (Article 1: 152 views, 1 like)

LruCache principles and usage with LinkedHashMap (Article 2: 6,472 views, 7 likes)

Source code analysis of the Java collection class LinkedHashMap

1. Source code analysis:

LruCache implements the LRU (Least Recently Used) eviction algorithm. The core idea is to keep strong references to recently used objects in a LinkedHashMap, and to remove the least recently used objects from memory before the cache grows past its preset maximum size.

The LruCache source code in Glide:

public class LruCache<T, Y> {
  private final LinkedHashMap<T, Y> cache = new LinkedHashMap<>(100, 0.75f, true);
  private final int initialMaxSize;
  private int maxSize;
  private int currentSize = 0;

  /**
   * Constructor for LruCache.
   *
   * @param size The maximum size of the cache, the units must match the units used in {@link
   *             #getSize(Object)}.
   */
  public LruCache(int size) {
    this.initialMaxSize = size;
    this.maxSize = size;
  }

  /**
   * Sets a size multiplier that will be applied to the size provided in the constructor to put the
   * new size of the cache. If the new size is less than the current size, entries will be evicted
   * until the current size is less than or equal to the new size.
   *
   * @param multiplier The multiplier to apply.
   */
  public synchronized void setSizeMultiplier(float multiplier) {
    if (multiplier < 0) {
      throw new IllegalArgumentException("Multiplier must be >= 0");
    }
    maxSize = Math.round(initialMaxSize * multiplier);
    evict();
  }

  /**
   * Returns the size of a given item, defaulting to one. The units must match those used in the
   * size passed in to the constructor. Subclasses can override this method to return sizes in
   * various units, usually bytes.
   *
   * @param item The item to get the size of.
   */
  protected int getSize(Y item) {
    return 1;
  }

  /**
   * A callback called whenever an item is evicted from the cache. Subclasses can override.
   *
   * @param key  The key of the evicted item.
   * @param item The evicted item.
   */
  protected void onItemEvicted(T key, Y item) {
    // optional override
  }

  /**
   * Returns the current maximum size of the cache in bytes.
   */
  public synchronized int getMaxSize() {
    return maxSize;
  }

  /**
   * Returns the sum of the sizes of all items in the cache.
   */
  public synchronized int getCurrentSize() {
    return currentSize;
  }

  /**
   * Returns true if there is a value for the given key in the cache.
   *
   * @param key The key to check.
   */
  public synchronized boolean contains(T key) {
    return cache.containsKey(key);
  }

  /**
   * Returns the item in the cache for the given key or null if no such item exists.
   *
   * @param key The key to check.
   */
  @Nullable
  public synchronized Y get(T key) {
    return cache.get(key);
  }

  /**
   * Adds the given item to the cache with the given key and returns any previous entry for the
   * given key that may have already been in the cache.
   *
   * If the size of the item is larger than the total cache size, the item will not be added to
   * the cache and instead {@link #onItemEvicted(Object, Object)} will be called synchronously with
   * the given key and item.
   *
   * @param key  The key to add the item at.
   * @param item The item to add.
   */
  public synchronized Y put(T key, Y item) {
    final int itemSize = getSize(item);
    if (itemSize >= maxSize) {
      onItemEvicted(key, item);
      return null;
    }

    final Y result = cache.put(key, item);
    if (item != null) {
      currentSize += getSize(item);
    }
    if (result != null) {
      // TODO: should we call onItemEvicted here?
      currentSize -= getSize(result);
    }
    evict();

    return result;
  }

  /**
   * Removes the item at the given key and returns the removed item if present, and null otherwise.
   *
   * @param key The key to remove the item at.
   */
  @Nullable
  public synchronized Y remove(T key) {
    final Y value = cache.remove(key);
    if (value != null) {
      currentSize -= getSize(value);
    }
    return value;
  }

  /**
   * Clears all items in the cache.
   */
  public void clearMemory() {
    trimToSize(0);
  }

  /**
   * Removes the least recently used items from the cache until the current size is less than the
   * given size.
   *
   * @param size The size the cache should be less than.
   */
  protected synchronized void trimToSize(int size) {
    Map.Entry<T, Y> last;
    while (currentSize > size) {
      last = cache.entrySet().iterator().next();
      final Y toRemove = last.getValue();
      currentSize -= getSize(toRemove);
      final T key = last.getKey();
      cache.remove(key);
      onItemEvicted(key, toRemove);
    }
  }

  private void evict() {
    trimToSize(maxSize);
  }
}

LruCache uses a LinkedHashMap as its backing collection. LinkedHashMap builds on HashMap by adding a doubly linked list that threads through the entries. Look at the constructor call below: the first argument is the initial capacity, 100; the second is the load factor, 0.75 (the map resizes once it is 75% full); and the third says whether the linked list is kept in access order. The code analysis of this container is left for the next article. For now it is enough to know that the collection records the order in which you access entries: each access moves the entry to the tail of the list, so the least recently used entry always sits at the head.

 private final LinkedHashMap<T, Y> cache = new LinkedHashMap<>(100, 0.75f, true);

There are three size fields: initialMaxSize (the initial maximum size), maxSize (the current maximum size), and currentSize (the current total size). In the constructor, initialMaxSize and maxSize are set to the same value. Note that setSizeMultiplier takes a multiplier, applies it to the initial size to compute a new maximum capacity, and evicts entries if the cache is now over that limit.

 private final int initialMaxSize;
  private int maxSize;
  private int currentSize = 0;
  public LruCache(int size) {
    this.initialMaxSize = size;
    this.maxSize = size;
  }

  public synchronized void setSizeMultiplier(float multiplier) {
    if (multiplier < 0) {
      throw new IllegalArgumentException("Multiplier must be >= 0");
    }
    maxSize = Math.round(initialMaxSize * multiplier);
    evict();
  }
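
A small usage sketch of the constructor and setSizeMultiplier, with a made-up cache type and sizes (the default getSize() of 1 makes this an entry-counting cache):

  LruCache<String, String> urlCache = new LruCache<>(100); // room for 100 entries

  // Under memory pressure, halve the cache: maxSize becomes 50 and any entries
  // beyond that are evicted immediately inside setSizeMultiplier().
  urlCache.setSizeMultiplier(0.5f);

  // Raising the multiplier again only raises the limit; evicted entries are gone.
  urlCache.setSizeMultiplier(1f);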

evict means to expel entries from the cache, shrinking it until currentSize is less than or equal to maxSize. trimToSize runs a while loop for as long as the current size exceeds the requested size: it takes the entry at the head of the linked list (the least recently used one), reads the size of its value, removes the entry, updates the current size, and fires the eviction callback, repeating until the size requirement is met.


 protected synchronized void trimToSize(int size) {
    Map.Entry<T, Y> last;
    while (currentSize > size) {
      last = cache.entrySet().iterator().next();
      final Y toRemove = last.getValue();
      currentSize -= getSize(toRemove);
      final T key = last.getKey();
      cache.remove(key);
      onItemEvicted(key, toRemove);
    }
  }

  private void evict() {
    trimToSize(maxSize);
  }
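
A sketch of the resulting eviction order, using the default entry-counting getSize() and arbitrary keys:

  LruCache<String, String> cache = new LruCache<>(2);
  cache.put("a", "1");
  cache.put("b", "2");
  cache.get("a");        // "a" becomes the most recently used entry
  cache.put("c", "3");   // over capacity: evict() trims from the head, dropping "b"

  cache.contains("a");   // true
  cache.contains("b");   // false -- evicted as the least recently used entry
  cache.contains("c");   // true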

getSize returns the size of a single item; the default is 1, which makes the cache simply count entries. A Bitmap cache, for example, needs to return the real size, because the limit should apply to the total number of bytes held by all bitmaps, not to how many bitmaps there are.

protected int getSize(Y item) {
    return 1;
}
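
A minimal sketch of such a byte-based bitmap cache; the class name BitmapLruCache is made up here, and it assumes Android's android.graphics.Bitmap:

  import android.graphics.Bitmap;

  // Sizes are measured in bytes, so the constructor argument is a byte budget.
  public class BitmapLruCache extends LruCache<String, Bitmap> {

    public BitmapLruCache(int maxBytes) {
      super(maxBytes);
    }

    @Override
    protected int getSize(Bitmap bitmap) {
      // Count the bitmap's byte footprint instead of "1 per entry".
      return bitmap.getByteCount();
    }
  }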

onItemEvicted is a callback fired whenever an entry is removed from the cache, letting subclasses of LruCache decide what to do with the evicted value.

  protected void onItemEvicted(T key, Y item) {
    // optional override
  }
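
For instance, the hypothetical BitmapLruCache sketched above could release pixel memory as soon as an entry is dropped (only safe if nothing else still uses the Bitmap):

  @Override
  protected void onItemEvicted(String key, Bitmap bitmap) {
    // Free the bitmap's memory once the cache no longer holds it.
    bitmap.recycle();
  }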

The following are routine accessors and need no explanation:

  public synchronized int getMaxSize() {
    return maxSize;
  }

  public synchronized int getCurrentSize() {
    return currentSize;
  }

  public synchronized boolean contains(T key) {
    return cache.containsKey(key);
  }

  @Nullable
  public synchronized Y get(T key) {
    return cache.get(key);
  }

put() first measures the size of the item to add. If that size is greater than or equal to the maximum cache size, the item is not stored at all; onItemEvicted is called directly and null is returned. Otherwise the item is put into the map, and if it is non-null, currentSize is increased. The return value result of cache.put() is the value previously mapped to the key, if any; when result != null a value has been replaced, so its size must be subtracted from currentSize. The author left a //TODO asking whether onItemEvicted should also be called here. Arguably it should: the old value has effectively been removed from the cache, and notifying the subclass would let it decide how to handle that. Finally, evict() is called, because even though the single item is smaller than the maximum capacity, adding it can still push the total size over the limit, so the capacity has to be re-enforced.

  public synchronized Y put(T key, Y item) {
    final int itemSize = getSize(item);
    if (itemSize >= maxSize) {
      onItemEvicted(key, item);
      return null;
    }

    final Y result = cache.put(key, item);
    if (item != null) {
      currentSize += getSize(item);
    }
    if (result != null) {
      // TODO: should we call onItemEvicted here?
      currentSize -= getSize(result);
    }
    evict();

    return result;
  }
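
A small sketch of the two special cases in put(), with the default entry-counting getSize() and made-up keys:

  // Replacing the value for an existing key: the previous value is returned and
  // its size is subtracted, so currentSize stays at 1 instead of growing to 2.
  LruCache<String, String> cache = new LruCache<>(2);
  cache.put("k", "old");
  String previous = cache.put("k", "new");   // previous == "old"

  // With maxSize == 1 and the default getSize() of 1, itemSize >= maxSize holds,
  // so the item is never stored; onItemEvicted is invoked synchronously instead.
  LruCache<String, String> tiny = new LruCache<>(1);
  String stored = tiny.put("x", "value");    // returns null, nothing is cached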

remove() deletes the entry for the given key from the cache; a routine operation, except that it also has to keep currentSize up to date.

 public synchronized Y remove(T key) {
    final Y value = cache.remove(key);
    if (value != null) {
      currentSize -= getSize(value);
    }
    return value;
  }

clearMemory() trims the cache to a target size of 0, so every entry is evicted and the cache is emptied.

  public void clearMemory() {
    trimToSize(0);
  }

2. Why LinkedHashMap?

LruCache principles and usage with LinkedHashMap (Article 2: 6,472 views, 7 likes)

Why use a LinkedHashMap to hold the cache? It comes down to the algorithm: LinkedHashMap provides exactly the ordering that an LRU cache needs.

The collection has a built-in ordering feature: when the third constructor argument is true, entries are reordered whenever they are accessed, with the result that the most recently accessed entries end up at the tail of the collection.

When it is time to evict, deletion therefore starts from the front, where the least recently used entries sit.
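
A quick sketch of that access-order behavior with a plain LinkedHashMap (keys are arbitrary):

  import java.util.LinkedHashMap;
  import java.util.Map;

  public class AccessOrderDemo {
    public static void main(String[] args) {
      // accessOrder = true: iteration order follows access order, not insertion order.
      Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
      map.put("a", 1);
      map.put("b", 2);
      map.put("c", 3);

      map.get("a"); // touching "a" moves it to the tail of the internal linked list

      // Iteration runs from least recently used (head) to most recently used (tail):
      System.out.println(map.keySet()); // prints [b, c, a]
    }
  }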