Android Source Code Analysis (7) --> LruCache

Caching comes up constantly in Android development. The mainstream approach in today's apps is a two-level cache for images and other resources: one level in memory and one level on disk.

Google, as the maintainer of the Android system, has open-sourced its caching solutions: LruCache and DiskLruCache. Since Android 3.1, LruCache has been part of the Android framework source. For backward compatibility the support-v4 library also ships a copy of LruCache: if your app has to run on versions earlier than Android 3.1, use the support-v4 LruCache; otherwise you can use the framework's LruCache directly. Note that DiskLruCache is not part of the Android framework source.

The LruCache source contains the following description of the class:

A cache that holds strong references to a limited number of values. Each time a value is accessed, it is moved to the head of a queue. When a value is added to a full cache, the value at the end of that queue is evicted and may become eligible for garbage collection.

In other words: the cache holds strong references to a limited number of values. Each time a value is accessed it is moved to the head of a queue; when a value is added to a cache that is already full, the value at the end of that queue is evicted and may become eligible for garbage collection.

That behaviour is precisely LruCache's caching policy: LRU, the Least Recently Used algorithm.

Let's look at the implementation of LruCache in detail:

public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;

    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size;
    private int maxSize;

    private int putCount;
    private int createCount;
    private int evictionCount;
    private int hitCount;
    private int missCount;

    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

    /**
     * Sets the size of the cache.
     *
     * @param maxSize The new maximum size.
     */
    public void resize(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }

        synchronized (this) {
            this.maxSize = maxSize;
        }
        trimToSize(maxSize);
    }

    /**
     * Returns the value for {@code key} if it exists in the cache or can be
     * created by {@code #create}. If a value was returned, it is moved to the
     * head of the queue. This returns null if a value is not cached and cannot
     * be created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */

        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);

            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }

    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of
     * the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

    /**
     * Remove the eldest entries until the total of remaining entries is at or
     * below the requested size.
     *
     * @param maxSize the maximum size of the cache before returning. May be -1
     *     to evict even 0-sized elements.
     */
    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize) {
                    break;
                }

                Map.Entry<K, V> toEvict = map.eldest();
                if (toEvict == null) {
                    break;
                }

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

    /**
     * Removes the entry for {@code key} if it exists.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }

        return previous;
    }

    /**
     * Called for entries that have been evicted or removed. This method is
     * invoked when a value is evicted to make space, removed by a call to
     * {@link #remove}, or replaced by a call to {@link #put}. The default
     * implementation does nothing.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * @param evicted true if the entry is being removed to make space, false
     *     if the removal was caused by a {@link #put} or {@link #remove}.
     * @param newValue the new value for {@code key}, if it exists. If non-null,
     *     this removal was caused by a {@link #put}. Otherwise it was caused by
     *     an eviction or a {@link #remove}.
     */
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

    /**
     * Called after a cache miss to compute a value for the corresponding key.
     * Returns the computed value or null if no value can be computed. The
     * default implementation returns null.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * <p>If a value for {@code key} exists in the cache when this method
     * returns, the created value will be released with {@link #entryRemoved}
     * and discarded. This can occur when multiple threads request the same key
     * at the same time (causing multiple values to be created), or when one
     * thread calls {@link #put} while another is creating a value for the same
     * key.
     */
    protected V create(K key) {
        return null;
    }

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units. The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     *
     * <p>An entry's size must not change while it is in the cache.
     */
    protected int sizeOf(K key, V value) {
        return 1;
    }

    /**
     * Clear the cache, calling {@link #entryRemoved} on each removed entry.
     */
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the number
     * of entries in the cache. For all other caches, this returns the sum of
     * the sizes of the entries in this cache.
     */
    public synchronized final int size() {
        return size;
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the maximum
     * number of entries in the cache. For all other caches, this returns the
     * maximum sum of the sizes of the entries in this cache.
     */
    public synchronized final int maxSize() {
        return maxSize;
    }

    /**
     * Returns the number of times {@link #get} returned a value that was
     * already present in the cache.
     */
    public synchronized final int hitCount() {
        return hitCount;
    }

    /**
     * Returns the number of times {@link #get} returned null or required a new
     * value to be created.
     */
    public synchronized final int missCount() {
        return missCount;
    }

    /**
     * Returns the number of times {@link #create(Object)} returned a value.
     */
    public synchronized final int createCount() {
        return createCount;
    }

    /**
     * Returns the number of times {@link #put} was called.
     */
    public synchronized final int putCount() {
        return putCount;
    }

    /**
     * Returns the number of values that have been evicted.
     */
    public synchronized final int evictionCount() {
        return evictionCount;
    }

    /**
     * Returns a copy of the current contents of the cache, ordered from least
     * recently accessed to most recently accessed.
     */
    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }

    @Override public synchronized final String toString() {
        int accesses = hitCount + missCount;
        int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
        return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",
                maxSize, hitCount, missCount, hitPercent);
    }
}

As you can see, LruCache is a generic class; you supply the key and value types when you create it. A typical way to build an LruCache looks like this:

// Get the maximum memory available to the application, in bytes
int maxMemory = (int) Runtime.getRuntime().maxMemory();
int cacheSize = maxMemory / 8;
// Use one eighth of the available memory as the image cache size
mMemoryCache = new LruCache<String, Bitmap>(cacheSize) {
    @Override
    protected int sizeOf(String key, Bitmap bitmap) {
        return bitmap.getByteCount();
    }
};

Here we use a String as the key and cache Bitmap objects as values, and we pass an int maxSize to the constructor to set the maximum total size of the cache, measured in the units returned by sizeOf() (bytes in this example, since sizeOf() returns bitmap.getByteCount()).
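As a quick usage sketch (the helper method names below are mine for illustration, not part of LruCache), storing and reading a bitmap with the cache created above might look like this:

public void addBitmapToMemoryCache(String key, Bitmap bitmap) {
    // Only insert when the key is not cached yet
    if (key != null && bitmap != null && mMemoryCache.get(key) == null) {
        mMemoryCache.put(key, bitmap);
    }
}

public Bitmap getBitmapFromMemoryCache(String key) {
    // Returns null on a cache miss (create() is not overridden here)
    return mMemoryCache.get(key);
}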

Let's look at the constructor:

public LruCache(int maxSize) {
    if (maxSize <= 0) {
        throw new IllegalArgumentException("maxSize <= 0");
    }
    this.maxSize = maxSize;
    this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
}

As you can see, it simply validates and stores maxSize and creates the backing LinkedHashMap. Note the third constructor argument, accessOrder = true: it makes the LinkedHashMap keep its entries ordered from least recently accessed to most recently accessed, which is exactly the ordering the LRU policy needs.
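To see what that flag does, here is a minimal, self-contained sketch of LinkedHashMap's access order (plain Java, independent of LruCache):

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration goes from least to most recently accessed
        Map<String, String> map = new LinkedHashMap<String, String>(0, 0.75f, true);
        map.put("a", "1");
        map.put("b", "2");
        map.put("c", "3");
        map.get("a");                      // accessing "a" moves it to the "young" end
        System.out.println(map.keySet());  // prints [b, c, a]
    }
}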

Next, the put method:

public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

put takes two parameters, the key K and the value V. It first checks the arguments for null, then declares a temporary variable that will hold the previous value for this key, increments putCount (the number of times put has been called) and adds the size of the new entry (safeSizeOf(key, value)) to the cache's total size.
Pay particular attention to this line:

previous = map.put(key, value);

Let's look at what map.put() actually does. map is a LinkedHashMap; on the Android versions this code comes from, put() is inherited from HashMap, and LinkedHashMap overrides hooks such as preModify() and addNewEntry() to maintain the access order:

@Override public V put(K key, V value) {
        if (key == null) {
            return putValueForNullKey(value);
        }

        int hash = Collections.secondaryHash(key);
        HashMapEntry<K, V>[] tab = table;
        int index = hash & (tab.length - 1);
        for (HashMapEntry<K, V> e = tab[index]; e != null; e = e.next) {
            if (e.hash == hash && key.equals(e.key)) {
                preModify(e);
                V oldValue = e.value;
                e.value = value;
                return oldValue;
            }
        }

        // No entry for (non-null) key is present; create one
        modCount++;
        if (size++ > threshold) {
            tab = doubleCapacity();
            index = hash & (tab.length - 1);
        }
        addNewEntry(key, value, hash, index);
        return null;
    }

This inserts the key/value pair into the map. If an entry with the same key already exists, its value is replaced with the new value and the old value is returned; otherwise a new entry is created and null is returned.

Back in the put method:

previous = map.put(key, value);
if (previous != null) {
    size -= safeSizeOf(key, previous);
}

Here we check the return value of map.put(). If it is not null, the key was already in the map and its old value has just been replaced by the new one; since the new value's size was already added above, the old value's size is subtracted so that size stays accurate.

Then:

if (previous != null) {
    entryRemoved(false, key, previous, value);
}

If previous is not null, entryRemoved() is called. Its default implementation is empty, so when we write our own LruCache we can override this method and get notified whenever an existing entry is replaced (or removed).
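For example, here is a minimal sketch of a bitmap cache that overrides entryRemoved() (what to do with the removed bitmap depends on how it is used elsewhere, so the body is only a placeholder):

LruCache<String, Bitmap> cache = new LruCache<String, Bitmap>(cacheSize) {
    @Override
    protected int sizeOf(String key, Bitmap bitmap) {
        return bitmap.getByteCount();
    }

    @Override
    protected void entryRemoved(boolean evicted, String key,
            Bitmap oldValue, Bitmap newValue) {
        // evicted == true  : removed by trimToSize() to make space
        // evicted == false : replaced by put() or removed by remove()
        // Release any resources tied to oldValue here if appropriate.
    }
};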

Finally, put calls trimToSize(maxSize):

public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize) {
                    break;
                }

                Map.Entry<K, V> toEvict = map.eldest();
                if (toEvict == null) {
                    break;
                }

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

This method loops while the cache's total size is above the given maxSize: on each pass it takes the eldest entry of the map (the least recently used one), removes it, subtracts its size from the total, increments evictionCount, and then notifies entryRemoved() with evicted set to true.

Summary:
The put method inserts the key/value pair into the LinkedHashMap and adds its size to the cache total; if the total then exceeds the maximum defined for this LruCache, the least recently used entries are evicted from the map until the cache fits again.
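A tiny demonstration of that eviction behaviour, using the default sizeOf() so that maxSize simply counts entries:

// maxSize = 2 entries, because sizeOf() is not overridden and defaults to 1
LruCache<String, String> cache = new LruCache<String, String>(2);
cache.put("a", "1");
cache.put("b", "2");
cache.get("a");       // "a" becomes the most recently used entry
cache.put("c", "3");  // cache is over its limit, so the LRU entry "b" is evicted
// cache.get("b") now returns null; cache.snapshot() contains {a=1, c=3}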

Now let's look at the get method:

public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */

        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);

            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }

Its only parameter is the key; put simply, get looks up the value for that key in the map.
In detail: if the map already contains a value for the key, hitCount (the number of cache hits) is incremented and the value is returned. On a miss, missCount is incremented and create(key) is called. create() has an empty default implementation that returns null, so we can override it to build a value on demand: when get(key) misses, the created value is inserted into the map and returned. (If another thread put a conflicting value for the same key while create() was running, that value wins and the created one is handed to entryRemoved() and discarded.)
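A minimal sketch of overriding create() (loadValueSomehow() is a hypothetical loader, not a real API):

LruCache<String, String> cache = new LruCache<String, String>(100) {
    @Override
    protected String create(String key) {
        // Called after a cache miss; a non-null return value is stored
        // in the cache and returned by get().
        return loadValueSomehow(key); // hypothetical
    }
};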

Summary:

  • LruCache uses a LinkedHashMap internally to hold an in-memory cache.

  • LruCache is generic, so it can cache values of any type.

  • LruCache evicts entries according to the LRU (Least Recently Used) policy.

  • Using LruCache mostly comes down to put() to store data and get() to read it.
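The counters we saw in the source also make it easy to check how the cache is performing, for example:

// hitCount(), missCount() and toString() are the methods shown in the source above
int hits = mMemoryCache.hitCount();
int misses = mMemoryCache.missCount();
Log.d("CacheStats", mMemoryCache.toString()); // LruCache[maxSize=..,hits=..,misses=..,hitRate=..%]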
