Every open-source image-loading framework has its own caching strategy, and they all follow the same three levels: a fast in-memory cache, a disk cache, and finally a network download.
We'll keep digging into Fresco's caches here; once you have seen them, the other frameworks will feel familiar, because the overall shape is the same everywhere. What really separates a good framework is its hit rate and how it defines its cache keys: a well-designed key is convenient to work with during development and maps requests to cached entries accurately.
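To make the point about keys concrete, here is a hypothetical key type (not Fresco's own; Fresco uses CacheKey implementations built by a CacheKeyFactory such as DefaultCacheKeyFactory, which shows up in ImagePipelineConfig below). What matters is that equals() and hashCode() decide whether a request hits the cache:
// Hypothetical key: URI plus requested size, so the same image decoded at
// different sizes maps to different cache entries. Fresco's real bitmap
// memory cache key carries similar fields.
public final class UriSizeKey {
  private final String uri;
  private final int width;
  private final int height;

  public UriSizeKey(String uri, int width, int height) {
    this.uri = uri;
    this.width = width;
    this.height = height;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof UriSizeKey)) return false;
    UriSizeKey other = (UriSizeKey) o;
    return width == other.width && height == other.height && uri.equals(other.uri);
  }

  @Override
  public int hashCode() {
    int result = uri.hashCode();
    result = 31 * result + width;
    result = 31 * result + height;
    return result;
  }
}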
Let's look at the level-one (memory) cache strategy first:
/**
* Fresco entry point.
*
* You must initialize this class before use. The simplest way is to just do
* {@code Fresco.initialize(Context)}.
*/
public class Fresco {
private static PipelineDraweeControllerBuilderSupplier sDraweeControllerBuilderSupplier;
/** Initializes Fresco with the specified config. */
public static void initialize(Context context, ImagePipelineConfig imagePipelineConfig) {
ImagePipelineFactory.initialize(imagePipelineConfig);
initializeDrawee(context);
}
private static void initializeDrawee(Context context) {
sDraweeControllerBuilderSupplier = new PipelineDraweeControllerBuilderSupplier(context);
SimpleDraweeView.initialize(sDraweeControllerBuilderSupplier);
}
.......
}
public class PipelineDraweeControllerBuilderSupplier implements Supplier<PipelineDraweeControllerBuilder> {
private final Context mContext;
private final ImagePipeline mImagePipeline;
private final PipelineDraweeControllerFactory mPipelineDraweeControllerFactory;
private final Set<ControllerListener> mBoundControllerListeners;
public PipelineDraweeControllerBuilderSupplier(Context context,
ImagePipelineFactory imagePipelineFactory,
Set<ControllerListener> boundControllerListeners) {
mContext = context;
mImagePipeline = imagePipelineFactory.getImagePipeline();
final AnimatedFactory animatedFactory = imagePipelineFactory.getAnimatedFactory();
AnimatedDrawableFactory animatedDrawableFactory = null;
if (animatedFactory != null) {
animatedDrawableFactory = animatedFactory.getAnimatedDrawableFactory(context);
}
mPipelineDraweeControllerFactory = new PipelineDraweeControllerFactory(
context.getResources(),
DeferredReleaser.getInstance(),
animatedDrawableFactory,
UiThreadImmediateExecutorService.getInstance(),
mImagePipeline.getBitmapMemoryCache()); // the level-one (bitmap memory) cache
mBoundControllerListeners = boundControllerListeners;
}
.......
}
When Fresco is initialized it builds the controller factory; PipelineDraweeControllerFactory gets its cache from the ImagePipeline, and the ImagePipeline in turn is produced by ImagePipelineFactory.
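For reference, the app-side entry point looks roughly like this (a minimal sketch: MyApplication is a placeholder, the builder is left at its defaults, and Fresco.initialize(context) alone would also work):
import android.app.Application;
import com.facebook.drawee.backends.pipeline.Fresco;
import com.facebook.imagepipeline.core.ImagePipelineConfig;

public class MyApplication extends Application {
  @Override
  public void onCreate() {
    super.onCreate();
    // Build a default pipeline config; cache-related setters can be chained on the builder.
    ImagePipelineConfig config = ImagePipelineConfig.newBuilder(this).build();
    Fresco.initialize(this, config);
  }
}
With that in place, let's turn to ImagePipelineFactory, where all of these caches are actually created: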
/**
* Factory class for the image pipeline.
*
* This class constructs the pipeline and its dependencies from other libraries.
*
* <p>As the pipeline object can be quite expensive to create, it is strongly
* recommended that applications create just one instance of this class
* and of the pipeline.
*/
@NotThreadSafe
public class ImagePipelineFactory {
private static ImagePipelineFactory sInstance = null;
private final ThreadHandoffProducerQueue mThreadHandoffProducerQueue; // this one is handy: it does the grunt work of handing request producers off the calling thread
.......
private final ImagePipelineConfig mConfig;
private CountingMemoryCache<CacheKey, CloseableImage> mBitmapCountingMemoryCache;
private MemoryCache<CacheKey, CloseableImage> mBitmapMemoryCache;
private CountingMemoryCache<CacheKey, PooledByteBuffer> mEncodedCountingMemoryCache;
private MemoryCache<CacheKey, PooledByteBuffer> mEncodedMemoryCache;
private BufferedDiskCache mMainBufferedDiskCache;
private FileCache mMainFileCache;
private ImageDecoder mImageDecoder;
private ImagePipeline mImagePipeline;
private ProducerFactory mProducerFactory;
private ProducerSequenceFactory mProducerSequenceFactory;
private BufferedDiskCache mSmallImageBufferedDiskCache;
private FileCache mSmallImageFileCache;
private PlatformBitmapFactory mPlatformBitmapFactory;
private PlatformDecoder mPlatformDecoder;
private AnimatedFactory mAnimatedFactory;
public MemoryCache<CacheKey, PooledByteBuffer> getEncodedMemoryCache() {
if (mEncodedMemoryCache == null) {
mEncodedMemoryCache =
EncodedMemoryCacheFactory.get(
getEncodedCountingMemoryCache(),
mConfig.getImageCacheStatsTracker());
}
return mEncodedMemoryCache;
}
private BufferedDiskCache getMainBufferedDiskCache() {
if (mMainBufferedDiskCache == null) {
mMainBufferedDiskCache =
new BufferedDiskCache(
getMainFileCache(),
mConfig.getPoolFactory().getPooledByteBufferFactory(),
mConfig.getPoolFactory().getPooledByteStreams(),
mConfig.getExecutorSupplier().forLocalStorageRead(),
mConfig.getExecutorSupplier().forLocalStorageWrite(),
mConfig.getImageCacheStatsTracker());
}
return mMainBufferedDiskCache;
}
public FileCache getMainFileCache() {
if (mMainFileCache == null) {
DiskCacheConfig diskCacheConfig = mConfig.getMainDiskCacheConfig();
mMainFileCache = mConfig.getFileCacheFactory().get(diskCacheConfig);
}
return mMainFileCache;
}
public FileCache getSmallImageFileCache() {
if (mSmallImageFileCache == null) {
DiskCacheConfig diskCacheConfig = mConfig.getSmallImageDiskCacheConfig();
mSmallImageFileCache = mConfig.getFileCacheFactory().get(diskCacheConfig);
}
return mSmallImageFileCache;
}
private BufferedDiskCache getSmallImageBufferedDiskCache() {
if (mSmallImageBufferedDiskCache == null) {
mSmallImageBufferedDiskCache =
new BufferedDiskCache(
getSmallImageFileCache(),
mConfig.getPoolFactory().getPooledByteBufferFactory(),
mConfig.getPoolFactory().getPooledByteStreams(),
mConfig.getExecutorSupplier().forLocalStorageRead(),
mConfig.getExecutorSupplier().forLocalStorageWrite(),
mConfig.getImageCacheStatsTracker());
}
return mSmallImageBufferedDiskCache;
}
public CountingMemoryCache<CacheKey, PooledByteBuffer> getEncodedCountingMemoryCache() {
if (mEncodedCountingMemoryCache == null) {
mEncodedCountingMemoryCache =
EncodedCountingMemoryCacheFactory.get(
mConfig.getEncodedMemoryCacheParamsSupplier(),
mConfig.getMemoryTrimmableRegistry());
}
return mEncodedCountingMemoryCache;
}
public CountingMemoryCache<CacheKey, CloseableImage> getBitmapCountingMemoryCache() {
if (mBitmapCountingMemoryCache == null) {
mBitmapCountingMemoryCache =
BitmapCountingMemoryCacheFactory.get(
mConfig.getBitmapMemoryCacheParamsSupplier(),
mConfig.getMemoryTrimmableRegistry());
}
return mBitmapCountingMemoryCache;
}
public MemoryCache<CacheKey, CloseableImage> getBitmapMemoryCache() { // the level-one cache
if (mBitmapMemoryCache == null) {
mBitmapMemoryCache =
BitmapMemoryCacheFactory.get(
getBitmapCountingMemoryCache(),
mConfig.getImageCacheStatsTracker());
}
return mBitmapMemoryCache;
}
.......
}
This factory class holds a lot: cache rules, buffered disk caches, decoders and so on, which we'll get to later. For now, focus on getBitmapMemoryCache(): mBitmapMemoryCache is created by BitmapMemoryCacheFactory on top of the CountingMemoryCache that BitmapCountingMemoryCacheFactory builds.
That factory registers the counting cache with the MemoryTrimmableRegistry (so it gets called back when system memory runs low) and wires in the bitmap cache params supplier.
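Note that the default registry is NoOpMemoryTrimmableRegistry (see the constructor below), so trim() is never actually invoked unless the app supplies its own registry and drives it. A minimal sketch of what that could look like, assuming we forward Android's onTrimMemory() levels ourselves; the level-to-MemoryTrimType mapping here is an illustrative choice, not something Fresco prescribes:
import android.content.ComponentCallbacks2;
import com.facebook.common.memory.MemoryTrimType;
import com.facebook.common.memory.MemoryTrimmable;
import com.facebook.common.memory.MemoryTrimmableRegistry;
import java.util.Collections;
import java.util.Set;
import java.util.WeakHashMap;

// Hypothetical app-side registry: Fresco only calls registerMemoryTrimmable(),
// it is up to the app to actually invoke trim() on the registered caches.
public class AppMemoryTrimmableRegistry implements MemoryTrimmableRegistry {
  private final Set<MemoryTrimmable> mTrimmables =
      Collections.newSetFromMap(new WeakHashMap<MemoryTrimmable, Boolean>());

  @Override
  public synchronized void registerMemoryTrimmable(MemoryTrimmable trimmable) {
    mTrimmables.add(trimmable);
  }

  @Override
  public synchronized void unregisterMemoryTrimmable(MemoryTrimmable trimmable) {
    mTrimmables.remove(trimmable);
  }

  // Call this from Application.onTrimMemory(level); the mapping is our own choice.
  public synchronized void onTrimMemory(int level) {
    MemoryTrimType type = (level >= ComponentCallbacks2.TRIM_MEMORY_BACKGROUND)
        ? MemoryTrimType.OnSystemLowMemoryWhileAppInBackground
        : MemoryTrimType.OnSystemLowMemoryWhileAppInForeground;
    for (MemoryTrimmable trimmable : mTrimmables) {
      trimmable.trim(type);
    }
  }
}
An instance of this would be handed to the config builder so that it ends up in the registry field below. Both the trim registry and the bitmap cache params supplier come out of ImagePipelineConfig: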
public class ImagePipelineConfig {
// There are a lot of parameters in this class. Please follow strict alphabetical order.
@Nullable private final AnimatedImageFactory mAnimatedImageFactory;
private final Bitmap.Config mBitmapConfig;
private final Supplier<MemoryCacheParams> mBitmapMemoryCacheParamsSupplier;
private final CacheKeyFactory mCacheKeyFactory;
private final Context mContext;
private final boolean mDownsampleEnabled;
private final boolean mDecodeMemoryFileEnabled;
private final FileCacheFactory mFileCacheFactory;
private final Supplier<MemoryCacheParams> mEncodedMemoryCacheParamsSupplier;
private final ExecutorSupplier mExecutorSupplier;
private final ImageCacheStatsTracker mImageCacheStatsTracker;
@Nullable private final ImageDecoder mImageDecoder;
private final Supplier<Boolean> mIsPrefetchEnabledSupplier;
private final DiskCacheConfig mMainDiskCacheConfig;
private final MemoryTrimmableRegistry mMemoryTrimmableRegistry;
private final NetworkFetcher mNetworkFetcher;
@Nullable private final PlatformBitmapFactory mPlatformBitmapFactory;
private final PoolFactory mPoolFactory;
private final ProgressiveJpegConfig mProgressiveJpegConfig;
private final Set<RequestListener> mRequestListeners;
private final boolean mResizeAndRotateEnabledForNetwork;
private final DiskCacheConfig mSmallImageDiskCacheConfig;
private final ImagePipelineExperiments mImagePipelineExperiments;
private ImagePipelineConfig(Builder builder) {
mAnimatedImageFactory = builder.mAnimatedImageFactory;
mBitmapMemoryCacheParamsSupplier =
builder.mBitmapMemoryCacheParamsSupplier == null ?
new DefaultBitmapMemoryCacheParamsSupplier(
(ActivityManager) builder.mContext.getSystemService(Context.ACTIVITY_SERVICE)) :
builder.mBitmapMemoryCacheParamsSupplier;
mBitmapConfig =
builder.mBitmapConfig == null ?
Bitmap.Config.ARGB_8888 :
builder.mBitmapConfig;
mCacheKeyFactory =
builder.mCacheKeyFactory == null ?
DefaultCacheKeyFactory.getInstance() :
builder.mCacheKeyFactory;
mContext = Preconditions.checkNotNull(builder.mContext);
mDecodeMemoryFileEnabled = builder.mDecodeMemoryFileEnabled;
mFileCacheFactory = builder.mFileCacheFactory == null ?
new DiskStorageCacheFactory(new DynamicDefaultDiskStorageFactory()) :
builder.mFileCacheFactory;
mDownsampleEnabled = builder.mDownsampleEnabled;
mEncodedMemoryCacheParamsSupplier =
builder.mEncodedMemoryCacheParamsSupplier == null ?
new DefaultEncodedMemoryCacheParamsSupplier() :
builder.mEncodedMemoryCacheParamsSupplier;
mImageCacheStatsTracker =
builder.mImageCacheStatsTracker == null ?
NoOpImageCacheStatsTracker.getInstance() :
builder.mImageCacheStatsTracker;
mImageDecoder = builder.mImageDecoder;
mIsPrefetchEnabledSupplier =
builder.mIsPrefetchEnabledSupplier == null ?
new Supplier<Boolean>() {
@Override
public Boolean get() {
return true;
}
} :
builder.mIsPrefetchEnabledSupplier;
mMainDiskCacheConfig =
builder.mMainDiskCacheConfig == null ?
getDefaultMainDiskCacheConfig(builder.mContext) :
builder.mMainDiskCacheConfig;
mMemoryTrimmableRegistry =
builder.mMemoryTrimmableRegistry == null ?
NoOpMemoryTrimmableRegistry.getInstance() :
builder.mMemoryTrimmableRegistry;
mNetworkFetcher =
builder.mNetworkFetcher == null ?
new HttpUrlConnectionNetworkFetcher() :
builder.mNetworkFetcher;
mPlatformBitmapFactory = builder.mPlatformBitmapFactory;
mPoolFactory =
builder.mPoolFactory == null ?
new PoolFactory(PoolConfig.newBuilder().build()) :
builder.mPoolFactory;
mProgressiveJpegConfig =
builder.mProgressiveJpegConfig == null ?
new SimpleProgressiveJpegConfig() :
builder.mProgressiveJpegConfig;
mRequestListeners =
builder.mRequestListeners == null ?
new HashSet<RequestListener>() :
builder.mRequestListeners;
mResizeAndRotateEnabledForNetwork = builder.mResizeAndRotateEnabledForNetwork;
mSmallImageDiskCacheConfig =
builder.mSmallImageDiskCacheConfig == null ?
mMainDiskCacheConfig :
builder.mSmallImageDiskCacheConfig;
// Below this comment can't be built in alphabetical order, because of dependencies
int numCpuBoundThreads = mPoolFactory.getFlexByteArrayPoolMaxNumThreads();
mExecutorSupplier =
builder.mExecutorSupplier == null ?
new DefaultExecutorSupplier(numCpuBoundThreads) : builder.mExecutorSupplier;
mImagePipelineExperiments = builder.mExperimentsBuilder.build();
}
}
Look at the ImagePipelineConfig constructor above: every collaborator falls back to a default when the builder doesn't provide one. In particular, the bitmap cache params fall back to DefaultBitmapMemoryCacheParamsSupplier:
/**
* Supplies {@link MemoryCacheParams} for the bitmap memory cache.
*/
public class DefaultBitmapMemoryCacheParamsSupplier implements Supplier<MemoryCacheParams> {
private static final int MAX_CACHE_ENTRIES = 256;
private static final int MAX_EVICTION_QUEUE_SIZE = Integer.MAX_VALUE;
private static final int MAX_EVICTION_QUEUE_ENTRIES = Integer.MAX_VALUE;
private static final int MAX_CACHE_ENTRY_SIZE = Integer.MAX_VALUE;
private final ActivityManager mActivityManager;
public DefaultBitmapMemoryCacheParamsSupplier(ActivityManager activityManager) {
mActivityManager = activityManager;
}
@Override
public MemoryCacheParams get() {
return new MemoryCacheParams(
getMaxCacheSize(),
MAX_CACHE_ENTRIES,
MAX_EVICTION_QUEUE_SIZE,
MAX_EVICTION_QUEUE_ENTRIES,
MAX_CACHE_ENTRY_SIZE);
}
private int getMaxCacheSize() {
final int maxMemory =
Math.min(mActivityManager.getMemoryClass() * ByteConstants.MB, Integer.MAX_VALUE);
if (maxMemory < 32 * ByteConstants.MB) {
return 4 * ByteConstants.MB;
} else if (maxMemory < 64 * ByteConstants.MB) {
return 6 * ByteConstants.MB;
} else {
// We don't want to use more ashmem on Gingerbread for now, since it doesn't respond well to
// native memory pressure (doesn't throw exceptions, crashes app, crashes phone)
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.HONEYCOMB) {
return 8 * ByteConstants.MB;
} else {
return maxMemory / 4;
}
}
}
}
It reads the app's memory class from ActivityManager and derives the MemoryCacheParams limits from it: for example, on a device whose memory class is 192 MB, the bitmap cache gets 192 / 4 = 48 MB.
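If those defaults don't fit, the supplier can be swapped out through the config builder. A minimal sketch, assuming the builder's setBitmapMemoryCacheParamsSupplier(...) setter and the five-argument MemoryCacheParams constructor shown above; the numbers are illustrative only:
import android.content.Context;
import com.facebook.common.internal.Supplier;
import com.facebook.imagepipeline.cache.MemoryCacheParams;
import com.facebook.imagepipeline.core.ImagePipelineConfig;

public class CacheConfigSketch {
  static ImagePipelineConfig buildConfig(Context context) {
    Supplier<MemoryCacheParams> cacheParamsSupplier = new Supplier<MemoryCacheParams>() {
      @Override
      public MemoryCacheParams get() {
        int maxCacheSizeBytes = 20 * 1024 * 1024; // 20 MB overall, instead of memoryClass / 4
        return new MemoryCacheParams(
            maxCacheSizeBytes,       // maxCacheSize
            128,                     // maxCacheEntries
            maxCacheSizeBytes / 2,   // maxEvictionQueueSize
            64,                      // maxEvictionQueueEntries
            maxCacheSizeBytes / 8);  // maxCacheEntrySize
      }
    };
    return ImagePipelineConfig.newBuilder(context)
        .setBitmapMemoryCacheParamsSupplier(cacheParamsSupplier)
        .build();
  }
}
Now back to getBitmapCountingMemoryCache() above: the CountingMemoryCache it builds is the core of the level-one cache.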
public class BitmapCountingMemoryCacheFactory {
public static CountingMemoryCache<CacheKey, CloseableImage> get(
Supplier<MemoryCacheParams> bitmapMemoryCacheParamsSupplier,
MemoryTrimmableRegistry memoryTrimmableRegistry) {
ValueDescriptor<CloseableImage> valueDescriptor =
new ValueDescriptor<CloseableImage>() {
@Override
public int getSizeInBytes(CloseableImage value) {
return value.getSizeInBytes();
}
};
CountingMemoryCache.CacheTrimStrategy trimStrategy = new BitmapMemoryCacheTrimStrategy();
CountingMemoryCache<CacheKey, CloseableImage> countingCache =
new CountingMemoryCache<>(valueDescriptor, trimStrategy, bitmapMemoryCacheParamsSupplier);
memoryTrimmableRegistry.registerMemoryTrimmable(countingCache);
return countingCache;
}
}
This factory wires together a BitmapMemoryCacheTrimStrategy (a strategy object that decides how aggressively to trim) and the CountingMemoryCache that does the actual caching.
What is ValueDescriptor? It simply reports how many bytes a cached value occupies, which is all the cache needs for its size accounting.
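The interface has a single method. As a purely illustrative example (not Fresco code), a descriptor for raw byte arrays would look like this:
// ValueDescriptor answers one question: "how many bytes does this value cost?"
ValueDescriptor<byte[]> byteArrayDescriptor = new ValueDescriptor<byte[]>() {
  @Override
  public int getSizeInBytes(byte[] value) {
    return value.length;
  }
};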
First, take a look at BitmapMemoryCacheTrimStrategy:
/**
* CountingMemoryCache eviction strategy appropriate for bitmap caches.
*
* If run on KitKat or below, then this TrimStrategy behaves exactly as
* NativeMemoryCacheTrimStrategy. If run on Lollipop, then BitmapMemoryCacheTrimStrategy will trim
* cache in one additional case: when OnCloseToDalvikHeapLimit trim type is received, cache's
* eviction queue will be trimmed according to OnCloseToDalvikHeapLimit's suggested trim ratio.
*/
public class BitmapMemoryCacheTrimStrategy implements CountingMemoryCache.CacheTrimStrategy {
private static final String TAG = "BitmapMemoryCacheTrimStrategy";
@Override
public double getTrimRatio(MemoryTrimType trimType) {
switch (trimType) {
case OnCloseToDalvikHeapLimit:
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
return MemoryTrimType.OnCloseToDalvikHeapLimit.getSuggestedTrimRatio();
} else {
// On pre-lollipop versions we keep bitmaps on the native heap, so no need to trim here
// as it wouldn't help Dalvik heap anyway.
return 0;
}
case OnAppBackgrounded:
case OnSystemLowMemoryWhileAppInForeground:
case OnSystemLowMemoryWhileAppInBackground:
return 1;
default:
FLog.wtf(TAG, "unknown trim type: %s", trimType);
return 0;
}
}
}
It returns a trim ratio that depends on the trim type and the Android version: backgrounding and system low-memory events trim everything evictable (ratio 1), while OnCloseToDalvikHeapLimit only trims on Lollipop and above, using that trim type's suggested ratio (pre-Lollipop bitmaps live off the Dalvik heap, so trimming there wouldn't help anyway).
/**
* Layer of memory cache stack responsible for managing eviction of the cached items.
*
* <p>This layer is responsible for LRU eviction strategy and for maintaining the size boundaries
* of the cached items.
*
* <p>Only the exclusively owned elements, i.e. the elements not referenced by any client, can be
* evicted.
*
* @param <K> the key type
* @param <V> the value type
*/
@ThreadSafe
public class CountingMemoryCache<K, V> implements MemoryCache<K, V>, MemoryTrimmable {
// Contains the items that are not being used by any client and are hence viable for eviction.
final CountingLruMap<K, Entry<K, V>> mExclusiveEntries;
// Contains all the cached items including the exclusively owned ones.
@GuardedBy("this")
@VisibleForTesting
final CountingLruMap<K, Entry<K, V>> mCachedEntries;
private final ValueDescriptor<V> mValueDescriptor;
private final CacheTrimStrategy mCacheTrimStrategy;
// Cache size constraints.
private final Supplier<MemoryCacheParams> mMemoryCacheParamsSupplier;
@GuardedBy("this")
protected MemoryCacheParams mMemoryCacheParams;
@GuardedBy("this")
private long mLastCacheParamsCheck;
public CountingMemoryCache(
ValueDescriptor<V> valueDescriptor,
CacheTrimStrategy cacheTrimStrategy,
Supplier<MemoryCacheParams> memoryCacheParamsSupplier) {
mValueDescriptor = valueDescriptor;
mExclusiveEntries = new CountingLruMap<>(wrapValueDescriptor(valueDescriptor));
mCachedEntries = new CountingLruMap<>(wrapValueDescriptor(valueDescriptor));
mCacheTrimStrategy = cacheTrimStrategy;
mMemoryCacheParamsSupplier = memoryCacheParamsSupplier;
mMemoryCacheParams = mMemoryCacheParamsSupplier.get();
mLastCacheParamsCheck = SystemClock.uptimeMillis();
}
/**
* Caches the given key-value pair.
*
* Important: the client should use the returned reference instead of the original one.
* It is the caller's responsibility to close the returned reference once not needed anymore.
*
* @return the new reference to be used, null if the value cannot be cached
*/
public CloseableReference<V> cache(final K key, final CloseableReference<V> valueRef, final EntryStateObserver<K> observer) {
Preconditions.checkNotNull(key);
Preconditions.checkNotNull(valueRef);
maybeUpdateCacheParams();
Entry<K, V> oldExclusive;
CloseableReference<V> oldRefToClose = null;
CloseableReference<V> clientRef = null;
synchronized (this) {
// remove the old item (if any) as it is stale now
oldExclusive = mExclusiveEntries.remove(key);
Entry<K, V> oldEntry = mCachedEntries.remove(key);
if (oldEntry != null) {
makeOrphan(oldEntry);
oldRefToClose = referenceToClose(oldEntry);
}
if (canCacheNewValue(valueRef.get())) {
Entry<K, V> newEntry = Entry.of(key, valueRef, observer);
mCachedEntries.put(key, newEntry);
clientRef = newClientReference(newEntry);
}
}
CloseableReference.closeSafely(oldRefToClose);
maybeNotifyExclusiveEntryRemoval(oldExclusive);
maybeEvictEntries();
return clientRef;
}
@Nullable
public CloseableReference<V> get(final K key) {
Preconditions.checkNotNull(key);
Entry<K, V> oldExclusive;
CloseableReference<V> clientRef = null;
synchronized (this) {
oldExclusive = mExclusiveEntries.remove(key);
Entry<K, V> entry = mCachedEntries.get(key);
if (entry != null) {
clientRef = newClientReference(entry);
}
}
maybeNotifyExclusiveEntryRemoval(oldExclusive);
maybeUpdateCacheParams();
maybeEvictEntries();
return clientRef;
}
public CloseableReference<V> reuse(K key) {
Preconditions.checkNotNull(key);
CloseableReference<V> clientRef = null;
boolean removed = false;
Entry<K, V> oldExclusive = null;
synchronized (this) {
oldExclusive = mExclusiveEntries.remove(key);
if (oldExclusive != null) {
Entry<K, V> entry = mCachedEntries.remove(key);
Preconditions.checkNotNull(entry);
Preconditions.checkState(entry.clientCount == 0);
// optimization: instead of cloning and then closing the original reference,
// we just do a move
clientRef = entry.valueRef;
removed = true;
}
}
if (removed) {
maybeNotifyExclusiveEntryRemoval(oldExclusive);
}
return clientRef;
}
@Override
public synchronized boolean contains(Predicate<K> predicate) {
return !mCachedEntries.getMatchingEntries(predicate).isEmpty();
}
/** Trims the cache according to the specified trimming strategy and the given trim type. */
@Override
public void trim(MemoryTrimType trimType) {
ArrayList<Entry<K, V>> oldEntries;
final double trimRatio = mCacheTrimStrategy.getTrimRatio(trimType);
synchronized (this) {
int targetCacheSize = (int) (mCachedEntries.getSizeInBytes() * (1 - trimRatio));
int targetEvictionQueueSize = Math.max(0, targetCacheSize - getInUseSizeInBytes());
oldEntries = trimExclusivelyOwnedEntries(Integer.MAX_VALUE, targetEvictionQueueSize);
makeOrphans(oldEntries);
}
maybeClose(oldEntries);
maybeNotifyExclusiveEntryRemoval(oldEntries);
maybeUpdateCacheParams();
maybeEvictEntries();
}
}
So the cache offers reuse() on top of the basic cache/get/contains/trim operations.
Each cache() call first removes and closes any stale entry stored under the same key, then wraps the new value in an Entry, puts it into mCachedEntries and hands the caller a fresh CloseableReference to use instead of the original one.
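A quick sketch of how a caller is supposed to honor that contract (key, bitmapCache, obtainDecodedImage() and draw() are placeholders; passing null as the observer is assumed to be acceptable here, matching the observer-less overload):
// bitmapCache is a CountingMemoryCache<CacheKey, CloseableImage>, key a CacheKey (placeholders).
CloseableReference<CloseableImage> original = obtainDecodedImage();        // hypothetical helper
CloseableReference<CloseableImage> cached = bitmapCache.cache(key, original, null);
CloseableReference.closeSafely(original);  // the cache holds its own reference now
try {
  if (cached != null) {
    draw(cached.get());                    // hypothetical consumer of the image
  }
} finally {
  // Closing the client reference is what moves the entry into mExclusiveEntries,
  // i.e. makes it eligible for LRU eviction.
  CloseableReference.closeSafely(cached);
}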
The two maps are the heart of it:
mCachedEntries holds every cached item, including the ones clients are still using;
mExclusiveEntries holds the items no client references anymore — parking them here is what makes them candidates for background eviction.
Both are CountingLruMap instances, a thin size-counting wrapper used for the LRU bookkeeping:
/**
* Map that keeps track of the elements order (according to the LRU policy) and their size.
*/
@ThreadSafe
public class CountingLruMap<K, V> {
private final ValueDescriptor<V> mValueDescriptor;
@GuardedBy("this")
private final LinkedHashMap<K, V> mMap = new LinkedHashMap<>();
@GuardedBy("this")
private int mSizeInBytes = 0;
public CountingLruMap(ValueDescriptor<V> valueDescriptor) {
mValueDescriptor = valueDescriptor;
}
/** Gets the total size in bytes of the elements in the map. */
public synchronized int getSizeInBytes() {
return mSizeInBytes;
}
/** Gets the key of the first element in the map. */
@Nullable
public synchronized K getFirstKey() {
return mMap.isEmpty() ? null : mMap.keySet().iterator().next();
}
/** Gets the all matching elements. */
public synchronized ArrayList<LinkedHashMap.Entry<K, V>> getMatchingEntries(
@Nullable Predicate<K> predicate) {
ArrayList<LinkedHashMap.Entry<K, V>> matchingEntries = new ArrayList<>();
for (LinkedHashMap.Entry<K, V> entry : mMap.entrySet()) {
if (predicate == null || predicate.apply(entry.getKey())) {
matchingEntries.add(entry);
}
}
return matchingEntries;
}
/** Returns whether the map contains an element with the given key. */
public synchronized boolean contains(K key) {
return mMap.containsKey(key);
}
/** Gets the element from the map. */
@Nullable
public synchronized V get(K key) {
return mMap.get(key);
}
/** Adds the element to the map, and removes the old element with the same key if any. */
@Nullable
public synchronized V put(K key, V value) {
// We do remove and insert instead of just replace, in order to cause a structural change
// to the map, as we always want the latest inserted element to be last in the queue.
V oldValue = mMap.remove(key);
mSizeInBytes -= getValueSizeInBytes(oldValue);
mMap.put(key, value);
mSizeInBytes += getValueSizeInBytes(value);
return oldValue;
}
/** Removes the element from the map. */
@Nullable
public synchronized V remove(K key) {
V oldValue = mMap.remove(key);
mSizeInBytes -= getValueSizeInBytes(oldValue);
return oldValue;
}
private int getValueSizeInBytes(V value) {
return (value == null) ? 0 : mValueDescriptor.getSizeInBytes(value);
}
}
But hang on — this is really just a LinkedHashMap in insertion order; where is the LRU part? The eviction logic actually lives outside, back in CountingMemoryCache:
@ThreadSafe
public class CountingMemoryCache<K, V> implements MemoryCache<K, V>, MemoryTrimmable {
/**
* Removes the exclusively owned items until the cache constraints are met.
*
* <p>This method invokes the external {@link CloseableReference#close} method,
* so it must not be called while holding the <code>this</code> lock.
*/
private void maybeEvictEntries() {
ArrayList<Entry<K, V>> oldEntries;
synchronized (this) {
int maxCount = Math.min(
mMemoryCacheParams.maxEvictionQueueEntries,
mMemoryCacheParams.maxCacheEntries - getInUseCount());
int maxSize = Math.min(
mMemoryCacheParams.maxEvictionQueueSize,
mMemoryCacheParams.maxCacheSize - getInUseSizeInBytes());
oldEntries = trimExclusivelyOwnedEntries(maxCount, maxSize);
makeOrphans(oldEntries);
}
maybeClose(oldEntries);
maybeNotifyExclusiveEntryRemoval(oldEntries);
}
/**
* Removes the exclusively owned items until there is at most <code>count</code> of them
* and they occupy no more than <code>size</code> bytes.
*
* <p>This method returns the removed items instead of actually closing them, so it is safe to
* be called while holding the <code>this</code> lock.
*/
@Nullable
private synchronized ArrayList<Entry<K, V>> trimExclusivelyOwnedEntries(int count, int size) {
count = Math.max(count, 0);
size = Math.max(size, 0);
// fast path without array allocation if no eviction is necessary
if (mExclusiveEntries.getCount() <= count && mExclusiveEntries.getSizeInBytes() <= size) {
return null;
}
ArrayList<Entry<K, V>> oldEntries = new ArrayList<>();
while (mExclusiveEntries.getCount() > count || mExclusiveEntries.getSizeInBytes() > size) {
K key = mExclusiveEntries.getFirstKey();
mExclusiveEntries.remove(key);
oldEntries.add(mCachedEntries.remove(key));
}
return oldEntries;
}
}
These two methods are where the LRU actually happens: maybeEvictEntries() computes how many entries and how many bytes the exclusively-owned queue is still allowed to hold, and trimExclusivelyOwnedEntries() then pops entries off the front of the LinkedHashMap (the oldest end), marks them as orphans, and they get closed and released afterwards. If a key is requested again before it is evicted, get() pulls it out of mExclusiveEntries and back into active use. The count and size limits come from MemoryCacheParams, which is exactly what DefaultBitmapMemoryCacheParamsSupplier produced above.
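To see why a plain LinkedHashMap is enough, remember that CountingLruMap.put() deliberately does remove-then-put (see the comment in its code above), so the most recently inserted entry sits at the tail and getFirstKey() returns the oldest one. A standalone demo of that ordering, plain Java with nothing Fresco-specific:
import java.util.LinkedHashMap;

public class LruOrderDemo {
  public static void main(String[] args) {
    LinkedHashMap<String, Integer> map = new LinkedHashMap<>();
    map.put("a", 1);
    map.put("b", 2);
    map.put("c", 3);

    // "Touching" a key the way CountingLruMap.put() does: remove + re-insert
    // pushes it to the tail, making it the most recently used entry.
    Integer v = map.remove("a");
    map.put("a", v);

    // Eviction pops the first key, i.e. the least recently inserted one.
    System.out.println(map.keySet().iterator().next()); // prints "b"
  }
}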
Level-two cache