LruCache: Usage and Source Code Analysis

Using caching appropriately in an Android app not only relieves pressure on the server, it also improves the user experience and reduces how much mobile data the user consumes. The commonly used three-level cache consists of LruCache, DiskLruCache, and the network, where LruCache is the in-memory cache and DiskLruCache is the disk cache. LRU stands for Least Recently Used: when the cache reaches its limit, the entries that have been used least recently are evicted first. Both LruCache and DiskLruCache follow this strategy. For example, when caching Bitmaps in Android, we first look in the LruCache; on a miss we look in the DiskLruCache; and only if that also misses do we fall back to the data source (a network download or a local file).
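As a rough sketch of that lookup order (mMemoryCache is an LruCache like the one configured later in this article; loadBitmapFromDisk() and downloadAndCacheBitmap() are hypothetical helpers standing in for DiskLruCache and network access):

// Three-level lookup: memory cache -> disk cache -> data source
Bitmap getBitmap(String key) {
    Bitmap bitmap = mMemoryCache.get(key);          // 1. memory cache (LruCache)
    if (bitmap == null) {
        bitmap = loadBitmapFromDisk(key);           // 2. disk cache (e.g. DiskLruCache)
        if (bitmap == null) {
            bitmap = downloadAndCacheBitmap(key);   // 3. network download or local file
        }
        if (bitmap != null) {
            mMemoryCache.put(key, bitmap);          // promote it to the memory cache
        }
    }
    return bitmap;
}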

Characteristics of the memory cache:

Fast reads
Small amount of space available
At risk of being reclaimed by the system
Gone once the app exits, so it cannot serve as an offline cache

Characteristics of the disk cache:

Slower reads than the memory cache
Larger amount of space available
Not reclaimed when system memory runs low
Survives app restarts (a cache stored in the app's own storage directory is cleared on uninstall; a cache stored elsewhere may leave residue after uninstall)

This article analyzes LruCache from three angles: its principle, its usage, and its source code.

I. Basic Principles and Underlying Implementation

LruCache implements the LRU (Least Recently Used) eviction policy. Under the hood it is built on a LinkedHashMap, which maintains a doubly linked list over its entries (a classic space-for-time trade-off), and it uses synchronized blocks to stay thread-safe. It exposes get() and put() to read and add cache entries; when the cache is full, LruCache evicts the entries that were used longest ago before admitting new ones. The class's own documentation describes the behavior:

A cache that holds strong references to a limited number of values. Each time a value is accessed, it is moved to the head of a queue. When a value is added to a full cache, the value at the end of that queue is evicted and may become eligible for garbage collection.

(Figure: an illustration of how an accessed entry moves within the queue and how the eldest entry is evicted when the cache is full.)
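A small concrete example of the behavior described above (with the default sizeOf(), maxSize is simply the maximum number of entries; the keys and values are arbitrary):

import android.util.LruCache;

LruCache<String, String> cache = new LruCache<>(3); // room for 3 entries
cache.put("A", "1");
cache.put("B", "2");
cache.put("C", "3");
cache.get("A");        // "A" becomes the most recently used entry
cache.put("D", "4");   // cache is full, so "B" (the least recently used entry) is evicted
// cache.snapshot() now iterates from least to most recently used: {C=3, A=1, D=4}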

II. Using LruCache

private LruCache<String, Bitmap> mMemoryCache;

// Maximum memory available to this app, converted to KB
int maxMemory = (int) (Runtime.getRuntime().maxMemory() / 1024);
// Use 1/8 of the maximum memory as the cache capacity
int cacheSize = maxMemory / 8;

mMemoryCache = new LruCache<String, Bitmap>(cacheSize) {

    // Override sizeOf() to measure each Bitmap in KB,
    // so that entry sizes use the same unit as the total capacity
    @Override
    protected int sizeOf(String key, Bitmap bitmap) {
        return bitmap.getRowBytes() * bitmap.getHeight() / 1024;
    }
};

In the code above, all we need to do is supply the total cache capacity and override sizeOf(). sizeOf() computes the size of a cache entry, and it must use the same unit as the total capacity. In this example the total capacity is 1/8 of the memory available to the current process, in KB (the division by 1024 converts the unit to KB), and sizeOf() measures each Bitmap in the same unit. In some special cases you may also need to override entryRemoved(): LruCache calls entryRemoved() whenever it drops an old entry, so that is where any resource cleanup can happen (if needed).
Besides creating the LruCache, we also need to add entries and read them back, which is just as simple; a pair of helper methods is sketched below.
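A minimal sketch, assuming the mMemoryCache field from the snippet above (the method names are purely illustrative):

// Add a bitmap to the memory cache, but only if it is not already cached
public void addBitmapToMemoryCache(String key, Bitmap bitmap) {
    if (mMemoryCache.get(key) == null) {
        mMemoryCache.put(key, bitmap);
    }
}

// Look a bitmap up in the memory cache; returns null on a cache miss
public Bitmap getBitmapFromMemCache(String key) {
    return mMemoryCache.get(key);
}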

III. Source Code Walkthrough (Key Methods)

1. The constructor
public class LruCache<K, V> {
    ...
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }
}

LruCache is a generic class. As the constructor shows, it stores the cached objects in a LinkedHashMap, holding strong references to them. The three LinkedHashMap constructor arguments are the initial capacity, the load factor, and the ordering mode: when accessOrder is true the map keeps its entries in access order, meaning an entry is moved to the end (tail) of the collection each time it is accessed; false means insertion order.

The LinkedHashMap parameters:
initialCapacity sets the initial size of the LinkedHashMap.
loadFactor is a constructor parameter of HashMap, LinkedHashMap's parent class, and controls resizing: for example, with a capacity of 100 and a load factor of 0.75f, the table is resized once it holds 75 entries.
accessOrder selects the ordering mode: true orders entries by access (this is the heart of how LruCache works), false orders them by insertion. A small sketch of the difference follows this list.
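To make the accessOrder flag concrete, here is a small, self-contained sketch in plain Java (not taken from the article):

import java.util.LinkedHashMap;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // Same constructor arguments that LruCache uses: accessOrder = true
        LinkedHashMap<String, Integer> map = new LinkedHashMap<>(0, 0.75f, true);
        map.put("A", 1);
        map.put("B", 2);
        map.put("C", 3);
        map.get("A");                      // accessing "A" moves it to the tail
        System.out.println(map.keySet());  // prints [B, C, A]
    }
}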

A detailed analysis of LinkedHashMap deserves its own article, so here is just the short version: LinkedHashMap's default constructor uses insertion order, i.e. entries are kept in the order in which put() was called; with access order, accessing a key moves that entry to the tail of the list. Note, however, that the LruCache Javadoc quoted at the top of this article says "Each time a value is accessed, it is moved to the head of a queue." So is an accessed entry moved to the head or to the tail? Keep that question in mind as we read on.

Bonus: a quick look at a related concept touched on above, the difference between strong, soft, weak, and phantom references (a short code sketch follows the list).

· Strong reference: an ordinary, direct object reference;
· Soft reference: if an object is reachable only through soft references, it is collected by the GC when the system runs low on memory;
· Weak reference: if an object is reachable only through weak references, it can be collected by the GC at any time;
· Phantom reference: an object that is reachable only through phantom references behaves as if it had no references at all and may be collected at any time. A phantom reference does not affect the object's lifetime; it is mainly used to track when the object is collected, and it must be used together with a ReferenceQueue.
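For reference, a minimal Java sketch of the four reference strengths (illustrative only; LruCache itself uses plain strong references):

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        Object strong = new Object();                                    // strong: kept alive while reachable
        SoftReference<Object> soft = new SoftReference<>(new Object());  // cleared only under memory pressure
        WeakReference<Object> weak = new WeakReference<>(new Object());  // cleared at the next GC cycle
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> phantom =
                new PhantomReference<>(new Object(), queue);             // must be paired with a ReferenceQueue
        System.out.println(soft.get() != null);  // very likely true right after creation
        System.out.println(phantom.get());       // always null
    }
}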

2. LruCache's get() method
    /**
     * Returns the value for {@code key} if it exists in the cache or can be
     * created by {@code #create}. If a value was returned, it is moved to the
     * head of the queue. This returns null if a value is not cached and cannot
     * be created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }
        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }
        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         *
         * In other words: if the key is not in the cache, try to build a new
         * value with create(key). create(key) returns null by default; override
         * it if cache misses should be computed on demand.
         */
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }
        // If create(key) was overridden and produced a value, put it into the cache.
        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);
            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }
        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }

The Javadoc of get() tells us: if the key exists in the cache, or a value can be created by create(), that value is returned; if a value is returned, it is moved to the head of the queue; if the value is not cached and cannot be created, null is returned. This also resolves the earlier question: the two descriptions are the same thing seen from opposite ends. With accessOrder set to true, LinkedHashMap moves an accessed entry to the tail of its internal doubly linked list, and the LruCache documentation simply calls that most-recently-used end the "head of the queue". Either way, the entry at the opposite end (the eldest, least recently used one) is the one that gets evicted.

3. LruCache's put() method
    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of
     * the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }
        V previous;
        synchronized (this) {
            putCount++;
            // safeSizeOf() delegates to sizeOf(), which returns 1 by default,
            // i.e. size simply counts entries. When caching images, size should
            // reflect the memory each image occupies, so sizeOf(key, value)
            // needs to be overridden accordingly.
            size += safeSizeOf(key, value);
            // Put the entry into the map. If the key was already cached, the
            // previous value is returned; otherwise the new entry is inserted.
            previous = map.put(key, value);
            // If there was a previous value, undo the size increase for it.
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }
        // entryRemoved() is a no-op by default; override it if needed.
        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }
        // trimToSize() checks whether size now exceeds maxSize.
        trimToSize(maxSize);
        return previous;
    }

So put() simply adds an entry to the cache and then calls trimToSize() to check whether the cache has grown beyond its maximum size; if it has, the least recently used entries are evicted. The source of trimToSize() is shown below.

    /**
     * Remove the eldest entries until the total of remaining entries is at or
     * below the requested size.
     *
     * @param maxSize the maximum size of the cache before returning. May be -1
     *            to evict even 0-sized elements.
     */
    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                // If size is negative, or the map is empty while size is not 0,
                // sizeOf() must be reporting inconsistent results: throw.
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }
                // If the cache is within its limit, nothing needs to be evicted.
                if (size <= maxSize) {
                    break;
                }
                // Find the least recently used entry; if there is none, stop.
                // Otherwise remove it from the map and shrink size accordingly.
                Map.Entry<K, V> toEvict = map.eldest();
                if (toEvict == null) {
                    break;
                }
                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                // Count one more eviction.
                evictionCount++;
            }
            entryRemoved(true, key, value, null);
        }
    }

4. LruCache's remove() method
    /**
     * Removes the entry for {@code key} if it exists.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }
        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }
        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }
        return previous;
    }

remove() deletes the entry from the underlying map and updates the cache size; if an entry was actually removed, it then notifies entryRemoved() (with evicted set to false) so that subclasses can release any resources tied to it, as the sketch below illustrates.
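A hedged example of what such an override might look like (cacheSize is the variable from section II; whether recycling evicted Bitmaps is safe depends entirely on whether they are still displayed elsewhere in your app):

LruCache<String, Bitmap> cache = new LruCache<String, Bitmap>(cacheSize) {
    @Override
    protected int sizeOf(String key, Bitmap bitmap) {
        return bitmap.getByteCount() / 1024; // size in KB
    }

    @Override
    protected void entryRemoved(boolean evicted, String key,
                                Bitmap oldValue, Bitmap newValue) {
        // evicted == true  -> dropped by trimToSize() to make room
        // evicted == false -> removed or replaced via remove()/put()
        if (oldValue != newValue && !oldValue.isRecycled()) {
            oldValue.recycle(); // only safe if nothing else still uses this bitmap
        }
    }
};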

IV. LeetCode: the LRU Cache problem

LeetCode: LRU Cache

It is worth practicing this problem on LeetCode until you are comfortable with at least one solution. Key point: some big-company interviews ask you to write an LRU cache by hand. One common approach is sketched below.
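One common approach (a sketch, not an official solution) is to reuse LinkedHashMap's access order and its removeEldestEntry() hook; note that some interviewers will instead expect a hand-rolled HashMap plus doubly linked list:

import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache extends LinkedHashMap<Integer, Integer> {
    private final int capacity;

    public LRUCache(int capacity) {
        super(capacity, 0.75f, true); // accessOrder = true, just like LruCache
        this.capacity = capacity;
    }

    public int get(int key) {
        return super.getOrDefault(key, -1); // -1 signals a miss, per the problem statement
    }

    public void put(int key, int value) {
        super.put(key, value);
    }

    // Called by LinkedHashMap after each put(); returning true evicts the eldest entry.
    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return size() > capacity;
    }
}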

V. LruCache's Official Documentation and Full Source Listing

LruCache official documentation

The full LruCache source:

/*
 * Copyright (C) 2011 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package android.util;
import android.compat.annotation.UnsupportedAppUsage;
import java.util.LinkedHashMap;
import java.util.Map;
/**
 * A cache that holds strong references to a limited number of values. Each time
 * a value is accessed, it is moved to the head of a queue. When a value is
 * added to a full cache, the value at the end of that queue is evicted and may
 * become eligible for garbage collection.
 *
 * <p>If your cached values hold resources that need to be explicitly released,
 * override {@link #entryRemoved}.
 *
 * <p>If a cache miss should be computed on demand for the corresponding keys,
 * override {@link #create}. This simplifies the calling code, allowing it to
 * assume a value will always be returned, even when there's a cache miss.
 *
 * <p>By default, the cache size is measured in the number of entries. Override
 * {@link #sizeOf} to size the cache in different units. For example, this cache
 * is limited to 4MiB of bitmaps:
 * <pre>   {@code
 *   int cacheSize = 4 * 1024 * 1024; // 4MiB
 *   LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
 *       protected int sizeOf(String key, Bitmap value) {
 *           return value.getByteCount();
 *       }
 *   }}</pre>
 *
 * <p>This class is thread-safe. Perform multiple cache operations atomically by
 * synchronizing on the cache: <pre>   {@code
 *   synchronized (cache) {
 *     if (cache.get(key) == null) {
 *         cache.put(key, value);
 *     }
 *   }}</pre>
 *
 * <p>This class does not allow null to be used as a key or value. A return
 * value of null from {@link #get}, {@link #put} or {@link #remove} is
 * unambiguous: the key was not in the cache.
 *
 * <p>This class appeared in Android 3.1 (Honeycomb MR1); it's available as part
 * of <a href="http://developer.android.com/sdk/compatibility-library.html">Android's
 * Support Package</a> for earlier releases.
 */
public class LruCache<K, V> {
    @UnsupportedAppUsage
    private final LinkedHashMap<K, V> map;
    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size;
    private int maxSize;
    private int putCount;
    private int createCount;
    private int evictionCount;
    private int hitCount;
    private int missCount;
    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }
    /**
     * Sets the size of the cache.
     *
     * @param maxSize The new maximum size.
     */
    public void resize(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        synchronized (this) {
            this.maxSize = maxSize;
        }
        trimToSize(maxSize);
    }
    /**
     * Returns the value for {@code key} if it exists in the cache or can be
     * created by {@code #create}. If a value was returned, it is moved to the
     * head of the queue. This returns null if a value is not cached and cannot
     * be created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }
        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }
        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }
        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);
            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }
        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }
    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of
     * the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }
        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }
        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }
        trimToSize(maxSize);
        return previous;
    }
    /**
     * Remove the eldest entries until the total of remaining entries is at or
     * below the requested size.
     *
     * @param maxSize the maximum size of the cache before returning. May be -1
     *            to evict even 0-sized elements.
     */
    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }
                if (size <= maxSize) {
                    break;
                }
                Map.Entry<K, V> toEvict = map.eldest();
                if (toEvict == null) {
                    break;
                }
                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }
            entryRemoved(true, key, value, null);
        }
    }
    /**
     * Removes the entry for {@code key} if it exists.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }
        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }
        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }
        return previous;
    }
    /**
     * Called for entries that have been evicted or removed. This method is
     * invoked when a value is evicted to make space, removed by a call to
     * {@link #remove}, or replaced by a call to {@link #put}. The default
     * implementation does nothing.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * @param evicted true if the entry is being removed to make space, false
     *     if the removal was caused by a {@link #put} or {@link #remove}.
     * @param newValue the new value for {@code key}, if it exists. If non-null,
     *     this removal was caused by a {@link #put} or a {@link #get}. Otherwise it was caused by
     *     an eviction or a {@link #remove}.
     */
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {
    }
    /**
     * Called after a cache miss to compute a value for the corresponding key.
     * Returns the computed value or null if no value can be computed. The
     * default implementation returns null.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * <p>If a value for {@code key} exists in the cache when this method
     * returns, the created value will be released with {@link #entryRemoved}
     * and discarded. This can occur when multiple threads request the same key
     * at the same time (causing multiple values to be created), or when one
     * thread calls {@link #put} while another is creating a value for the same
     * key.
     */
    protected V create(K key) {
        return null;
    }
    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }
    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units.  The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     *
     * <p>An entry's size must not change while it is in the cache.
     */
    protected int sizeOf(K key, V value) {
        return 1;
    }
    /**
     * Clear the cache, calling {@link #entryRemoved} on each removed entry.
     */
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }
    /**
     * For caches that do not override {@link #sizeOf}, this returns the number
     * of entries in the cache. For all other caches, this returns the sum of
     * the sizes of the entries in this cache.
     */
    public synchronized final int size() {
        return size;
    }
    /**
     * For caches that do not override {@link #sizeOf}, this returns the maximum
     * number of entries in the cache. For all other caches, this returns the
     * maximum sum of the sizes of the entries in this cache.
     */
    public synchronized final int maxSize() {
        return maxSize;
    }
    /**
     * Returns the number of times {@link #get} returned a value that was
     * already present in the cache.
     */
    public synchronized final int hitCount() {
        return hitCount;
    }
    /**
     * Returns the number of times {@link #get} returned null or required a new
     * value to be created.
     */
    public synchronized final int missCount() {
        return missCount;
    }
    /**
     * Returns the number of times {@link #create(Object)} returned a value.
     */
    public synchronized final int createCount() {
        return createCount;
    }
    /**
     * Returns the number of times {@link #put} was called.
     */
    public synchronized final int putCount() {
        return putCount;
    }
    /**
     * Returns the number of values that have been evicted.
     */
    public synchronized final int evictionCount() {
        return evictionCount;
    }
    /**
     * Returns a copy of the current contents of the cache, ordered from least
     * recently accessed to most recently accessed.
     */
    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }
    @Override public synchronized final String toString() {
        int accesses = hitCount + missCount;
        int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
        return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",
                maxSize, hitCount, missCount, hitPercent);
    }
}


Reposted from blog.csdn.net/CHITTY1993/article/details/109244204