http://www.importnew.com/18706.html
https://blog.csdn.net/xiaxl/article/details/72621810
https://www.cnblogs.com/zhchoutai/p/6726391.html
The first law of website performance optimization: consider a cache before anything else.
For data that is read frequently but updated rarely, adding a cache reduces the load on the database.
Distributed caches such as Redis cover the general case, but in some small-scale scenarios they can be overkill; there, a single-machine LocalCache is worth considering.
First, a look at the cache eviction algorithm LRU, Least Recently Used: data that has just been used is moved to the front so the hottest entries are always fast to reach, while data that goes unused drifts toward the tail, where an eviction rule can discard it.
Java's LinkedHashMap offers O(1) reads and writes and can be configured to iterate in access order, a near-perfect match for the LRU algorithm.
Some code first:
LinkedHashMap<String, Integer> lmap = new LinkedHashMap<String, Integer>();
lmap.put("语文", 1);
lmap.put("数学", 2);
lmap.put("英语", 3);
lmap.put("历史", 4);
lmap.put("政治", 5);
lmap.put("地理", 6);
lmap.put("生物", 7);
lmap.put("化学", 8);
for (Map.Entry<String, Integer> entry : lmap.entrySet()) {
System.out.println(entry.getKey() + ": " + entry.getValue());
}
Output:
语文: 1
数学: 2
英语: 3
历史: 4
政治: 5
地理: 6
生物: 7
化学: 8
As the output shows, LinkedHashMap preserves insertion order, something HashMap does not do. Let's look at its data structure:
LinkedHashMap combines a hash table with a doubly linked list. Every put, get, or other access adjusts this list, and the list defines the iteration order of the collection: insertion order by default, optionally access order.
List structure after inserting the elements (key0, value0) and (key1, value1):
map.put(key0, value0);
map.put(key1, value1);
Following the after pointers, the circular doubly linked list with its header sentinel now reads header -> key0 -> key1 -> header.
After calling map.get(key0) with access order enabled:
map.get(key0);
key0 is re-linked to the slot just before the header, the most recently used position, so the list reads header -> key1 -> key0 -> header. When the eldest element has to be removed, it is taken from header.next(), which now holds (key1, value1).
After every call to map.get(key), the accessed entry is moved to the slot just before the header; every eviction removes header.next() first. Recently used elements are thus retained while the longest-unused element is dropped first, and that is the whole principle behind LRU.
LinkedHashMap has an accessOrder flag: false selects insertion order, true selects access order; the default is false.
/**
* The iteration ordering method for this linked hash map: true
* for access-order, false for insertion-order.
*
* @serial
*/
final boolean accessOrder;
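A quick sketch shows what flipping accessOrder does (standard LinkedHashMap API, nothing assumed beyond the three-argument constructor):
import java.util.LinkedHashMap;
import java.util.Map;
public class AccessOrderDemo {
public static void main(String[] args) {
// third constructor argument: accessOrder = true
Map<String, Integer> map = new LinkedHashMap<String, Integer>(16, 0.75f, true);
map.put("a", 1);
map.put("b", 2);
map.put("c", 3);
map.get("a"); // touching "a" moves it to the most recently used end
System.out.println(map.keySet()); // prints [b, c, a]
}
}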
LinkedHashMap re-implements HashMap's node type:
The overriding Entry adds before and after references, the predecessor and successor pointers of the linked list.
static class Entry<K,V> extends HashMap.Node<K,V> {
Entry<K,V> before, after;
Entry(int hash, K key, V value, Node<K,V> next) {
super(hash, key, value, next);
}
}
It also overrides several HashMap hook methods, precisely so that nodes are moved to the most recently used position as the map is accessed:
// after a node has been accessed: move it to the tail, the most recently used position
void afterNodeAccess(Node<K,V> e) { // move node to last
LinkedHashMap.Entry<K,V> last;
if (accessOrder && (last = tail) != e) {
LinkedHashMap.Entry<K,V> p =
(LinkedHashMap.Entry<K,V>)e, b = p.before, a = p.after;
p.after = null;
if (b == null)
head = a;
else
b.after = a;
if (a != null)
a.before = b;
else
last = b;
if (last == null)
head = p;
else {
p.before = last;
last.after = p;
}
tail = p;
++modCount;
}
}
// after a node has been inserted: possibly evict the eldest entry
void afterNodeInsertion(boolean evict) { // possibly remove eldest
LinkedHashMap.Entry<K,V> first;
if (evict && (first = head) != null && removeEldestEntry(first)) {
K key = first.key;
removeNode(hash(key), key, null, false, true);
}
}
// after a node has been removed: unlink it from the doubly linked list
void afterNodeRemoval(Node<K,V> e) { // unlink
LinkedHashMap.Entry<K,V> p =
(LinkedHashMap.Entry<K,V>)e, b = p.before, a = p.after;
p.before = p.after = null;
if (b == null)
head = a;
else
b.after = a;
if (a == null)
tail = b;
else
a.before = b;
}
On each put, LinkedHashMap consults removeEldestEntry to decide, against the maximum capacity, whether the least recently used element must go. Its default implementation always returns false, so nothing is ever evicted; to implement LRU we override this method. On the read side, every successful get re-links the found element to the most recently used position. The JDK 8 code above does this in afterNodeAccess; the older JDK 6/7 implementation below does it in recordAccess, which moves the entry to the slot just before the circular list's header:
public V get(Object key) {
Entry<K,V> e = (Entry<K,V>) getEntry(key);
if (e == null)
return null;
e.recordAccess(this);
return e.value;
}
void recordAccess(HashMap<K,V> m) {
LinkedHashMap<K,V> lm = (LinkedHashMap<K,V>) m;
if (lm.accessOrder) {
lm.modCount++;
// unlink this entry, then re-insert it just before the header,
// i.e. at the most recently used position
remove();
addBefore(lm.header);
}
}
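Putting the two hooks together, a minimal LRU cache is just a subclass that turns on access order and overrides removeEldestEntry (a sketch; the class name and maxEntries are illustrative, this is not the LRUMap developed below):
import java.util.LinkedHashMap;
import java.util.Map;
// Minimal LRU cache: an access-ordered LinkedHashMap that evicts the
// eldest (least recently used) entry once the size limit is exceeded.
public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
private final int maxEntries;
public SimpleLruCache(int maxEntries) {
super(16, 0.75f, true); // accessOrder = true
this.maxEntries = maxEntries;
}
@Override
protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
return size() > maxEntries;
}
}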
Now let's implement a process-wide LocalCache in which each business module uses its own namespace to logically partition the cache. Reads and writes therefore use keys of the form namespace + separator + key; for example, the pair NameToAge,Troy -> 23. The LocalCache must be thread-safe, keep the total number of key/value pairs under control, and offer a set of operations such as clearing, resizing, and dumping to a local file.
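Key construction might look like the following sketch (the helper and the "#" separator are illustrative assumptions; the scheme above only prescribes namespace + separator + key):
// Builds the physical cache key from a namespace and a logical key.
static String cacheKey(String namespace, String key) {
return namespace + "#" + key; // separator chosen arbitrarily for the sketch
}
// e.g. cacheKey("NameToAge", "Troy") -> "NameToAge#Troy"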
package toutiao;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class LRUMap<T> extends LinkedHashMap<String, SoftReference<T>> implements Externalizable {
private static final long serialVersionUID = -7076355612133906912L;
/** The maximum size of the cache. */
private int maxCacheSize;
/* lock for map */
private final Lock lock = new ReentrantLock();
/**
* Default constructor; the capacity of the LRUMap is Integer.MAX_VALUE.
*/
public LRUMap() {
super();
maxCacheSize = Integer.MAX_VALUE;
}
/**
* Constructs a new, empty cache with the specified maximum size.
*/
public LRUMap(int size) {
super(size + 1, 1f, true);
maxCacheSize = size;
}
/**
* Makes LinkedHashMap behave as an LRU cache: returns true once the map
* has grown past the configured maximum. LinkedHashMap's own
* implementation always returns false, i.e. it never evicts anything.
*/
@Override
protected boolean removeEldestEntry(Map.Entry<String, SoftReference<T>> eldest) {
return size() > maxCacheSize;
}
public T addEntry(String key, T entry) {
// wrap the value in a SoftReference so the GC may reclaim it under memory pressure
SoftReference<T> sr_entry = new SoftReference<T>(entry);
lock.lock();
try {
put(key, sr_entry);
}
finally {
lock.unlock();
}
return entry;
}
public T getEntry(String key) {
SoftReference<T> sr_entry;
lock.lock();
try {
if ((sr_entry = get(key)) == null) {
return null;
}
// if the soft reference has been cleared, the value was
// garbage collected, so the key should be removed as well
if (sr_entry.get() == null) {
remove(key);
return null;
}
}
finally {
lock.unlock();
}
return sr_entry.get();
}
@Override
public SoftReference<T> remove(Object key) {
lock.lock();
try {
return super.remove(key);
}
finally {
lock.unlock();
}
}
@Override
public void clear() {
// guard with the same lock as the other operations, rather than
// synchronized, so all mutations share one mutual-exclusion mechanism
lock.lock();
try {
super.clear();
}
finally {
lock.unlock();
}
}
@Override
public void writeExternal(ObjectOutput out) throws IOException {
lock.lock();
try {
// Copy the live entries first, so that the count written below matches
// the number of key/value pairs that follow (entries whose SoftReference
// has been cleared are skipped).
List<Object> kv = new ArrayList<Object>();
for (Map.Entry<String, SoftReference<T>> e : entrySet()) {
T value = (e.getValue() == null) ? null : e.getValue().get();
if (value != null) {
kv.add(e.getKey());
kv.add(value);
}
}
// Write out size, then the keys and values
out.writeInt(kv.size() / 2);
for (Object o : kv) {
out.writeObject(o);
}
}
finally {
lock.unlock();
}
}
@Override
public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
// Read in size
int size = in.readInt();
// Read the keys and values, and put the mappings in the Map
for (int i = 0; i < size; i++) {
String key = (String) in.readObject();
@SuppressWarnings("unchecked")
T value = (T) in.readObject();
addEntry(key, value);
}
}
}
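LRUMap can be exercised on its own; a quick sketch of the eviction behavior (String values purely for illustration):
LRUMap<String> ages = new LRUMap<String>(2); // keep at most two entries
ages.addEntry("Troy", "23");
ages.addEntry("Anna", "25");
ages.getEntry("Troy"); // touch Troy, so Anna becomes the eldest
ages.addEntry("Bob", "31"); // exceeds the limit and evicts Anna
System.out.println(ages.getEntry("Anna")); // null
System.out.println(ages.getEntry("Troy")); // 23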
Using a single LRU map inside the LocalCache would create performance problems: 1. a single LinkedHashMap would hold too many elements; 2. under high concurrency, one lock throttles all reads and writes. So the LocalCache uses several LRU maps and hashes each key onto one of them, which both speeds up retrieval within any single LinkedHashMap and raises overall concurrency.
The hash chosen here is the Wang/Jenkins algorithm, and the way it is applied follows ConcurrentHashMap's segmented implementation.
import java.io.File;
import java.lang.ref.SoftReference;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
// CacheObject is the application's cache value type; StringUtils (e.g. from
// commons-lang) and ObjectUtils (a project-local object<->file serialization
// helper) are assumed to be available on the classpath.
public class LocalCache {
private final int size;
/**
* Maximum total capacity of the local cache
*/
static final int MAXIMUM_CAPACITY = 1 << 30;
/**
* Maximum number of segments the local cache supports
*/
static final int MAX_SEGMENTS = 1 << 16; // slightly conservative
/**
* The LRUMap segments that store the cached entries
*/
LRUMap<CacheObject>[] segments;
/**
* Mask value for indexing into segments. The upper bits of a key's hash
* code are used to choose the segment.
*/
int segmentMask;
/**
* Shift value for indexing within segments.
*/
int segmentShift;
/**
* Threshold at which the lookup/found counters are reset
*/
private static final int MAX_LOOKUP = 100000000;
/**
* Lock used while resetting the counters, to prevent concurrent resets
*/
private final Lock lock = new ReentrantLock();
/**
* Number of requests made to lookup a cache entry.
*/
private AtomicLong lookup = new AtomicLong(0);
/**
* Number of successful requests for cache entries.
*/
private AtomicLong found = new AtomicLong(0);
public LocalCache(int size) {
this.size = size;
}
public CacheObject get(String key) {
if (StringUtils.isBlank(key)) {
return null;
}
// count this lookup
lookup.incrementAndGet();
// reset the counters if they have grown past the threshold
if (lookup.get() > MAX_LOOKUP) {
if (lock.tryLock()) {
try {
lookup.set(0);
found.set(0);
}
finally {
lock.unlock();
}
}
}
int hash = hash(key.hashCode());
CacheObject ret = segmentFor(hash).getEntry(key);
if (ret != null) {
found.incrementAndGet();
}
return ret;
}
public void remove(String key) {
if (StringUtils.isBlank(key)) {
return;
}
int hash = hash(key.hashCode());
segmentFor(hash).remove(key);
return;
}
public void put(String key, CacheObject val) {
if (StringUtils.isBlank(key) || val == null) {
return;
}
int hash = hash(key.hashCode());
segmentFor(hash).addEntry(key, val);
return;
}
public synchronized void clearCache() {
for (int i = 0; i < segments.length; ++i) {
segments[i].clear();
}
}
public synchronized void reload() throws Exception {
clearCache();
init();
}
public synchronized void dumpLocalCache() throws Exception {
for (int i = 0; i < segments.length; ++i) {
String tmpDir = System.getProperty("java.io.tmpdir");
String fileName = tmpDir + File.separator + "localCache-dump-file" + i + ".cache";
File file = new File(fileName);
ObjectUtils.objectToFile(segments[i], file);
}
}
@SuppressWarnings("unchecked")
public synchronized void restoreLocalCache() throws Exception {
for (int i = 0; i < segments.length; ++i) {
String tmpDir = System.getProperty("java.io.tmpdir");
String fileName = tmpDir + File.separator + "localCache-dump-file" + i + ".cache";
File file = new File(fileName);
LRUMap<CacheObject> lruMap = (LRUMap<CacheObject>) ObjectUtils.fileToObject(file);
if (lruMap != null) {
Set<Map.Entry<String, SoftReference<CacheObject>>> set = lruMap.entrySet();
Iterator<Map.Entry<String, SoftReference<CacheObject>>> it = set.iterator();
while (it.hasNext()) {
Map.Entry<String, SoftReference<CacheObject>> entry = it.next();
if (entry.getValue() != null && entry.getValue().get() != null) {
segments[i].addEntry(entry.getKey(), entry.getValue().get());
}
}
}
}
}
/**
* Local cache hit rate, in percent; right after a counter reset it may read 0
*/
public int getHitRate() {
long query = lookup.get();
return query == 0 ? 0 : (int) ((found.get() * 100) / query);
}
/**
* Number of local cache lookups; right after a counter reset it may read 0
*/
public long getCount() {
return lookup.get();
}
public int size() {
final LRUMap<CacheObject>[] segments = this.segments;
long sum = 0;
for (int i = 0; i < segments.length; ++i) {
sum += segments[i].size();
}
if (sum > Integer.MAX_VALUE) {
return Integer.MAX_VALUE;
} else {
return (int) sum;
}
}
/**
* Returns the segment that should be used for key with given hash
*
* @param hash
* the hash code for the key
* @return the segment
*/
final LRUMap<CacheObject> segmentFor(int hash) {
return segments[(hash >>> segmentShift) & segmentMask];
}
/* ---------------- Small Utilities -------------- */
/**
* Applies a supplemental hash function to a given hashCode, which defends
* against poor quality hash functions. This is critical because
* ConcurrentHashMap uses power-of-two length hash tables, that otherwise
* encounter collisions for hashCodes that do not differ in lower or upper
* bits.
*/
private static int hash(int h) {
// Spread bits to regularize both segment and index locations,
// using variant of single-word Wang/Jenkins hash.
h += (h << 15) ^ 0xffffcd7d;
h ^= (h >>> 10);
h += (h << 3);
h ^= (h >>> 6);
h += (h << 2) + (h << 14);
return h ^ (h >>> 16);
}
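// Worked example (illustrative; not part of the original class): with the
// default concurrencyLevel of 16 that init() below computes, segmentShift
// is 28 and segmentMask is 15, so the top four bits of the spread hash
// pick the segment.
static void segmentDemo() {
int demoShift = 28, demoMask = 15; // the values init() derives for 16 segments
for (String key : new String[] { "NameToAge,Troy", "NameToAge,Anna" }) {
int h = hash(key.hashCode());
System.out.println(key + " -> segment " + ((h >>> demoShift) & demoMask));
}
}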
@SuppressWarnings("unchecked")
public void init() throws Exception {
int concurrencyLevel = 16;
int capacity = size;
if (capacity < 0 || concurrencyLevel <= 0) {
throw new IllegalArgumentException();
}
if (concurrencyLevel > MAX_SEGMENTS) {
concurrencyLevel = MAX_SEGMENTS;
}
// Find power-of-two sizes best matching arguments
int sshift = 0;
int ssize = 1;
while (ssize < concurrencyLevel) {
++sshift;
ssize <<= 1;
}
segmentShift = 32 - sshift;
segmentMask = ssize - 1;
this.segments = new LRUMap[ssize];
if (capacity > MAXIMUM_CAPACITY) {
capacity = MAXIMUM_CAPACITY;
}
int c = capacity / ssize;
if (c * ssize < capacity) {
++c;
}
// per-segment capacity: the smallest power of two covering capacity / ssize
int cap = 1;
while (cap < c) {
cap <<= 1;
}
for (int i = 0; i < this.segments.length; ++i) {
this.segments[i] = new LRUMap<CacheObject>(cap);
}
}
}
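Finally, a usage sketch. CacheObject is application-defined and never shown above, so a trivial Serializable stand-in is assumed here purely to make the example compile:
import java.io.Serializable;
// Stand-in for the application's real cache value type (an assumption).
class CacheObject implements Serializable {
int age;
CacheObject(int age) { this.age = age; }
}
public class LocalCacheDemo {
public static void main(String[] args) throws Exception {
LocalCache cache = new LocalCache(100000); // requested total capacity
cache.init(); // builds the 16 LRUMap segments
cache.put("NameToAge,Troy", new CacheObject(23));
CacheObject hit = cache.get("NameToAge,Troy");
System.out.println("found: " + (hit != null));
System.out.println("hit rate: " + cache.getHitRate() + "%");
}
}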