d) When totalCount > c: set deadline = currentTime.
The scheme above looks quite elegant, but it hides several serious problems:
1. Choosing the time slice.
Picking the time slice is genuinely hard: if memory blows up within a single time slice, the system simply crashes.
Admittedly, this can be mitigated by imposing external limits.
2. Data that predates the deadline cannot be removed quickly, so dead data lingers and wastes a great deal of memory.
Suppose data older than the deadline is about 10% of the total. Deletion happens only inside put, and suppose the put operations at any given tick cover 20% of the hash slots. Then 10% x 20% = 2%: each tick removes only about 2% of the accumulated expired data. As time passes the process settles into a steady state, and in that steady state memory consumption is at least 4-5 times capacity. That level of waste is unacceptable.
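As a sanity check on this arithmetic, here is a tiny, hypothetical Java simulation of the model above; the 10% expiry rate and the 2% reclaim rate are the assumptions taken from the estimate, not measurements:

public class DeadDataSimulation {
    public static void main(String[] args) {
        double capacity = 1000000;   // live entries the cache is allowed to hold
        double dead = 0;             // expired-but-not-yet-removed entries
        for (int tick = 1; tick <= 200; tick++) {
            dead += 0.10 * capacity; // 10% of the data expires each tick
            dead -= 0.02 * dead;     // puts reclaim only ~2% of the dead stock
            if (tick % 50 == 0) {
                System.out.printf("tick %d: memory = %.1fx capacity%n",
                        tick, (capacity + dead) / capacity);
            }
        }
        // Fixed point: 0.02 * dead ~ 0.10 * capacity => dead ~ 5x capacity,
        // i.e. total memory close to 6x capacity -- consistent with the
        // "at least 4-5x" estimate above.
    }
}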
In actual testing this scheme performed very badly, so it was ultimately abandoned.
5. ConcurrentLRUHashMap, approach three: lock striping across segments + a per-segment eviction list
[Strategy]:
1. Lock striping. The map is divided into multiple segments; each segment is locked independently, so segments do not interfere with one another.
2. Each segment maintains an internal doubly linked list (the eviction list). On every hit or insertion, the node is moved to the head of the eviction list.
3. Every put hashes the key to a segment and checks whether that segment has reached its capacity threshold; if it has, the node at the tail of the eviction list is removed.
[Implementation]
1. Redefine HashEntry
static class HashEntry<K, V> {
    /**
     * Key
     */
    final K key;
    /**
     * Hash value
     */
    final int hash;
    /**
     * Value
     */
    volatile V value;
    /**
     * Next pointer in the hash chain
     */
    final HashEntry<K, V> next;
    /**
     * Next node in the doubly linked list
     */
    HashEntry<K, V> linknext;
    /**
     * Previous node in the doubly linked list
     */
    HashEntry<K, V> linkpref;
    /**
     * Death flag
     */
    AtomicBoolean dead;
}
2. Define Segment
static final class Segment<K, V> extends ReentrantLock implements
        Serializable {
    private static final long serialVersionUID = 1L;
    transient int threshold;
    transient volatile int count;
    transient int modCount;
    transient volatile HashEntry<K, V>[] table;
    transient final HashEntry<K, V> header; // header node of the eviction list
}
3. The put operation
The full code is too long to inline here (see the attachment; it also appears in the complete source at the end of this article). A condensed sketch follows.
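The sketch below is distilled from the full source at the end of this article, with rehashing omitted for brevity; it shows the essential put path:

V put(K key, int hash, V value, boolean onlyIfAbsent) {
    lock();
    try {
        HashEntry<K, V>[] tab = table;
        int index = hash & (tab.length - 1);
        HashEntry<K, V> first = tab[index];
        HashEntry<K, V> e = first;
        while (e != null && (e.hash != hash || !key.equals(e.key)))
            e = e.next;
        V oldValue = null;
        if (e != null) {
            // Key already present: update in place, refresh LRU position
            oldValue = e.value;
            if (!onlyIfAbsent) {
                e.value = value;
                moveNodeToHeader(e);
            }
        } else {
            // New key: link into the bucket and the head of the eviction list
            ++modCount;
            HashEntry<K, V> newEntry = new HashEntry<K, V>(key, hash, first, value);
            tab[index] = newEntry;
            count = count + 1; // write-volatile
            addBefore(newEntry, header);
            removeEldestEntry(); // evict the LRU tail if over capacity
        }
        return oldValue;
    } finally {
        unlock();
    }
}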
4. The get operation
V get(Object key, int hash) {
    HashEntry<K, V> e = getFirst(hash);
    // Walk the hash chain looking for the key
    while (e != null) {
        if (e.hash == hash && key.equals(e.key)) {
            V v = e.value;
            // Move the node to the head of the eviction list
            moveNodeToHeader(e);
            if (v != null)
                return v;
            // In tab[index] = new HashEntry(key, hash, first, value),
            // the write to value and the write to tab[index] may be reordered,
            // so a reader can momentarily observe a null value. If that
            // happens, re-read under the lock: with the lock held the value
            // is guaranteed to be visible.
            return readValueUnderLock(e); // recheck
        }
        e = e.next;
    }
    return null;
}
6. ConcurrentLRUHashMap, approach four: per-entry timestamps + centralized eviction
The concrete approach:
1. Attach a timestamp to each node of a ConcurrentHashMap; a hit only updates that node's timestamp.
2. Centralized eviction: hits themselves never evict. Eviction runs as a batch pass, either when the map is full or on a timer (a sketch of such a pass follows the code below).
Code:
private static class CountableKey<K, V> implements Comparable<CountableKey<K, V>> {
public CountableKey(K key, V value) {
if (value == null) {
throw new NullPointerException("should not be null");
}
this.value = value;
this.key = key;
refreshTimeStamp();
}
public void refreshTimeStamp(){
timestamp.set(System.currentTimeMillis());
}
final V value;
final K key;
AtomicLong timestamp = new AtomicLong();
@Override
public int compareTo(CountableKey<K, V> o) {
long thisval = this.timestamp.get();
long anotherVal = o.timestamp.get();
return (thisval < anotherVal ? -1 : (thisval == anotherVal ? 0 : 1));
}
}
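To make the "centralized eviction" step concrete, here is a hypothetical sketch of one possible batch pass; it is not part of the original code, and the method name evictOldest and the capacity parameter are illustrative. It snapshots the wrappers, sorts them by timestamp via the Comparable above, and removes the oldest entries; concurrent touches make this best-effort rather than exact LRU. It assumes java.util.ArrayList, java.util.Collections, java.util.List, and java.util.concurrent.ConcurrentMap are imported.

static <K, V> void evictOldest(ConcurrentMap<K, CountableKey<K, V>> map, int capacity) {
    int excess = map.size() - capacity;
    if (excess <= 0)
        return;
    // Snapshot the wrappers and order them oldest-first by timestamp
    List<CountableKey<K, V>> snapshot = new ArrayList<CountableKey<K, V>>(map.values());
    Collections.sort(snapshot);
    for (int i = 0; i < excess && i < snapshot.size(); i++) {
        CountableKey<K, V> oldest = snapshot.get(i);
        // Remove only if the mapping is still the same wrapper
        map.remove(oldest.key, oldest);
    }
}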
Advantages of this scheme:
1. Fast get: a get costs "ConcurrentHashMap get time + one timestamp update".
2. Fast put in the common case: as long as the capacity limit has not been reached, a put costs only "ConcurrentHashMap put time". Once the limit is reached, a put additionally pays for the eviction/cleanup pass.
Potential problems with this scheme:
1. Hit rate: the hit rate of this algorithm matches that of LinkedHashMap.
2. Cleanup policy:
- Clean when full. Drawback: at that moment, writes can stall while waiting for the cleanup to finish.
- Clean on a timer. Drawbacks: 1. extra overhead; 2. read inconsistency still cannot be avoided.
7. Comparing the ConcurrentLRUHashMap implementations
This article is only meant to start the discussion; I hope to see more good ConcurrentLRUHashMap implementations. Within the limits of my testing, the second approach does not evict well in practice and can eventually lead to an out-of-memory failure. The comparison is summarized in the table below.
| Criterion          | Approach 1      | Approach 2 | Approach 3 | Approach 4 |
| Performance        | Poor            | Good       | Good       | Good       |
| Thread safety      | Absolutely safe | Safe       | Safe       | Safe       |
| Memory consumption | Moderate        | High       | Moderate   | Moderate   |
| Stability          | Stable          | Unstable   | Stable     | Unstable   |
Overall, the third approach comes out best.
Comparing approach one and approach three:
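A simple, hypothetical harness like the one below can drive either implementation under a mixed read/write load and report rough throughput; the thread count, key range, and read/write mix are arbitrary choices, not values from the original tests:

import java.util.Map;
import java.util.concurrent.CountDownLatch;

public class LruMapBenchmark {
    public static void main(String[] args) throws InterruptedException {
        // Swap in approach one or the ConcurrentLRUHashMap whose source
        // follows, and compare the reported numbers.
        final Map<Integer, Integer> map = new ConcurrentLRUHashMap<Integer, Integer>(1000);
        final int threads = 8;
        final int opsPerThread = 1000000;
        final CountDownLatch done = new CountDownLatch(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < opsPerThread; i++) {
                        int key = i % 50000;
                        if ((i & 7) == 0)      // roughly 1 write per 8 operations
                            map.put(key, i);
                        else
                            map.get(key);
                    }
                    done.countDown();
                }
            }).start();
        }
        done.await();
        long elapsedMs = Math.max(1, (System.nanoTime() - start) / 1000000);
        System.out.println((threads * (long) opsPerThread * 1000 / elapsedMs) + " ops/s");
    }
}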
The full source of approach three follows:
package com.googlecode.jue.util;
import java.io.IOException;
import java.io.Serializable;
import java.util.AbstractCollection;
import java.util.AbstractMap;
import java.util.AbstractSet;
import java.util.Collection;
import java.util.ConcurrentModificationException;
import java.util.Enumeration;
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Set;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;
/**
 * An LRU map adapted from ConcurrentHashMap.
 *
 * @author noah
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 */
public class ConcurrentLRUHashMap<K, V> extends AbstractMap<K, V> implements
ConcurrentMap<K, V>, Serializable {
/*
* The basic strategy is to subdivide the table among Segments, each of
* which itself is a concurrently readable hash table.
*/
/* ---------------- Constants -------------- */
/**
*
*/
private static final long serialVersionUID = -5031526786765467550L;
/**
 * Default maximum capacity of each Segment
 */
static final int DEFAULT_SEGEMENT_MAX_CAPACITY = 100;
/**
* The default load factor for this table, used when not otherwise specified
* in a constructor.
*/
static final float DEFAULT_LOAD_FACTOR = 0.75f;
/**
* The default concurrency level for this table, used when not otherwise
* specified in a constructor.
*/
static final int DEFAULT_CONCURRENCY_LEVEL = 16;
/**
* The maximum capacity, used if a higher value is implicitly specified by
* either of the constructors with arguments. MUST be a power of two <=
* 1<<30 to ensure that entries are indexable using ints.
*/
static final int MAXIMUM_CAPACITY = 1 << 30;
/**
* The maximum number of segments to allow; used to bound constructor
* arguments.
*/
static final int MAX_SEGMENTS = 1 << 16; // slightly conservative
/**
* Number of unsynchronized retries in size and containsValue methods before
* resorting to locking. This is used to avoid unbounded retries if tables
* undergo continuous modification which would make it impossible to obtain
* an accurate result.
*/
static final int RETRIES_BEFORE_LOCK = 2;
/* ---------------- Fields -------------- */
/**
* Mask value for indexing into segments. The upper bits of a key's hash
* code are used to choose the segment.
*/
final int segmentMask;
/**
* Shift value for indexing within segments.
*/
final int segmentShift;
/**
* The segments, each of which is a specialized hash table
*/
final Segment<K, V>[] segments;
transient Set<K> keySet;
transient Set<Map.Entry<K, V>> entrySet;
transient Collection<V> values;
/* ---------------- Small Utilities -------------- */
/**
* Applies a supplemental hash function to a given hashCode, which defends
* against poor quality hash functions. This is critical because
* ConcurrentHashMap uses power-of-two length hash tables, that otherwise
* encounter collisions for hashCodes that do not differ in lower or upper
* bits.
*/
private static int hash(int h) {
// Spread bits to regularize both segment and index locations,
// using variant of single-word Wang/Jenkins hash.
h += (h << 15) ^ 0xffffcd7d;
h ^= (h >>> 10);
h += (h << 3);
h ^= (h >>> 6);
h += (h << 2) + (h << 14);
return h ^ (h >>> 16);
}
/**
* Returns the segment that should be used for key with given hash
*
* @param hash
* the hash code for the key
* @return the segment
*/
final Segment<K, V> segmentFor(int hash) {
return segments[(hash >>> segmentShift) & segmentMask];
}
/* ---------------- Inner Classes -------------- */
/**
 * HashEntry, modified from the original ConcurrentHashMap version
 */
static final class HashEntry<K, V> {
/**
 * Key
 */
final K key;
/**
 * Hash value
 */
final int hash;
/**
 * Value
 */
volatile V value;
/**
 * Next pointer in the hash chain
 */
final HashEntry<K, V> next;
/**
 * Next node in the doubly linked list
 */
HashEntry<K, V> linkNext;
/**
 * Previous node in the doubly linked list
 */
HashEntry<K, V> linkPrev;
/**
 * Death flag
 */
AtomicBoolean dead;
HashEntry(K key, int hash, HashEntry<K, V> next, V value) {
this.key = key;
this.hash = hash;
this.next = next;
this.value = value;
dead = new AtomicBoolean(false);
}
@SuppressWarnings("unchecked")
static final <K, V> HashEntry<K, V>[] newArray(int i) {
return new HashEntry[i];
}
}
/**
 * Segment, modified from the original: it additionally maintains a doubly
 * linked list (the eviction list)
 *
 * @author noah
 *
 * @param <K> the type of keys
 * @param <V> the type of values
 */
static final class Segment<K, V> extends ReentrantLock implements Serializable {
/*
* Segments maintain a table of entry lists that are ALWAYS kept in a
* consistent state, so can be read without locking. Next fields of
* nodes are immutable (final). All list additions are performed at the
* front of each bin. This makes it easy to check changes, and also fast
* to traverse. When nodes would otherwise be changed, new nodes are
* created to replace them. This works well for hash tables since the
* bin lists tend to be short. (The average length is less than two for
* the default load factor threshold.)
*
* Read operations can thus proceed without locking, but rely on
* selected uses of volatiles to ensure that completed write operations
* performed by other threads are noticed. For most purposes, the
* "count" field, tracking the number of elements, serves as that
* volatile variable ensuring visibility. This is convenient because
* this field needs to be read in many read operations anyway:
*
* - All (unsynchronized) read operations must first read the "count"
* field, and should not look at table entries if it is 0.
*
* - All (synchronized) write operations should write to the "count"
* field after structurally changing any bin. The operations must not
* take any action that could even momentarily cause a concurrent read
* operation to see inconsistent data. This is made easier by the nature
* of the read operations in Map. For example, no operation can reveal
* that the table has grown but the threshold has not yet been updated,
* so there are no atomicity requirements for this with respect to
* reads.
*
* As a guide, all critical volatile reads and writes to the count field
* are marked in code comments.
*/
private static final long serialVersionUID = 2249069246763182397L;
/**
* The number of elements in this segment's region.
*/
transient volatile int count;
/**
* Number of updates that alter the size of the table. This is used
* during bulk-read methods to make sure they see a consistent snapshot:
* If modCounts change during a traversal of segments computing size or
* checking containsValue, then we might have an inconsistent view of
* state so (usually) must retry.
*/
transient int modCount;
/**
* The table is rehashed when its size exceeds this threshold. (The
* value of this field is always (int)(capacity *
* loadFactor).)
*/
transient int threshold;
/**
* The per-segment table.
*/
transient volatile HashEntry<K, V>[] table;
/**
* The load factor for the hash table. Even though this value is same
* for all segments, it is replicated to avoid needing links to outer
* object.
*
* @serial
*/
final float loadFactor;
/**
 * Header node of the eviction list
 */
transient final HashEntry<K, V> header;
/**
 * Maximum capacity of this Segment
 */
final int maxCapacity;
Segment(int maxCapacity, float lf, ConcurrentLRUHashMap<K, V> lruMap) {
this.maxCapacity = maxCapacity;
loadFactor = lf;
setTable(HashEntry.<K, V> newArray(maxCapacity));
header = new HashEntry<K, V>(null, -1, null, null);
header.linkNext = header;
header.linkPrev = header;
}
@SuppressWarnings("unchecked")
static final <K, V> Segment<K, V>[] newArray(int i) {
return new Segment[i];
}
/**
* Sets table to new HashEntry array. Call only while holding lock or in
* constructor.
*/
void setTable(HashEntry<K, V>[] newTable) {
threshold = (int) (newTable.length * loadFactor);
table = newTable;
}
/**
* Returns properly casted first entry of bin for given hash.
*/
HashEntry<K, V> getFirst(int hash) {
HashEntry<K, V>[] tab = table;
return tab[hash & (tab.length - 1)];
}
/**
* Reads value field of an entry under lock. Called if value field ever
* appears to be null. This is possible only if a compiler happens to
* reorder a HashEntry initialization with its table assignment, which
* is legal under memory model but is not known to ever occur.
*/
V readValueUnderLock(HashEntry<K, V> e) {
lock();
try {
return e.value;
} finally {
unlock();
}
}
/* Specialized implementations of map methods */
V get(Object key, int hash) {
lock();
try {
if (count != 0) { // read-volatile
HashEntry<K, V> e = getFirst(hash);
while (e != null) {
if (e.hash == hash && key.equals(e.key)) {
V v = e.value;
// Move the node to just before the header (most recently used)
moveNodeToHeader(e);
if (v != null)
return v;
return readValueUnderLock(e); // recheck
}
e = e.next;
}
}
return null;
} finally {
unlock();
}
}
/**
 * Move the node to just before the header node
 *
 * @param entry
 */
void moveNodeToHeader(HashEntry<K, V> entry) {
// Unlink it first, then insert it just before the header
removeNode(entry);
addBefore(entry, header);
}
/**
 * Insert the node given as the first argument just before the node given
 * as the second argument
 *
 * @param newEntry
 *            the node to insert
 * @param entry
 *            the node to insert before
 */
void addBefore(HashEntry<K, V> newEntry, HashEntry<K, V> entry) {
newEntry.linkNext = entry;
newEntry.linkPrev = entry.linkPrev;
entry.linkPrev.linkNext = newEntry;
entry.linkPrev = newEntry;
}
/**
 * Remove this entry from the doubly linked list
 *
 * @param entry
 */
void removeNode(HashEntry<K, V> entry) {
entry.linkPrev.linkNext = entry.linkNext;
entry.linkNext.linkPrev = entry.linkPrev;
}
boolean containsKey(Object key, int hash) {
lock();
try {
if (count != 0) { // read-volatile
HashEntry<K, V> e = getFirst(hash);
while (e != null) {
if (e.hash == hash && key.equals(e.key)) {
moveNodeToHeader(e);
return true;
}
e = e.next;
}
}
return false;
} finally {
unlock();
}
}
boolean containsValue(Object value) {
lock();
try {
if (count != 0) { // read-volatile
HashEntry<K, V>[] tab = table;
int len = tab.length;
for (int i = 0; i < len; i++) {
for (HashEntry<K, V> e = tab[i]; e != null; e = e.next) {
V v = e.value;
if (v == null) // recheck
v = readValueUnderLock(e);
if (value.equals(v)) {
moveNodeToHeader(e);
return true;
}
}
}
}
return false;
} finally {
unlock();
}
}
boolean replace(K key, int hash, V oldValue, V newValue) {
lock();
try {
HashEntry<K, V> e = getFirst(hash);
while (e != null && (e.hash != hash || !key.equals(e.key)))
e = e.next;
boolean replaced = false;
if (e != null && oldValue.equals(e.value)) {
replaced = true;
e.value = newValue;
// Move to the head of the eviction list
moveNodeToHeader(e);
}
return replaced;
} finally {
unlock();
}
}
V replace(K key, int hash, V newValue) {
lock();
try {
HashEntry<K, V> e = getFirst(hash);
while (e != null && (e.hash != hash || !key.equals(e.key)))
e = e.next;
V oldValue = null;
if (e != null) {
oldValue = e.value;
e.value = newValue;
// Move to the head of the eviction list
moveNodeToHeader(e);
}
return oldValue;
} finally {
unlock();
}
}
V put(K key, int hash, V value, boolean onlyIfAbsent) {
lock();
try {
int c = count;
if (c++ > threshold) // ensure capacity
rehash();
HashEntry<K, V>[] tab = table;
int index = hash & (tab.length - 1);
HashEntry<K, V> first = tab[index];
HashEntry<K, V> e = first;
while (e != null && (e.hash != hash || !key.equals(e.key)))
e = e.next;
V oldValue = null;
if (e != null) {
oldValue = e.value;
if (!onlyIfAbsent) {
e.value = value;
// Move to the head of the eviction list
moveNodeToHeader(e);
}
} else {
oldValue = null;
++modCount;
HashEntry<K, V> newEntry = new HashEntry<K, V>(key, hash, first, value);
tab[index] = newEntry;
count = c; // write-volatile
// Add to the head of the eviction list
addBefore(newEntry, header);
// Evict the eldest entry if the maximum capacity has been exceeded
removeEldestEntry();
}
return oldValue;
} finally {
unlock();
}
}
void rehash() {
HashEntry<K, V>[] oldTable = table;
int oldCapacity = oldTable.length;
if (oldCapacity >= MAXIMUM_CAPACITY)
return;
/*
* Reclassify nodes in each list to new Map. Because we are using
* power-of-two expansion, the elements from each bin must either
* stay at same index, or move with a power of two offset. We
* eliminate unnecessary node creation by catching cases where old
* nodes can be reused because their next fields won't change.
* Statistically, at the default threshold, only about one-sixth of
* them need cloning when a table doubles. The nodes they replace
* will be garbage collectable as soon as they are no longer
* referenced by any reader thread that may be in the midst of
* traversing table right now.
*/
HashEntry<K, V>[] newTable = HashEntry.newArray(oldCapacity << 1);
threshold = (int) (newTable.length * loadFactor);
int sizeMask = newTable.length - 1;
for (int i = 0; i < oldCapacity; i++) {
// We need to guarantee that any existing reads of old Map can
// proceed. So we cannot yet null out each bin.
HashEntry<K, V> e = oldTable[i];
if (e != null) {
HashEntry<K, V> next = e.next;
int idx = e.hash & sizeMask;
// Single node on list
if (next == null)
newTable[idx] = e;
else {
// Reuse trailing consecutive sequence at same slot
HashEntry<K, V> lastRun = e;
int lastIdx = idx;
for (HashEntry<K, V> last = next; last != null; last = last.next) {
int k = last.hash & sizeMask;
if (k != lastIdx) {
lastIdx = k;
lastRun = last;
}
}
newTable[lastIdx] = lastRun;
// Clone all remaining nodes
for (HashEntry<K, V> p = e; p != lastRun; p = p.next) {
int k = p.hash & sizeMask;
HashEntry<K, V> n = newTable[k];
HashEntry<K, V> newEntry = new HashEntry<K, V>(
p.key, p.hash, n, p.value);
// update by Noah
newEntry.linkNext = p.linkNext;
newEntry.linkPrev = p.linkPrev;
newTable[k] = newEntry;
}
}
}
}
table = newTable;
}
/**
* Remove; match on key only if value null, else match both.
*/
V remove(Object key, int hash, Object value) {
lock();
try {
int c = count - 1;
HashEntry<K, V>[] tab = table;
int index = hash & (tab.length - 1);
HashEntry<K, V> first = tab[index];
HashEntry<K, V> e = first;
while (e != null && (e.hash != hash || !key.equals(e.key)))
e = e.next;
V oldValue = null;
if (e != null) {
V v = e.value;
if (value == null || value.equals(v)) {
oldValue = v;
// All entries following removed node can stay
// in list, but all preceding ones need to be
// cloned.
++modCount;
HashEntry<K, V> newFirst = e.next;
for (HashEntry<K, V> p = first; p != e; p = p.next) {
newFirst = new HashEntry<K, V>(p.key, p.hash,
newFirst, p.value);
newFirst.linkNext = p.linkNext;
newFirst.linkPrev = p.linkPrev;
}
tab[index] = newFirst;
count = c; // write-volatile
// Unlink the node from the eviction list
removeNode(e);
}
}
return oldValue;
} finally {
unlock();
}
}
/**
 * Remove the eldest entry (the tail of the eviction list)
 */
void removeEldestEntry() {
if (count > this.maxCapacity) {
HashEntry<K, V> eldest = header.linkNext;
remove(eldest.key, eldest.hash, null);
}
}
void clear() {
if (count != 0) {
lock();
try {
HashEntry<K, V>[] tab = table;
for (int i = 0; i < tab.length; i++)
tab[i] = null;
++modCount;
count = 0; // write-volatile
} finally {
unlock();
}
}
}
}
/**
 * Creates a ConcurrentLRUHashMap with the specified parameters
 *
 * @param segementCapacity
 *            the maximum capacity of each Segment
 * @param loadFactor
 *            the load factor
 * @param concurrencyLevel
 *            the concurrency level
 */
public ConcurrentLRUHashMap(int segementCapacity, float loadFactor,
int concurrencyLevel) {
if (!(loadFactor > 0) || segementCapacity < 0 || concurrencyLevel <= 0)
throw new IllegalArgumentException();
if (concurrencyLevel > MAX_SEGMENTS)
concurrencyLevel = MAX_SEGMENTS;
// Find power-of-two sizes best matching arguments
int sshift = 0;
int ssize = 1;
while (ssize < concurrencyLevel) {
++sshift;
ssize <<= 1;
}
segmentShift = 32 - sshift;
segmentMask = ssize - 1;
this.segments = Segment.newArray(ssize);
for (int i = 0; i < this.segments.length; ++i)
this.segments[i] = new Segment<K, V>(segementCapacity, loadFactor, this);
}
/**
 * Creates a ConcurrentLRUHashMap with the specified parameters
 *
 * @param segementCapacity
 *            the maximum capacity of each Segment
 * @param loadFactor
 *            the load factor
 */
public ConcurrentLRUHashMap(int segementCapacity, float loadFactor) {
this(segementCapacity, loadFactor, DEFAULT_CONCURRENCY_LEVEL);
}
/**
 * Creates a ConcurrentLRUHashMap with the specified parameters
 *
 * @param segementCapacity
 *            the maximum capacity of each Segment
 */
public ConcurrentLRUHashMap(int segementCapacity) {
this(segementCapacity, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
}
/**
 * Creates a ConcurrentLRUHashMap with default parameters: at most 100
 * entries per Segment (DEFAULT_SEGEMENT_MAX_CAPACITY), load factor 0.75,
 * and concurrency level 16
 */
public ConcurrentLRUHashMap() {
this(DEFAULT_SEGEMENT_MAX_CAPACITY, DEFAULT_LOAD_FACTOR,
DEFAULT_CONCURRENCY_LEVEL);
}
/**
* Returns true if this map contains no key-value mappings.
*
* @return true if this map contains no key-value mappings
*/
public boolean isEmpty() {
final Segment<K, V>[] segments = this.segments;
/*
* We keep track of per-segment modCounts to avoid ABA problems in which
* an element in one segment was added and in another removed during
* traversal, in which case the table was never actually empty at any
* point. Note the similar use of modCounts in the size() and
* containsValue() methods, which are the only other methods also
* susceptible to ABA problems.
*/
int[] mc = new int[segments.length];
int mcsum = 0;
for (int i = 0; i < segments.length; ++i) {
if (segments[i].count != 0)
return false;
else
mcsum += mc[i] = segments[i].modCount;
}
// If mcsum happens to be zero, then we know we got a snapshot
// before any modifications at all were made. This is
// probably common enough to bother tracking.
if (mcsum != 0) {
for (int i = 0; i < segments.length; ++i) {
if (segments[i].count != 0 || mc[i] != segments[i].modCount)
return false;
}
}
return true;
}
/**
* Returns the number of key-value mappings in this map. If the map contains
* more than Integer.MAX_VALUE elements, returns
* Integer.MAX_VALUE.
*
* @return the number of key-value mappings in this map
*/
public int size() {
final Segment<K, V>[] segments = this.segments;
long sum = 0;
long check = 0;
int[] mc = new int[segments.length];
// Try a few times to get accurate count. On failure due to
// continuous async changes in table, resort to locking.
for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) {
check = 0;
sum = 0;
int mcsum = 0;
for (int i = 0; i < segments.length; ++i) {
sum += segments[i].count;
mcsum += mc[i] = segments[i].modCount;
}
if (mcsum != 0) {
for (int i = 0; i < segments.length; ++i) {
check += segments[i].count;
if (mc[i] != segments[i].modCount) {
check = -1; // force retry
break;
}
}
}
if (check == sum)
break;
}
if (check != sum) { // Resort to locking all segments
sum = 0;
for (int i = 0; i < segments.length; ++i)
segments[i].lock();
for (int i = 0; i < segments.length; ++i)
sum += segments[i].count;
for (int i = 0; i < segments.length; ++i)
segments[i].unlock();
}
if (sum > Integer.MAX_VALUE)
return Integer.MAX_VALUE;
else
return (int) sum;
}
/**
* Returns the value to which the specified key is mapped, or {@code null}
* if this map contains no mapping for the key.
*
*
* More formally, if this map contains a mapping from a key {@code k} to a
* value {@code v} such that {@code key.equals(k)}, then this method returns
* {@code v}; otherwise it returns {@code null}. (There can be at most one
* such mapping.)
*
* @throws NullPointerException
* if the specified key is null
*/
public V get(Object key) {
int hash = hash(key.hashCode());
return segmentFor(hash).get(key, hash);
}
/**
* Tests if the specified object is a key in this table.
*
* @param key
* possible key
* @return true if and only if the specified object is a key in
* this table, as determined by the equals method;
* false otherwise.
* @throws NullPointerException
* if the specified key is null
*/
public boolean containsKey(Object key) {
int hash = hash(key.hashCode());
return segmentFor(hash).containsKey(key, hash);
}
/**
* Returns true if this map maps one or more keys to the specified
* value. Note: This method requires a full internal traversal of the hash
* table, and so is much slower than method containsKey.
*
* @param value
* value whose presence in this map is to be tested
* @return true if this map maps one or more keys to the specified
* value
* @throws NullPointerException
* if the specified value is null
*/
public boolean containsValue(Object value) {
if (value == null)
throw new NullPointerException();
// See explanation of modCount use above
final Segment<K, V>[] segments = this.segments;
int[] mc = new int[segments.length];
// Try a few times without locking
for (int k = 0; k < RETRIES_BEFORE_LOCK; ++k) {
int mcsum = 0;
for (int i = 0; i < segments.length; ++i) {
mcsum += mc[i] = segments[i].modCount;
if (segments[i].containsValue(value))
return true;
}
boolean cleanSweep = true;
if (mcsum != 0) {
for (int i = 0; i < segments.length; ++i) {
if (mc[i] != segments[i].modCount) {
cleanSweep = false;
break;
}
}
}
if (cleanSweep)
return false;
}
// Resort to locking all segments
for (int i = 0; i < segments.length; ++i)
segments[i].lock();
boolean found = false;
try {
for (int i = 0; i < segments.length; ++i) {
if (segments[i].containsValue(value)) {
found = true;
break;
}
}
} finally {
for (int i = 0; i < segments.length; ++i)
segments[i].unlock();
}
return found;
}
/**
* Legacy method testing if some key maps into the specified value in this
* table. This method is identical in functionality to
* {@link #containsValue}, and exists solely to ensure full compatibility
* with class {@link java.util.Hashtable}, which supported this method prior
* to introduction of the Java Collections framework.
*
* @param value
* a value to search for
* @return true if and only if some key maps to the value
* argument in this table as determined by the equals
* method; false otherwise
* @throws NullPointerException
* if the specified value is null
*/
public boolean contains(Object value) {
return containsValue(value);
}
/**
 * Puts a key-value pair, locking the Segment that owns the key
 */
public V put(K key, V value) {
if (value == null)
throw new NullPointerException();
int hash = hash(key.hashCode());
return segmentFor(hash).put(key, hash, value, false);
}
/**
 * Puts a key-value pair only if the key is not already present
 */
public V putIfAbsent(K key, V value) {
if (value == null)
throw new NullPointerException();
int hash = hash(key.hashCode());
return segmentFor(hash).put(key, hash, value, true);
}
/**
* Copies all of the mappings from the specified map to this one. These
* mappings replace any mappings that this map had for any of the keys
* currently in the specified map.
*
* @param m
* mappings to be stored in this map
*/
public void putAll(Map<? extends K, ? extends V> m) {
for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
put(e.getKey(), e.getValue());
}
/**
* Removes the key (and its corresponding value) from this map. This method
* does nothing if the key is not in the map.
*
* @param key
* the key that needs to be removed
* @return the previous value associated with key, or null
* if there was no mapping for key
* @throws NullPointerException
* if the specified key is null
*/
public V remove(Object key) {
int hash = hash(key.hashCode());
return segmentFor(hash).remove(key, hash, null);
}
/**
* {@inheritDoc}
*
* @throws NullPointerException
* if the specified key is null
*/
public boolean remove(Object key, Object value) {
int hash = hash(key.hashCode());
if (value == null)
return false;
return segmentFor(hash).remove(key, hash, value) != null;
}
/**
* {@inheritDoc}
*
* @throws NullPointerException
* if any of the arguments are null
*/
public boolean replace(K key, V oldValue, V newValue) {
if (oldValue == null || newValue == null)
throw new NullPointerException();
int hash = hash(key.hashCode());
return segmentFor(hash).replace(key, hash, oldValue, newValue);
}
/**
* {@inheritDoc}
*
* @return the previous value associated with the specified key, or
* null if there was no mapping for the key
* @throws NullPointerException
* if the specified key or value is null
*/
public V replace(K key, V value) {
if (value == null)
throw new NullPointerException();
int hash = hash(key.hashCode());
return segmentFor(hash).replace(key, hash, value);
}
/**
* Removes all of the mappings from this map.
*/
public void clear() {
for (int i = 0; i < segments.length; ++i)
segments[i].clear();
}
/**
* Returns a {@link Set} view of the keys contained in this map. The set is
* backed by the map, so changes to the map are reflected in the set, and
* vice-versa. The set supports element removal, which removes the
* corresponding mapping from this map, via the Iterator.remove,
* Set.remove, removeAll, retainAll, and
* clear operations. It does not support the add or
* addAll operations.
*
*
* The view's iterator is a "weakly consistent" iterator that will
* never throw {@link ConcurrentModificationException}, and guarantees to
* traverse elements as they existed upon construction of the iterator, and
* may (but is not guaranteed to) reflect any modifications subsequent to
* construction.
*/
public Set<K> keySet() {
Set<K> ks = keySet;
return (ks != null) ? ks : (keySet = new KeySet());
}
/**
* Returns a {@link Collection} view of the values contained in this map.
* The collection is backed by the map, so changes to the map are reflected
* in the collection, and vice-versa. The collection supports element
* removal, which removes the corresponding mapping from this map, via the
* Iterator.remove, Collection.remove, removeAll,
* retainAll, and clear operations. It does not support
* the add or addAll operations.
*
*
* The view's iterator is a "weakly consistent" iterator that will
* never throw {@link ConcurrentModificationException}, and guarantees to
* traverse elements as they existed upon construction of the iterator, and
* may (but is not guaranteed to) reflect any modifications subsequent to
* construction.
*/
public Collection<V> values() {
Collection<V> vs = values;
return (vs != null) ? vs : (values = new Values());
}
/**
* Returns a {@link Set} view of the mappings contained in this map. The set
* is backed by the map, so changes to the map are reflected in the set, and
* vice-versa. The set supports element removal, which removes the
* corresponding mapping from the map, via the Iterator.remove,
* Set.remove, removeAll, retainAll, and
* clear operations. It does not support the add or
* addAll operations.
*
*
* The view's iterator is a "weakly consistent" iterator that will
* never throw {@link ConcurrentModificationException}, and guarantees to
* traverse elements as they existed upon construction of the iterator, and
* may (but is not guaranteed to) reflect any modifications subsequent to
* construction.
*/
public Set<Map.Entry<K, V>> entrySet() {
Set<Map.Entry<K, V>> es = entrySet;
return (es != null) ? es : (entrySet = new EntrySet());
}
/**
* Returns an enumeration of the keys in this table.
*
* @return an enumeration of the keys in this table
* @see #keySet()
*/
public Enumeration<K> keys() {
return new KeyIterator();
}
/**
* Returns an enumeration of the values in this table.
*
* @return an enumeration of the values in this table
* @see #values()
*/
public Enumeration<V> elements() {
return new ValueIterator();
}
/* ---------------- Iterator Support -------------- */
abstract class HashIterator {
int nextSegmentIndex;
int nextTableIndex;
HashEntry<K, V>[] currentTable;
HashEntry<K, V> nextEntry;
HashEntry<K, V> lastReturned;
HashIterator() {
nextSegmentIndex = segments.length - 1;
nextTableIndex = -1;
advance();
}
public boolean hasMoreElements() {
return hasNext();
}
final void advance() {
if (nextEntry != null && (nextEntry = nextEntry.next) != null)
return;
while (nextTableIndex >= 0) {
if ((nextEntry = currentTable[nextTableIndex--]) != null)
return;
}
while (nextSegmentIndex >= 0) {
Segment<K, V> seg = segments[nextSegmentIndex--];
if (seg.count != 0) {
currentTable = seg.table;
for (int j = currentTable.length - 1; j >= 0; --j) {
if ((nextEntry = currentTable[j]) != null) {
nextTableIndex = j - 1;
return;
}
}
}
}
}
public boolean hasNext() {
return nextEntry != null;
}
HashEntry<K, V> nextEntry() {
if (nextEntry == null)
throw new NoSuchElementException();
lastReturned = nextEntry;
advance();
return lastReturned;
}
public void remove() {
if (lastReturned == null)
throw new IllegalStateException();
ConcurrentLRUHashMap.this.remove(lastReturned.key);
lastReturned = null;
}
}
final class KeyIterator extends HashIterator implements Iterator<K>,
Enumeration<K> {
public K next() {
return super.nextEntry().key;
}
public K nextElement() {
return super.nextEntry().key;
}
}
final class ValueIterator extends HashIterator implements Iterator<V>,
Enumeration<V> {
public V next() {
return super.nextEntry().value;
}
public V nextElement() {
return super.nextEntry().value;
}
}
/**
* Custom Entry class used by EntryIterator.next(), that relays setValue
* changes to the underlying map.
*/
final class WriteThroughEntry extends AbstractMap.SimpleEntry<K, V> {
/**
*
*/
private static final long serialVersionUID = -2545938966452012894L;
WriteThroughEntry(K k, V v) {
super(k, v);
}
/**
* Set our entry's value and write through to the map. The value to
* return is somewhat arbitrary here. Since a WriteThroughEntry does not
* necessarily track asynchronous changes, the most recent "previous"
* value could be different from what we return (or could even have been
* removed in which case the put will re-establish). We do not and
* cannot guarantee more.
*/
public V setValue(V value) {
if (value == null)
throw new NullPointerException();
V v = super.setValue(value);
ConcurrentLRUHashMap.this.put(getKey(), value);
return v;
}
}
final class EntryIterator extends HashIterator implements
Iterator<Map.Entry<K, V>> {
public Map.Entry<K, V> next() {
HashEntry<K, V> e = super.nextEntry();
return new WriteThroughEntry(e.key, e.value);
}
}
final class KeySet extends AbstractSet<K> {
public Iterator<K> iterator() {
return new KeyIterator();
}
public int size() {
return ConcurrentLRUHashMap.this.size();
}
public boolean contains(Object o) {
return ConcurrentLRUHashMap.this.containsKey(o);
}
public boolean remove(Object o) {
return ConcurrentLRUHashMap.this.remove(o) != null;
}
public void clear() {
ConcurrentLRUHashMap.this.clear();
}
}
final class Values extends AbstractCollection<V> {
public Iterator<V> iterator() {
return new ValueIterator();
}
public int size() {
return ConcurrentLRUHashMap.this.size();
}
public boolean contains(Object o) {
return ConcurrentLRUHashMap.this.containsValue(o);
}
public void clear() {
ConcurrentLRUHashMap.this.clear();
}
}
final class EntrySet extends AbstractSet<Map.Entry<K, V>> {
public Iterator<Map.Entry<K, V>> iterator() {
return new EntryIterator();
}
public boolean contains(Object o) {
if (!(o instanceof Map.Entry))
return false;
Map.Entry<?, ?> e = (Map.Entry<?, ?>) o;
V v = ConcurrentLRUHashMap.this.get(e.getKey());
return v != null && v.equals(e.getValue());
}
public boolean remove(Object o) {
if (!(o instanceof Map.Entry))
return false;
Map.Entry<?, ?> e = (Map.Entry<?, ?>) o;
return ConcurrentLRUHashMap.this.remove(e.getKey(), e.getValue());
}
public int size() {
return ConcurrentLRUHashMap.this.size();
}
public void clear() {
ConcurrentLRUHashMap.this.clear();
}
}
/* ---------------- Serialization Support -------------- */
/**
* Save the state of the ConcurrentLRUHashMap instance to a stream
* (i.e., serialize it).
*
* @param s
* the stream
* @serialData the key (Object) and value (Object) for each key-value
* mapping, followed by a null pair. The key-value mappings are
* emitted in no particular order.
*/
private void writeObject(java.io.ObjectOutputStream s) throws IOException {
s.defaultWriteObject();
for (int k = 0; k < segments.length; ++k) {
Segment<K, V> seg = segments[k];
seg.lock();
try {
HashEntry<K, V>[] tab = seg.table;
for (int i = 0; i < tab.length; ++i) {
for (HashEntry<K, V> e = tab[i]; e != null; e = e.next) {
s.writeObject(e.key);
s.writeObject(e.value);
}
}
} finally {
seg.unlock();
}
}
s.writeObject(null);
s.writeObject(null);
}
/**
* Reconstitute the ConcurrentLRUHashMap instance from a stream (i.e.,
* deserialize it).
*
* @param s
* the stream
*/
@SuppressWarnings("unchecked")
private void readObject(java.io.ObjectInputStream s) throws IOException,
ClassNotFoundException {
s.defaultReadObject();
// Initialize each segment to be minimally sized, and let grow.
for (int i = 0; i < segments.length; ++i) {
segments[i].setTable(new HashEntry[1]);
}
// Read the keys and values, and put the mappings in the table
for (;;) {
K key = (K) s.readObject();
V value = (V) s.readObject();
if (key == null)
break;
put(key, value);
}
}
}
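Finally, a short usage sketch (not part of the original source; the capacity and keys are illustrative). With at most 2 entries in a single segment, inserting a third entry must evict the least recently used one:

import java.util.concurrent.ConcurrentMap;

public class ConcurrentLRUHashMapDemo {
    public static void main(String[] args) {
        // 2 entries per segment, load factor 0.75, concurrency level 1
        ConcurrentMap<String, Integer> cache =
                new ConcurrentLRUHashMap<String, Integer>(2, 0.75f, 1);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");        // touch "a": it becomes most recently used
        cache.put("c", 3);     // over capacity: the LRU entry ("b") is evicted
        System.out.println(cache.get("a")); // expected: 1
        System.out.println(cache.get("b")); // expected: null (evicted)
        System.out.println(cache.get("c")); // expected: 3
    }
}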
}