Before JDK 1.8, ConcurrentHashMap was implemented with Segments. Segment extends ReentrantLock and plays the role of the lock: each segment guards its own portion of the table (buckets), so pre-1.8 ConcurrentHashMap used this segment-locking mechanism to support concurrent updates and improve concurrency.
The table corresponds to the array (buckets) in HashMap; each node in a bucket holds a key, a value, and a hash, and is called a HashEntry, which encapsulates the (key, value) pair.
In JDK 1.8 the nodes still exist (now called Node), but there are no Segments, and segment locking is no longer used:
1) The segments field was removed. The table holding the data is declared volatile, and the head node of each bucket serves as the synchronization lock.
2) The data structure changed from table array + linked list to table array + linked list + red-black tree. (When looking up in a hash table we expect O(1) time; with only linked lists, heavy hash collisions and unevenly distributed keys degrade lookups to O(n), whereas a red-black tree bounds the worst case at O(log n).)
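The practical effect of these changes is that callers can update the map from many threads without any external locking. A minimal demo of our own (not JDK source); it relies on the documented guarantee that merge() is performed atomically in ConcurrentHashMap:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentPutDemo {
    // Two threads increment the same key concurrently; merge() is atomic
    // per invocation in ConcurrentHashMap, so no update is lost.
    static int concurrentCount(int perThread) throws InterruptedException {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                map.merge("count", 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return map.get("count");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(concurrentCount(10_000)); // prints 20000
    }
}
```

With a plain HashMap the same test can lose updates or corrupt the table; here the final count is always exactly 2 × perThread.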
public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
implements ConcurrentMap<K,V>, Serializable {
AbstractMap: an abstract class; as with HashMap, the common basic Map operations are defined in AbstractMap.
ConcurrentMap: defines the concurrency-related operations.
Serializable: marks ConcurrentHashMap as serializable.
private static final long serialVersionUID = 7249069246763182397L;
The serialization ID.
private static final int DEFAULT_CAPACITY = 16;
Default initial capacity of the table (bucket array).
static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
The largest possible array size (used by toArray and related methods).
private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
Default concurrency level (retained for compatibility with previous versions).
private static final float LOAD_FACTOR = 0.75f;
Default load factor.
static final int TREEIFY_THRESHOLD = 8;
Threshold at which a bin's linked list is converted to a red-black tree.
static final int UNTREEIFY_THRESHOLD = 6;
Threshold at which a red-black tree is converted back to a linked list.
static final int MIN_TREEIFY_CAPACITY = 64;
Minimum table capacity for treeification; in a smaller table an over-long bin triggers a resize instead of a tree conversion.
static final int MOVED = -1; // hash for forwarding nodes
static final int TREEBIN = -2; // hash for roots of trees
static final int RESERVED = -3; // hash for transient reservations
static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash
Mask that keeps the usable (non-negative) bits of a normal node's hash; the three negative values above are sentinel hashes for special nodes.
static final int NCPU = Runtime.getRuntime().availableProcessors();
Number of available CPUs.
transient volatile Node<K,V>[] table;
The bucket array.
private transient volatile Node<K,V>[] nextTable;
The next bucket array, used only while resizing.
public ConcurrentHashMap() {
}
The no-arg constructor; the table is created lazily, with the default capacity of 16, on the first insertion.
public ConcurrentHashMap(int initialCapacity) {
if (initialCapacity < 0)
throw new IllegalArgumentException();
int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
MAXIMUM_CAPACITY :
tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
this.sizeCtl = cap;
}
Creates a map sized to hold the specified number of elements.
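Note that, unlike HashMap, this constructor sizes the table so that initialCapacity elements fit without an immediate resize at the default 0.75 load factor: it rounds initialCapacity × 1.5 + 1 up to a power of two. A standalone replica of that arithmetic (the method names are ours, mirroring the JDK's private tableSizeFor):

```java
public class SizingSketch {
    private static final int MAXIMUM_CAPACITY = 1 << 30;

    // Replica of ConcurrentHashMap.tableSizeFor(): smallest power of two >= c.
    static int tableSizeFor(int c) {
        int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    // What ConcurrentHashMap(int initialCapacity) stores in sizeCtl.
    static int initialTableSize(int initialCapacity) {
        return tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1);
    }

    public static void main(String[] args) {
        // new ConcurrentHashMap<>(16): 16 + 8 + 1 = 25, rounded up to 32.
        // HashMap with the same argument would allocate only 16 slots.
        System.out.println(initialTableSize(16)); // prints 32
    }
}
```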
public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
this.sizeCtl = DEFAULT_CAPACITY;
putAll(m);
}
Accepts a Map and adds all of its mappings to the new object.
public ConcurrentHashMap(int initialCapacity, float loadFactor) {
this(initialCapacity, loadFactor, 1);
}
Creates a map with the specified capacity and load factor.
public ConcurrentHashMap(int initialCapacity,
float loadFactor, int concurrencyLevel) {
if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
throw new IllegalArgumentException();
if (initialCapacity < concurrencyLevel) // Use at least as many bins
initialCapacity = concurrencyLevel; // as estimated threads
long size = (long)(1.0 + (long)initialCapacity / loadFactor);
int cap = (size >= (long)MAXIMUM_CAPACITY) ?
MAXIMUM_CAPACITY : tableSizeFor((int)size);
this.sizeCtl = cap;
}
Creates a map with the given capacity, load factor, and concurrency level (the estimated number of concurrently updating threads).
public V put(K key, V value) {
return putVal(key, value, false);
}
The put method delegates to putVal, which is implemented as follows:
final V putVal(K key, V value, boolean onlyIfAbsent) {
if (key == null || value == null) throw new NullPointerException();
int hash = spread(key.hashCode());
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh; K fk; V fv;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value)))
break; // no lock when adding to empty bin
}
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else if (onlyIfAbsent // check first node without acquiring lock
&& fh == hash
&& ((fk = f.key) == key || (fk != null && key.equals(fk)))
&& (fv = f.val) != null)
return fv;
else {
V oldVal = null;
synchronized (f) {
if (tabAt(tab, i) == f) {
if (fh >= 0) {
binCount = 1;
for (Node<K,V> e = f;; ++binCount) {
K ek;
if (e.hash == hash &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
oldVal = e.val;
if (!onlyIfAbsent)
e.val = value;
break;
}
Node<K,V> pred = e;
if ((e = e.next) == null) {
pred.next = new Node<K,V>(hash, key, value);
break;
}
}
}
else if (f instanceof TreeBin) {
Node<K,V> p;
binCount = 2;
if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
value)) != null) {
oldVal = p.val;
if (!onlyIfAbsent)
p.val = value;
}
}
else if (f instanceof ReservationNode)
throw new IllegalStateException("Recursive update");
}
}
if (binCount != 0) {
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
if (oldVal != null)
return oldVal;
break;
}
}
}
addCount(1L, binCount);
return null;
}
Here spread(key.hashCode()) computes the hash used for the key. Then comes a large for loop; it has no exit condition, so it is effectively an infinite loop that keeps retrying until the insertion succeeds and exits via break or return.
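In the JDK sources, spread folds the high 16 bits of hashCode() into the low bits, so that small tables (which index by the low bits only) still benefit from high-order entropy, and masks off the sign bit so a normal node's hash can never collide with the negative sentinels MOVED, TREEBIN, and RESERVED. A standalone replica:

```java
public class SpreadSketch {
    static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash

    // Replica of ConcurrentHashMap.spread(): mix high bits down, force non-negative.
    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        // The result is always >= 0, regardless of the raw hashCode.
        System.out.println(spread(Integer.MIN_VALUE) >= 0); // prints true
    }
}
```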
if (tab == null || (n = tab.length) == 0)
tab = initTable();
This branch initializes the hash table when it is still empty, by calling initTable:
private final Node<K,V>[] initTable() {
Node<K,V>[] tab; int sc;
while ((tab = table) == null || tab.length == 0) {
if ((sc = sizeCtl) < 0)
Thread.yield(); // lost initialization race; just spin
else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
try {
if ((tab = table) == null || tab.length == 0) {
int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
@SuppressWarnings("unchecked")
Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
table = tab = nt;
sc = n - (n >>> 2);
}
} finally {
sizeCtl = sc;
}
break;
}
}
return tab;
}
sizeCtl: -1 means initialization is in progress; 0 is the default (table not yet initialized); during a resize it is -(1 + the number of threads performing the resize); after initialization it holds the threshold for the next resize, i.e. 0.75 × the current capacity.
The initialization logic makes the following decisions:
1. First check whether sizeCtl is less than 0. If so, under multithreading another thread is already doing the initialization; the current thread calls yield() to give up its time slice and keeps spinning. Once the other thread has finished, table is no longer null, the while condition fails, and the loop exits.
2. Otherwise, the current thread may initialize the table itself: it uses CAS to change sizeCtl from the captured sc to -1 and, on success, enters the initialization logic.
3. n is the array length: if the captured sc is positive (a capacity was passed to a constructor), n = sc; otherwise n = DEFAULT_CAPACITY = 16. After the array is allocated, sc is set to n - (n >>> 2) = 0.75 × n, and in the finally block sizeCtl is updated to this resize threshold (12 for the default capacity), completing initialization.
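The final assignment sc = n - (n >>> 2) is just an integer-only way of computing 0.75 × n. A tiny sketch of that arithmetic (the method name is ours):

```java
public class ThresholdSketch {
    // sizeCtl after initTable(): n - n/4 = 0.75 * n, without floating point.
    static int nextResizeThreshold(int n) {
        return n - (n >>> 2);
    }

    public static void main(String[] args) {
        System.out.println(nextResizeThreshold(16)); // prints 12
    }
}
```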
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
This else-if means the head node is a forwarding node: a resize is in progress, and the current thread calls helpTransfer to help migrate entries before retrying. The final else performs the actual insertion into a non-empty bin: the head node of the bin is locked with synchronized to guarantee thread safety. ConcurrentHashMap still relies on the red-black tree structure underneath, so when a bin holds too many nodes it may be converted: if the table has fewer than 64 buckets, the list is not treeified and the table is resized instead; only when the table has at least 64 buckets and a bin's list exceeds 8 nodes is the list converted to a red-black tree, improving lookup efficiency.
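The treeify decision described above can be sketched as a small rule. The constants come from the source; the method itself is our illustration of the choice treeifyBin() makes, not JDK code:

```java
public class TreeifySketch {
    static final int TREEIFY_THRESHOLD = 8;
    static final int MIN_TREEIFY_CAPACITY = 64;

    // Illustrates what happens when a bin of binCount nodes is examined
    // in a table of tableLength buckets.
    static String binAction(int binCount, int tableLength) {
        if (binCount < TREEIFY_THRESHOLD) return "keep list";
        if (tableLength < MIN_TREEIFY_CAPACITY) return "resize table";
        return "treeify bin";
    }

    public static void main(String[] args) {
        System.out.println(binAction(8, 16)); // prints "resize table"
        System.out.println(binAction(8, 64)); // prints "treeify bin"
    }
}
```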