Introduction
These are study notes on HashMap in JDK 1.8.
In the collections framework, List and Set both extend the Collection interface, while Map does not. A Map is a key-value mapping structure: it cannot contain duplicate keys, and each key maps to at most one value.
An object that maps keys to values. A map cannot contain duplicate keys;
each key can map to at most one value.
HashMap is the most commonly used implementation of Map.
- It supports all of the Map operations and permits null keys and null values (see the sketch below);
- it is not thread-safe and makes no guarantees about iteration order;
- it is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.
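A minimal usage sketch of the behaviour above (the class and variable names are mine; the calls are standard java.util.HashMap API):

import java.util.HashMap;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("one", 1);
        map.put(null, 0);      // a single null key is allowed
        map.put("none", null); // null values are allowed as well
        System.out.println(map.get(null));   // 0
        System.out.println(map.get("none")); // null
        // None of these calls are synchronized; unlike Hashtable,
        // concurrent use requires external synchronization.
    }
}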
JDK 1.8 optimizes the chaining strategy used after hash collisions: when a bucket's chain holds too many entries (more than 8 by default), the linked list is rebuilt as a red-black tree; when the number of entries in the bin shrinks again (to 6 by default), it is converted back into a linked list.
* This map usually acts as a binned (bucketed) hash table, but
* when bins get too large, they are transformed into bins of
* TreeNodes, each structured similarly to those in
* java.util.TreeMap. Most methods try to use normal bins, but
* relay to TreeNode methods when applicable (simply by checking
* instanceof a node). Bins of TreeNodes may be traversed and
* used like any others, but additionally support faster lookup
* when overpopulated. However, since the vast majority of bins in
* normal use are not overpopulated, checking for existence of
* tree bins may be delayed in the course of table methods.
* Because TreeNodes are about twice the size of regular nodes, we
* use them only when bins contain enough nodes to warrant use
* (see TREEIFY_THRESHOLD). And when they become too small (due to
* removal or resizing) they are converted back to plain bins. In
* usages with well-distributed user hashCodes, tree bins are
* rarely used. Ideally, under random hashCodes, the frequency of
* nodes in bins follows a Poisson distribution
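The quoted comment goes on to list the Poisson probabilities of bin sizes under random hash codes, with a parameter of about 0.5 at the default 0.75 resize threshold. The small program below is my own sketch that reproduces roughly those numbers and shows why a chain of 8 nodes (the treeify threshold) is expected only about once in ten million bins:

public class BinSizePoisson {
    public static void main(String[] args) {
        double lambda = 0.5;   // average bin occupancy assumed by the HashMap comment
        double factorial = 1.0;
        for (int k = 0; k <= 8; k++) {
            if (k > 0) {
                factorial *= k;
            }
            // Poisson probability P(X = k) = e^(-lambda) * lambda^k / k!
            double p = Math.exp(-lambda) * Math.pow(lambda, k) / factorial;
            System.out.printf("P(bin holds %d nodes) = %.8f%n", k, p);
        }
    }
}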
Static defaults
The default initial capacity is 16, and the capacity must always be a power of two.
/**
* The default initial capacity - MUST be a power of two.
*/
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
Maximum allowed capacity
static final int MAXIMUM_CAPACITY = 1 << 30;
Default load factor: 0.75
/**
* The load factor used when none specified in constructor.
*/
static final float DEFAULT_LOAD_FACTOR = 0.75f;
In other words, a HashMap built with the defaults has a threshold of 16 * 0.75 = 12, so inserting the 13th entry triggers a resize to capacity 32.
The class comment explicitly names the two settings that hurt Map performance: an initial capacity that is too high and a load factor that is too low. 0.75 is the recommended time/space trade-off.
* This implementation provides constant-time performance for the basic
* operations (get and put), assuming the hash function
* disperses the elements properly among the buckets. Iteration over
* collection views requires time proportional to the "capacity" of the
* HashMap instance (the number of buckets) plus its size (the number
* of key-value mappings). Thus, it's very important not to set the initial
* capacity too high (or the load factor too low) if iteration performance is
* important.
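A quick illustration of the arithmetic above: with the defaults the threshold is 16 * 0.75 = 12, so the 13th insertion triggers a resize. When the expected number of entries is known up front, a common idiom is to pre-size the map so no resize happens while filling it (expectedSize is a name introduced here for illustration):

import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    public static void main(String[] args) {
        // default: capacity 16, threshold 16 * 0.75 = 12,
        // so the 13th put grows the table to capacity 32
        System.out.println((int) (16 * 0.75f)); // 12

        // size the map up front so that expectedSize entries fit under the threshold
        int expectedSize = 100;
        Map<String, Integer> map = new HashMap<>((int) (expectedSize / 0.75f) + 1);
        map.put("first", 1);
    }
}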
Threshold at which a bucket's linked list is rebuilt as a red-black tree: 8 by default.
/**
* The bin count threshold for using a tree rather than list for a
* bin. Bins are converted to trees when adding an element to a
* bin with at least this many nodes. The value must be greater
* than 2 and should be at least 8 to mesh with assumptions in
* tree removal about conversion back to plain bins upon
* shrinkage.
*/
static final int TREEIFY_THRESHOLD = 8;
Threshold at which a tree bin is converted back into a linked list: 6 by default.
/**
* The bin count threshold for untreeifying a (split) bin during a
* resize operation. Should be less than TREEIFY_THRESHOLD, and at
* most 6 to mesh with shrinkage detection under removal.
*/
static final int UNTREEIFY_THRESHOLD = 6;
Minimum table capacity at which bins may be treeified: 64 by default.
/**
* The smallest table capacity for which bins may be treeified.
* (Otherwise the table is resized if too many nodes in a bin.)
* Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
* between resizing and treeification thresholds.
*/
static final int MIN_TREEIFY_CAPACITY = 64;
Main fields
table, the bucket array; it is initialized lazily on first use, and when allocated its length is always a power of two.
/**
* The table, initialized on first use, and resized as
* necessary. When allocated, length is always a power of two.
* (We also tolerate length zero in some operations to allow
* bootstrapping mechanics that are currently not needed.)
*/
transient Node<K,V>[] table;
threshold, the next size value at which the table is resized (capacity * load factor).
/**
* The next size value at which to resize (capacity * load factor).
*
* @serial
*/
int threshold;
Other fields
transient Set<Map.Entry<K,V>> entrySet;
transient int size;
transient int modCount;
final float loadFactor;
Main methods
Constructors
HashMap has four constructors: a no-argument constructor, one taking an initial capacity, one taking an initial capacity and a load factor, and one taking another Map.
public HashMap() {
    this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}

public HashMap(int initialCapacity) {
    this(initialCapacity, DEFAULT_LOAD_FACTOR);
}

public HashMap(int initialCapacity, float loadFactor) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " +
                                           initialCapacity);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " +
                                           loadFactor);
    this.loadFactor = loadFactor;
    this.threshold = tableSizeFor(initialCapacity);
}

public HashMap(Map<? extends K, ? extends V> m) {
    this.loadFactor = DEFAULT_LOAD_FACTOR;
    putMapEntries(m, false);
}
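The two-argument constructor stores tableSizeFor(initialCapacity) into threshold; that helper rounds the requested capacity up to the next power of two (the real threshold is recomputed when the table is actually allocated in resize). The sketch below reproduces the helper from my reading of the JDK 1.8 source; the wrapper class and example values are mine:

public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Smears the highest set bit of cap - 1 downwards, then adds 1,
    // which rounds cap up to the next power of two.
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(1));  // 1
        System.out.println(tableSizeFor(16)); // 16
        System.out.println(tableSizeFor(17)); // 32
    }
}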
put
We normally store values with put(key, value); internally it delegates to putVal.
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}
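put passes hash(key) to putVal. The JDK 1.8 hash() spreads the high 16 bits of hashCode into the low bits, so that the (n - 1) & hash index used later is influenced by more than just the low bits. Reproduced here from my reading of the source as a reference:

static final int hash(Object key) {
    int h;
    // a null key hashes to 0, which is why the single null key always lands in bucket 0
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}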
putVal is a final method and takes a few extra parameters.
/**
* Implements Map.put and related methods
*
* @param hash hash for key
* @param key the key
* @param value the value to put
* @param onlyIfAbsent if true, don't change existing value
* @param evict if false, the table is in creation mode.
* @return previous value, or null if none
*/
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
The first if in putVal: if the table has not been allocated yet (null or zero length), it is initialized through resize().
...
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
...
final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    ....
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    ....
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    ....
}
The second if in putVal: the bucket index is computed from the key's hash as (n - 1) & hash; if that slot is null, a new node is created there directly (a small sketch of this index trick follows the excerpt below).
...
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
...
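As promised above, a small demo of the (n - 1) & hash index computation: because n is always a power of two, the mask n - 1 keeps exactly the low bits of the hash, which is equivalent to hash % n for non-negative hashes but avoids a division (the class and the sample hash value are mine):

public class BucketIndexDemo {
    public static void main(String[] args) {
        int n = 16;            // table length, always a power of two
        int hash = 0x7a3b91c5; // an arbitrary spread hash value
        System.out.println((n - 1) & hash);         // 5
        System.out.println(Math.floorMod(hash, n)); // 5, same bucket
    }
}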
The following else branch is the core of the method. When it is entered, p is the node already stored at the bucket index computed from the key's hash:
- If p has the same hash and key as the arguments, p is assigned to e and the branch ends;
- if 1 does not hold and p is a TreeNode, the tree variant putTreeVal is used and the branch ends;
- if neither 1 nor 2 holds, the chain is traversed. If a node with a matching key is found along the way, it is handled like case 1; otherwise, once p.next is null, a new node is appended to the chain (e stays null in that case, so nothing is overwritten) and the branch ends;
- if e is not null (the key already existed) and either onlyIfAbsent is false (overwriting is allowed) or the old value is null, the old value is replaced with the new one and the old value is returned; afterNodeAccess is a callback hook used by LinkedHashMap.
Note: binCount >= TREEIFY_THRESHOLD - 1 inside the loop means the chain in this bucket has reached TREEIFY_THRESHOLD nodes, so treeifyBin is called; once the bin has been treeified, later puts into this bucket take the tree path of case 2 above. (treeifyBin only builds a tree when the table has reached MIN_TREEIFY_CAPACITY; otherwise it resizes instead, see the sketch after the code excerpt below.)
...
    else {
        Node<K,V> e; K k;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
...
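As noted above, treeifyBin does not always build a tree: if the table is still smaller than MIN_TREEIFY_CAPACITY it resizes instead, spreading the bin out rather than treeifying it. A sketch of its opening check, paraphrased from my reading of the JDK 1.8 source (the tree-building body is omitted):

final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    // small tables are grown instead of treeified
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        // replace the Nodes in this bin with TreeNodes and build the red-black tree
        ...
    }
}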
At the end of the method, if the new size exceeds the threshold, the table is resized.
...
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
...
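The onlyIfAbsent flag is what separates put from putIfAbsent (the latter calls putVal with onlyIfAbsent set to true). A short demonstration of the overwrite and return-value behaviour walked through above:

import java.util.HashMap;
import java.util.Map;

public class PutSemanticsDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        System.out.println(map.put("a", 1));         // null: no previous mapping
        System.out.println(map.put("a", 2));         // 1: old value returned, new value stored
        System.out.println(map.putIfAbsent("a", 3)); // 2: existing value kept
        System.out.println(map.get("a"));            // 2
    }
}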
get
We normally read values by key with get; internally this goes through getNode.
public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}
getNode performs the following steps:
- Locate the first node of the bucket derived from the key's hash; if its hash and key match the given key, return it;
- if node.next is null, return null; otherwise, if the first node is a TreeNode, use the tree lookup, and if not, walk the linked list until a matching key is found.
final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
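Because HashMap allows null values, a null return from get is ambiguous: the key may be absent, or it may be mapped to null. getNode (which containsKey also relies on) is what tells the two cases apart; a small usage note:

import java.util.HashMap;
import java.util.Map;

public class GetDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("present", null);
        System.out.println(map.get("present"));         // null
        System.out.println(map.get("absent"));          // also null
        System.out.println(map.containsKey("present")); // true: distinguishes the two cases
        System.out.println(map.containsKey("absent"));  // false
    }
}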