Essence: the backing store is an array of Node — a Node<K,V>[] table whose elements are Node instances.
Once the treeification threshold is met, the plain Nodes in a bucket are converted to TreeNodes.
1. Iterator speed vs. size and capacity
Iterating over a collection view takes time proportional to the HashMap instance's "capacity" (the number of buckets) plus its size (the number of key-value mappings).
So if iteration performance matters, it is very important not to set the initial capacity too high (or the load factor too low).
2. Two parameters that affect HashMap performance: initial capacity and load factor
The capacity is the number of buckets in the hash table; the initial capacity is simply the capacity at creation time [default 16].
The load factor is a measure of how full the table is allowed to get before its capacity is automatically increased [default 0.75].
When the number of entries exceeds (load factor × current capacity), the table is rehashed and the bucket count doubles.
3. Choosing a load factor and initial capacity
A higher load factor reduces memory overhead but increases lookup cost (put, get, etc.).
When setting the initial capacity, account for the expected number of mappings and the load factor, so as to minimize the number of rehashes.
4. A larger initial capacity
Rehashing is expensive, so if many key-value pairs will be stored, create the map with a sufficiently large initial capacity.
A larger capacity (more buckets) also mitigates, to some extent, the performance degradation caused by hash collisions.
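The sizing advice above can be sketched as follows. `capacityFor` is a hypothetical helper, not part of HashMap's API; it mirrors the pre-sizing arithmetic putMapEntries uses internally (size / loadFactor + 1):

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    // Pre-size so that `expected` mappings never trigger a resize:
    // the capacity must satisfy expected <= capacity * loadFactor.
    static int capacityFor(int expected, float loadFactor) {
        return (int) (expected / loadFactor) + 1;
    }

    public static void main(String[] args) {
        int expected = 1000;
        // 1000 / 0.75 + 1 = 1334; HashMap rounds this up to a power of two (2048),
        // so no rehash occurs while inserting the 1000 entries.
        Map<String, Integer> map = new HashMap<>(capacityFor(expected, 0.75f));
        for (int i = 0; i < expected; i++) map.put("k" + i, i);
        System.out.println(map.size());
    }
}
```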
5. This implementation is not synchronized
If multiple threads access a hash map concurrently and at least one of them modifies the map structurally, it must be synchronized externally:
Map m = Collections.synchronizedMap(new HashMap(...)); //best done at creation time
6. fail-fast
The iterators returned by all of this class's "collection view methods" are fail-fast: if the map is structurally modified at any time after the iterator is created,
in any way except through the iterator's own remove method, the iterator throws ConcurrentModificationException.
Note that the fail-fast behavior of an iterator cannot be guaranteed: generally speaking, it is impossible to make any hard guarantees in the presence of unsynchronized concurrent modification.
It is therefore wrong to write a program that depends on this exception: fail-fast behavior should be used only to detect bugs.
A structural modification is any operation that adds or removes one or more mappings; merely changing the value associated with a key the instance already contains is not a structural modification.
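A minimal sketch of both halves of the rule: a structural modification after the iterator is created triggers the exception, while a plain value replacement does not (class and method names here are illustrative only):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    static boolean structuralModThrows() {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        Iterator<String> it = map.keySet().iterator();
        it.next();
        map.put("c", 3);          // structural modification after iterator creation
        try {
            it.next();            // fail-fast: detects the modCount mismatch
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    static boolean valueUpdateThrows() {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        Iterator<String> it = map.keySet().iterator();
        it.next();
        map.put("a", 99);         // value replacement: NOT structural, modCount unchanged
        try {
            it.next();
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(structuralModThrows());  // true
        System.out.println(valueUpdateThrows());    // false
    }
}
```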
1. bucket & tree bucket (bin / TreeBin)
The hash table starts as an array of ordinary buckets (each holding a single element or a linked chain), but when one bucket collects too many elements its nodes are converted to TreeNodes.
A TreeNode bucket can be traversed and used like an ordinary one, but supports faster lookup when overpopulated.
Note that treeifying one bucket also takes the rest of the table into account: if most buckets are not overfull, treeification is deferred (the table is resized instead).
2. Ordering within a bucket
Tree bins are ordered primarily by hashCode, but in the case of ties, if two elements are of the same class and that class implements Comparable,
their compareTo method breaks the tie. This extra complexity is worthwhile because it preserves
O(log n) operations even in the worst case, when hashCode() returns poorly distributed values or many keys share a hash code.
3. TreeNode & plain Node
Because TreeNodes are about twice the size of regular nodes, a bucket is converted only once it is overpopulated;
when bins become too small (through removal or resizing) they are converted back to plain nodes. With well-distributed hashCodes, tree bins are rarely used.
Ideally, under random hashCodes, the frequency of nodes in the buckets follows a Poisson distribution with a parameter of
about 0.5 on average for the default load factor of 0.75, although with a large variance because of resizing granularity. Ignoring variance,
the expected number of occurrences of buckets holding k nodes is exp(-0.5) * pow(0.5, k) / factorial(k):
0: 0.60653066
1: 0.30326533
2: 0.07581633
3: 0.01263606
4: 0.00157952
5: 0.00015795
6: 0.00001316
7: 0.00000094
8: 0.00000006
The probability of a single bucket holding more than 8 nodes is less than one in ten million [given a well-behaved hash function].
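The table above can be reproduced directly from the formula; a small sketch (the `poisson` helper is ours, not the JDK's):

```java
public class PoissonDemo {
    // P(k) = exp(-lambda) * lambda^k / k!  with lambda ~= 0.5 at load factor 0.75
    static double poisson(double lambda, int k) {
        double p = Math.exp(-lambda);
        for (int i = 1; i <= k; i++) p *= lambda / i;  // multiply in lambda^k / k! stably
        return p;
    }

    public static void main(String[] args) {
        for (int k = 0; k <= 8; k++)
            System.out.printf("%d: %.8f%n", k, poisson(0.5, k)); // matches the table above
        double tail = 0;
        for (int k = 9; k < 64; k++) tail += poisson(0.5, k);
        System.out.println(tail < 1e-7); // P(k > 8) is well below one in ten million
    }
}
```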
TreeBin.root is normally the first node of its bin. On Iterator.remove, however, the root may end up elsewhere, but it can be recovered via TreeNode.root().
The hash code is computed only once; every internal method that needs it takes the hash as a parameter.
Most internal methods also accept a "tab" argument, normally the current table, but possibly a new or old one while resizing or converting.
When a bin list is treeified, split, or untreeified, the nodes are kept in the same relative access/traversal order,
to better preserve locality and to slightly simplify the handling of splits and traversals that invoke iterator.remove.
When comparators are used on insertion, to keep a total ordering across rebalancings (or as close as is required here),
classes and identityHashCodes are compared as tie-breakers.
The use of, and transitions between, plain and tree modes are complicated by the existence of the LinkedHashMap subclass. [revisit later if unclear]
See the hook methods defined below, invoked on insertion, removal, and access, which let LinkedHashMap internals stay independent of these mechanics.
(This also requires passing the map instance to some utility methods that may create new nodes.)
public class HashMap<K,V>
extends AbstractMap<K,V> //AbstractMap itself implements Map
implements Map<K,V>, Cloneable, Serializable
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; //default initial capacity: 16
static final int MAXIMUM_CAPACITY = 1 << 30; //maximum capacity
static final float DEFAULT_LOAD_FACTOR = 0.75f; //default load factor
static final int TREEIFY_THRESHOLD = 8; //bin size at which a chain is treeified
static final int UNTREEIFY_THRESHOLD = 6; //bin size at which a tree reverts to a chain
static final int MIN_TREEIFY_CAPACITY = 64; //minimum table capacity for treeification; below 64, resize first
//if a bin collects too many nodes while the table is still small, the table is resized instead,
//which resolves the conflict between the resizing and treeification thresholds
//The table
//Initialized on first use; when allocated, its length is always a power of two. (An initial capacity of 0 is tolerated via the tableSizeFor bootstrap.)
transient Node<K,V>[] table;
//Holds cached entrySet()
//Note that the AbstractMap fields are used for keySet() and values().
transient Set<Map.Entry<K,V>> entrySet;
//total number of key-value mappings
transient int size;
//number of structural modifications:
//adding/removing mappings, or otherwise altering internal structure (e.g. rehash)
transient int modCount;
//capacity threshold at which the next resize occurs
//if the table array has not been allocated, this field holds the initial capacity, or zero meaning DEFAULT_INITIAL_CAPACITY
int threshold;
//load factor (the field that actually takes effect)
final float loadFactor;
static class Node<K,V> implements Map.Entry<K,V> {
final int hash;
final K key;
V value;
Node<K,V> next;
Node(int hash, K key, V value, Node<K,V> next) {
this.hash = hash;
this.key = key;
this.value = value;
this.next = next;
}
public final K getKey() { return key; }
public final V getValue() { return value; }
public final String toString() { return key + "=" + value; }
public final int hashCode() {
return Objects.hashCode(key) ^ Objects.hashCode(value);
}
public final V setValue(V newValue) {
V oldValue = value;
value = newValue;
return oldValue;
}
public final boolean equals(Object o) {
if (o == this)
return true;
if (o instanceof Map.Entry) {
Map.Entry<?,?> e = (Map.Entry<?,?>)o;
if (Objects.equals(key, e.getKey()) &&
Objects.equals(value, e.getValue()))
return true;
}
return false;
}
}
//hash function: XORs the high 16 bits of hashCode into the low 16, since the bucket index (n - 1) & hash uses only the low bits
static final int hash(Object key) {
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
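Why the spread matters can be seen with two hashCodes that differ only in their high 16 bits; a sketch (class and variable names are illustrative):

```java
public class HashSpreadDemo {
    // Same spreading as HashMap.hash(): fold the high 16 bits into the low 16,
    // because the bucket index (n - 1) & hash only ever looks at the low bits.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16; // table length, always a power of two
        // These hashCodes differ only in the high bits, so without the spread
        // they would both land in bucket 0:
        int a = 0x10000, b = 0x20000;
        System.out.println((n - 1) & a);                 // 0: raw low bits identical
        System.out.println((n - 1) & b);                 // 0
        System.out.println((n - 1) & (a ^ (a >>> 16)));  // 1: the spread separates them
        System.out.println((n - 1) & (b ^ (b >>> 16)));  // 2
        System.out.println(hash(null));                  // 0: a null key maps to bucket 0
    }
}
```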
//Comparable-class check
//returns x's class C if C implements Comparable<C>; otherwise returns null
static Class<?> comparableClassFor(Object x) {
if (x instanceof Comparable) {
Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
if ((c = x.getClass()) == String.class) // bypass checks
return c;
if ((ts = c.getGenericInterfaces()) != null) {
for (int i = 0; i < ts.length; ++i) {
if (((t = ts[i]) instanceof ParameterizedType) &&
((p = (ParameterizedType)t).getRawType() ==
Comparable.class) &&
(as = p.getActualTypeArguments()) != null &&
as.length == 1 && as[0] == c) // type arg is c
return c;
}
}
}
return null;
}
//Comparable-based comparison: 0 if x is null or of a different class than kc
static int compareComparables(Class<?> kc, Object k, Object x) {
return (x == null || x.getClass() != kc ? 0 :
((Comparable)k).compareTo(x));
}
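Why the `as[0] == c` check matters: a class may implement Comparable against some other type, in which case its compareTo cannot be used to order keys of that class. A simplified standalone sketch (it omits some edge cases of the original, such as classes inheriting Comparable indirectly; `Oddball` is a made-up example class):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

public class ComparableClassDemo {
    // Simplified version of HashMap.comparableClassFor: returns c only if
    // c directly implements Comparable<c>, i.e. compareTo against its OWN type.
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c = x.getClass();
            if (c == String.class) return c;           // common case, bypass reflection
            for (Type t : c.getGenericInterfaces()) {
                if (t instanceof ParameterizedType) {
                    ParameterizedType p = (ParameterizedType) t;
                    Type[] as;
                    if (p.getRawType() == Comparable.class &&
                        (as = p.getActualTypeArguments()).length == 1 && as[0] == c)
                        return c;                      // type argument is c itself
                }
            }
        }
        return null;
    }

    // Comparable, but against String rather than Oddball itself,
    // so HashMap must NOT use compareTo to order such keys.
    static final class Oddball implements Comparable<String> {
        public int compareTo(String s) { return 0; }
    }

    public static void main(String[] args) {
        System.out.println(comparableClassFor("x"));           // class java.lang.String
        System.out.println(comparableClassFor(42));            // class java.lang.Integer
        System.out.println(comparableClassFor(new Oddball())); // null
        System.out.println(comparableClassFor(new Object())); // null
    }
}
```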
//rounds the requested capacity up to the next power of two [any input in (64, 128] yields 128]
static final int tableSizeFor(int cap) {
int n = cap - 1;
n |= n >>> 1;
n |= n >>> 2;
n |= n >>> 4;
n |= n >>> 8;
n |= n >>> 16;
return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
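A standalone copy of tableSizeFor confirms the rounding behavior described above:

```java
public class TableSizeDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Copy of HashMap.tableSizeFor: smallest power of two >= cap.
    // The shifts smear the highest set bit of (cap - 1) into all lower bits,
    // so n + 1 is the next power of two.
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1; n |= n >>> 2; n |= n >>> 4; n |= n >>> 8; n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(64));   // 64: already a power of two
        System.out.println(tableSizeFor(65));   // 128: anything in (64, 128] maps to 128
        System.out.println(tableSizeFor(128));  // 128
        System.out.println(tableSizeFor(0));    // 1: the `n < 0` guard handles cap == 0
    }
}
```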
Constructors ×4; add ×2, remove ×2, update ×0, query ×8
add ×1, remove ×1, update ×7, query ×2 [Java 8 Map interface default methods]
1. Custom initial capacity + load factor
2. Custom initial capacity only
3. Default load factor, default initial capacity
4. Convert another Map into a HashMap
//1
public HashMap(int initialCapacity, float loadFactor) {
if (initialCapacity < 0)
throw new IllegalArgumentException("Illegal initial capacity: " +
initialCapacity);
if (initialCapacity > MAXIMUM_CAPACITY)
initialCapacity = MAXIMUM_CAPACITY;
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new IllegalArgumentException("Illegal load factor: " +
loadFactor);
this.loadFactor = loadFactor;
this.threshold = tableSizeFor(initialCapacity); //the table is not allocated here; the initial capacity is stashed in threshold
}
//2 delegates to 1; threshold temporarily stores the initial capacity
public HashMap(int initialCapacity) { this(initialCapacity, DEFAULT_LOAD_FACTOR); }
//3
public HashMap() {
    this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}
//4
public HashMap(Map<? extends K, ? extends V> m) {
this.loadFactor = DEFAULT_LOAD_FACTOR;
putMapEntries(m, false); //false = creation mode, a distinction kept for the LinkedHashMap subclass
}
final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
int s = m.size();
if (s > 0) {
if (table == null) { // pre-size
float ft = ((float)s / loadFactor) + 1.0F;
int t = ((ft < (float)MAXIMUM_CAPACITY) ?
(int)ft : MAXIMUM_CAPACITY); //first check whether the required capacity exceeds the maximum
if (t > threshold)
threshold = tableSizeFor(t); //stash the computed capacity in the resize threshold
}
else if (s > threshold)
resize(); //the entry count already exceeds the threshold: resize now
for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
K key = e.getKey();
V value = e.getValue();
putVal(hash(key), key, value, false, evict); //insert each key-value pair in a loop
}
}
}
add ×2
//1. add a single mapping
public V put(K key, V value) { return putVal(hash(key), key, value, false, true); }
final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) {
Node<K,V>[] tab; Node<K,V> p; int n, i;
if ((tab = table) == null || (n = tab.length) == 0) //key point: the table is initialized lazily HERE
n = (tab = resize()).length; //the initial capacity was stashed in threshold, waiting to be used
if ((p = tab[i = (n - 1) & hash]) == null) //if the target bucket is null, create a single Node
tab[i] = newNode(hash, key, value, null);
else { //target bucket non-null: insert into the chain
Node<K,V> e; K k;
//IF1: head node's key equals put's key, so remember p in the temporary e, ready for value replacement
if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k))))
e = p;
//IF2: bucket already treeified, delegate to TreeNode.putTreeVal()
else if (p instanceof TreeNode)
e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
//ELSE3: ordinary linked chain
else {
//walk the chain
for (int binCount = 0; ; ++binCount) {
//IF1: next is null, append a new node directly
if ((e = p.next) == null) {
p.next = newNode(hash, key, value, null);
if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
treeifyBin(tab, hash); //key point: treeification is triggered HERE
break;
}
//IF2: found a chain node with an equal key [e = p.next]
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
break;
p = e; //advance to the next node: p = e = p.next
}
}
// replace the old value of an existing key and return it [put outcome 2]
if (e != null) { // existing mapping for key
V oldValue = e.value;
if (!onlyIfAbsent || oldValue == null)
e.value = value;
afterNodeAccess(e);
return oldValue;
}
}
++modCount; //structural modification
if (++size > threshold) //past the next-resize threshold: resize
resize();
afterNodeInsertion(evict); //hook for the LinkedHashMap subclass
return null; //fresh insert, nothing replaced [put outcome 1]
}
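The two put outcomes marked above, plus the onlyIfAbsent path, are observable through the public API:

```java
import java.util.HashMap;
import java.util.Map;

public class PutDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        System.out.println(map.put("a", 1));         // null: fresh insert [put outcome 1]
        System.out.println(map.put("a", 2));         // 1: old value returned [put outcome 2]
        System.out.println(map.putIfAbsent("a", 9)); // 2: onlyIfAbsent=true, no overwrite
        System.out.println(map.get("a"));            // 2
        System.out.println(map.put(null, 0));        // null: one null key allowed, hash 0
    }
}
```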
final Node<K,V>[] resize() { //initializes or doubles the table, recomputing bucket indices and redistributing nodes
Node<K,V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length;
int oldThr = threshold;
int newCap, newThr = 0;
if (oldCap > 0) {
if (oldCap >= MAXIMUM_CAPACITY) { //existing capacity already at the maximum
threshold = Integer.MAX_VALUE;
return oldTab; //give up growing and return oldTab
}
else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
oldCap >= DEFAULT_INITIAL_CAPACITY) // double a table of at least 16 buckets
newThr = oldThr << 1; // new threshold is twice the old
}
else if (oldThr > 0) // table not yet allocated: threshold holds the initial capacity, use it as the new capacity
newCap = oldThr;
else { // zero-argument initialization: use DEFAULT_INITIAL_CAPACITY (16)
newCap = DEFAULT_INITIAL_CAPACITY;
newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
if (newThr == 0) {
float ft = (float)newCap * loadFactor;
newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
(int)ft : Integer.MAX_VALUE);
}
threshold = newThr; //publish the new threshold
@SuppressWarnings({"rawtypes","unchecked"})
Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap]; // allocate the new table
table = newTab; //publish the new table
if (oldTab != null) { //traverse, recompute bucket indices, transfer
for (int j = 0; j < oldCap; ++j) {
Node<K,V> e;
if ((e = oldTab[j]) != null) {
oldTab[j] = null;
if (e.next == null)
newTab[e.hash & (newCap - 1)] = e;
else if (e instanceof TreeNode)
((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
else { // preserve order
Node<K,V> loHead = null, loTail = null;
Node<K,V> hiHead = null, hiTail = null;
Node<K,V> next;
do {
next = e.next;
if ((e.hash & oldCap) == 0) {
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
}
else {
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
}
} while ((e = next) != null);
if (loTail != null) {
loTail.next = null;
newTab[j] = loHead;
}
if (hiTail != null) {
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}
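The lo/hi split in the loop above relies on the fact that doubling a power-of-two capacity exposes exactly one new index bit, `oldCap` itself; a sketch with illustrative numbers:

```java
public class SplitDemo {
    // During resize, a node at old index j goes to newTab[j] when bit `oldCap`
    // of its hash is 0 (the lo list), else to newTab[j + oldCap] (the hi list).
    static int newIndex(int hash, int oldCap, int oldIndex) {
        return (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
    }

    public static void main(String[] args) {
        int oldCap = 16, j = 5;
        int h1 = 5;       // bit 16 clear: stays at index 5
        int h2 = 5 | 16;  // bit 16 set:   moves to index 5 + 16 = 21
        System.out.println(newIndex(h1, oldCap, j)); // 5
        System.out.println(newIndex(h2, oldCap, j)); // 21
        // The shortcut agrees with recomputing the index against the doubled mask:
        int newCap = oldCap << 1;
        System.out.println((h1 & (newCap - 1)) == newIndex(h1, oldCap, j)); // true
        System.out.println((h2 & (newCap - 1)) == newIndex(h2, oldCap, j)); // true
    }
}
```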
final void treeifyBin(Node<K,V>[] tab, int hash) { //treeify a bin
int n, index; Node<K,V> e;
if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY) //deferred treeification
resize(); //minimum table capacity for treeification is 64; a 16- or 32-bucket table resizes instead (16->32, 32->64)
else if ((e = tab[index = (n - 1) & hash]) != null) {
TreeNode<K,V> hd = null, tl = null;
do {
//plain Node -> TreeNode (replacementTreeNode returns new TreeNode<>(e.hash, e.key, e.value, null) here)
TreeNode<K,V> p = replacementTreeNode(e, null);
if (tl == null)
hd = p;
else {
p.prev = tl;
tl.next = p;
}
tl = p;
} while ((e = e.next) != null);
if ((tab[index] = hd) != null)
hd.treeify(tab); //the actual red-black construction
}
}
Class TreeNode
final void treeify(Node<K,V>[] tab) { //build the red-black tree
TreeNode<K,V> root = null;
for (TreeNode<K,V> x = this, next; x != null; x = next) { //walk the chain, linking each node into the tree
next = (TreeNode<K,V>)x.next;
x.left = x.right = null;
if (root == null) { //initialize root
x.parent = null;
x.red = false;
root = x;
}
else { //link the remaining TreeNodes
K k = x.key;
int h = x.hash;
Class<?> kc = null;
for (TreeNode<K,V> p = root;;) { //descend until a null left/right slot is found
int dir, ph;
K pk = p.key;
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0)
dir = tieBreakOrder(k, pk); //total-order tie-breaker for equal hashes and non-comparable keys
TreeNode<K,V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null) {
x.parent = xp;
if (dir <= 0)
xp.left = x;
else
xp.right = x;
root = balanceInsertion(root, x);
break;
}
}
}
}
moveRootToFront(tab, root); //ensure the given root is the first node of its bin
}
static int tieBreakOrder(Object a, Object b) {
int d;
if (a == null || b == null ||
(d = a.getClass().getName().
compareTo(b.getClass().getName())) == 0)
d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
-1 : 1);
return d;
}
static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) {
int n;
if (root != null && tab != null && (n = tab.length) > 0) {
int index = (n - 1) & root.hash;
TreeNode<K,V> first = (TreeNode<K,V>)tab[index];
if (root != first) {
Node<K,V> rn;
tab[index] = root;
TreeNode<K,V> rp = root.prev;
if ((rn = root.next) != null)
((TreeNode<K,V>)rn).prev = rp;
if (rp != null)
rp.next = rn;
if (first != null)
first.prev = root;
root.next = first;
root.prev = null;
}
assert checkInvariants(root);
}
}
//2. add all
//Note: same underlying mechanism as the Map-conversion constructor: putMapEntries loops over putVal
public void putAll(Map<? extends K, ? extends V> m) { putMapEntries(m, true); }
remove ×2
//1. remove a single key, returning its value or null
public V remove(Object key) {
Node<K,V> e;
return (e = removeNode(hash(key), key, null, false, true)) == null ?
null : e.value;
}
final Node<K,V> removeNode(int hash, Object key, Object value,
boolean matchValue, boolean movable) {
//if matchValue is true, remove only when the values are equal too
//if movable is false, other nodes are not moved during removal
Node<K,V>[] tab; Node<K,V> p; int n, index;
if ((tab = table) != null && (n = tab.length) > 0 &&
(p = tab[index = (n - 1) & hash]) != null) {
Node<K,V> node = null, e; K k; V v;
if (p.hash == hash && //IF1: hashes equal && (keys identical || key non-null && key.equals(node key))
((k = p.key) == key || (key != null && key.equals(k))))
node = p; //remember the bucket head in the temporary variable node
else if ((e = p.next) != null) { //IF2: match against the rest of the bucket
if (p instanceof TreeNode) //already treeified: search the TreeNodes
node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
else { //still an ordinary chain: walk it
do {
if (e.hash == hash &&
((k = e.key) == key ||
(key != null && key.equals(k)))) {
node = e; //remember the matching node e in the temporary variable node
break;
}
p = e;
} while ((e = e.next) != null);
}
}
//a target node was found
if (node != null && (!matchValue || (v = node.value) == value ||
(value != null && value.equals(v)))) {
if (node instanceof TreeNode) //already treeified: delegate to TreeNode.removeTreeNode
((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
else if (node == p) //plain node at the bucket head: its successor becomes the new head
tab[index] = node.next;
else //plain node mid-chain
p.next = node.next; //the predecessor's next skips over the removed node
++modCount;
--size;
afterNodeRemoval(node); //hook for the LinkedHashMap subclass
return node;
}
}
return null;
}
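The matchValue flag above is what backs the two-argument Map.remove; both removal flavors through the public API:

```java
import java.util.HashMap;
import java.util.Map;

public class RemoveDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        System.out.println(map.remove("a"));     // 1: removed, old value returned
        System.out.println(map.remove("zzz"));   // null: no such key
        System.out.println(map.remove("b", 99)); // false: matchValue, value differs, kept
        System.out.println(map.remove("b", 2));  // true: key AND value match, removed
        System.out.println(map.size());          // 0
    }
}
```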
//2. remove all
public void clear() {
Node<K,V>[] tab;
modCount++;
if ((tab = table) != null && size > 0) {
size = 0;
for (int i = 0; i < tab.length; ++i)
tab[i] = null; //null out every bucket
}
}
update ×0
query ×8
//1. total number of key-value mappings
public int size() { return size; }
//2. emptiness check
public boolean isEmpty() { return size == 0; }
//3. look up a key, returning its value or null
public V get(Object key) {
Node<K,V> e;
return (e = getNode(hash(key), key)) == null ? null : e.value;
}
final Node<K,V> getNode(int hash, Object key) {
Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
//IF1: table non-null && table.length > 0 && the bucket for this hash is non-null
if ((tab = table) != null && (n = tab.length) > 0 &&
(first = tab[(n - 1) & hash]) != null) {
//IF2: head node matches: same hash && (identical key || key non-null && key.equals(head key)) => return the head
if (first.hash == hash && // always check first node
((k = first.key) == key || (key != null && key.equals(k))))
return first;
//IF3: the head has a successor: walk the rest of the bucket
if ((e = first.next) != null) {
//IF4: the bucket is treeified: search via TreeNode.getTreeNode
if (first instanceof TreeNode)
return ((TreeNode<K,V>)first).getTreeNode(hash, key);
//not treeified: simply walk the chain
do {
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
return e;
} while ((e = e.next) != null);
}
}
return null;
}
//4. key containment
public boolean containsKey(Object key) { return getNode(hash(key), key) != null; }
//5. value containment
public boolean containsValue(Object value) {
Node<K,V>[] tab; V v;
if ((tab = table) != null && size > 0) { //brute-force nested traversal
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next) {
if ((v = e.value) == value ||
(value != null && value.equals(v)))
return true;
}
}
}
return false;
}
//6. the key-set view [a HashMap can be seen as KeySet + Values, or as EntrySet]
public Set<K> keySet() {
Set<K> ks = keySet;
if (ks == null) {
ks = new KeySet(); //KeySet stores no keys itself; its iterator() walks tab[] underneath
keySet = ks;
}
return ks;
}
final class KeySet extends AbstractSet<K> {
public final int size() { return size; }
public final void clear() { HashMap.this.clear(); }
public final Iterator<K> iterator() { return new KeyIterator(); }
public final boolean contains(Object o) { return containsKey(o); }
public final boolean remove(Object key) {
return removeNode(hash(key), key, null, false, true) != null;
}
public final Spliterator<K> spliterator() {
return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
}
public final void forEach(Consumer<? super K> action) {
Node<K,V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null) {
int mc = modCount;
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next)
action.accept(e.key);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
}
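KeySet.remove above delegates to removeNode, which is why the views are live in both directions:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ViewDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        Set<String> keys = map.keySet();          // a live view, not a copy
        keys.remove("a");                         // delegates to removeNode on the map
        System.out.println(map.containsKey("a")); // false: removal visible in the map
        map.put("c", 3);
        System.out.println(keys.size());          // 2: additions visible in the view
    }
}
```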
//7. the values view [analogous to KeySet]
public Collection<V> values() {
Collection<V> vs = values;
if (vs == null) {
vs = new Values();
values = vs;
}
return vs;
}
final class Values extends AbstractCollection<V> {
public final int size() { return size; }
public final void clear() { HashMap.this.clear(); }
public final Iterator<V> iterator() { return new ValueIterator(); }
public final boolean contains(Object o) { return containsValue(o); }
public final Spliterator<V> spliterator() {
return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);
}
public final void forEach(Consumer<? super V> action) {
Node<K,V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null) {
int mc = modCount;
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next)
action.accept(e.value);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
}
//8. the entry-set view
public Set<Map.Entry<K,V>> entrySet() {
Set<Map.Entry<K,V>> es;
return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
}
final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
public final int size() { return size; }
public final void clear() { HashMap.this.clear(); }
public final Iterator<Map.Entry<K,V>> iterator() {
return new EntryIterator();
}
public final boolean contains(Object o) {
if (!(o instanceof Map.Entry))
return false;
Map.Entry<?,?> e = (Map.Entry<?,?>) o;
Object key = e.getKey();
Node<K,V> candidate = getNode(hash(key), key);
return candidate != null && candidate.equals(e);
}
public final boolean remove(Object o) {
if (o instanceof Map.Entry) {
Map.Entry<?,?> e = (Map.Entry<?,?>) o;
Object key = e.getKey();
Object value = e.getValue();
return removeNode(hash(key), key, value, true, true) != null;
}
return false;
}
public final Spliterator<Map.Entry<K,V>> spliterator() {
return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
}
public final void forEach(Consumer<? super Map.Entry<K,V>> action) {
Node<K,V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null) {
int mc = modCount;
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next)
action.accept(e);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
}
//Java 8 Map interface default methods
add ×1, remove ×1, update ×7, query ×2
add ×1
//1. insert K-V only when the key is absent
V putIfAbsent(K key, V value)
remove ×1
//1. remove a single mapping, only when both key and value match
boolean remove(Object key, Object value)
update ×7
//1. replace the value for a key only if it currently equals oldValue; returns boolean
boolean replace(K key, V oldValue, V newValue)
//2. replace the value for a key; returns the old value or null
V replace(K key, V value)
//3. compute and store a value when the key is absent (or mapped to null)
V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction)
//4. recompute the value when the key is present
V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction)
//5. recompute via a BiFunction, whether or not the key is present
V compute(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction)
//6. recompute the values of all keys
void replaceAll(BiFunction<? super K, ? super V, ? extends V> function)
//7. merge: a null result removes the entry; an absent key makes this an insert
V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> remappingFunction)
query ×2
//1. look up a key with a default-value fallback
V getOrDefault(Object key, V defaultValue)
//2. traverse all mappings
void forEach(BiConsumer<? super K, ? super V> action)
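A few of these defaults in action; note merge's null-removal rule and getOrDefault's non-inserting fallback:

```java
import java.util.HashMap;
import java.util.Map;

public class DefaultMethodsDemo {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        // merge: inserts 1 when absent, otherwise combines old and new values
        for (String w : new String[] {"a", "b", "a"})
            counts.merge(w, 1, Integer::sum);
        System.out.println(counts.get("a"));             // 2
        System.out.println(counts.getOrDefault("z", 0)); // 0: fallback, nothing inserted
        // computeIfAbsent: compute and cache a value only on first access
        counts.computeIfAbsent("len", k -> k.length());
        System.out.println(counts.get("len"));           // 3
        // merge with a remapping function returning null removes the entry
        counts.merge("a", 1, (oldV, newV) -> null);
        System.out.println(counts.containsKey("a"));     // false
    }
}
```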
public Object clone()
loadFactor()
capacity()
private void writeObject()
private void readObject()
HashIterator
HashMapSpliterator
KeySpliterator
ValueSpliterator
EntrySpliterator
The following package-private methods are designed to be overridden by LinkedHashMap, but not by any other subclass.
They are also used by LinkedHashMap, the view classes, and HashSet.
newNode()
replacementNode()
newTreeNode()
replacementTreeNode()
reinitialize()
afterNodeAccess()
afterNodeInsertion()
afterNodeRemoval()
internalWriteEntries()
22,832 chars, 588 lines; the rest deserves a post of its own.