JDK source code: HashMap

Reference: https://www.jianshu.com/p/aa017a3ddc40

1. Viewing the source code in IDEA


2. Reading the source code

2.1 The Node data structure

 static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;      // hash of the key, used to compute the bucket index
        final K key;         // the key
        V value;             // the value mapped to the key
        Node<K,V> next;      // the next node in the same bucket (linked list)

        Node(int hash, K key, V value, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        public final K getKey()        { return key; }
        public final V getValue()      { return value; }
        public final String toString() { return key + "=" + value; }

        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (o == this)
                return true;
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>)o;
                if (Objects.equals(key, e.getKey()) &&
                    Objects.equals(value, e.getValue()))
                    return true;
            }
            return false;
        }
    }
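To see how these nodes surface through the public API, here is a small usage sketch (my own example, not part of the JDK source): every entry returned by entrySet() is backed by one of these Node objects, so toString() and setValue() behave exactly as defined above.

    import java.util.HashMap;
    import java.util.Map;

    public class NodeDemo {
        public static void main(String[] args) {
            Map<String, Integer> map = new HashMap<>();
            map.put("a", 1);

            // each entry is one of HashMap's internal Node objects
            for (Map.Entry<String, Integer> e : map.entrySet()) {
                System.out.println(e);        // prints "a=1" via Node.toString()
                e.setValue(2);                // Node.setValue writes through to the map
            }
            System.out.println(map.get("a")); // 2
        }
    }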

2.2 Some fields of HashMap

  /**
     * The table, initialized on first use, and resized as
     * necessary.  When allocated, length is always a power of two.
     * (We also tolerate length zero in some operations to allow
     * bootstrapping mechanics that are currently not needed.)
     */
    transient Node<K,V>[] table;

    /**
     * The number of key-value mappings contained in this map.
     */
    transient int size;

    /**
     * The number of times this HashMap has been structurally modified.
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,
     * rehash).  This field is used to make iterators on Collection-views of
     * the HashMap fail-fast.  (See ConcurrentModificationException).
     */
    transient int modCount;

    /**
     * The next size value at which to resize (capacity * load factor);
     * in other words, threshold is the trigger for resizing, and it is
     * determined by the load factor.
     *
     * @serial
     */
    // (The javadoc description is true upon serialization.
    // Additionally, if the table array has not been allocated, this
    // field holds the initial array capacity, or zero signifying
    // DEFAULT_INITIAL_CAPACITY.)
    int threshold;

    /**
     * The load factor for the hash table.
     *
     * @serial
     */
    final float loadFactor;

    /**
     * The default initial capacity - MUST be a power of two.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The load factor used when none specified in constructor (0.75f).
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;
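To make the relationship between capacity, load factor and threshold concrete, here is a minimal sketch (my own arithmetic, not JDK code) using the default values above:

    public class ThresholdDemo {
        public static void main(String[] args) {
            int capacity = 16;          // DEFAULT_INITIAL_CAPACITY
            float loadFactor = 0.75f;   // DEFAULT_LOAD_FACTOR

            // threshold = capacity * load factor: the map resizes once size exceeds it
            int threshold = (int) (capacity * loadFactor);
            System.out.println("threshold = " + threshold); // 12

            // after a resize the capacity doubles, and so does the threshold
            System.out.println("next threshold = " + (int) (capacity * 2 * loadFactor)); // 24
        }
    }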
2.3 The put operation

  /**
     * Computes key.hashCode() and spreads (XORs) higher bits of hash
     * to lower.  Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide.  (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)  So we
     * apply a transform that spreads the impact of higher bits
     * downward.  There is a tradeoff between speed, utility, and
     * quality of bit-spreading.  Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     */
  static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }
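To see what the spreading and the power-of-two mask do in practice, here is a small sketch (my own example, not JDK code) that reproduces the hash() computation and the (n - 1) & hash index step for a few string keys:

    public class HashSpreadDemo {
        // same spreading as HashMap.hash(): XOR the high 16 bits into the low 16
        static int hash(Object key) {
            int h;
            return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
        }

        public static void main(String[] args) {
            int n = 16; // table length, always a power of two
            for (String key : new String[] {"alpha", "beta", "gamma"}) {
                int h = hash(key);
                int index = (n - 1) & h; // only the low bits select the bucket
                System.out.printf("%-6s hashCode=%08x spread=%08x index=%d%n",
                                  key, key.hashCode(), h, index);
            }
        }
    }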
 final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                   boolean evict) {
        Node<K,V>[] tab; Node<K,V> p; int n, i;
        // if the table is null or has length 0,
        if ((tab = table) == null || (n = tab.length) == 0)
            // call resize() to initialize it
            n = (tab = resize()).length;
        // compute the bucket index from the hash; if that slot is empty,
        // create a new node there - no hash collision in this case
        if ((p = tab[i = (n - 1) & hash]) == null)
            tab[i] = newNode(hash, key, value, null);
        else {
            // the slot is already occupied: a hash collision has occurred
            Node<K,V> e; K k;
            // the first node in the bucket has the same key, so its value
            // will simply be overwritten below
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                e = p;
            // if the bucket is a red-black tree, insert the node via putTreeVal
            else if (p instanceof TreeNode)
                e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
            else {
                for (int binCount = 0; ; ++binCount) {
                    if ((e = p.next) == null) {
                        // reached the end of the list: append the new node at the tail
                        p.next = newNode(hash, key, value, null);
                        // if the bin has grown past the threshold, convert it to a red-black tree
                        if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                            treeifyBin(tab, hash);
                        break;
                    }
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        break;
                    p = e;
                }
            }
            // a mapping for the key already exists: the new value replaces the old one
            if (e != null) { // existing mapping for key
                V oldValue = e.value;
                if (!onlyIfAbsent || oldValue == null)
                    e.value = value;
                afterNodeAccess(e);
                return oldValue;
            }
        }
        // structural modification
        ++modCount;
        // resize if the new size exceeds the threshold
        if (++size > threshold)
            resize();
        afterNodeInsertion(evict);
        return null;
    }
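A quick usage sketch of the behavior implemented above (my own example, not JDK code): put() returns the previous value when it overwrites an existing mapping, and putIfAbsent() goes through the same putVal with onlyIfAbsent set to true, so an existing value is kept.

    import java.util.HashMap;
    import java.util.Map;

    public class PutDemo {
        public static void main(String[] args) {
            Map<String, Integer> map = new HashMap<>();

            System.out.println(map.put("a", 1));          // null - no previous mapping
            System.out.println(map.put("a", 2));          // 1 - old value returned, value overwritten

            // putIfAbsent calls putVal with onlyIfAbsent = true, so the existing value survives
            System.out.println(map.putIfAbsent("a", 3));  // 2
            System.out.println(map.get("a"));             // 2
        }
    }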

2.4 The resize operation

  final Node<K,V>[] resize() {
        Node<K,V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        if (oldCap > 0) {
            // already at the maximum capacity: stop resizing, just raise the threshold
            if (oldCap >= MAXIMUM_CAPACITY) {
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            // otherwise double the capacity, and double the threshold with it
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                newThr = oldThr << 1; // double threshold
        }
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        else {               // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        threshold = newThr;
        @SuppressWarnings({"rawtypes","unchecked"})
            Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        table = newTab;
        if (oldTab != null) {
            // migrate every bin from the old table into the new one
            for (int j = 0; j < oldCap; ++j) {
                Node<K,V> e;
                if ((e = oldTab[j]) != null) {
                    oldTab[j] = null;
                    if (e.next == null)
                        newTab[e.hash & (newCap - 1)] = e;
                    else if (e instanceof TreeNode)
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    else { // preserve order
                        // split the bin: nodes with (hash & oldCap) == 0 stay at index j,
                        // the others move to index j + oldCap
                        Node<K,V> loHead = null, loTail = null;
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next;
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        } while ((e = next) != null);
                        if (loTail != null) {
                            loTail.next = null;
                            newTab[j] = loHead;
                        }
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }

Resizing means allocating a new, larger table array, rehashing every record from the old table into the new one, and then releasing the old table. There are two key points: allocating the new array (and releasing the old one), and recomputing each record's position so it can be inserted into the new table. The first question is how much the capacity grows: from the code above, every resize doubles the table's capacity, subject to a maximum; once the HashMap has reached MAXIMUM_CAPACITY, no further resizing takes place. The second question is how HashMap migrates records from the old table to the new one. As mentioned above, the table length is always a power of two, and the interesting consequence is that when the capacity doubles, a record's position in the new table is either exactly the same as before or exactly oldCap slots higher, i.e. (oldIndex + oldCap).

Suppose the old table size is 4, so after resizing it becomes 8. For an element A with a hash value of 3, its position in the old table is (3 & 3) = 3; its new position is (3 & 7) = 3, so A's index does not change. Now take an element B with a hash value of 47: its position in the old table is (47 & 3) = 3, but in the new table it is (47 & 7) = 7, i.e. (3 + 4), shifted by exactly oldCap slots.

So how is a record's new position determined quickly? The index is computed as (hashCode & (length - 1)), and doubling the length adds one more 1-bit to (length - 1). In other words, one additional bit of the hashCode takes part in the calculation: if that newly included bit is 0, the index stays the same; if it is 1, the record moves to (oldIndex + oldCap). Here is the example again:

Taking the same two elements A and B, with hash values 3 and 47: when the table length is 4, (length - 1) = 3 = (11) in binary, so two bits of each hash participate in computing the index. In binary, A and B are:

3 : 000011
47: 101111

With a table length of 4 (mask 000011):

3 : 000011 & 000011 = 3
47: 101111 & 000011 = 3

After resizing, the length becomes 8 (mask 000111):

3 : 000011 & 000111 = 3
47: 101111 & 000111 = 7

For 3, the newly included bit is 0, so its index does not change; for 47, the newly included bit is 1, so its index becomes (oldIndex + oldCap).
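The resize code above relies on exactly this observation: it tests (e.hash & oldCap) instead of recomputing the full index. A small sketch (my own example, not JDK code) of the same check for the hash values 3 and 47:

    public class ResizeIndexDemo {
        public static void main(String[] args) {
            int oldCap = 4;
            int newCap = oldCap << 1; // 8

            for (int hash : new int[] {3, 47}) {
                int oldIndex = hash & (oldCap - 1);
                // the bit tested by (hash & oldCap) is exactly the new bit of the mask
                int newIndex = (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
                System.out.printf("hash=%d old=%d new=%d (full calc: %d)%n",
                                  hash, oldIndex, newIndex, hash & (newCap - 1));
            }
        }
    }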

2.5 The get operation

 public V get(Object key) {
        Node<K,V> e;
        return (e = getNode(hash(key), key)) == null ? null : e.value;
    }

    final Node<K,V> getNode(int hash, Object key) {
        Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (first = tab[(n - 1) & hash]) != null) {
            if (first.hash == hash && // always check first node
                ((k = first.key) == key || (key != null && key.equals(k))))
                return first;
            if ((e = first.next) != null) {
                if (first instanceof TreeNode)
                    return ((TreeNode<K,V>)first).getTreeNode(hash, key);
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        return e;
                } while ((e = e.next) != null);
            }
        }
        return null;
    }

getNode first takes a local reference to the current table, then uses the hash of the requested key to locate the corresponding index. If that slot is null, no record has been stored there, so the record we are looking for does not exist and null is returned. If the slot is not null, at least one record is stored there: the first node is checked first and returned directly if it matches; otherwise the lookup continues along either the linked list or the red-black tree rooted at that slot, returning the matching record if one is found and null otherwise.

 public boolean containsKey(Object key) {
        return getNode(hash(key), key) != null;
    }

containsKey calls getNode to fetch a Node from the table: if it returns null, the mapping does not exist; otherwise it does. Both containsKey and get go through getNode, but they differ in one respect: get can still return null even when the Node it finds is not null, because the Node's value may itself be null. So the appropriate method should be chosen for the purpose at hand.
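A short sketch of the distinction described above (my own example): a key mapped to a null value makes get() return null even though containsKey() reports the key as present.

    import java.util.HashMap;
    import java.util.Map;

    public class NullValueDemo {
        public static void main(String[] args) {
            Map<String, String> map = new HashMap<>();
            map.put("k", null); // HashMap allows null values (and one null key)

            System.out.println(map.get("k"));         // null - looks like "missing"
            System.out.println(map.containsKey("k")); // true - the key is actually there
            System.out.println(map.get("missing"));   // null - genuinely missing
        }
    }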
