ConcurrentHashMap Source Code Analysis (Java 8)

1. ConcurrentHashMap Features

Note: because the name ConcurrentHashMap is long, CHM is used below as shorthand.

  • Like other thread-safe collections, but no access operation on a CHM ever locks the entire table, which is what makes its operations so efficient.
  • Table accesses require volatile/atomic reads, writes, and CASes, done via intrinsics (sun.misc.Unsafe).
  • Insertion into an empty bin is done with a single CAS; all other updates (insert, delete, update) require a lock.
  • Since JDK 8, CHM drops the Segment design in favor of CAS plus per-bin locking to guarantee thread safety.
  • Structure: bin array + linked lists + red-black trees
    (the red-black trees are wrapped inside TreeBins)
  • Resizing: capacity always doubles.
  • Constants:
    • Default capacity: 16
    • Load factor: 0.75
      Note: overriding this value in a constructor affects only the initial table capacity; the actual floating-point value is not used afterwards.
    • List-to-tree threshold: 8 nodes in a bin
    • Tree-to-list threshold: 6 nodes in a bin
    • Minimum table capacity for treeification: 64 (below this, an overfull bin triggers a resize instead)
    • Minimum resize stride: 16 bins per transfer step (matching the default of 16 bins)
  • Neither keys nor values of a CHM may be null.
  • When retrievals (get, etc.) and updates (remove, etc.) overlap, a retrieval reflects the results of the most recently completed update holding upon its onset; formally, an update of a key happens-before any retrieval that observes the updated value.
  • Results of aggregate status methods such as size, isEmpty, and containsValue are typically useful only when the map is not undergoing concurrent updates in other threads.
  • Bulk operations accept a parallelism threshold argument, parallelismThreshold.
    • If the estimated current size of the map is smaller than the given threshold, the method runs sequentially.
    • A threshold of Long.MAX_VALUE therefore suppresses all parallelism.
    • A threshold of 1 yields maximal parallelism by splitting into subtasks on ForkJoinPool.commonPool().
  • Parallel operations are usually, but not always, faster than their sequential forms. They can be slower when:
    • the bookkeeping needed to parallelize a computation is more expensive than the computation itself, so parallel forms on small maps can underperform sequential ones;
    • all processors are busy with unrelated tasks, so little actual parallelism is achieved.
  • Serialization is supported; clone (shallow copy) is not.
  • Under random hash codes, the probability that two threads contend on the lock while accessing distinct elements in the same bin is roughly 1 / (8 * #elements).
  • Why TreeBins exist: they shield us from the worst-case effects of overfilled bins while resizes are in progress.
  • After a resize, each element either keeps its index or moves by a power-of-two offset. Unnecessary node creation is avoided by detecting when old nodes can be reused; on average, only about one-sixth of the nodes need to be cloned when the table doubles.
  • While the table is resizing, other threads may join in and help transfer bins (rather than blocking on locks), shortening the average aggregate wait.
  • On encountering a forwarding node, a traversal moves on to the new table without revisiting nodes.
  • Could TreeMap be used instead of TreeBin?
    • No.
    • Reason: TreeBin lookups and related operations use a special form of comparison. The elements held in a TreeBin may not all be Comparable with one another, so compareTo() cannot always be used to compare them. To cope with this, the tree is ordered primarily by hash value, and then by Comparable.compareTo where applicable. When looking up a node, if elements are not comparable or compare as ties, both children may have to be searched. If all elements are non-comparable and share the same hash, this degenerates into a full scan.
  • TreeBins also need an extra locking mechanism. A list can still be traversed while it is being updated, but a red-black tree cannot, because rebalancing may change the root and the links between nodes.
  • TreeBins use a simple read-write lock piggybacking on the main synchronization strategy:
    • structural adjustments for insertion and deletion take the lock;
    • if readers preceded a structural adjustment, the adjustment must wait for them to finish, following the happens-before principle.
  • The class extends AbstractMap, but only for compatibility with previous versions of this class.
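Two of the behaviors above (null rejection, and the frequency-map idiom built on computeIfAbsent plus LongAdder) can be checked directly against the public API. The class name ChmBasics below is just an illustrative placeholder:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class ChmBasics {
    public static void main(String[] args) {
        // scalable frequency map: one LongAdder per key, created on first use
        ConcurrentHashMap<String, LongAdder> freqs = new ConcurrentHashMap<>();
        for (String w : new String[]{"a", "b", "a"})
            freqs.computeIfAbsent(w, k -> new LongAdder()).increment();
        System.out.println(freqs.get("a").sum()); // 2

        // CHM rejects null keys (and values) with a NullPointerException
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        boolean rejected = false;
        try {
            m.put(null, 1);
        } catch (NullPointerException e) {
            rejected = true;
        }
        System.out.println(rejected); // true
    }
}
```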

2. Analysis of Selected Inner Classes, Methods, and Fields

  • Hash computation
    • spread(int h)
      • hash = (h ^ (h >>> 16)) & HASH_BITS; // mask off the sign bit
      • h: the key's hashCode
    • Analysis: why XOR the hash with its unsigned right shift by 16? Because the table uses power-of-two masking, sets of hashes that vary only in bits above the current mask would always collide (for example, sets of Float keys holding consecutive whole numbers in small tables). So a transform is applied that spreads the impact of higher bits downward. This is a tradeoff between speed, utility, and quality of bit-spreading: many common sets of hashes are already reasonably distributed (and gain nothing from spreading), and red-black trees already handle large collision sets in bins, so the cheapest possible shift-and-XOR is used to reduce systematic lossage and to fold in the highest bits, which would otherwise never be used in index calculations because of table bounds (the & mask).
  • Sizing method:
    • tableSizeFor(int c)
      • c: the one-argument constructor passes c = 1.5 * initialCapacity + 1
      • returns: the smallest power of two >= c
    • Analysis:
      if c is a power of two, returns c;
      otherwise, returns the first power of two greater than c;
      e.g. c = 16 returns 16; c = 30 returns 32.
  • Three table-access methods
    • Notes:
      • All use volatile reads/writes, so element accesses return correct results even while a resize is in progress.
      • Calls to setTabAt() always occur inside locked regions, so in principle full volatile semantics are not required, but the current code conservatively keeps them.
    • Methods
      • static final <K,V> Node<K,V> tabAt(Node<K,V>[] tab, int i)
      • static final <K,V> boolean casTabAt(Node<K,V>[] tab, int i,
        Node<K,V> c, Node<K,V> v)
      • static final <K,V> void setTabAt(Node<K,V>[] tab, int i, Node<K,V> v)
  • The bin array: transient volatile Node<K,V>[] table;
  • sizeCtl: a variable controlling table initialization and resizing.
    • Negative: the table is being initialized or resized
    • sizeCtl = -1: initialization in progress
    • sizeCtl = -(1 + n): n threads are currently resizing
    • While the table is uninitialized, holds the initial table size to use on creation, or 0 by default. After initialization, holds the element count at which the next resize is triggered.
  • 5 constructors
    • ConcurrentHashMap()
    • ConcurrentHashMap(int initialCapacity)
    • ConcurrentHashMap(Map<? extends K, ? extends V> m)
    • ConcurrentHashMap(int initialCapacity, float loadFactor)
    • ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel)

3. Code Analysis

package sourcecode.analysis;

    /**
     * @Author: cxh
     * @CreateTime: 18/4/3 16:29
     * @ProjectName: JavaBaseTest
     */

    import java.io.ObjectStreamField;
    import java.io.Serializable;
    import java.lang.*;
    import java.lang.reflect.ParameterizedType;
    import java.lang.reflect.Type;
    import java.util.AbstractMap;
    import java.util.Arrays;
    import java.util.Collection;
    import java.util.Comparator;
    import java.util.Enumeration;
    import java.util.HashMap;
    import java.util.Hashtable;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.NoSuchElementException;
    import java.util.Set;
    import java.util.Spliterator;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.CountedCompleter;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.atomic.AtomicReference;
    import java.util.concurrent.locks.LockSupport;
    import java.util.concurrent.locks.ReentrantLock;
    import java.util.function.BiConsumer;
    import java.util.function.BiFunction;
    import java.util.function.BinaryOperator;
    import java.util.function.Consumer;
    import java.util.function.DoubleBinaryOperator;
    import java.util.function.Function;
    import java.util.function.IntBinaryOperator;
    import java.util.function.LongBinaryOperator;
    import java.util.function.ToDoubleBiFunction;
    import java.util.function.ToDoubleFunction;
    import java.util.function.ToIntBiFunction;
    import java.util.function.ToIntFunction;
    import java.util.function.ToLongBiFunction;
    import java.util.function.ToLongFunction;
    import java.util.stream.Stream;

    /**
     * A hash table supporting full concurrency of retrievals and high expected concurrency for updates.
     * ConcurrentHashMap obeys the same functional specification as Hashtable, and includes versions of
     * methods corresponding to each method of Hashtable.
     * However, even though all operations are thread-safe, retrieval operations entail no locking, and
     * there is no support for locking the entire table.
     * This class is fully interoperable with Hashtable in programs that rely on its thread safety but
     * not on its synchronization details.
     *
     * Retrieval operations (including get) generally do not block, so they may overlap with update
     * operations (such as put and remove).
     * Retrievals reflect the results of the most recently completed update operations holding upon
     * their onset. More formally, an update operation for a given key bears a happens-before relation
     * with any retrieval for that key reporting the updated value.
     * For aggregate operations such as putAll and clear, concurrent retrievals may reflect insertion or
     * removal of only some entries. Similarly, Iterators, Spliterators and Enumerations return elements
     * reflecting the state of the hash table at some point at or since their creation.
     * They do not throw ConcurrentModificationException. However, iterators are designed to be used by
     * only one thread at a time.
     * Bear in mind that the results of aggregate status methods, including size, isEmpty and
     * containsValue, are typically useful only when a map is not undergoing concurrent updates in other
     * threads.
     * Otherwise the results of these methods reflect transient states that may be adequate for
     * monitoring or estimation purposes, but not for program control.
     *
     * The table is dynamically expanded when there are too many collisions (i.e., keys with distinct
     * hash codes that fall into the same slot modulo the table size), doubling the number of slots,
     * with the expected average effect of maintaining roughly two bins per mapping (corresponding to
     * the 0.75 load-factor threshold for resizing).
     * There may be much variance around this average as mappings are added and removed, but overall
     * this maintains a commonly accepted time/space tradeoff for hash tables.
     * However, resizing this or any other kind of hash table may be a relatively slow operation. When
     * possible, it is a good idea to provide a size estimate via the initialCapacity constructor
     * argument. An additional optional loadFactor constructor argument provides a further means of
     * customizing initial table capacity, by specifying the table density to be used in calculating
     * the amount of space to allocate for the given number of elements. Also, for compatibility with
     * previous versions of this class, constructors may optionally specify an expected
     * concurrencyLevel as an additional hint for internal sizing.
     * Note that using many keys with exactly the same hashCode() is a sure way to slow down the
     * performance of any hash table. To ameliorate the impact, when keys are Comparable this class may
     * use comparison order among keys to help break ties.
     *
     * A Set projection of a ConcurrentHashMap may be created (using newKeySet() or newKeySet(int)), or
     * viewed (using keySet(Object)) when only keys are of interest and the mapped values are (perhaps
     * transiently) not used or all take the same mapping value.
     *
     * A ConcurrentHashMap can be used as a scalable frequency map (a form of histogram or multiset) by
     * using LongAdder values and initializing via computeIfAbsent. For example, to add a count to a
     * ConcurrentHashMap variable freqs, you can use:
     * freqs.computeIfAbsent(key, k -> new LongAdder()).increment();
     *
     * This class and its views and iterators implement all of the optional methods of the Map and
     * Iterator interfaces.
     *
     * Like Hashtable, ConcurrentHashMap does not allow null to be used as a key or value.
     *
     * ConcurrentHashMap supports a set of sequential and parallel bulk operations that, unlike most
     * Stream methods, are designed to be safely, and often sensibly, applied even to maps being
     * concurrently updated by other threads; for example, when computing a snapshot summary of the
     * values in a shared registry.
     * There are three kinds of operations, each with four forms, accepting functions with keys,
     * values, entries, and (key, value) arguments.
     * Because the elements of a ConcurrentHashMap are not ordered in any particular way, and may be
     * processed in different orders in different parallel executions, the correctness of supplied
     * functions should not depend on any ordering, or on any other objects or values that may
     * transiently change while the computation is in progress; and, except for forEach actions, should
     * ideally not mutate the map itself.
     * Bulk operations on Map.Entry objects do not support method setValue().
     *
     * forEach: performs a given action on each element.
     *
     * search: returns the first available non-null result of applying a given function to each
     * element, skipping further search once one is found.
     *
     * reduce: accumulates each element.
     * The supplied reduction function cannot rely on element ordering.
     *
     * Bulk operations accept a parallelism threshold argument, parallelismThreshold.
     * Methods proceed sequentially if the current map size is estimated to be smaller than the given
     * threshold.
     * So a threshold of Long.MAX_VALUE suppresses all parallelism,
     *    while a threshold of 1 results in maximal parallelism by partitioning into subtasks via
     *    ForkJoinPool.commonPool().
     * Normally, you would initially choose one of these extreme values, and then measure the
     * performance of in-between values that trade off overhead versus throughput.
     *
     * The concurrency properties of bulk operations follow from those of ConcurrentHashMap:
     * insertions and updates happen-before the accesses that observe them.
     * The result of any bulk operation reflects the composition of these per-element relations (but is
     * not necessarily atomic with respect to the map as a whole unless it is somehow known to be
     * quiescent).
     * Conversely, because keys and values in the map are never null, null serves as a reliable atomic
     * indicator of the current lack of any result.
     * To maintain this property, null serves as an implicit basis for all non-scalar reduction
     * operations.
     * For the double, long, and int versions, the basis should be one that, when combined with any
     * other value, returns that other value (more formally, it should be the identity element for the
     * reduction).
     * Most common reductions have these properties; for example, computing a sum with basis 0 or a
     * minimum with basis MAX_VALUE.
     *
     * Search and transformation functions provided as arguments should similarly return null to
     * indicate the lack of any result (in which case it is not used).
     * In the case of mapped reductions, this also enables transformations to serve as filters,
     * returning null if the element should not be combined.
     * You can create compound transformations and filterings before use in search or reduce
     * operations by composing them yourself under this "null means there is nothing there now" rule.
     *
     * Methods accepting and/or returning Entry arguments maintain key-value associations. Note that
     * you can use AbstractMap.SimpleEntry(k, v) to supply a blank entry argument.
     *
     * Bulk operations may complete abruptly, throwing an exception encountered in the application of
     * a supplied function.
     * Bear in mind, when handling such exceptions, that other concurrently executing functions could
     * also have thrown exceptions, or would have done so if the first exception had not occurred.
     *
     * Parallel operations are usually, but not always, faster than sequential ones.
     * Parallel forms may be slower when:
     * 1. the underlying work to parallelize a computation is more expensive than the computation
     *    itself, in which case parallel forms on small maps may execute more slowly than sequential
     *    forms;
     * 2. all processors are busy performing unrelated tasks, in which case parallelization may not
     *    achieve much actual parallelism.
     *
     * Serialization is supported; clone (shallow copy) is not.
     *
     * @since 1.5
     * @author Doug Lea
     * @param <K> the type of keys maintained by this map
     * @param <V> the type of mapped values
     */
    public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
            implements ConcurrentMap<K,V>, Serializable {
        private static final long serialVersionUID = 7249069246763182397L;

        /*
         * Overview:
         *
         * The primary design goal of this hash table is to maintain concurrent readability (typically
         * method get(), but also iterators and related methods) while minimizing update contention.
         * Secondary goals are to keep space consumption about the same or better than
         * java.util.HashMap, and to support high initial insertion rates on an empty table by many
         * threads.
         *
         * This map usually acts as a binned (bucketed) hash table. Each key-value mapping is held in
         * a Node. Most nodes are instances of the basic Node class with hash, key, value, and next
         * fields. However, various subclasses exist: TreeNodes are arranged in balanced trees, not
         * lists; TreeBins hold the roots of sets of TreeNodes; ForwardingNodes are placed at the
         * heads of bins during resizing; and ReservationNodes are used as placeholders while
         * establishing values in computeIfAbsent and related methods.
         * The types TreeBin, ForwardingNode, and ReservationNode do not hold normal user keys,
         * values, or hashes, and are readily distinguishable during search because they have negative
         * hash fields and null key and value fields. (These special nodes are either uncommon or
         * transient, so the impact of carrying some unused fields is insignificant.)
         *
         *
         * The table is lazily initialized to a power-of-two size upon the first insertion. Each bin
         * in the table normally contains a list of Nodes (most often, the list has only zero or one
         * Node). Table accesses require volatile/atomic reads, writes, and CASes. Because there is no
         * other way to arrange this without adding further indirections, we use intrinsics
         * (sun.misc.Unsafe) operations.
         *
         * We use the top (sign) bit of Node hash fields for control purposes -- it is available
         * anyway because of addressing constraints. Nodes with negative hash fields are specially
         * handled or ignored in map methods.
         *
         * Insertion of the first node in an empty bin is performed by just CASing it to the bin. This
         * is by far the most common case for put operations under most key/hash distributions. Other
         * update operations (insert, delete, and update) require locks. Because wasting the space to
         * associate a distinct lock object with each bin would be costly, we instead use the first
         * node of a bin list itself as the lock. Locking support for these locks relies on the
         * builtin "synchronized" monitors.
         *
         * Using the first node of a list as a lock does not by itself suffice, though: when a node is
         * locked, any update must first validate that it is still the first node after locking, and
         * retry if not. Because new nodes are always appended to lists, once a node is first in a
         * bin, it remains first until deleted or the bin becomes invalidated (upon resizing).
         *
         * The main disadvantage of per-bin locks is that other update operations on other nodes in a
         * bin list protected by the same lock can stall, for example when user equals() or mapping
         * functions take a long time. However, statistically, under random hash codes, this is not a
         * common problem. Ideally, given the resizing threshold of 0.75, the frequency of nodes in
         * bins follows a Poisson distribution with a parameter of about 0.5 on average, although with
         * a large variance because of resizing granularity. Ignoring variance, the expected
         * occurrences of list size k are (exp(-0.5) * pow(0.5, k) / factorial(k)). The first values
         * are:
         * 第一个值是:
         * 0:    0.60653066
         * 1:    0.30326533
         * 2:    0.07581633
         * 3:    0.01263606
         * 4:    0.00157952
         * 5:    0.00015795
         * 6:    0.00001316
         * 7:    0.00000094
         * 8:    0.00000006
         * more: less than 1 in ten million
         *
         * Under random hashes, the lock contention probability for two threads accessing distinct
         * elements in the same bin is roughly 1 / (8 * #elements).
         *
         * Actual hash code distributions encountered in practice sometimes deviate significantly
         * from uniform randomness. This includes the case when N > (1 << 30), so some keys MUST
         * collide. So we use a secondary strategy that applies when the number of nodes in a bin
         * exceeds a threshold. These TreeBins use a balanced tree to hold nodes (a specialized form
         * of red-black tree), bounding search time to O(log N). Each search step in a TreeBin is at
         * least twice as slow as in a regular list, but given that N cannot exceed (1 << 64) (before
         * running out of addresses), this bounds search steps, lock hold times, etc., to reasonable
         * constants (roughly 100 nodes inspected per operation, worst case). TreeBin nodes
         * (TreeNodes) also maintain the same "next" traversal pointers as regular nodes, so can be
         * traversed in iterators in the same way.
         *
         * The table is resized when occupancy exceeds a percentage threshold (nominally 0.75, but
         * see below).
         * The initiating thread allocates and sets up the replacement array; afterwards, any other
         * thread using this ConcurrentHashMap that notices an overfull bin may assist in resizing.
         * The use of TreeBins shields us from the worst-case effects of overfilling while resizes
         * are in progress.
         * Resizing proceeds by transferring bins, one by one, from the old table to the new table.
         * However, threads claim small blocks of indices to transfer (via the field transferIndex)
         * before doing so, reducing contention. A generation stamp in the field sizeCtl ensures that
         * resizings do not overlap. Because we use power-of-two expansion, the elements from each
         * bin must either stay at the same index or move by a power-of-two offset. We eliminate
         * unnecessary node creation by catching cases where old nodes can be reused because their
         * next fields won't change.
         * On average, only about one-sixth of the nodes need cloning when a table doubles.
         * The nodes they replace are garbage collectable as soon as they are no longer referenced by
         * any reader thread.
         * Upon transfer, the old table bin contains only a special forwarding node (with hash field
         * MOVED) that holds the new table as its key. On encountering a forwarding node, access and
         * update operations restart, using the new table.
         *
         * Each bin transfer requires its bin lock, during which threads wanting that lock must wait.
         * However, other threads can join in and help resize rather than contend for locks, so the
         * average aggregate wait becomes shorter as resizing progresses.
         * The transfer must also ensure that all accessible bins, in both the old and the new table,
         * remain traversable.
         * This is arranged in part by proceeding from the last bin (table.length - 1) toward the
         * first.
         * Upon seeing a forwarding node, traversals move on to the new table without revisiting
         * nodes. To ensure that no intervening nodes are skipped even when moved out of order, a
         * stack is created on the first encounter of a forwarding node during a traversal, to
         * maintain its place when later processing the current table. The need for these
         * save/restore mechanics is relatively rare, but when one forwarding node is encountered,
         * typically many more will be.
         * So Traversers use a simple caching scheme to avoid creating so many new TableStack nodes.
         * The traversal scheme also applies to partial traversals of ranges of bins (via an
         * alternate Traverser constructor) to support partitioned aggregate operations.
         * Lazy table initialization minimizes footprint until first use.
         *
         * The element count is maintained using a specialization of LongAdder. This specialization is
         * embedded, rather than just using a LongAdder directly, in order to access the implicit
         * contention-sensing that leads to the creation of multiple CounterCells.
         * The counter mechanics avoid contention on updates, but can encounter cache thrashing if
         * read too frequently during concurrent access. To avoid reading so often, resizing under
         * contention is attempted only upon adding to a bin already holding two or more nodes. Under
         * uniform hash distributions, the probability of this occurring at the threshold is around
         * 13%, meaning that only about 1 in 8 puts check the threshold (and after resizing, many
         * fewer do so).
         *
         * TreeBins use a special form of comparison for search and related operations (which is the
         * main reason we cannot use an existing collection such as TreeMap). The elements contained
         * in a TreeBin may not implement Comparable consistently with one another, so compareTo()
         * cannot always be invoked between them. To handle this, the tree is ordered primarily by
         * hash value, and then by Comparable.compareTo order if applicable. On lookup at a node, if
         * elements are not comparable or compare as 0, both the left and the right child may need to
         * be searched. If all elements are non-comparable and have tied hashes, a full table scan may
         * be required.
         * On insertion, to keep a total ordering across rebalancings, we compare classes and
         * identityHashCodes as tie-breakers.
         * The red-black rebalancing code is adapted from the CLR algorithm.
         *
         * TreeBins also require an additional locking mechanism. While a list can still be traversed
         * by readers during updates, a tree cannot, because tree rotations may change the root node
         * and/or the links between nodes. TreeBins include a simple read-write lock mechanism
         * parasitic on the main bin-synchronization strategy: structural adjustments for insertion
         * and deletion are already bin-locked, but must wait for ongoing readers to finish before
         * adjusting the structure, following the happens-before principle. Since there can be only
         * one such waiter, a single "waiter" field suffices to block writers. Readers, however, never
         * need to block: if the root lock is held, they traverse along next pointers until the lock
         * becomes available or the list is exhausted. These cases are not fast, but they maximize
         * aggregate expected throughput.
         *
         * Maintaining API and serialization compatibility with previous versions of this class
         * introduces several oddities. Mainly: the constructor argument concurrencyLevel is retained
         * but unused.
         * We accept a loadFactor constructor argument, but apply it only to the initial table
         * capacity (the only time this parameter is used). We also declare an unused "Segment" class
         * that is instantiated, in minimal form, only when serializing.
         *
         * Also, solely for compatibility with previous versions of this class, it extends
         * AbstractMap.
         *
         * The ConcurrentHashMap code is organized as follows:
         * 1. the main static declarations and utilities;
         * 2. the main public methods;
         * 3. sizing methods, trees, traversers, and bulk operations.
         */
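The Poisson probabilities tabulated in the overview comment above can be reproduced directly from the formula exp(-0.5) * pow(0.5, k) / factorial(k). This standalone sketch (the class name PoissonCheck is illustrative only) evaluates it:

```java
public class PoissonCheck {
    // expected frequency of bins holding exactly k nodes, Poisson parameter 0.5
    static double poisson(int k) {
        double factorial = 1.0;
        for (int i = 2; i <= k; i++)
            factorial *= i;
        return Math.exp(-0.5) * Math.pow(0.5, k) / factorial;
    }

    public static void main(String[] args) {
        // prints 0: 0.60653066, 1: 0.30326533, ..., 8: 0.00000006,
        // matching the table in the overview comment
        for (int k = 0; k <= 8; k++)
            System.out.printf("%d: %.8f%n", k, poisson(k));
    }
}
```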

        /* ---------------- Constants -------------- */

        /**
         * The largest possible table capacity.
         * Why not larger? The value must be a power of two, and the top two bits of the 32-bit hash
         * fields are used for control purposes.
         * 1 << 30 = 1073741824
         */
        private static final int MAXIMUM_CAPACITY = 1 << 30;

        // Default capacity: 16, same as HashMap.
        private static final int DEFAULT_CAPACITY = 16;

        // Largest possible array size, used by toArray and related methods
        static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

        // Default concurrency level. No longer used; defined only for compatibility with previous versions.
        private static final int DEFAULT_CONCURRENCY_LEVEL = 16;

        // Load factor for this table. Overriding this value in a constructor affects only the initial table capacity; the actual floating-point value is not otherwise used.
        private static final float LOAD_FACTOR = 0.75f;

        // Counting the element being added, a bin holding 8 nodes is converted from a list to a tree
        static final int TREEIFY_THRESHOLD = 8;

        // A tree bin that shrinks to 6 nodes (during a resize split) is converted back to a list
        static final int UNTREEIFY_THRESHOLD = 6;

        // Smallest table capacity for which bins may be treeified: 64. At least 4 * TREEIFY_THRESHOLD; below this, an overfull bin triggers a resize instead.
        static final int MIN_TREEIFY_CAPACITY = 64;

        // Minimum number of bins transferred per resize step (the stride), at least DEFAULT_CAPACITY = 16.
        // Multiple threads can transfer concurrently during a resize, each claiming a stride of bins at a time.
        private static final int MIN_TRANSFER_STRIDE = 16;

        /**
         * The number of bits used for the generation stamp in sizeCtl.
         * Must be at least 6 for 32-bit arrays.
         */
        private static int RESIZE_STAMP_BITS = 16;

        // Maximum number of threads that can help resize
        private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;

        /**
         * The bit shift for recording the generation stamp in sizeCtl.
         */
        private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;

        // Encodings for special Node hash fields
        static final int MOVED     = -1; // hash for forwarding nodes
        static final int TREEBIN   = -2; // hash for roots of trees
        static final int RESERVED  = -3; // hash for transient reservations
        static final int HASH_BITS = 0x7fffffff; // usable bits of a normal node hash

        // Number of CPUs currently available
        static final int NCPU = Runtime.getRuntime().availableProcessors();

        // For serialization compatibility
        private static final ObjectStreamField[] serialPersistentFields = {
                new ObjectStreamField("segments", Segment[].class),
                new ObjectStreamField("segmentMask", Integer.TYPE),
                new ObjectStreamField("segmentShift", Integer.TYPE)
        };

        /* ---------------- Nodes -------------- */

        /**
         * Static inner class: the core entry class, wrapping a key-value pair.
         * Notes:
         * 1. setValue is not supported.
         * 2. Subclasses of Node with negative hash fields are special, and may contain null keys and
         *    values.
         * 3. ConcurrentHashMap itself never allows null keys or values.
         */
        static class Node<K,V> implements Map.Entry<K,V> {
            final int hash;
            final K key;
            volatile V val;          // volatile, unlike HashMap
            volatile Node<K,V> next; // volatile, unlike HashMap

            Node(int hash, K key, V val, Node<K,V> next) {
                this.hash = hash;
                this.key = key;
                this.val = val;
                this.next = next;
            }

            public final K getKey()       { return key; }
            public final V getValue()     { return val; }
            // entry hash = key hash XOR value hash, same as HashMap
            public final int hashCode()   { return key.hashCode() ^ val.hashCode(); }
            public final String toString(){ return key + "=" + val; }
            // this method is not supported
            public final V setValue(V value) {
                throw new UnsupportedOperationException();
            }

            /**
             * Checks:
             * 1. o is a Map.Entry
             * 2. both key and value are equal
             */
            public final boolean equals(Object o) {
                Object k, v, u; Map.Entry<?,?> e;
                return ((o instanceof Map.Entry) &&
                        (k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
                        (v = e.getValue()) != null &&
                        (k == key || k.equals(key)) &&
                        (v == (u = val) || v.equals(u)));
            }

            // Virtualized support for map.get(); overridden in subclasses.
            Node<K,V> find(int h, Object k) {
                Node<K,V> e = this;
                if (k != null) {
                    do {
                        K ek;
                        if (e.hash == h &&
                                ((ek = e.key) == k || (ek != null && k.equals(ek))))
                            return e;
                    } while ((e = e.next) != null);
                }
                return null;
            }
        }
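Node.hashCode() above follows the general Map.Entry contract (key hash XOR value hash). Since Node itself is package-private, the same contract can be observed through AbstractMap.SimpleEntry, the class the Javadoc also suggests for blank entry arguments:

```java
import java.util.AbstractMap;

public class EntryHashDemo {
    public static void main(String[] args) {
        AbstractMap.SimpleEntry<String, Integer> e = new AbstractMap.SimpleEntry<>("a", 1);
        // Map.Entry contract: hashCode = key.hashCode() ^ value.hashCode()
        System.out.println(e.hashCode() == ("a".hashCode() ^ Integer.valueOf(1).hashCode())); // true
        // equals requires both the key and the value to match
        System.out.println(e.equals(new AbstractMap.SimpleEntry<>("a", 2))); // false
    }
}
```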

        /* ---------------- Static utilities -------------- */

        /**
         * Why XOR the hash with its unsigned right shift by 16:
         * Because the table uses power-of-two masking, sets of hashes that vary only in bits above
         * the current mask will always collide. (For example, sets of Float keys holding consecutive
         * whole numbers in small tables.)
         * So we apply a transform that spreads the impact of higher bits downward. There is a
         * tradeoff between speed, utility, and quality of bit-spreading.
         * Because many common sets of hashes are already reasonably distributed (and don't benefit
         * from spreading), and because we use red-black trees to handle large sets of collisions in
         * bins, we just XOR some shifted bits in the cheapest possible way to reduce systematic
         * lossage, as well as to incorporate the impact of the highest bits, which would otherwise
         * never be used in index calculations because of table bounds (the & mask).
         */
        static final int spread(int h) {
            return (h ^ (h >>> 16)) & HASH_BITS; // also masks off the sign bit
        }
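To see why the shift matters, consider hashes that differ only above the index mask. A standalone re-implementation of spread (duplicated here only for illustration) shows the high bits being folded into the table index:

```java
public class SpreadDemo {
    static final int HASH_BITS = 0x7fffffff;

    // same transform as ConcurrentHashMap.spread
    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        int n = 16; // a small power-of-two table
        // two hashes differing only above the (n - 1) mask...
        int h1 = 0x10000, h2 = 0x20000;
        // ...collide without spreading:
        System.out.println(((n - 1) & h1) == ((n - 1) & h2)); // true: both index 0
        // ...but land in different bins after spreading:
        System.out.println(((n - 1) & spread(h1)) == ((n - 1) & spread(h2))); // false
        // the HASH_BITS mask also clears the sign bit, keeping hashes non-negative
        System.out.println(Integer.toHexString(spread(-1))); // 7fff0000
    }
}
```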

        /**
         * If c is a power of two, returns c;
         * otherwise, returns the first power of two greater than c;
         * e.g. c = 16 returns 16;
         *      c = 30 returns 32.
         */
        private static final int tableSizeFor(int c) {
            int n = c - 1;
            n |= n >>> 1;
            n |= n >>> 2;
            n |= n >>> 4;
            n |= n >>> 8;
            n |= n >>> 16;
            return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
        }

        // Returns x's Class if it implements Comparable for its own class, else null. Same as HashMap.
        static Class<?> comparableClassFor(Object x) {
            if (x instanceof java.lang.Comparable) {
                Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
                if ((c = x.getClass()) == String.class) // bypass checks
                    return c;
                if ((ts = c.getGenericInterfaces()) != null) {
                    for (int i = 0; i < ts.length; ++i) {
                        if (((t = ts[i]) instanceof ParameterizedType) &&
                                ((p = (ParameterizedType)t).getRawType() ==
                                        java.lang.Comparable.class) &&
                                (as = p.getActualTypeArguments()) != null &&
                                as.length == 1 && as[0] == c) // type arg is c
                            return c;
                    }
                }
            }
            return null;
        }

        // Returns k.compareTo(x) if x's class matches kc, else 0. Same as HashMap.
        @SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
        static int compareComparables(Class<?> kc, Object k, Object x) {
            return (x == null || x.getClass() != kc ? 0 :
                    ((java.lang.Comparable)k).compareTo(x));
        }

        /* ---------------- Table element access -------------- */

        /*
         * The three methods below are volatile in nature, so element accesses return correct results
         * even while a resize is in progress.
         * All use a null check on the tab argument, then check that its length is > 0, and finally
         * that the index i is in bounds.
         * Note that, to be correct with respect to arbitrary concurrency errors by users, these
         * checks must operate on local variables, which accounts for some odd-looking inline
         * assignments below.
         * Note that calls to setTabAt() always occur inside locked regions, so in principle full
         * volatile semantics are not required, but the current code conservatively keeps them.
         */

        @SuppressWarnings("unchecked")
        static final <K,V> Node<K,V> tabAt(Node<K,V>[] tab, int i) {
            return (Node<K,V>)U.getObjectVolatile(tab, ((long)i << ASHIFT) + ABASE);
        }

        static final <K,V> boolean casTabAt(Node<K,V>[] tab, int i,
                                            Node<K,V> c, Node<K,V> v) {
            return U.compareAndSwapObject(tab, ((long)i << ASHIFT) + ABASE, c, v);
        }

        static final <K,V> void setTabAt(Node<K,V>[] tab, int i, Node<K,V> v) {
            U.putObjectVolatile(tab, ((long)i << ASHIFT) + ABASE, v);
        }

        /* ---------------- Fields -------------- */

        /**
         * The array of bins.
         * Lazily initialized upon the first insertion.
         * Size is always a power of two.
         * Accessed directly by iterators.
         */
        transient volatile Node<K,V>[] table;

        // The next table to use during a resize; non-null only while resizing.
        private transient volatile Node<K,V>[] nextTable;

        // Base counter value, used mainly when there is no contention, and also as a fallback during
        // table initialization. Updated via CAS.
        private transient volatile long baseCount;

        /**
         * Table initialization and resizing control.
         * Negative: the table is being initialized or resized.
         * sizeCtl = -1: initialization in progress;
         * sizeCtl = -(1 + n): n threads are currently resizing.
         * While the table is uninitialized, holds the initial table size to use on creation, or 0 by
         * default. After initialization, holds the element count at which to resize the table next.
         */
        private transient volatile int sizeCtl;

        // The next table index (plus one) to split while resizing.
        // With nextTable indices in [0, 2*n - 1], transferIndex starts at n.
        private transient volatile int transferIndex;

        // Spinlock (locked via CAS) used when resizing and/or creating CounterCells.
        private transient volatile int cellsBusy;

        // Table of counter cells; together with baseCount, holds the map-wide element count
        // (LongAdder-style), not per-bin counts.
        private transient volatile CounterCell[] counterCells;

        // views
        private transient KeySetView<K,V> keySet;
        private transient ValuesView<K,V> values;
        private transient EntrySetView<K,V> entrySet;


        /* ---------------- Public operations -------------- */

        /*-------------5 constructors-----------*/

        // default table size: 16
        public ConcurrentHashMap() {
        }

        // initial capacity: the smallest power of two >= 1.5 * initialCapacity + 1
        public ConcurrentHashMap(int initialCapacity) {
            if (initialCapacity < 0)
                throw new IllegalArgumentException();
            int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
                    MAXIMUM_CAPACITY :
                    tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
            this.sizeCtl = cap;
        }
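So the single-argument constructor over-allocates: it sizes the table to hold initialCapacity elements comfortably rather than making initialCapacity the table length. The formula (smallest power of two >= initialCapacity + initialCapacity/2 + 1) can be checked with a small helper; the name initialTableSize is illustrative, not part of the JDK:

```java
public class InitialCapacityDemo {
    // what new ConcurrentHashMap<>(initialCapacity) stores into sizeCtl
    static int initialTableSize(int initialCapacity) {
        int c = initialCapacity + (initialCapacity >>> 1) + 1; // 1.5x + 1
        // smallest power of two >= c (valid for c >= 2; tableSizeFor handles all cases)
        return Integer.highestOneBit(c - 1) << 1;
    }

    public static void main(String[] args) {
        System.out.println(initialTableSize(16)); // 32: 16 + 8 + 1 = 25 rounds up to 32
        System.out.println(initialTableSize(10)); // 16: 10 + 5 + 1 = 16 exactly
    }
}
```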

        // creates a map with the same mappings as the given map
        public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
            this.sizeCtl = DEFAULT_CAPACITY;
            putAll(m);
        }

        public ConcurrentHashMap(int initialCapacity, float loadFactor) {
            this(initialCapacity, loadFactor, 1);
        }

        public ConcurrentHashMap(int initialCapacity,
                                 float loadFactor, int concurrencyLevel) {
            if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
                throw new IllegalArgumentException();
            if (initialCapacity < concurrencyLevel)   // Use at least as many bins
                initialCapacity = concurrencyLevel;   // as estimated threads
            long size = (long)(1.0 + (long)initialCapacity / loadFactor);
            // e.g. initialCapacity = 16 with loadFactor = 0.75 gives cap = 32
            int cap = (size >= (long)MAXIMUM_CAPACITY) ?
                    MAXIMUM_CAPACITY : tableSizeFor((int)size);
            this.sizeCtl = cap;
        }

        // Original (since JDK1.2) Map methods

        public int size() {
            // total node count n
            long n = sumCount();
            return ((n < 0L) ? 0 :
                    (n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
                            (int)n);
        }

        public boolean isEmpty() {
            return sumCount() <= 0L; // ignore transient negative values
        }

        public V get(Object key) {
            Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
            // locate the slot from the spread hash
            int h = spread(key.hashCode());
            if ((tab = table) != null && (n = tab.length) > 0 &&
                    (e = tabAt(tab, (n - 1) & h)) != null) {
                // the node at tab[(n - 1) & h] is itself the one sought
                if ((eh = e.hash) == h) {
                    if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                        return e.val;
                }
                // negative hash: a special node (TreeBin, ForwardingNode, ...); delegate to its find()
                else if (eh < 0)
                    return (p = e.find(h, key)) != null ? p.val : null;
                // otherwise scan the linked list
                while ((e = e.next) != null) {
                    if (e.hash == h &&
                            ((ek = e.key) == key || (ek != null && key.equals(ek))))
                        return e.val;
                }
            }
            return null;
        }

        // implemented via get above
        public boolean containsKey(Object key) {
            return get(key) != null;
        }


        public boolean containsValue(Object value) {
            if (value == null)
                throw new NullPointerException();
            Node<K,V>[] t;
            if ((t = table) != null) {
                Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
                for (Node<K,V> p; (p = it.advance()) != null; ) {
                    V v;
                    if ((v = p.val) == value || (v != null && value.equals(v)))
                        return true;
                }
            }
            return false;
        }

        // neither key nor value may be null
        public V put(K key, V value) {
            return putVal(key, value, false);
        }

        /** Implementation for put and putIfAbsent.
         * Overall steps:
         * 1. validate key and value;
         * 2. the target bin is empty: CAS the new node in;
         * 3. the target bin is being resized: help with the transfer;
         * 4. otherwise, lock the bin head and insert/update within the current table;
         * 5. after insertion, if the bin holds >= 8 nodes, consider converting the list to a tree.
         */
        final V putVal(K key, V value, boolean onlyIfAbsent) {
            if (key == null || value == null) throw new NullPointerException();
            // compute the spread hash
            int hash = spread(key.hashCode());
            int binCount = 0;
            for (Node<K,V>[] tab = table;;) {
                Node<K,V> f; int n, i, fh;
                if (tab == null || (n = tab.length) == 0)
                    tab = initTable();
                // the target bin is empty: CAS in a new node, no lock needed
                else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
                    if (casTabAt(tab, i, null,
                            new Node<K,V>(hash, key, value, null)))
                        break;                   // no lock when adding to empty bin
                }
                // the bin head is a forwarding node: a resize is in progress, help it along
                else if ((fh = f.hash) == MOVED)
                    tab = helpTransfer(tab, f);
                // otherwise insert into the current table
                else {
                    V oldVal = null;
                    // synchronized block on the bin head guarantees a safe insert
                    synchronized (f) {
                        if (tabAt(tab, i) == f) {
                            // the bin is a linked list
                            if (fh >= 0) {
                                binCount = 1;
                                for (Node<K,V> e = f;; ++binCount) {
                                    K ek;
                                    // key found: update the value
                                    if (e.hash == hash &&
                                            ((ek = e.key) == key ||
                                                    (ek != null && key.equals(ek)))) {
                                        oldVal = e.val;
                                        if (!onlyIfAbsent)
                                            e.val = value;
                                        break;
                                    }
                                    Node<K,V> pred = e;
                                    if ((e = e.next) == null) {
                                        pred.next = new Node<K,V>(hash, key,
                                                value, null);
                                        break;
                                    }
                                }
                            }
                            // the bin is a red-black tree
                            else if (f instanceof TreeBin) {
                                Node<K,V> p;
                                binCount = 2;
                                if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                        value)) != null) {
                                    oldVal = p.val;
                                    if (!onlyIfAbsent)
                                        p.val = value;
                                }
                            }
                        }
                    }
                    // after inserting, if the bin now holds >= TREEIFY_THRESHOLD (8) nodes, convert the list to a tree
                    if (binCount != 0) {
                        if (binCount >= TREEIFY_THRESHOLD)
                            treeifyBin(tab, i);
                        if (oldVal != null)
                            return oldVal;
                        break;
                    }
                }
            }
            addCount(1L, binCount);
            return null;
        }
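The return-value contract implemented by putVal (the old value, or null when there was no mapping, with onlyIfAbsent leaving an existing mapping untouched) can be observed directly on the public API:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutValDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        System.out.println(m.put("k", 1));         // null: no previous mapping
        System.out.println(m.put("k", 2));         // 1: the old value is returned
        System.out.println(m.putIfAbsent("k", 3)); // 2: onlyIfAbsent leaves the mapping alone
        System.out.println(m.get("k"));            // 2
    }
}
```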


        public void putAll(Map<? extends K, ? extends V> m) {
            // presize the table so it can hold the incoming elements
            tryPresize(m.size());
            for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
                putVal(e.getKey(), e.getValue(), false);
        }

        public V remove(Object key) {
            return replaceNode(key, null, null);
        }
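replaceNode's cv parameter below (the expected current value) is what backs the conditional forms of remove and replace; the behavior is visible on the public API:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ReplaceNodeDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("k", 1);
        System.out.println(m.remove("k", 99));    // false: expected value 99 does not match
        System.out.println(m.replace("k", 1, 2)); // true: expected value 1 matches, now mapped to 2
        System.out.println(m.remove("k"));        // 2: unconditional remove returns the old value
    }
}
```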

        // Implementation for the four public remove/replace methods.
        final V replaceNode(Object key, V value, Object cv) {
            int hash = spread(key.hashCode());
            for (Node<K,V>[] tab = table;;) {
                Node<K,V> f; int n, i, fh;
                // empty table, or the target bin is null
                if (tab == null || (n = tab.length) == 0 ||
                        (f = tabAt(tab, i = (n - 1) & hash)) == null)
                    break;
                // the bin head is a forwarding node: help the resize along
                else if ((fh = f.hash) == MOVED)
                    tab = helpTransfer(tab, f);
                    tab = helpTransfer(tab, f);
                else {
                    V oldVal = null;
                    boolean validated = false;
                    // synchronized block on the bin head guarantees safe removal
                    synchronized (f) {
                        if (tabAt(tab, i) == f) {
                            // the bin is a linked list
                            if (fh >= 0) {
                                validated = true;
                                for (Node<K,V> e = f, pred = null;;) {
                                    K ek;
                                    if (e.hash == hash &&
                                            ((ek = e.key) == key ||
                                                    (ek != null && key.equals(ek)))) {
                                        V ev = e.val;
                                        if (cv == null || cv == ev ||
                                                (ev != null && cv.equals(ev))) {
                                            oldVal = ev;
                                            if (value != null)
                                                e.val = value;
                                            else if (pred != null)
                                                pred.next = e.next;
                                            else
                                                setTabAt(tab, i, e.next);
                                        }
                                        break;
                                    }
                                    //记录上一个访问节点
                                    pred = e;
                                    //更新e为下一个节点
                                    if ((e = e.next) == null)
                                        break;
                                }
                            }
                            //bin为红黑树结构
                            else if (f instanceof TreeBin) {
                                validated = true;
                                TreeBin t = (TreeBin)f;
                                TreeNode r, p;
                                if ((r = t.root) != null &&
                                        (p = r.findTreeNode(hash, key, null)) != null) {
                                    V pv = p.val;
                                    if (cv == null || cv == pv ||
                                            (pv != null && cv.equals(pv))) {
                                        oldVal = pv;
                                        if (value != null)
                                            p.val = value;
                                        else if (t.removeTreeNode(p))
                                            setTabAt(tab, i, untreeify(t.first));
                                    }
                                }
                            }
                        }
                    }
                    if (validated) {
                        if (oldVal != null) {
                            if (value == null)
                                //更新节点个数
                                addCount(-1L, -1);
                            return oldVal;
                        }
                        break;
                    }
                }
            }
            return null;
        }
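
以上replaceNode通过value与cv两个参数的组合,同时支撑了remove(key)、remove(key, value)、replace(key, value)、replace(key, oldValue, newValue)四个公有方法。下面补充一段使用示例(非JDK源码,类名为笔者自拟)验证这几种组合的语义:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ReplaceNodeDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);

        // replace(key, value): value!=null, cv==null —— 无条件替换,返回旧value
        Integer old = map.replace("a", 2);
        if (old != 1 || map.get("a") != 2) throw new AssertionError();

        // replace(key, oldValue, newValue): value!=null, cv!=null —— 旧值匹配才替换
        if (map.replace("a", 999, 3)) throw new AssertionError();   // 旧值不匹配,失败
        if (!map.replace("a", 2, 3)) throw new AssertionError();    // 匹配,成功

        // remove(key, value): value==null, cv!=null —— 旧值匹配才删除
        if (map.remove("a", 999)) throw new AssertionError();
        if (!map.remove("a", 3)) throw new AssertionError();

        // remove(key): value==null, cv==null —— 无条件删除,返回旧value
        map.put("b", 4);
        if (map.remove("b") != 4 || map.containsKey("b")) throw new AssertionError();
    }
}
```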

        public void clear() {
            long delta = 0L; // negative number of deletions
            int i = 0;
            Node[] tab = table;
            while (tab != null && i < tab.length) {
                int fh;
                Node f = tabAt(tab, i);
                if (f == null)
                    ++i;
                //如果节点为转移节点forwarding node
                else if ((fh = f.hash) == MOVED) {
                    tab = helpTransfer(tab, f);
                    i = 0; // restart
                }
                else {
                    //同步代码块,删除i位置处的节点
                    synchronized (f) {
                        if (tabAt(tab, i) == f) {
                            Node p = (fh >= 0 ? f :
                                    (f instanceof TreeBin) ?
                                            ((TreeBin)f).first : null);
                            while (p != null) {
                                --delta;
                                p = p.next;
                            }
                            setTabAt(tab, i++, null);
                        }
                    }
                }
            }
            if (delta != 0L)
                addCount(delta, -1);
        }


        public KeySetView<K,V> keySet() {
            KeySetView<K,V> ks;
            return (ks = keySet) != null ? ks : (keySet = new KeySetView<K,V>(this, null));
        }

        public Collection<V> values() {
            ValuesView<K,V> vs;
            return (vs = values) != null ? vs : (values = new ValuesView<K,V>(this));
        }

        public Set<Map.Entry<K,V>> entrySet() {
            EntrySetView<K,V> es;
            return (es = entrySet) != null ? es : (entrySet = new EntrySetView<K,V>(this));
        }

        //返回map的hash值.
        //结果=sum(key.hashCode() ^ value.hashCode())
        public int hashCode() {
            int h = 0;
            Node[] t;
            if ((t = table) != null) {
                Traverser it = new Traverser(t, t.length, 0, t.length);
                for (Node p; (p = it.advance()) != null; )
                    h += p.key.hashCode() ^ p.val.hashCode();
            }
            return h;
        }
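
按上面的定义,hashCode是对各映射项key.hashCode() ^ value.hashCode()的求和,与遍历顺序无关,因此与内容相同的HashMap结果一致(这也是Map接口对hashCode的约定)。一段验证示例(非JDK源码,类名为笔者自拟):

```java
import java.util.HashMap;
import java.util.concurrent.ConcurrentHashMap;

public class HashCodeDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> chm = new ConcurrentHashMap<>();
        chm.put("x", 1);
        chm.put("y", 2);

        // 手工按定义累加:sum(key.hashCode() ^ value.hashCode())
        int expected = ("x".hashCode() ^ Integer.valueOf(1).hashCode())
                     + ("y".hashCode() ^ Integer.valueOf(2).hashCode());
        if (chm.hashCode() != expected) throw new AssertionError();

        // 与内容相同的HashMap的hashCode一致
        HashMap<String, Integer> hm = new HashMap<>(chm);
        if (chm.hashCode() != hm.hashCode()) throw new AssertionError();
    }
}
```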

        //格式:{key1=value1,key2=value2,...}
        public String toString() {
            Node[] t;
            int f = (t = table) == null ? 0 : t.length;
            Traverser it = new Traverser(t, f, 0, f);
            StringBuilder sb = new StringBuilder();
            sb.append('{');
            Node p;
            if ((p = it.advance()) != null) {
                for (;;) {
                    K k = p.key;
                    V v = p.val;
                    sb.append(k == this ? "(this Map)" : k);
                    sb.append('=');
                    sb.append(v == this ? "(this Map)" : v);
                    if ((p = it.advance()) == null)
                        break;
                    sb.append(',').append(' ');
                }
            }
            return sb.append('}').toString();
        }

        /**
         * Compares the specified object with this map for equality.
         * Returns {@code true} if the given object is a map with the same
         * mappings as this map.  This operation may return misleading
         * results if either map is concurrently modified during execution
         * of this method.
         *
         * @param o object to be compared for equality with this map
         * @return {@code true} if the specified object is equal to this map
         */
        public boolean equals(Object o) {
            //内存地址是否相同
            if (o != this) {
                //是否为map类型
                if (!(o instanceof Map))
                    return false;
                //类型转化
                Map m = (Map) o;
                Node[] t;
                //记录table长度
                int f = (t = table) == null ? 0 : t.length;
                Traverser it = new Traverser(t, f, 0, f);
                //遍历table,检查和o的value一致性
                for (Node p; (p = it.advance()) != null; ) {
                    V val = p.val;
                    Object v = m.get(p.key);
                    if (v == null || (v != val && !v.equals(val)))
                        return false;
                }
                //遍历o,检查自身key和value的合法性
                for (Map.Entry e : m.entrySet()) {
                    Object mk, mv, v;
                    if ((mk = e.getKey()) == null ||
                            (mv = e.getValue()) == null ||
                            (v = get(mk)) == null ||
                            (mv != v && !mv.equals(v)))
                        return false;
                }
            }
            return true;
        }


        //旧版本中使用的类,存在的意义:序列化兼容性
        static class Segment extends ReentrantLock implements Serializable {
            private static final long serialVersionUID = 2249069246763182397L;
            final float loadFactor;
            Segment(float lf) { this.loadFactor = lf; }
        }

        //用于序列化:将concurrenthashmap写入stream中.
        private void writeObject(java.io.ObjectOutputStream s)
                throws java.io.IOException {
            // 用于序列化版本兼容
            // Emulate segment calculation from previous version of this class
            int sshift = 0;
            int ssize = 1;
            while (ssize < DEFAULT_CONCURRENCY_LEVEL) {
                ++sshift;
                ssize <<= 1;
            }
            int segmentShift = 32 - sshift;
            int segmentMask = ssize - 1;
            @SuppressWarnings("unchecked")
            Segment[] segments = (Segment[])
                    new Segment[DEFAULT_CONCURRENCY_LEVEL];
            for (int i = 0; i < segments.length; ++i)
                segments[i] = new Segment(LOAD_FACTOR);
            s.putFields().put("segments", segments);
            s.putFields().put("segmentShift", segmentShift);
            s.putFields().put("segmentMask", segmentMask);
            s.writeFields();

            Node[] t;
            if ((t = table) != null) {
                Traverser it = new Traverser(t, t.length, 0, t.length);
                for (Node p; (p = it.advance()) != null; ) {
                    s.writeObject(p.key);
                    s.writeObject(p.val);
                }
            }
            s.writeObject(null);
            s.writeObject(null);
            segments = null; // throw away
        }


        private void readObject(java.io.ObjectInputStream s)
                throws java.io.IOException, ClassNotFoundException {
            /*
             * 为了在典型情况下提高性能,我们在读取时创建节点,然后在知道大小后放置在表中.
             * 但是,我们还必须验证唯一性并处理过多的bin,这需要putVal机制的专用版本
             */
            sizeCtl = -1; // force exclusion for table construction
            s.defaultReadObject();
            long size = 0L;
            Node p = null;
            for (;;) {
                @SuppressWarnings("unchecked")
                K k = (K) s.readObject();
                @SuppressWarnings("unchecked")
                V v = (V) s.readObject();
                if (k != null && v != null) {
                    p = new Node(spread(k.hashCode()), k, v, p);
                    ++size;
                }
                else
                    break;
            }
            if (size == 0L)
                sizeCtl = 0;
            else {
                int n;
                if (size >= (long)(MAXIMUM_CAPACITY >>> 1))
                    n = MAXIMUM_CAPACITY;
                else {
                    int sz = (int)size;
                    n = tableSizeFor(sz + (sz >>> 1) + 1);
                }
                @SuppressWarnings("unchecked")
                Node[] tab = (Node[])new Node[n];
                int mask = n - 1;
                long added = 0L;
                while (p != null) {
                    boolean insertAtFront;
                    Node next = p.next, first;
                    int h = p.hash, j = h & mask;
                    if ((first = tabAt(tab, j)) == null)
                        insertAtFront = true;
                    else {
                        K k = p.key;
                        if (first.hash < 0) {
                            TreeBin t = (TreeBin)first;
                            if (t.putTreeVal(h, k, p.val) == null)
                                ++added;
                            insertAtFront = false;
                        }
                        else {
                            int binCount = 0;
                            insertAtFront = true;
                            Node q; K qk;
                            for (q = first; q != null; q = q.next) {
                                if (q.hash == h &&
                                        ((qk = q.key) == k ||
                                                (qk != null && k.equals(qk)))) {
                                    insertAtFront = false;
                                    break;
                                }
                                ++binCount;
                            }
                            if (insertAtFront && binCount >= TREEIFY_THRESHOLD) {
                                insertAtFront = false;
                                ++added;
                                p.next = first;
                                TreeNode hd = null, tl = null;
                                for (q = p; q != null; q = q.next) {
                                    TreeNode t = new TreeNode
                                            (q.hash, q.key, q.val, null, null);
                                    if ((t.prev = tl) == null)
                                        hd = t;
                                    else
                                        tl.next = t;
                                    tl = t;
                                }
                                setTabAt(tab, j, new TreeBin(hd));
                            }
                        }
                    }
                    if (insertAtFront) {
                        ++added;
                        p.next = first;
                        setTabAt(tab, j, p);
                    }
                    p = next;
                }
                table = tab;
                sizeCtl = n - (n >>> 2);
                baseCount = added;
            }
        }
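
结合writeObject/readObject可以看到:序列化时先写出伪造的segments等旧版字段以保证兼容,再依次写出所有key/value并以两个null收尾;反序列化时先把节点串成链表,得知size后再重建table。一段简单的序列化往返示例(非JDK源码,类名为笔者自拟):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.concurrent.ConcurrentHashMap;

public class SerializeDemo {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) throws Exception {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("k1", 1);
        map.put("k2", 2);

        // 序列化:内部写出兼容旧版本的segments字段,再逐个写出key/value
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(map);
        }

        // 反序列化:readObject按读到的size重建table
        ConcurrentHashMap<String, Integer> copy;
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            copy = (ConcurrentHashMap<String, Integer>) ois.readObject();
        }
        if (!copy.equals(map) || copy.get("k1") != 1) throw new AssertionError();
    }
}
```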

        /*-------ConcurrentMap方法---------*/

        //key不存在则插入并返回null;key已存在则不做替换,返回已有的value
        public V putIfAbsent(K key, V value) {
            return putVal(key, value, true);
        }


        public boolean remove(Object key, Object value) {
            if (key == null)
                throw new NullPointerException();
            return value != null && replaceNode(key, null, value) != null;
        }

        public boolean replace(K key, V oldValue, V newValue) {
            if (key == null || oldValue == null || newValue == null)
                throw new NullPointerException();
            return replaceNode(key, newValue, oldValue) != null;
        }

        public V replace(K key, V value) {
            if (key == null || value == null)
                throw new NullPointerException();
            return replaceNode(key, value, null);
        }
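
以putIfAbsent为例,其语义是:key不存在时插入并返回null,已存在时不做替换并返回旧value。示例(非JDK源码,类名为笔者自拟):

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        // key不存在:插入并返回null
        if (map.putIfAbsent("a", 1) != null) throw new AssertionError();
        // key已存在:不替换,返回已有value
        if (map.putIfAbsent("a", 2) != 1) throw new AssertionError();
        if (map.get("a") != 1) throw new AssertionError();
    }
}
```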

        /*--------覆盖JDK8中Map接口新增的默认方法--------*/

        //key有value,则返回value;
        //否则返回参数值
        public V getOrDefault(Object key, V defaultValue) {
            V v;
            return (v = get(key)) == null ? defaultValue : v;
        }

        public void forEach(BiConsumer<? super K, ? super V> action) {
            if (action == null) throw new NullPointerException();
            Node[] t;
            if ((t = table) != null) {
                Traverser it = new Traverser(t, t.length, 0, t.length);
                for (Node p; (p = it.advance()) != null; ) {
                    action.accept(p.key, p.val);
                }
            }
        }

        public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
            if (function == null) throw new NullPointerException();
            Node[] t;
            if ((t = table) != null) {
                Traverser it = new Traverser(t, t.length, 0, t.length);
                for (Node p; (p = it.advance()) != null; ) {
                    V oldValue = p.val;
                    for (K key = p.key;;) {
                        V newValue = function.apply(key, oldValue);
                        if (newValue == null)
                            throw new NullPointerException();
                        if (replaceNode(key, newValue, oldValue) != null ||
                                (oldValue = get(key)) == null)
                            break;
                    }
                }
            }
        }
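
replaceAll对每个key做CAS式重试:以旧value作为replaceNode的期望值,替换失败则重新读取旧值再计算,直到替换成功或key被并发删除。使用示例(非JDK源码,类名为笔者自拟):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ReplaceAllDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        // 所有value翻倍;内部用replaceNode(key, newValue, oldValue)做CAS式重试
        map.replaceAll((k, v) -> v * 2);
        if (map.get("a") != 2 || map.get("b") != 4) throw new AssertionError();

        // forEach只读遍历
        int[] sum = {0};
        map.forEach((k, v) -> sum[0] += v);
        if (sum[0] != 6) throw new AssertionError();
    }
}
```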

        /**
         * 如果指定的键尚未与值关联,则尝试使用给定的映射函数计算其值,并在计算结果非null时将其存入该map.
         * 整个方法调用是以原子方式执行的,因此每个键最多应用一次该函数。
         * 计算进行期间,其他线程在此map上的某些更新操作可能被阻塞,因此计算应该简短,并且不要尝试更新此map中的任何其他映射。
         */
        public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
            if (key == null || mappingFunction == null)
                throw new NullPointerException();
            int h = spread(key.hashCode());
            V val = null;
            int binCount = 0;
            for (Node[] tab = table;;) {
                Node f; int n, i, fh;
                if (tab == null || (n = tab.length) == 0)
                    tab = initTable();
                //bin为空:先CAS放入ReservationNode占位,计算出value后再写入真正的节点
                else if ((f = tabAt(tab, i = (n - 1) & h)) == null) {
                    Node r = new ReservationNode();
                    //同步代码块,计算新值并插入map
                    synchronized (r) {
                        if (casTabAt(tab, i, null, r)) {
                            binCount = 1;
                            Node node = null;
                            try {
                                if ((val = mappingFunction.apply(key)) != null)
                                    node = new Node(h, key, val, null);
                            } finally {
                                setTabAt(tab, i, node);
                            }
                        }
                    }
                    if (binCount != 0)
                        break;
                }
                //此节点为转移节点forwarding node
                else if ((fh = f.hash) == MOVED)
                    tab = helpTransfer(tab, f);
                //存在,且不是转移节点
                else {
                    boolean added = false;
                    //同步代码块:分链表和红黑树两种情况进行插入
                    synchronized (f) {
                        if (tabAt(tab, i) == f) {
                            if (fh >= 0) {
                                binCount = 1;
                                for (Node e = f;; ++binCount) {
                                    K ek; V ev;
                                    if (e.hash == h &&
                                            ((ek = e.key) == key ||
                                                    (ek != null && key.equals(ek)))) {
                                        val = e.val;
                                        break;
                                    }
                                    Node pred = e;
                                    if ((e = e.next) == null) {
                                        if ((val = mappingFunction.apply(key)) != null) {
                                            added = true;
                                            pred.next = new Node(h, key, val, null);
                                        }
                                        break;
                                    }
                                }
                            }
                            else if (f instanceof TreeBin) {
                                binCount = 2;
                                TreeBin t = (TreeBin)f;
                                TreeNode r, p;
                                if ((r = t.root) != null &&
                                        (p = r.findTreeNode(h, key, null)) != null)
                                    val = p.val;
                                else if ((val = mappingFunction.apply(key)) != null) {
                                    added = true;
                                    t.putTreeVal(h, key, val);
                                }
                            }
                        }
                    }
                    //插入后,判定是否需要将链表转为红黑树
                    if (binCount != 0) {
                        if (binCount >= TREEIFY_THRESHOLD)
                            treeifyBin(tab, i);
                        if (!added)
                            return val;
                        break;
                    }
                }
            }
            //增加节点个数
            if (val != null)
                addCount(1L, binCount);
            return val;
        }
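
computeIfAbsent常用于并发缓存场景:每个key的mappingFunction最多执行一次。示例(非JDK源码,类名为笔者自拟):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ComputeIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();
        AtomicInteger calls = new AtomicInteger();

        // 第一次:key不存在,mappingFunction被调用一次
        int v1 = cache.computeIfAbsent("k", k -> { calls.incrementAndGet(); return 42; });
        // 第二次:key已存在,直接返回缓存值,函数不再调用
        int v2 = cache.computeIfAbsent("k", k -> { calls.incrementAndGet(); return 99; });

        if (v1 != 42 || v2 != 42 || calls.get() != 1) throw new AssertionError();
    }
}
```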

        /**
         * 如果指定的键已经与某个值关联,则尝试使用给定的重映射函数计算新值并替换旧值;若计算结果为null,则删除该映射.
         * 整个方法调用是以原子方式执行的,因此每个键最多应用一次该函数。
         * 计算进行期间,其他线程在此map上的某些更新操作可能被阻塞,因此计算应该简短,并且不要尝试更新此map中的任何其他映射。
         */
        public V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
            if (key == null || remappingFunction == null)
                throw new NullPointerException();
            int h = spread(key.hashCode());
            V val = null;
            int delta = 0;
            int binCount = 0;
            for (Node[] tab = table;;) {
                Node f; int n, i, fh;
                if (tab == null || (n = tab.length) == 0)
                    tab = initTable();
                //bin为空,key必然不存在,直接退出
                else if ((f = tabAt(tab, i = (n - 1) & h)) == null)
                    break;
                //如果为转移节点,帮助转移
                else if ((fh = f.hash) == MOVED)
                    tab = helpTransfer(tab, f);
                //计算值并替换
                else {
                    //同步代码块,分链表和红黑树节点讨论插入.
                    synchronized (f) {
                        if (tabAt(tab, i) == f) {
                            if (fh >= 0) {
                                binCount = 1;
                                for (Node e = f, pred = null;; ++binCount) {
                                    K ek;
                                    if (e.hash == h &&
                                            ((ek = e.key) == key ||
                                                    (ek != null && key.equals(ek)))) {
                                        val = remappingFunction.apply(key, e.val);
                                        if (val != null)
                                            e.val = val;
                                        else {
                                            delta = -1;
                                            Node en = e.next;
                                            if (pred != null)
                                                pred.next = en;
                                            else
                                                setTabAt(tab, i, en);
                                        }
                                        break;
                                    }
                                    pred = e;
                                    if ((e = e.next) == null)
                                        break;
                                }
                            }
                            else if (f instanceof TreeBin) {
                                binCount = 2;
                                TreeBin t = (TreeBin)f;
                                TreeNode r, p;
                                if ((r = t.root) != null &&
                                        (p = r.findTreeNode(h, key, null)) != null) {
                                    val = remappingFunction.apply(key, p.val);
                                    if (val != null)
                                        p.val = val;
                                    else {
                                        delta = -1;
                                        if (t.removeTreeNode(p))
                                            setTabAt(tab, i, untreeify(t.first));
                                    }
                                }
                            }
                        }
                    }
                    if (binCount != 0)
                        break;
                }
            }
            if (delta != 0)
                addCount((long)delta, binCount);
            return val;
        }
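
computeIfPresent的三种结果——key不存在时函数不执行、key存在时用函数结果替换、函数返回null时删除映射——可用下面的示例(非JDK源码,类名为笔者自拟)验证:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ComputeIfPresentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        // key不存在:函数不调用,返回null
        if (map.computeIfPresent("a", (k, v) -> v + 1) != null) throw new AssertionError();

        map.put("a", 1);
        // key存在:用函数结果替换旧值
        if (map.computeIfPresent("a", (k, v) -> v + 1) != 2) throw new AssertionError();

        // 函数返回null:删除该映射(对应源码中delta = -1的分支)
        map.computeIfPresent("a", (k, v) -> null);
        if (map.containsKey("a")) throw new AssertionError();
    }
}
```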

        //无论key是否已有value,都用remappingFunction(key, 旧value或null)计算新value;
        //结果非null则插入/替换,为null则删除已有映射
        public V compute(K key,
                         BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
            if (key == null || remappingFunction == null)
                throw new NullPointerException();
            int h = spread(key.hashCode());
            V val = null;
            int delta = 0;
            int binCount = 0;
            for (Node[] tab = table;;) {
                Node f; int n, i, fh;
                //如果表为null,则初始化表
                if (tab == null || (n = tab.length) == 0)
                    tab = initTable();
                //如果指定位置为null
                else if ((f = tabAt(tab, i = (n - 1) & h)) == null) {
                    Node r = new ReservationNode();
                    //同步代码块:为其计算值,并插入map
                    synchronized (r) {
                        if (casTabAt(tab, i, null, r)) {
                            binCount = 1;
                            Node node = null;
                            try {
                                if ((val = remappingFunction.apply(key, null)) != null) {
                                    delta = 1;
                                    node = new Node(h, key, val, null);
                                }
                            } finally {
                                //插入map
                                setTabAt(tab, i, node);
                            }
                        }
                    }
                    if (binCount != 0)
                        break;
                }
                //如果指定位置为转移节点,当前线程转去帮忙转移节点
                else if ((fh = f.hash) == MOVED)
                    tab = helpTransfer(tab, f);
                //同步代码块
                else {
                    synchronized (f) {
                        if (tabAt(tab, i) == f) {
                            //链表节点
                            if (fh >= 0) {
                                binCount = 1;
                                for (Node e = f, pred = null;; ++binCount) {
                                    K ek;
                                    if (e.hash == h &&
                                            ((ek = e.key) == key ||
                                                    (ek != null && key.equals(ek)))) {
                                        val = remappingFunction.apply(key, e.val);
                                        //计算的value不为null,更改原value
                                        if (val != null)
                                            e.val = val;
                                        //计算的value为null
                                        else {
                                            delta = -1;
                                            Node en = e.next;
                                            //上一个节点不为null,跳过当前节点完成删除
                                            if (pred != null)
                                                pred.next = en;
                                            //当前节点为头节点,直接将bin头设为其后继
                                            else
                                                setTabAt(tab, i, en);
                                        }
                                        break;
                                    }
                                    pred = e;
                                    if ((e = e.next) == null) {
                                        val = remappingFunction.apply(key, null);
                                        if (val != null) {
                                            delta = 1;
                                            pred.next =
                                                    new Node(h, key, val, null);
                                        }
                                        break;
                                    }
                                }
                            }
                            //红黑树节点
                            else if (f instanceof TreeBin) {
                                binCount = 1;
                                TreeBin t = (TreeBin)f;
                                TreeNode r, p;
                                if ((r = t.root) != null)
                                    p = r.findTreeNode(h, key, null);
                                else
                                    p = null;
                                V pv = (p == null) ? null : p.val;
                                val = remappingFunction.apply(key, pv);
                                if (val != null) {
                                    if (p != null)
                                        p.val = val;
                                    else {
                                        delta = 1;
                                        t.putTreeVal(h, key, val);
                                    }
                                }
                                else if (p != null) {
                                    delta = -1;
                                    if (t.removeTreeNode(p))
                                        setTabAt(tab, i, untreeify(t.first));
                                }
                            }
                        }
                    }
                    if (binCount != 0) {
                        if (binCount >= TREEIFY_THRESHOLD)
                            treeifyBin(tab, i);
                        break;
                    }
                }
            }
            if (delta != 0)
                addCount((long)delta, binCount);
            return val;
        }
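
compute无论key是否存在都会调用remappingFunction(key不存在时旧值传null),常用于计数器场景。示例(非JDK源码,类名为笔者自拟):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ComputeDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counter = new ConcurrentHashMap<>();
        // key不存在时oldValue为null:初始化计数;已存在则累加
        for (String w : new String[]{"a", "b", "a"})
            counter.compute(w, (k, old) -> old == null ? 1 : old + 1);
        if (counter.get("a") != 2 || counter.get("b") != 1) throw new AssertionError();

        // 函数返回null:删除映射(对应源码中delta = -1的分支)
        counter.compute("b", (k, old) -> null);
        if (counter.containsKey("b")) throw new AssertionError();
    }
}
```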

        /**
         * 如果指定的键尚未与(非null)值关联,则将其与给定value关联。
         * 否则,用重映射函数对旧值与给定value计算的结果替换旧值;若结果为null则移除该映射.
         * 整个方法调用是以原子方式执行的。计算进行期间,其他线程在此map上的某些更新操作可能被阻塞,
         * 因此计算应该简短,并且不要尝试更新此map的任何其他映射。
         */
        public V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
            if (key == null || value == null || remappingFunction == null)
                throw new NullPointerException();
            int h = spread(key.hashCode());
            V val = null;
            int delta = 0;
            int binCount = 0;
            for (Node[] tab = table;;) {
                Node f; int n, i, fh;
                //如果tab为null,初始化table
                if (tab == null || (n = tab.length) == 0)
                    tab = initTable();
                //如果该散列位置没有元素,为null
                else if ((f = tabAt(tab, i = (n - 1) & h)) == null) {
                    //利用CAS为索引i处节点赋值
                    if (casTabAt(tab, i, null, new Node(h, key, value, null))) {
                        delta = 1;
                        val = value;
                        break;
                    }
                }
                //如果table在进行resize,则当前线程帮忙去resize
                else if ((fh = f.hash) == MOVED)
                    tab = helpTransfer(tab, f);
                //如果散列位置有数值
                else {
                    //同步代码块:
                    synchronized (f) {
                        //查找i处的节点
                        if (tabAt(tab, i) == f) {
                            //如果为链表节点
                            if (fh >= 0) {
                                binCount = 1;
                                for (Node e = f, pred = null;; ++binCount) {
                                    K ek;
                                    if (e.hash == h &&
                                            ((ek = e.key) == key ||
                                                    (ek != null && key.equals(ek)))) {
                                        val = remappingFunction.apply(e.val, value);
                                        if (val != null)
                                            e.val = val;
                                        else {
                                            delta = -1;
                                            Node en = e.next;
                                            if (pred != null)
                                                pred.next = en;
                                            //待删除的是头节点,直接将bin头设为其后继
                                            else
                                                setTabAt(tab, i, en);
                                        }
                                        break;
                                    }
                                    pred = e;
                                    if ((e = e.next) == null) {
                                        delta = 1;
                                        val = value;
                                        pred.next =
                                                new Node(h, key, val, null);
                                        break;
                                    }
                                }
                            }
                            //如果节点为红黑树节点
                            else if (f instanceof TreeBin) {
                                binCount = 2;
                                TreeBin t = (TreeBin)f;
                                TreeNode r = t.root;
                                TreeNode p = (r == null) ? null :
                                        r.findTreeNode(h, key, null);
                                val = (p == null) ? value :
                                        remappingFunction.apply(p.val, value);
                                if (val != null) {
                                    if (p != null)
                                        p.val = val;
                                    else {
                                        delta = 1;
                                        t.putTreeVal(h, key, val);
                                    }
                                }
                                else if (p != null) {
                                    delta = -1;
                                    if (t.removeTreeNode(p))
                                        setTabAt(tab, i, untreeify(t.first));
                                }
                            }
                        }
                    }
                    if (binCount != 0) {
                        if (binCount >= TREEIFY_THRESHOLD)
                            treeifyBin(tab, i);
                        break;
                    }
                }
            }
            if (delta != 0)
                addCount((long)delta, binCount);
            return val;
        }

        /*-----------Hashtable的传统方法------------*/

        //此方法在功能上与containsValue(Object)完全相同
        public boolean contains(Object value) {
            return containsValue(value);
        }

        //返回key的枚举
        public Enumeration<K> keys() {
            Node<K,V>[] t;
            int f = (t = table) == null ? 0 : t.length;
            return new KeyIterator<K,V>(t, f, 0, f, this);
        }

        //返回value的枚举
        public Enumeration<V> elements() {
            Node<K,V>[] t;
            int f = (t = table) == null ? 0 : t.length;
            return new ValueIterator<K,V>(t, f, 0, f, this);
        }
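keys()和elements()的用法可以用下面的小例子演示(演示代码,类名为假设,非JDK源码的一部分):

```java
import java.util.Enumeration;
import java.util.concurrent.ConcurrentHashMap;

public class EnumerationDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        // keys()/elements()是为兼容Hashtable而保留的传统接口,
        // 新代码一般使用keySet()/values()
        Enumeration<String> ks = map.keys();
        while (ks.hasMoreElements())
            System.out.println(ks.nextElement());
    }
}
```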

        /*-----------ConcurrentHashMap-only methods------独有方法------------*/

        /**
         * 返回映射的数量。
         * 求映射个数时,应该使用此方法而不是size()方法,因为ConcurrentHashMap包含映射个数可以比Integer.MAX_VALUE更多。
         * 注意:本方法返回的值是一个估计值;如果并发插入或删除,实际计数可能会有所不同。
         * @return the number of mappings
         * @since 1.8
         */
        public long mappingCount() {
            long n = sumCount();
            return (n < 0L) ? 0L : n; // ignore transient negative values
        }
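mappingCount()的典型用法如下(演示代码,类名为假设,非JDK源码的一部分):

```java
import java.util.concurrent.ConcurrentHashMap;

public class MappingCountDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        // mappingCount()返回long,不会像size()那样在映射数超过Integer.MAX_VALUE时被截断
        long n = map.mappingCount();
        System.out.println(n); // 2
    }
}
```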

        /**
         * 根据给定类型Boolean.TRUE,新建一个Set,当然这个set也是由ConcurrentHashMap作为后备支撑的
         * @param <K> the element type of the returned set
         * @return the new set
         * @since 1.8
         */
        public static <K> KeySetView<K,Boolean> newKeySet() {
            return new KeySetView<K,Boolean>
                    (new ConcurrentHashMap<K,Boolean>(), Boolean.TRUE);
        }

        /**
         * @since 1.8
         */
        public static <K> KeySetView<K,Boolean> newKeySet(int initialCapacity) {
            return new KeySetView<K,Boolean>
                    (new ConcurrentHashMap<K,Boolean>(initialCapacity), Boolean.TRUE);
        }

        //返回key的Set视图;通过给定的通用value,可以在视图的add、addAll方法中添加key。这当然只适用于从该视图进行的所有添加都使用同一个value的情况。
        public KeySetView<K,V> keySet(V mappedValue) {
            if (mappedValue == null)
                throw new NullPointerException();
            return new KeySetView<K,V>(this, mappedValue);
        }
        }
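newKeySet()与keySet(mappedValue)的区别可以用下面的示例说明(演示代码,类名为假设,非JDK源码的一部分):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class KeySetViewDemo {
    public static void main(String[] args) {
        // newKeySet():以ConcurrentHashMap<K,Boolean>为后备的并发Set
        Set<String> set = ConcurrentHashMap.newKeySet();
        set.add("x");
        set.add("y");
        System.out.println(set.size()); // 2

        // keySet(mappedValue):视图上的add会以给定的通用value写回原map
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Set<String> view = map.keySet(0);
        view.add("k");
        System.out.println(map.get("k")); // 0
    }
}
```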

        /* ---------------- Special Nodes 特殊节点 -------------- */


        /**
         * 一个用于连接两个table的节点类。它包含一个nextTable指针,用于指向下一张表。而且这个节点的key,value,next指针全部为null,
         * 它的hash值为-1. 这里面定义的find的方法是从nextTable里进行查询节点,而不是以自身为头节点进行查找
         */
        static final class ForwardingNode<K,V> extends Node<K,V> {
            final Node<K,V>[] nextTable;
            ForwardingNode(Node<K,V>[] tab) {
                //hash值=MOVED=-1
                super(MOVED, null, null, null);
                this.nextTable = tab;
            }

            Node<K,V> find(int h, Object k) {
                // loop to avoid arbitrarily deep recursion on forwarding nodes
                //循环以避免转发节点上的任意深度递归
                outer: for (Node<K,V>[] tab = nextTable;;) {
                    Node<K,V> e; int n;
                    //如果table为null,or长度为0,or指定bin无元素
                    if (k == null || tab == null || (n = tab.length) == 0 ||
                            (e = tabAt(tab, (n - 1) & h)) == null)
                        return null;
                    //循环
                    for (;;) {
                        int eh; K ek;
                        if ((eh = e.hash) == h &&
                                ((ek = e.key) == k || (ek != null && k.equals(ek))))
                            return e;
                        //哈希值<0
                        if (eh < 0) {
                            //如果是转发节点
                            if (e instanceof ForwardingNode) {
                                //更新查找到新table
                                tab = ((ForwardingNode<K,V>)e).nextTable;
                                continue outer;
                            }
                            //非转发节点,则直接查找
                            else
                                return e.find(h, k);
                        }
                        if ((e = e.next) == null)
                            return null;
                    }
                }
            }
        }

        //在computeIfAbsent和compute方法中的节点占位符
        static final class ReservationNode<K,V> extends Node<K,V> {
            ReservationNode() {
                super(RESERVED, null, null, null);
            }

            Node<K,V> find(int h, Object k) {
                return null;
            }
        }

        /* ---------------- Table 初始化 and Resizing -------------- */

        //返回用于调整大小为n的table的标记位。左移RESIZE_STAMP_SHIFT二进制位时,数值必为负.
        static final int resizeStamp(int n) {
            return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
        }
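以默认容量n=16为例,可以验证resizeStamp的取值及其左移后必为负数的性质(演示代码,类名为假设;RESIZE_STAMP_BITS在JDK8源码中为16):

```java
public class ResizeStampDemo {
    static final int RESIZE_STAMP_BITS = 16;  // JDK8源码中的常量值
    static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;

    // 与上面源码一致的实现
    static int resizeStamp(int n) {
        return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
    }

    public static void main(String[] args) {
        int rs = resizeStamp(16);
        // numberOfLeadingZeros(16)=27, 27 | (1<<15) = 32795
        System.out.println(rs); // 32795
        // 左移RESIZE_STAMP_SHIFT位后,第15位被移到符号位,结果必为负数
        System.out.println((rs << RESIZE_STAMP_SHIFT) < 0); // true
    }
}
```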

        //使用sizeCtl中记录的size值初始化table
        //对于ConcurrentHashMap来说,调用它的构造方法仅仅是设置了一些参数而已。
        //而整个table的初始化是在向ConcurrentHashMap中插入元素的时候发生的。
        private final Node<K,V>[] initTable() {
            Node<K,V>[] tab; int sc;
            while ((tab = table) == null || tab.length == 0) {
                //如果sizeCtl<0,说明已有线程正在初始化table,当前线程让出CPU自旋等待,可见ConcurrentHashMap的初始化只能由一个线程完成.
                if ((sc = sizeCtl) < 0)
                    Thread.yield(); // 让出CPU,自旋等待初始化完成
                //利用CAS方法把sizectl的值置为-1,防止其他线程进入,表示本线程正在进行初始化
                else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
                    try {
                        if ((tab = table) == null || tab.length == 0) {
                            int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                            //初始化大小为n的node数组
                            @SuppressWarnings("unchecked")
                            Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                            table = tab = nt;
                            sc = n - (n >>> 2);//相当于0.75*n,设置一个扩容的阈值
                        }
                    } finally {
                        sizeCtl = sc;//sizeCtl的值改为0.75*n
                    }
                    break;
                }
            }
            return tab;
        }

        /**
         * 增加节点个数,如果table太小而没有resize,则检查是否需要resize。如果已经调整大小,则可以帮助复制转移节点。转移后重新检查占用情况,
         * 以确定是否还需要调整大小,因为resize总是比put操作滞后。
         */
        private final void addCount(long x, int check) {
            CounterCell[] as; long b, s;
            if ((as = counterCells) != null ||
                    !U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)) {
                CounterCell a; long v; int m;
                boolean uncontended = true;
                if (as == null || (m = as.length - 1) < 0 ||
                        (a = as[ThreadLocalRandom.getProbe() & m]) == null ||
                        !(uncontended =
                                U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))) {
                    fullAddCount(x, uncontended);
                    return;
                }
                if (check <= 1)
                    return;
                s = sumCount();
            }
            if (check >= 0) {
                Node<K,V>[] tab, nt; int n, sc;
                while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
                        (n = tab.length) < MAXIMUM_CAPACITY) {
                    int rs = resizeStamp(n);
                    if (sc < 0) {
                        if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                                sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
                                transferIndex <= 0)
                            break;
                        if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
                            transfer(tab, nt);
                    }
                    else if (U.compareAndSwapInt(this, SIZECTL, sc,
                            (rs << RESIZE_STAMP_SHIFT) + 2))
                        transfer(tab, null);
                    s = sumCount();
                }
            }
        }

        //如果resize正在进行,则多个线程帮助节点的复制操作.
        final Node<K,V>[] helpTransfer(Node<K,V>[] tab, Node<K,V> f) {
            Node<K,V>[] nextTab; int sc;
            //如果tab不为null,且f为转移节点,且新table不为null
            if (tab != null && (f instanceof ForwardingNode) &&
                    (nextTab = ((ForwardingNode<K,V>)f).nextTable) != null) {
                //返回resize后的table的标记位
                int rs = resizeStamp(tab.length);
                while (nextTab == nextTable && table == tab &&
                        (sc = sizeCtl) < 0) {
                    if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                            sc == rs + MAX_RESIZERS || transferIndex <= 0)
                        break;
                    if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1)) {
                        transfer(tab, nextTab);
                        break;
                    }
                }
                return nextTab;
            }
            return table;
        }

        //尝试将table容量提升到不小于1.5*size+1的最小2的幂,以容纳元素
        private final void tryPresize(int size) {
            int c = (size >= (MAXIMUM_CAPACITY >>> 1)) ? MAXIMUM_CAPACITY :
                    tableSizeFor(size + (size >>> 1) + 1);
            int sc;
            while ((sc = sizeCtl) >= 0) {
                Node<K,V>[] tab = table; int n;
                if (tab == null || (n = tab.length) == 0) {
                    n = (sc > c) ? sc : c;
                    if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
                        try {
                            if (table == tab) {
                                @SuppressWarnings("unchecked")
                                Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                                table = nt;
                                sc = n - (n >>> 2);
                            }
                        } finally {
                            sizeCtl = sc;
                        }
                    }
                }
                else if (c <= sc || n >= MAXIMUM_CAPACITY)
                    break;
                else if (tab == table) {
                    int rs = resizeStamp(n);
                    if (sc < 0) {
                        Node<K,V>[] nt;
                        if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                                sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
                                transferIndex <= 0)
                            break;
                        if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
                            transfer(tab, nt);
                    }
                    else if (U.compareAndSwapInt(this, SIZECTL, sc,
                            (rs << RESIZE_STAMP_SHIFT) + 2))
                        transfer(tab, null);
                }
            }
        }
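tryPresize中的tableSizeFor负责把1.5*size+1向上取整到2的幂。下面是一个等价的简化实现(演示代码,类名为假设,省略了源码中对MAXIMUM_CAPACITY的上限判断):

```java
public class TableSizeForDemo {
    // 返回不小于c的最小2的幂(简化版,未做MAXIMUM_CAPACITY上限判断)
    static int tableSizeFor(int c) {
        int n = c - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : n + 1;
    }

    public static void main(String[] args) {
        int size = 10;
        // tryPresize按size + size/2 + 1 = 16估算,再向上取整到2的幂
        System.out.println(tableSizeFor(size + (size >>> 1) + 1)); // 16
        System.out.println(tableSizeFor(17)); // 32
    }
}
```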



        /**
         * 这是ConcurrentHashMap的扩容方法
         * 将每一个bin拷贝到新的table中
         */
        private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
            int n = tab.length, stride;
            //stride为每个线程负责迁移的bin区段长度:多核时为(n/8)/NCPU,单核时为n
            if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
                //如果算出的stride小于最小迁移步长,则取MIN_TRANSFER_STRIDE(默认16)
                stride = MIN_TRANSFER_STRIDE; // subdivide range
            //如果新table为null,则对新table初始化,长度为旧table的2倍
            if (nextTab == null) {            // initiating
                try {
                    @SuppressWarnings("unchecked")
                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];//2倍扩容
                    nextTab = nt;
                } catch (Throwable ex) {      // try to cope with OOME
                    sizeCtl = Integer.MAX_VALUE;
                    return;
                }
                //nextTable指向新建table
                nextTable = nextTab;
                //转移索引改为n
                transferIndex = n;
            }
            //新table长度
            int nextn = nextTab.length;
            //转移节点:设定为新table,hash值=-1,其他属性为null
            ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
            //并发扩容的关键属性:advance为true表示当前bin已处理完,可以推进到下一个bin
            boolean advance = true;
            //确保在提交nextTab之前对全表完成一次扫描
            boolean finishing = false; // to ensure sweep before committing nextTab
            for (int i = 0, bound = 0;;) {
                Node<K,V> f; int fh;
                //这个while循环体的作用就是控制i--,通过i--可以依次遍历原hash表中的节点
                while (advance) {
                    int nextIndex, nextBound;
                    if (--i >= bound || finishing)
                        advance = false;
                    else if ((nextIndex = transferIndex) <= 0) {
                        i = -1;
                        advance = false;
                    }
                    else if (U.compareAndSwapInt
                            (this, TRANSFERINDEX, nextIndex,
                                    nextBound = (nextIndex > stride ?
                                            nextIndex - stride : 0))) {
                        bound = nextBound;
                        i = nextIndex - 1;
                        advance = false;
                    }
                }
                if (i < 0 || i >= n || i + n >= nextn) {
                    int sc;
                    //如果所有的节点都已经完成复制工作  就把nextTable赋值给table 清空临时对象nextTabl
                    if (finishing) {
                        nextTable = null;
                        table = nextTab;
                        sizeCtl = (n << 1) - (n >>> 1);
                        return;
                    }
                    if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                        if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                            return;
                        finishing = advance = true;
                        i = n; // recheck before commit
                    }
                }
                //i位置节点为null,原table中的i位置放入forwardNode节点,这个也是触发并发扩容的关键点;
                else if ((f = tabAt(tab, i)) == null)
                    advance = casTabAt(tab, i, null, fwd);
                //当前节点已经被复制过,直接跳过.是控制并发扩容的关键
                else if ((fh = f.hash) == MOVED)
                    advance = true; // already processed
                //同步代码块,复制节点,保证线程安全的复制,不重复不冲突
                else {
                    synchronized (f) {
                        if (tabAt(tab, i) == f) {
                            Node<K,V> ln, hn;
                            //如果节点为链表节点
                            if (fh >= 0) {
                                int runBit = fh & n;
                                Node<K,V> lastRun = f;
                                //查找lastRun的位置
                                for (Node<K,V> p = f.next; p != null; p = p.next) {
                                    int b = p.hash & n;
                                    if (b != runBit) {
                                        runBit = b;
                                        lastRun = p;
                                    }
                                }
                                if (runBit == 0) {
                                    ln = lastRun;
                                    hn = null;
                                }
                                else {
                                    hn = lastRun;
                                    ln = null;
                                }
                                //lastRun节点前的节点都会构造一个反序链表,lastRun节点开始到后面的节点则顺序不变
                                for (Node<K,V> p = f; p != lastRun; p = p.next) {
                                    int ph = p.hash; K pk = p.key; V pv = p.val;
                                    if ((ph & n) == 0)
                                        ln = new Node<K,V>(ph, pk, pv, ln);
                                    else
                                        hn = new Node<K,V>(ph, pk, pv, hn);
                                }
                                //在nextTable的i位置上插入一个链表
                                setTabAt(nextTab, i, ln);
                                //在nextTable的i+n的位置上插入另一个链表
                                setTabAt(nextTab, i + n, hn);
                                //在table的i位置上插入forwardNode节点  表示已经处理过该节点
                                setTabAt(tab, i, fwd);
                                //设置advance为true 返回到上面的while循环中 就可以执行i--操作
                                advance = true;
                            }
                            //如果被复制节点为红黑树节点包装类TreeBin,也做一个反序处理,并且判断是否需要untreeify,
                            //把处理的结果分别放在nextTable的i和i+n的位置上
                            else if (f instanceof TreeBin) {
                                TreeBin<K,V> t = (TreeBin<K,V>)f;
                                TreeNode<K,V> lo = null, loTail = null;
                                TreeNode<K,V> hi = null, hiTail = null;
                                int lc = 0, hc = 0;
                                for (Node<K,V> e = t.first; e != null; e = e.next) {
                                    int h = e.hash;
                                    TreeNode<K,V> p = new TreeNode<K,V>
                                            (h, e.key, e.val, null, null);
                                    if ((h & n) == 0) {
                                        if ((p.prev = loTail) == null)
                                            lo = p;
                                        else
                                            loTail.next = p;
                                        loTail = p;
                                        ++lc;
                                    }
                                    else {
                                        if ((p.prev = hiTail) == null)
                                            hi = p;
                                        else
                                            hiTail.next = p;
                                        hiTail = p;
                                        ++hc;
                                    }
                                }
                                //如果扩容后已经不再需要tree的结构,反向转换为链表结构
                                ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
                                        (hc != 0) ? new TreeBin<K,V>(lo) : t;
                                hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
                                        (lc != 0) ? new TreeBin<K,V>(hi) : t;
                                //下面功能和链表处理一致
                                setTabAt(nextTab, i, ln);
                                setTabAt(nextTab, i + n, hn);
                                setTabAt(tab, i, fwd);
                                advance = true;
                            }
                        }
                    }
                }
            }
        }
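transfer中按(hash & n)把一个bin拆成ln/hn两条链:结果为0的节点留在新表的原索引i上,否则迁移到i+n。下面用一个小例子验证这条规律(演示代码,类名为假设,非JDK源码的一部分):

```java
public class TransferSplitDemo {
    public static void main(String[] args) {
        int n = 16;        // 旧表长度
        int h1 = 0b00101;  // h1 & n == 0:留在原索引
        int h2 = 0b10101;  // h2 & n != 0:迁移到 i + n

        // 旧表中两个hash都落在5号bin
        System.out.println(h1 & (n - 1));       // 5
        System.out.println(h2 & (n - 1));       // 5
        // 扩容到2n后,新索引 = h & (2n-1):要么不变,要么恰好加n
        System.out.println(h1 & (2 * n - 1));   // 5
        System.out.println(h2 & (2 * n - 1));   // 21 = 5 + 16
    }
}
```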

        /* ---------------- Counter support 计数器支持方法-------------- */

        /**
         * A padded cell for distributing counts.  Adapted from LongAdder
         * and Striped64.  See their internal docs for explanation.
         * 用于分发计数的填充单元格。改编自LongAdder和Striped64。请参阅他们的内部文档以获得解释。
         * 用于辅助sumCount()方法
         */
        @sun.misc.Contended static final class CounterCell {
            //内存可见value
            volatile long value;
            CounterCell(long x) { value = x; }
        }

        //ConcurrentHashMap中节点总数
        final long sumCount() {
            CounterCell[] as = counterCells; CounterCell a;
            long sum = baseCount;
            if (as != null) {
                for (int i = 0; i < as.length; ++i) {
                    if ((a = as[i]) != null)
                        sum += a.value;
                }
            }
            return sum;
        }

        /**
         * LongAdder是java8新增的.
         * LongAdder与ConcurrentHashMap一起使用,可以维护可伸缩的频率映射(一种直方图或多重集)。
         * 例如,要为 ConcurrentHashMap<String,LongAdder> freqs 添加一个计数
         * (如果尚未存在则先初始化),可以使用 freqs.computeIfAbsent(key, k -> new LongAdder()).increment()
         */
        private final void fullAddCount(long x, boolean wasUncontended) {
            int h;
            if ((h = ThreadLocalRandom.getProbe()) == 0) {
                ThreadLocalRandom.localInit();      // force initialization
                h = ThreadLocalRandom.getProbe();
                wasUncontended = true;
            }
            boolean collide = false;                // True if last slot nonempty
            for (;;) {
                CounterCell[] as; CounterCell a; int n; long v;
                if ((as = counterCells) != null && (n = as.length) > 0) {
                    if ((a = as[(n - 1) & h]) == null) {
                        if (cellsBusy == 0) {            // Try to attach new Cell
                            CounterCell r = new CounterCell(x); // Optimistic create
                            if (cellsBusy == 0 &&
                                    U.compareAndSwapInt(this, CELLSBUSY, 0, 1)) {
                                boolean created = false;
                                try {               // Recheck under lock
                                    CounterCell[] rs; int m, j;
                                    if ((rs = counterCells) != null &&
                                            (m = rs.length) > 0 &&
                                            rs[j = (m - 1) & h] == null) {
                                        rs[j] = r;
                                        created = true;
                                    }
                                } finally {
                                    cellsBusy = 0;
                                }
                                if (created)
                                    break;
                                continue;           // Slot is now non-empty
                            }
                        }
                        collide = false;
                    }
                    else if (!wasUncontended)       // CAS already known to fail
                        wasUncontended = true;      // Continue after rehash
                    else if (U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))
                        break;
                    else if (counterCells != as || n >= NCPU)
                        collide = false;            // At max size or stale
                    else if (!collide)
                        collide = true;
                    else if (cellsBusy == 0 &&
                            U.compareAndSwapInt(this, CELLSBUSY, 0, 1)) {
                        try {
                            if (counterCells == as) {// Expand table unless stale
                                CounterCell[] rs = new CounterCell[n << 1];
                                for (int i = 0; i < n; ++i)
                                    rs[i] = as[i];
                                counterCells = rs;
                            }
                        } finally {
                            cellsBusy = 0;
                        }
                        collide = false;
                        continue;                   // Retry with expanded table
                    }
                    h = ThreadLocalRandom.advanceProbe(h);
                }
                else if (cellsBusy == 0 && counterCells == as &&
                        U.compareAndSwapInt(this, CELLSBUSY, 0, 1)) {
                    boolean init = false;
                    try {                           // Initialize table
                        if (counterCells == as) {
                            CounterCell[] rs = new CounterCell[2];
                            rs[h & 1] = new CounterCell(x);
                            counterCells = rs;
                            init = true;
                        }
                    } finally {
                        cellsBusy = 0;
                    }
                    if (init)
                        break;
                }
                else if (U.compareAndSwapLong(this, BASECOUNT, v = baseCount, v + x))
                    break;                          // Fall back on using base
            }
        }

        /* ---------------- TreeBins 转换相关-------------- */

        //在指定索引处替换掉所有的链表节点为红黑树节点;当然如果此时table特别小,则不执行转换操作,而应执行resize操作.
        private final void treeifyBin(Node<K,V>[] tab, int index) {
            Node<K,V> b; int n, sc;
            if (tab != null) {
                if ((n = tab.length) < MIN_TREEIFY_CAPACITY)
                    tryPresize(n << 1);
                else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
                    synchronized (b) {
                        if (tabAt(tab, index) == b) {
                            TreeNode<K,V> hd = null, tl = null;
                            for (Node<K,V> e = b; e != null; e = e.next) {
                                TreeNode<K,V> p =
                                        new TreeNode<K,V>(e.hash, e.key, e.val,
                                                null, null);
                                if ((p.prev = tl) == null)
                                    hd = p;
                                else
                                    tl.next = p;
                                tl = p;
                            }
                            //TreeBin封装了TreeNode节点,将索引index处节点设置为新的TreeBin节点
                            setTabAt(tab, index, new TreeBin<K,V>(hd));
                        }
                    }
                }
            }
        }

        //将给定list中红黑树节点全部替换为链表节点,并返回链表
        static <K,V> Node<K,V> untreeify(Node<K,V> b) {
            Node<K,V> hd = null, tl = null;
            for (Node<K,V> q = b; q != null; q = q.next) {
                Node<K,V> p = new Node<K,V>(q.hash, q.key, q.val, null);
                if (tl == null)
                    hd = p;
                else
                    tl.next = p;
                tl = p;
            }
            return hd;
        }

        /* ---------------- TreeNodes -------------- */

        /**
         * Nodes for use in TreeBins
         * 也是一个核心的数据结构.
         * 当链表长度过长的时候,会转换为TreeNode。但是与HashMap不相同的是,它并不是直接转换为红黑树,
         * 而是把这些结点包装成TreeNode放在TreeBin对象中,由TreeBin完成对红黑树的包装。
         * 而且TreeNode在ConcurrentHashMap中继承自Node类,而并非像HashMap中那样继承自LinkedHashMap.Entry类,
         * 也就是说TreeNode带有next指针,这样做的目的是方便基于TreeBin的访问。
         */
        static final class TreeNode<K,V> extends Node<K,V> {
            //用于红黑树节点连接,因为本身是一个链表,所以需要一个指针指向双亲节点
            TreeNode<K,V> parent;  // red-black tree links
            TreeNode<K,V> left;
            TreeNode<K,V> right;
            //前驱节点指针,用于删除节点
            TreeNode<K,V> prev;    // needed to unlink next upon deletion
            boolean red;

            TreeNode(int hash, K key, V val, Node<K,V> next,
                     TreeNode<K,V> parent) {
                super(hash, key, val, next);
                this.parent = parent;
            }

            Node<K,V> find(int h, Object k) {
                return findTreeNode(h, k, null);
            }

            /**
             * 从给定根节点出发,查找指定key的树节点.
             * 红黑树节点排序规则:按照节点的hash值排序
             * @param h 查找节点hash值
             * @param k 查找节点key
             * @param kc key对应的Comparable类对象(若已确定),否则为null
             *
             */
            final TreeNode<K,V> findTreeNode(int h, Object k, Class<?> kc) {
                //指定key不为null
                if (k != null) {
                    TreeNode<K,V> p = this;
                    do  {
                        int ph, dir; K pk; TreeNode<K,V> q;
                        TreeNode<K,V> pl = p.left, pr = p.right;
                        //查找节点hash值比当前节点p的hash值小,则转向p的左孩子进行遍历,可见 红黑树按照节点的hash值排序
                        if ((ph = p.hash) > h)
                            p = pl;
                        //转右孩子
                        else if (ph < h)
                            p = pr;
                        //查找节点和当前p节点hash值和key都一样,则查找成功,返回查找节点
                        else if ((pk = p.key) == k || (pk != null && k.equals(pk)))
                            return p;
                        else if (pl == null)
                            p = pr;
                        else if (pr == null)
                            p = pl;
                        else if ((kc != null ||
                                (kc = comparableClassFor(k)) != null) &&
                                (dir = compareComparables(kc, k, pk)) != 0)
                            p = (dir < 0) ? pl : pr;
                        //递归查找
                        else if ((q = pr.findTreeNode(h, k, kc)) != null)
                            return q;
                        else
                            p = pl;
                    } while (p != null);
                }
                return null;
            }
        }

        /* ---------------- TreeBins -------------- */

        /**
         * TreeNodes用作bin的头节点.在实际的ConcurrentHashMap“数组”中,存放的是TreeBin对象,而不是TreeNode对象,这是与HashMap的区别。
         * TreeBins不保存key和value,而是指向TreeNodes链表及其根节点.
         * TreeBin还维持一个读写锁,从而保证在红黑树重构前,优先完成读操作,然后再执行写操作.
         */
        static final class TreeBin<K,V> extends Node<K,V> {
            //一个TreeBin既要有指向根节点的指针,也要有指向第一个节点指针
            TreeNode<K,V> root;//根节点
            volatile TreeNode<K,V> first;//第一个节点
            volatile Thread waiter;//等待线程
            volatile int lockState;//锁状态

            //lockState的一些值
            static final int WRITER = 1; // 持有写锁的锁状态值
            static final int WAITER = 2; // 等待写锁的锁状态值
            static final int READER = 4; // 设置读锁时的锁状态增量值

            /**
             * 插入节点时,如果hash值相同而又没有其它可比较的元素时,此方法可用于打破此种插入僵局.
             * 我们不需要一个全局有序,只需要一个插入规则,保证在调整平衡时可以维持等价关系.
             * 僵局关系的进一步打破也使得测试变得简单了一些.
             * 和hashmap方法一样
             */
            static int tieBreakOrder(Object a, Object b) {
                int d;
                //如果a为null,或者b为null,或者a和b的类名相同(即为同一个类的实例)
                if (a == null || b == null ||
                        (d = a.getClass().getName().
                                compareTo(b.getClass().getName())) == 0)//反射获取类名
                //identityHashCode():无论给定对象的类是否覆盖hashCode(),都会返回给定对象的哈希码,与默认方法hashCode()返回的哈希码相同。
                //如果传入参数a为null,则返回值为0;
                    d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
                            -1 : 1);
                return d;
            }
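tieBreakOrder的行为可以单独验证:类名不同时按类名字典序决定方向;类名相同或存在null时退化为identityHashCode比较,且永远不会返回0(演示代码,类名为假设,方法体照搬上面的源码):

```java
public class TieBreakOrderDemo {
    // 与上面源码一致的实现
    static int tieBreakOrder(Object a, Object b) {
        int d;
        if (a == null || b == null ||
                (d = a.getClass().getName().compareTo(b.getClass().getName())) == 0)
            d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
                    -1 : 1);
        return d;
    }

    public static void main(String[] args) {
        // "java.lang.String".compareTo("java.lang.Integer") > 0:按类名决定方向
        System.out.println(tieBreakOrder("a", 1) > 0); // true
        // 同类且无法比较时,用identityHashCode打破僵局,结果只会是-1或1
        Object x = new Object(), y = new Object();
        int d = tieBreakOrder(x, y);
        System.out.println(d == -1 || d == 1); // true
    }
}
```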

            //利用头节点为b的初始节点集合,创建一个bin,里面放了一棵红黑树
            TreeBin(TreeNode<K,V> b) {
                //这是它的构造函数,TREEBIN=-2,即树的头节点的hash值
                super(TREEBIN, null, null, null);

                this.first = b;
                TreeNode<K,V> r = null;
                for (TreeNode<K,V> x = b, next; x != null; x = next) {
                    next = (TreeNode<K,V>)x.next;
                    x.left = x.right = null;
                    if (r == null) {
                        x.parent = null;
                        x.red = false;
                        r = x;
                    }
                    else {
                        K k = x.key;
                        int h = x.hash;
                        Class<?> kc = null;
                        for (TreeNode<K,V> p = r;;) {
                            int dir, ph;
                            K pk = p.key;
                            if ((ph = p.hash) > h)
                                dir = -1;
                            else if (ph < h)
                                dir = 1;
                            else if ((kc == null &&
                                    (kc = comparableClassFor(k)) == null) ||
                                    (dir = compareComparables(kc, k, pk)) == 0)
                                dir = tieBreakOrder(k, pk);
                            TreeNode<K,V> xp = p;
                            if ((p = (dir <= 0) ? p.left : p.right) == null) {
                                x.parent = xp;
                                if (dir <= 0)
                                    xp.left = x;
                                else
                                    xp.right = x;
                                r = balanceInsertion(r, x);
                                break;
                            }
                        }
                    }
                }
                this.root = r;
                assert checkInvariants(root);
            }

            //Acquires the write lock before restructuring the red-black tree
            private final void lockRoot() {
                if (!U.compareAndSwapInt(this, LOCKSTATE, 0, WRITER))
                    contendedLock(); // offload to separate method
            }

            //Releases the write lock once restructuring completes
            private final void unlockRoot() {
                lockState = 0;
            }

            //Called when the fast-path CAS for the write lock fails: spins trying to
            //CAS the lock, registering itself as a waiter and parking when necessary.
            private final void contendedLock() {
                boolean waiting = false;
                for (int s;;) {
                    if (((s = lockState) & ~WAITER) == 0) {
                        if (U.compareAndSwapInt(this, LOCKSTATE, s, WRITER)) {
                            if (waiting)
                                waiter = null;
                            return;
                        }
                    }
                    else if ((s & WAITER) == 0) {
                        if (U.compareAndSwapInt(this, LOCKSTATE, s, s | WAITER)) {
                            waiting = true;
                            waiter = Thread.currentThread();
                        }
                    }
                    else if (waiting)
                        LockSupport.park(this);
                }
            }
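The lockRoot/contendedLock/find trio above implements a tiny read-write lock packed into one int. A minimal sketch (not the JDK implementation; `LockStateDemo` and its method names are invented for illustration) using `AtomicInteger` shows the bit layout, assuming the same constants WRITER=1, WAITER=2, READER=4:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch (not JDK code): models TreeBin's lockState bits.
// WRITER=1 (write lock held), WAITER=2 (a writer is waiting),
// READER=4 (the increment unit of the reader count).
public class LockStateDemo {
    static final int WRITER = 1, WAITER = 2, READER = 4;
    final AtomicInteger lockState = new AtomicInteger();

    // Fast path of lockRoot: CAS the write lock when no readers/writers exist
    boolean tryLockRoot() {
        return lockState.compareAndSet(0, WRITER);
    }

    void unlockRoot() {
        lockState.set(0);
    }

    // Reader entry as in find(): if no writer holds or awaits the lock,
    // bump the reader count by READER
    boolean tryReadLock() {
        int s = lockState.get();
        return (s & (WAITER | WRITER)) == 0
                && lockState.compareAndSet(s, s + READER);
    }

    void readUnlock() {
        lockState.addAndGet(-READER);
    }
}
```

With two readers inside, lockState is 8 (two READER units), so the writer's CAS from 0 necessarily fails until both readers leave; that is exactly why a writer in the real code registers as a WAITER and parks.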

            //Finds the tree node matching the given hash and key.
            //Normally searches from the root in red-black-tree fashion.
            //If the read lock cannot be taken (a writer holds or awaits the lock),
            //the lookup falls back to a linear scan of the next-linked list: O(n).
            //So a TreeBin lookup costs O(logN) or O(N).
            final Node<K,V> find(int h, Object k) {
                if (k != null) {
                    for (Node<K,V> e = first; e != null; ) {
                        int s; K ek;
                        if (((s = lockState) & (WAITER|WRITER)) != 0) {
                            if (e.hash == h &&
                                    ((ek = e.key) == k || (ek != null && k.equals(ek))))
                                return e;
                            e = e.next;
                        }
                        else if (U.compareAndSwapInt(this, LOCKSTATE, s,
                                s + READER)) {
                            TreeNode<K,V> r, p;
                            try {
                                p = ((r = root) == null ? null :
                                        r.findTreeNode(h, k, null));
                            } finally {
                                Thread w;
                                if (U.getAndAddInt(this, LOCKSTATE, -READER) ==
                                        (READER|WAITER) && (w = waiter) != null)
                                    LockSupport.unpark(w);
                            }
                            return p;
                        }
                    }
                }
                return null;
            }


            //Adds a tree node; returns the existing node if the key is already
            //present, otherwise inserts a new node and returns null
            final TreeNode<K,V> putTreeVal(int h, K k, V v) {
                Class<?> kc = null;
                boolean searched = false;
                for (TreeNode<K,V> p = root;;) {
                    int dir, ph; K pk;
                    if (p == null) {
                        first = root = new TreeNode<K,V>(h, k, v, null, null);
                        break;
                    }
                    else if ((ph = p.hash) > h)
                        dir = -1;
                    else if (ph < h)
                        dir = 1;
                    else if ((pk = p.key) == k || (pk != null && k.equals(pk)))
                        return p;
                    else if ((kc == null &&
                            (kc = comparableClassFor(k)) == null) ||
                            (dir = compareComparables(kc, k, pk)) == 0) {
                        if (!searched) {
                            TreeNode<K,V> q, ch;
                            searched = true;
                            if (((ch = p.left) != null &&
                                    (q = ch.findTreeNode(h, k, kc)) != null) ||
                                    ((ch = p.right) != null &&
                                            (q = ch.findTreeNode(h, k, kc)) != null))
                                return q;
                        }
                        dir = tieBreakOrder(k, pk);
                    }

                    TreeNode<K,V> xp = p;
                    if ((p = (dir <= 0) ? p.left : p.right) == null) {
                        TreeNode<K,V> x, f = first;
                        first = x = new TreeNode<K,V>(h, k, v, f, xp);
                        if (f != null)
                            f.prev = x;
                        if (dir <= 0)
                            xp.left = x;
                        else
                            xp.right = x;
                        if (!xp.red)
                            x.red = true;
                        else {
                            lockRoot();
                            try {
                                root = balanceInsertion(root, x);
                            } finally {
                                unlockRoot();
                            }
                        }
                        break;
                    }
                }
                assert checkInvariants(root);
                return null;
            }

            /**
             * Removes a tree node. The node must be present before this call.
             * This is messier than typical red-black deletion code because we cannot
             * swap the contents of an interior node with its leaf successor; instead
             * we swap the tree linkages.
             * @return true if the bin now has too few nodes and the red-black tree
             *         should be converted back to a linked list
             */
            final boolean removeTreeNode(TreeNode<K,V> p) {
                TreeNode<K,V> next = (TreeNode<K,V>)p.next;
                TreeNode<K,V> pred = p.prev;  // unlink traversal pointers
                TreeNode<K,V> r, rl;
                if (pred == null)
                    first = next;
                else
                    pred.next = next;
                if (next != null)
                    next.prev = pred;
                if (first == null) {
                    root = null;
                    return true;
                }
                if ((r = root) == null || r.right == null || // too small
                        (rl = r.left) == null || rl.left == null)
                    return true;
                lockRoot();
                try {
                    TreeNode<K,V> replacement;
                    TreeNode<K,V> pl = p.left;
                    TreeNode<K,V> pr = p.right;
                    if (pl != null && pr != null) {
                        TreeNode<K,V> s = pr, sl;
                        while ((sl = s.left) != null) // find successor
                            s = sl;
                        boolean c = s.red; s.red = p.red; p.red = c; // swap colors
                        TreeNode<K,V> sr = s.right;
                        TreeNode<K,V> pp = p.parent;
                        if (s == pr) { // p was s's direct parent
                            p.parent = s;
                            s.right = p;
                        }
                        else {
                            TreeNode<K,V> sp = s.parent;
                            if ((p.parent = sp) != null) {
                                if (s == sp.left)
                                    sp.left = p;
                                else
                                    sp.right = p;
                            }
                            if ((s.right = pr) != null)
                                pr.parent = s;
                        }
                        p.left = null;
                        if ((p.right = sr) != null)
                            sr.parent = p;
                        if ((s.left = pl) != null)
                            pl.parent = s;
                        if ((s.parent = pp) == null)
                            r = s;
                        else if (p == pp.left)
                            pp.left = s;
                        else
                            pp.right = s;
                        if (sr != null)
                            replacement = sr;
                        else
                            replacement = p;
                    }
                    else if (pl != null)
                        replacement = pl;
                    else if (pr != null)
                        replacement = pr;
                    else
                        replacement = p;
                    if (replacement != p) {
                        TreeNode<K,V> pp = replacement.parent = p.parent;
                        if (pp == null)
                            r = replacement;
                        else if (p == pp.left)
                            pp.left = replacement;
                        else
                            pp.right = replacement;
                        p.left = p.right = p.parent = null;
                    }

                    root = (p.red) ? r : balanceDeletion(r, replacement);

                    if (p == replacement) {  // detach pointers
                        TreeNode<K,V> pp;
                        if ((pp = p.parent) != null) {
                            if (p == pp.left)
                                pp.left = null;
                            else if (p == pp.right)
                                pp.right = null;
                            p.parent = null;
                        }
                    }
                } finally {
                    unlockRoot();
                }
                assert checkInvariants(root);
                return false;
            }

            /* -------------------- Red-black tree methods, all adapted from CLR ---------------------------------------- */

            //Rotate left
            static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,
                                                  TreeNode<K,V> p) {
                TreeNode<K,V> r, pp, rl;
                if (p != null && (r = p.right) != null) {
                    if ((rl = p.right = r.left) != null)
                        rl.parent = p;
                    if ((pp = r.parent = p.parent) == null)
                        (root = r).red = false;
                    else if (pp.left == p)
                        pp.left = r;
                    else
                        pp.right = r;
                    r.left = p;
                    p.parent = r;
                }
                return root;
            }

            //Rotate right
            static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,
                                                   TreeNode<K,V> p) {
                TreeNode<K,V> l, pp, lr;
                if (p != null && (l = p.left) != null) {
                    if ((lr = p.left = l.right) != null)
                        lr.parent = p;
                    if ((pp = l.parent = p.parent) == null)
                        (root = l).red = false;
                    else if (pp.right == p)
                        pp.right = l;
                    else
                        pp.left = l;
                    l.right = p;
                    p.parent = l;
                }
                return root;
            }

            //Rebalances and recolors after insertion
            static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
                                                        TreeNode<K,V> x) {
                x.red = true;
                for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
                    if ((xp = x.parent) == null) {
                        x.red = false;
                        return x;
                    }
                    else if (!xp.red || (xpp = xp.parent) == null)
                        return root;
                    if (xp == (xppl = xpp.left)) {
                        if ((xppr = xpp.right) != null && xppr.red) {
                            xppr.red = false;
                            xp.red = false;
                            xpp.red = true;
                            x = xpp;
                        }
                        else {
                            if (x == xp.right) {
                                root = rotateLeft(root, x = xp);
                                xpp = (xp = x.parent) == null ? null : xp.parent;
                            }
                            if (xp != null) {
                                xp.red = false;
                                if (xpp != null) {
                                    xpp.red = true;
                                    root = rotateRight(root, xpp);
                                }
                            }
                        }
                    }
                    else {
                        if (xppl != null && xppl.red) {
                            xppl.red = false;
                            xp.red = false;
                            xpp.red = true;
                            x = xpp;
                        }
                        else {
                            if (x == xp.left) {
                                root = rotateRight(root, x = xp);
                                xpp = (xp = x.parent) == null ? null : xp.parent;
                            }
                            if (xp != null) {
                                xp.red = false;
                                if (xpp != null) {
                                    xpp.red = true;
                                    root = rotateLeft(root, xpp);
                                }
                            }
                        }
                    }
                }
            }

            //Rebalances and recolors after deletion
            static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root,
                                                       TreeNode<K,V> x) {
                for (TreeNode<K,V> xp, xpl, xpr;;)  {
                    if (x == null || x == root)
                        return root;
                    else if ((xp = x.parent) == null) {
                        x.red = false;
                        return x;
                    }
                    else if (x.red) {
                        x.red = false;
                        return root;
                    }
                    else if ((xpl = xp.left) == x) {
                        if ((xpr = xp.right) != null && xpr.red) {
                            xpr.red = false;
                            xp.red = true;
                            root = rotateLeft(root, xp);
                            xpr = (xp = x.parent) == null ? null : xp.right;
                        }
                        if (xpr == null)
                            x = xp;
                        else {
                            TreeNode<K,V> sl = xpr.left, sr = xpr.right;
                            if ((sr == null || !sr.red) &&
                                    (sl == null || !sl.red)) {
                                xpr.red = true;
                                x = xp;
                            }
                            else {
                                if (sr == null || !sr.red) {
                                    if (sl != null)
                                        sl.red = false;
                                    xpr.red = true;
                                    root = rotateRight(root, xpr);
                                    xpr = (xp = x.parent) == null ?
                                            null : xp.right;
                                }
                                if (xpr != null) {
                                    xpr.red = (xp == null) ? false : xp.red;
                                    if ((sr = xpr.right) != null)
                                        sr.red = false;
                                }
                                if (xp != null) {
                                    xp.red = false;
                                    root = rotateLeft(root, xp);
                                }
                                x = root;
                            }
                        }
                    }
                    else { // symmetric
                        if (xpl != null && xpl.red) {
                            xpl.red = false;
                            xp.red = true;
                            root = rotateRight(root, xp);
                            xpl = (xp = x.parent) == null ? null : xp.left;
                        }
                        if (xpl == null)
                            x = xp;
                        else {
                            TreeNode<K,V> sl = xpl.left, sr = xpl.right;
                            if ((sl == null || !sl.red) &&
                                    (sr == null || !sr.red)) {
                                xpl.red = true;
                                x = xp;
                            }
                            else {
                                if (sl == null || !sl.red) {
                                    if (sr != null)
                                        sr.red = false;
                                    xpl.red = true;
                                    root = rotateLeft(root, xpl);
                                    xpl = (xp = x.parent) == null ?
                                            null : xp.left;
                                }
                                if (xpl != null) {
                                    xpl.red = (xp == null) ? false : xp.red;
                                    if ((sl = xpl.left) != null)
                                        sl.red = false;
                                }
                                if (xp != null) {
                                    xp.red = false;
                                    root = rotateRight(root, xp);
                                }
                                x = root;
                            }
                        }
                    }
                }
            }

            //Recursive invariant check: verifies the linkage and red-black
            //properties across the whole tree
            static <K,V> boolean checkInvariants(TreeNode<K,V> t) {
                //tp parent, tl left child, tr right child, tb predecessor, tn successor
                TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right,
                        tb = t.prev, tn = (TreeNode<K,V>)t.next;
                //predecessor is non-null but its successor is not t: broken link, return false
                if (tb != null && tb.next != t)
                    return false;
                //successor is non-null but its predecessor is not t: broken link, return false
                if (tn != null && tn.prev != t)
                    return false;
                if (tp != null && t != tp.left && t != tp.right)
                    return false;
                if (tl != null && (tl.parent != t || tl.hash > t.hash))
                    return false;
                if (tr != null && (tr.parent != t || tr.hash < t.hash))
                    return false;
                if (t.red && tl != null && tl.red && tr != null && tr.red)
                    return false;
                if (tl != null && !checkInvariants(tl))
                    return false;
                if (tr != null && !checkInvariants(tr))
                    return false;
                return true;
            }

            private static final sun.misc.Unsafe U;
            private static final long LOCKSTATE;
            static {
                try {
                    U = sun.misc.Unsafe.getUnsafe();
                    Class<?> k = TreeBin.class;
                    LOCKSTATE = U.objectFieldOffset
                            (k.getDeclaredField("lockState"));//field offset obtained via reflection
                } catch (Exception e) {
                    throw new Error(e);
                }
            }
        }

        /* ---------------- Table Traversal -------------- */

        //Records the table, its length, and the current traversal index for a
        //traverser that must process a region of a forwarded table before
        //proceeding with the current table.
        static final class TableStack<K,V> {
            int length;
            int index;
            Node<K,V>[] tab;
            TableStack<K,V> next;
        }

        /**
         * Encapsulates traversal for methods such as containsValue; also serves as
         * the base class for the other iterators and spliterators.
         * Resizing may occur during traversal. To cope with possibly ongoing resizes,
         * a fair amount of state must be recorded, which is hard to optimize with
         * volatiles; even so, traversal maintains reasonable throughput. Normally,
         * iteration proceeds bin by bin through the lists. But if the table has been
         * resized, all future steps must also traverse the bin at (index + baseSize).
         * To paranoically cope with users sharing iterators across threads, iteration
         * terminates if a bounds check fails on a table read.
         */

        static class Traverser<K,V> {
            Node<K,V>[] tab;        // current table; updated if resized
            Node<K,V> next;         // the next entry to use
            TableStack<K,V> stack, spare; // to save/restore on ForwardingNodes
            int index;              // index of bin to use next
            int baseIndex;          // current index of initial table
            int baseLimit;          // index bound for initial table
            final int baseSize;     // initial table size

            Traverser(Node<K,V>[] tab, int size, int index, int limit) {
                this.tab = tab;
                this.baseSize = size;
                this.baseIndex = this.index = index;
                this.baseLimit = limit;
                this.next = null;
            }

            //Advances the traversal; returns the next valid node, or null if none
            final Node<K,V> advance() {
                Node<K,V> e;
                if ((e = next) != null)
                    e = e.next;
                for (;;) {
                    Node<K,V>[] t; int i, n;  // must use locals in checks
                    if (e != null)
                        return next = e;
                    if (baseIndex >= baseLimit || (t = tab) == null ||
                            (n = t.length) <= (i = index) || i < 0)
                        return next = null;
                    if ((e = tabAt(t, i)) != null && e.hash < 0) {
                        if (e instanceof ForwardingNode) {
                            tab = ((ForwardingNode<K,V>)e).nextTable;
                            e = null;
                            pushState(t, i, n);
                            continue;
                        }
                        else if (e instanceof TreeBin)
                            e = ((TreeBin<K,V>)e).first;
                        else
                            e = null;
                    }
                    if (stack != null)
                        recoverState(n);
                    else if ((index = i + baseSize) >= n)
                        index = ++baseIndex; // visit upper slots if present
                }
            }

            //Saves traversal state upon encountering a forwarding node
            private void pushState(Node<K,V>[] t, int i, int n) {
                TableStack<K,V> s = spare;  // reuse if possible
                if (s != null)
                    spare = s.next;
                else
                    s = new TableStack<K,V>();
                s.tab = t;
                s.length = n;
                s.index = i;
                s.next = stack;
                stack = s;
            }

            //Pops the top of the traversal-state stack, restoring the saved state
            private void recoverState(int n) {
                TableStack<K,V> s; int len;
                while ((s = stack) != null && (index += (len = s.length)) >= n) {
                    n = len;
                    index = s.index;
                    tab = s.tab;
                    s.tab = null;
                    TableStack<K,V> next = s.next;
                    s.next = spare; // save for reuse
                    stack = next;
                    spare = s;
                }
                if (s == null && (index += baseSize) >= n)
                    index = ++baseIndex;
            }
        }
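The `index = i + baseSize` step in advance() rests on one fact: when a power-of-two table doubles from n to 2n, a node in bin i can only land in bin i or bin i+n of the new table. A small sketch (the class `ResizeIndexDemo` is hypothetical, not part of the JDK) verifies this with the standard index formula:

```java
// Illustrative sketch (not JDK code): verifies that after doubling the table,
// a node moves to index i or i + n, never anywhere else.
// bin index = hash & (tableLength - 1), where tableLength is a power of two.
public class ResizeIndexDemo {
    static int binIndex(int hash, int tableLength) {
        return hash & (tableLength - 1);
    }

    public static void main(String[] args) {
        int n = 16;
        for (int hash : new int[]{5, 21, 37, 1_000_003}) {
            int oldIdx = binIndex(hash, n);
            int newIdx = binIndex(hash, 2 * n);
            // The new index is either unchanged or offset by exactly n
            System.out.println(newIdx == oldIdx || newIdx == oldIdx + n);
        }
    }
}
```

This is why a traverser that followed a ForwardingNode only needs to visit the "upper slot" i + baseSize in addition to i, rather than rescanning the whole new table.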

        //Base of the key, value, and entry iterators. Adds fields on top of
        //Traverser to support iterator.remove
        static class BaseIterator<K,V> extends Traverser<K,V> {
            final ConcurrentHashMap<K,V> map;
            Node<K,V> lastReturned;
            BaseIterator(Node<K,V>[] tab, int size, int index, int limit,
                         ConcurrentHashMap<K,V> map) {
                super(tab, size, index, limit);
                this.map = map;
                advance();
            }

            public final boolean hasNext() { return next != null; }
            public final boolean hasMoreElements() { return next != null; }

            public final void remove() {
                Node<K,V> p;
                if ((p = lastReturned) == null)
                    throw new IllegalStateException();
                lastReturned = null;
                map.replaceNode(p.key, null, null);
            }
        }

        //Key iterator
        static final class KeyIterator<K,V> extends BaseIterator<K,V>
                implements Iterator<K>, Enumeration<K> {
            KeyIterator(Node<K,V>[] tab, int index, int size, int limit,
                        ConcurrentHashMap<K,V> map) {
                super(tab, index, size, limit, map);
            }

            public final K next() {
                Node<K,V> p;
                if ((p = next) == null)
                    throw new NoSuchElementException();
                K k = p.key;
                lastReturned = p;
                advance();
                return k;
            }

            public final K nextElement() { return next(); }
        }

        //Value iterator
        static final class ValueIterator<K,V> extends BaseIterator<K,V>
                implements Iterator<V>, Enumeration<V> {
            ValueIterator(Node<K,V>[] tab, int index, int size, int limit,
                          ConcurrentHashMap<K,V> map) {
                super(tab, index, size, limit, map);
            }

            public final V next() {
                Node<K,V> p;
                if ((p = next) == null)
                    throw new NoSuchElementException();
                V v = p.val;
                lastReturned = p;
                advance();
                return v;
            }

            public final V nextElement() { return next(); }
        }

        //Entry iterator
        static final class EntryIterator<K,V> extends BaseIterator<K,V>
                implements Iterator<Map.Entry<K,V>> {
            EntryIterator(Node<K,V>[] tab, int index, int size, int limit,
                          ConcurrentHashMap<K,V> map) {
                super(tab, index, size, limit, map);
            }

            public final Map.Entry<K,V> next() {
                Node<K,V> p;
                if ((p = next) == null)
                    throw new NoSuchElementException();
                K k = p.key;
                V v = p.val;
                lastReturned = p;
                advance();
                return new MapEntry<K,V>(k, v, map);
            }
        }

        //Exports an Entry from the EntryIterator
        static final class MapEntry<K,V> implements Map.Entry<K,V> {
            final K key; // non-null
            V val;       // non-null
            final ConcurrentHashMap<K,V> map;
            MapEntry(K key, V val, ConcurrentHashMap<K,V> map) {
                this.key = key;
                this.val = val;
                this.map = map;
            }
            public K getKey()        { return key; }
            public V getValue()      { return val; }
            public int hashCode()    { return key.hashCode() ^ val.hashCode(); }
            public String toString() { return key + "=" + val; }

            public boolean equals(Object o) {
                Object k, v; Map.Entry<?,?> e;
                return ((o instanceof Map.Entry) &&
                        (k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
                        (v = e.getValue()) != null &&
                        (k == key || k.equals(key)) &&
                        (v == val || v.equals(val)));
            }

            /**
             * Sets our entry's value and writes it through to the map.
             * Note: the value returned here is somewhat arbitrary. It is the value
             * this entry cached before the write; by then another thread may already
             * have changed the mapping, so the return value need not be the latest.
             */
            public V setValue(V value) {
                if (value == null) throw new NullPointerException();
                V v = val;
                val = value;
                map.put(key, value);
                return v;
            }
        }
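The write-through behavior of MapEntry.setValue can be observed directly with a real ConcurrentHashMap (the class name `SetValueDemo` is just for this demo):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Demonstrates entry.setValue's "write back to the map" behavior
// on a real ConcurrentHashMap.
public class SetValueDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("a", 1);
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            Integer old = e.setValue(2); // returns the entry's cached old value and puts the new one
            System.out.println(old);     // 1
        }
        System.out.println(m.get("a"));  // 2
    }
}
```

In a single-threaded run the returned old value matches the map; the caveat in the comment above only bites when another thread updates the same key between the entry being materialized and setValue being called.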

        //Key spliterator
        static final class KeySpliterator<K,V> extends Traverser<K,V>
                implements Spliterator<K> {
            long est;               // size estimate
            KeySpliterator(Node<K,V>[] tab, int size, int index, int limit,
                           long est) {
                super(tab, size, index, limit);
                this.est = est;
            }

            public Spliterator<K> trySplit() {
                int i, f, h;
                return (h = ((i = baseIndex) + (f = baseLimit)) >>> 1) <= i ? null :
                        new KeySpliterator<K,V>(tab, baseSize, baseLimit = h,
                                f, est >>>= 1);
            }

            public void forEachRemaining(Consumer<? super K> action) {
                if (action == null) throw new NullPointerException();
                for (Node<K,V> p; (p = advance()) != null;)
                    action.accept(p.key);
            }

            public boolean tryAdvance(Consumer<? super K> action) {
                if (action == null) throw new NullPointerException();
                Node<K,V> p;
                if ((p = advance()) == null)
                    return false;
                action.accept(p.key);
                return true;
            }

            public long estimateSize() { return est; }

            public int characteristics() {
                return Spliterator.DISTINCT | Spliterator.CONCURRENT |
                        Spliterator.NONNULL;
            }
        }

        //Value spliterator
        static final class ValueSpliterator<K,V> extends Traverser<K,V>
                implements Spliterator<V> {
            long est;               // size estimate
            ValueSpliterator(Node<K,V>[] tab, int size, int index, int limit,
                             long est) {
                super(tab, size, index, limit);
                this.est = est;
            }

            public Spliterator<V> trySplit() {
                int i, f, h;
                return (h = ((i = baseIndex) + (f = baseLimit)) >>> 1) <= i ? null :
                        new ValueSpliterator<K,V>(tab, baseSize, baseLimit = h,
                                f, est >>>= 1);
            }

            public void forEachRemaining(Consumer<? super V> action) {
                if (action == null) throw new NullPointerException();
                for (Node<K,V> p; (p = advance()) != null;)
                    action.accept(p.val);
            }

            public boolean tryAdvance(Consumer<? super V> action) {
                if (action == null) throw new NullPointerException();
                Node<K,V> p;
                if ((p = advance()) == null)
                    return false;
                action.accept(p.val);
                return true;
            }

            public long estimateSize() { return est; }

            public int characteristics() {
                return Spliterator.CONCURRENT | Spliterator.NONNULL;
            }
        }

        //Entry spliterator
        static final class EntrySpliterator<K,V> extends Traverser<K,V>
                implements Spliterator<Map.Entry<K,V>> {
            final ConcurrentHashMap<K,V> map; // To export MapEntry
            long est;               // size estimate
            EntrySpliterator(Node<K,V>[] tab, int size, int index, int limit,
                             long est, ConcurrentHashMap<K,V> map) {
                super(tab, size, index, limit);
                this.map = map;
                this.est = est;
            }

            public Spliterator<Map.Entry<K,V>> trySplit() {
                int i, f, h;
                return (h = ((i = baseIndex) + (f = baseLimit)) >>> 1) <= i ? null :
                        new EntrySpliterator<K,V>(tab, baseSize, baseLimit = h,
                                f, est >>>= 1, map);
            }

            public void forEachRemaining(Consumer<? super Map.Entry<K,V>> action) {
                if (action == null) throw new NullPointerException();
                for (Node<K,V> p; (p = advance()) != null; )
                    action.accept(new MapEntry<K,V>(p.key, p.val, map));
            }

            public boolean tryAdvance(Consumer<? super Map.Entry<K,V>> action) {
                if (action == null) throw new NullPointerException();
                Node<K,V> p;
                if ((p = advance()) == null)
                    return false;
                action.accept(new MapEntry<K,V>(p.key, p.val, map));
                return true;
            }

            public long estimateSize() { return est; }

            public int characteristics() {
                return Spliterator.DISTINCT | Spliterator.CONCURRENT |
                        Spliterator.NONNULL;
            }
        }
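这些分割器只报告 CONCURRENT 和 NONNULL(entry 版本额外报告 DISTINCT),而且不报告 SIZED,因为 est 只是一个估计值。下面是一个简单的验证示例(使用公开的 `entrySet().spliterator()` API,`SpliteratorDemo` 为演示用的假想类名):

```java
import java.util.Spliterator;
import java.util.concurrent.ConcurrentHashMap;

public class SpliteratorDemo {
    // Returns the characteristics reported by the entry-set spliterator.
    public static int entryCharacteristics() {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("a", 1);
        // Internally this is an EntrySpliterator; trySplit halves the bin
        // range [baseIndex, baseLimit) and halves est as it splits.
        Spliterator<?> s = m.entrySet().spliterator();
        return s.characteristics();
    }

    public static void main(String[] args) {
        int c = entryCharacteristics();
        assert (c & Spliterator.CONCURRENT) != 0; // weakly consistent traversal
        assert (c & Spliterator.NONNULL) != 0;    // CHM forbids null entries
        assert (c & Spliterator.SIZED) == 0;      // est is only an estimate
    }
}
```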

        /*--------- Parallel bulk operations ---------*/

        /**
         * Computes the initial batch value for a bulk task. The returned value
         * is an exponent exp2 of the number of times to split the task in two
         * before performing the leaf action: e.g. a return value of 3 means
         * the work is split into roughly 8 parts processed in parallel.
         * This value is fast to compute and well suited to repeated halving.
         */
        final int batchFor(long b) {
            long n;
            if (b == Long.MAX_VALUE || (n = sumCount()) <= 1L || n < b)
                return 0;
            int sp = ForkJoinPool.getCommonPoolParallelism() << 2; // slack of 4
            return (b <= 0L || (n /= b) >= sp) ? sp : (int)n;
        }
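batchFor 的三种行为(阈值为 Long.MAX_VALUE 时完全串行、map 小于阈值时串行、阈值为 1 时并行度被限制在 commonPool 并行度的 4 倍)可以用下面的草图验证。`batchFor(b, n)` 是为演示而独立移植的假想版本,参数 n 代替了 `sumCount()`(即 map 的估计大小):

```java
import java.util.concurrent.ForkJoinPool;

public class BatchForDemo {
    // Standalone re-implementation of batchFor, for illustration only:
    // n stands in for sumCount() (the estimated map size).
    static int batchFor(long b, long n) {
        if (b == Long.MAX_VALUE || n <= 1L || n < b)
            return 0;                 // run sequentially: no splitting at all
        int sp = ForkJoinPool.getCommonPoolParallelism() << 2; // slack of 4
        return (b <= 0L || (n /= b) >= sp) ? sp : (int)n;
    }

    public static void main(String[] args) {
        // threshold Long.MAX_VALUE: never parallel
        assert batchFor(Long.MAX_VALUE, 1_000_000L) == 0;
        // map smaller than the threshold: sequential
        assert batchFor(100L, 50L) == 0;
        // threshold 1 on a big map: capped at 4x the common-pool parallelism
        assert batchFor(1L, 1_000_000L)
                == ForkJoinPool.getCommonPoolParallelism() << 2;
    }
}
```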


        // @since 1.8
        public void forEach(long parallelismThreshold,
                            BiConsumer<? super K, ? super V> action) {
            if (action == null) throw new NullPointerException();
            new ForEachMappingTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            action).invoke();
        }

        // @since 1.8
        public <U> void forEach(long parallelismThreshold,
                                BiFunction<? super K, ? super V, ? extends U> transformer,
                                Consumer<? super U> action) {
            if (transformer == null || action == null)
                throw new NullPointerException();
            new ForEachTransformedMappingTask<K,V,U>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            transformer, action).invoke();
        }

        /**
         * Applies the given search function to each (key, value) and returns
         * a non-null result, or null if none. On success, further element
         * processing is suppressed and the results of any other parallel
         * invocations of the search function are ignored.
         * @since 1.8
         */
        public <U> U search(long parallelismThreshold,
                            BiFunction<? super K, ? super V, ? extends U> searchFunction) {
            if (searchFunction == null) throw new NullPointerException();
            return new SearchMappingsTask<K,V,U>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            searchFunction, new AtomicReference<U>()).invoke();
        }

        // @since 1.8; reduces all (key, value) pairs to a single result value
        public <U> U reduce(long parallelismThreshold,
                            BiFunction<? super K, ? super V, ? extends U> transformer,
                            BiFunction<? super U, ? super U, ? extends U> reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceMappingsTask<K,V,U>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, reducer).invoke();
        }
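search 和 reduce 的调用方式可以用下面的草图说明(这是公开 API,Java 8+ 可直接运行;传 Long.MAX_VALUE 强制串行执行,便于演示):

```java
import java.util.concurrent.ConcurrentHashMap;

public class SearchReduceDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("a", 1); m.put("b", 2); m.put("c", 3);

        // search: the first non-null result wins and stops further processing
        String found = m.search(Long.MAX_VALUE, (k, v) -> v == 2 ? k : null);
        assert "b".equals(found);

        // reduce: transform each (k, v) to a value, then merge results pairwise
        Integer sum = m.reduce(Long.MAX_VALUE, (k, v) -> v, Integer::sum);
        assert sum == 6;
    }
}
```

注意 reducer 会被以任意顺序、任意分组方式调用,因此必须满足结合律。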

        /**
         * @param basis the identity (initial default value) for the reduction
         * @since 1.8
         */
        public double reduceToDouble(long parallelismThreshold,
                                     ToDoubleBiFunction<? super K, ? super V> transformer,
                                     double basis,
                                     DoubleBinaryOperator reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceMappingsToDoubleTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, basis, reducer).invoke();
        }

        // @since 1.8
        public long reduceToLong(long parallelismThreshold,
                                 ToLongBiFunction<? super K, ? super V> transformer,
                                 long basis,
                                 LongBinaryOperator reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceMappingsToLongTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, basis, reducer).invoke();
        }

        // @since 1.8
        public int reduceToInt(long parallelismThreshold,
                               ToIntBiFunction<? super K, ? super V> transformer,
                               int basis,
                               IntBinaryOperator reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceMappingsToIntTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, basis, reducer).invoke();
        }

        /**
         * Performs the given action for each key.
         * @since 1.8
         */
        public void forEachKey(long parallelismThreshold,
                               Consumer<? super K> action) {
            if (action == null) throw new NullPointerException();
            new ForEachKeyTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            action).invoke();
        }

        /**
         * @since 1.8
         */
        public <U> void forEachKey(long parallelismThreshold,
                                   Function<? super K, ? extends U> transformer,
                                   Consumer<? super U> action) {
            if (transformer == null || action == null)
                throw new NullPointerException();
            new ForEachTransformedKeyTask<K,V,U>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            transformer, action).invoke();
        }

        /**
         * Applies the given search function to each key and returns a
         * non-null result, or null if none. On success, further element
         * processing is suppressed and the results of any other parallel
         * invocations of the search function are ignored.
         * @since 1.8
         */
        public <U> U searchKeys(long parallelismThreshold,
                                Function<? super K, ? extends U> searchFunction) {
            if (searchFunction == null) throw new NullPointerException();
            return new SearchKeysTask<K,V,U>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            searchFunction, new AtomicReference<U>()).invoke();
        }

        /**
         * Reduces all keys using the given reducer.
         * @since 1.8
         */
        public K reduceKeys(long parallelismThreshold,
                            BiFunction<? super K, ? super K, ? extends K> reducer) {
            if (reducer == null) throw new NullPointerException();
            return new ReduceKeysTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, reducer).invoke();
        }

        /**
         * Transforms each key and reduces the results.
         * @param parallelismThreshold the (estimated) number of elements
         *        needed for this operation to be executed in parallel
         * @since 1.8
         */
        public <U> U reduceKeys(long parallelismThreshold,
                                Function<? super K, ? extends U> transformer,
                                BiFunction<? super U, ? super U, ? extends U> reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceKeysTask<K,V,U>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, reducer).invoke();
        }

        /**
         * @param basis the identity (initial default value) for the reduction
         * @since 1.8
         */
        public double reduceKeysToDouble(long parallelismThreshold,
                                         ToDoubleFunction<? super K> transformer,
                                         double basis,
                                         DoubleBinaryOperator reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceKeysToDoubleTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, basis, reducer).invoke();
        }

        /**
         * @since 1.8
         */
        public long reduceKeysToLong(long parallelismThreshold,
                                     ToLongFunction<? super K> transformer,
                                     long basis,
                                     LongBinaryOperator reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceKeysToLongTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, basis, reducer).invoke();
        }

        /**
         * @since 1.8
         */
        public int reduceKeysToInt(long parallelismThreshold,
                                   ToIntFunction<? super K> transformer,
                                   int basis,
                                   IntBinaryOperator reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceKeysToIntTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, basis, reducer).invoke();
        }
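对 key 的 reduce 操作(以及 basis 的含义——它是归约的单位元:空 map 时直接返回,同时也是每个叶子任务归约的起始值)可以用下面的草图演示:

```java
import java.util.concurrent.ConcurrentHashMap;

public class KeyReduceDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("aa", 1); m.put("bbb", 2);

        // basis = 0 acts as the identity for the int sum
        int totalLen = m.reduceKeysToInt(Long.MAX_VALUE,
                String::length, 0, Integer::sum);
        assert totalLen == 5; // "aa".length() + "bbb".length()

        // on an empty map the basis itself is returned
        int empty = new ConcurrentHashMap<String, Integer>()
                .reduceKeysToInt(Long.MAX_VALUE, String::length, 0, Integer::sum);
        assert empty == 0;
    }
}
```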

        /**
         * @since 1.8
         */
        public void forEachValue(long parallelismThreshold,
                                 Consumer<? super V> action) {
            if (action == null)
                throw new NullPointerException();
            new ForEachValueTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            action).invoke();
        }

        /**
         * @param parallelismThreshold the (estimated) number of elements
         *        needed for this operation to be executed in parallel
         * @since 1.8
         */
        public <U> void forEachValue(long parallelismThreshold,
                                     Function<? super V, ? extends U> transformer,
                                     Consumer<? super U> action) {
            if (transformer == null || action == null)
                throw new NullPointerException();
            new ForEachTransformedValueTask<K,V,U>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            transformer, action).invoke();
        }

        /**
         * @param parallelismThreshold the (estimated) number of elements
         *        needed for this operation to be executed in parallel
         * @since 1.8
         */
        public <U> U searchValues(long parallelismThreshold,
                                  Function<? super V, ? extends U> searchFunction) {
            if (searchFunction == null) throw new NullPointerException();
            return new SearchValuesTask<K,V,U>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            searchFunction, new AtomicReference<U>()).invoke();
        }

        /**
         * @param parallelismThreshold the (estimated) number of elements
         *        needed for this operation to be executed in parallel
         * @since 1.8
         */
        public V reduceValues(long parallelismThreshold,
                              BiFunction<? super V, ? super V, ? extends V> reducer) {
            if (reducer == null) throw new NullPointerException();
            return new ReduceValuesTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, reducer).invoke();
        }

        /**
         * @since 1.8
         */
        public <U> U reduceValues(long parallelismThreshold,
                                  Function<? super V, ? extends U> transformer,
                                  BiFunction<? super U, ? super U, ? extends U> reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceValuesTask<K,V,U>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, reducer).invoke();
        }

        /**
         * @since 1.8
         */
        public double reduceValuesToDouble(long parallelismThreshold,
                                           ToDoubleFunction<? super V> transformer,
                                           double basis,
                                           DoubleBinaryOperator reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceValuesToDoubleTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, basis, reducer).invoke();
        }

        /**
         * @since 1.8
         */
        public long reduceValuesToLong(long parallelismThreshold,
                                       ToLongFunction<? super V> transformer,
                                       long basis,
                                       LongBinaryOperator reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceValuesToLongTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, basis, reducer).invoke();
        }

        /**
         * @since 1.8
         */
        public int reduceValuesToInt(long parallelismThreshold,
                                     ToIntFunction<? super V> transformer,
                                     int basis,
                                     IntBinaryOperator reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceValuesToIntTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, basis, reducer).invoke();
        }
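对 value 的归约可以用下面的草图演示。注意两点:阈值传 1 表示尽量并行(子任务被 fork 到 commonPool);不带 basis 的 reduceValues 在空 map 上返回 null,而带 basis 的原始类型版本返回 basis:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ValueReduceDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Long> sales = new ConcurrentHashMap<>();
        sales.put("mon", 10L); sales.put("tue", 25L); sales.put("wed", 7L);

        // threshold 1: maximize parallelism via ForkJoinPool.commonPool()
        long total = sales.reduceValuesToLong(1L, Long::longValue, 0L, Long::sum);
        assert total == 42L;

        // plain reduceValues has no basis; it returns null for an empty map
        Long max = sales.reduceValues(1L, Long::max);
        assert max == 25L;
    }
}
```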

        /**
         * @param parallelismThreshold the (estimated) number of elements
         *        needed for this operation to be executed in parallel
         * @since 1.8
         */
        public void forEachEntry(long parallelismThreshold,
                                 Consumer<? super Map.Entry<K,V>> action) {
            if (action == null) throw new NullPointerException();
            new ForEachEntryTask<K,V>(null, batchFor(parallelismThreshold), 0, 0, table,
                    action).invoke();
        }

        /**
         * @since 1.8
         */
        public <U> void forEachEntry(long parallelismThreshold,
                                     Function<Map.Entry<K,V>, ? extends U> transformer,
                                     Consumer<? super U> action) {
            if (transformer == null || action == null)
                throw new NullPointerException();
            new ForEachTransformedEntryTask<K,V,U>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            transformer, action).invoke();
        }

        /**
         * @since 1.8
         */
        public <U> U searchEntries(long parallelismThreshold,
                                   Function<Map.Entry<K,V>, ? extends U> searchFunction) {
            if (searchFunction == null) throw new NullPointerException();
            return new SearchEntriesTask<K,V,U>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            searchFunction, new AtomicReference<U>()).invoke();
        }

        /**
         * @since 1.8
         */
        public Map.Entry<K,V> reduceEntries(long parallelismThreshold,
                                            BiFunction<Map.Entry<K,V>, Map.Entry<K,V>, ? extends Map.Entry<K,V>> reducer) {
            if (reducer == null) throw new NullPointerException();
            return new ReduceEntriesTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, reducer).invoke();
        }

        /**
         * @since 1.8
         */
        public <U> U reduceEntries(long parallelismThreshold,
                                   Function<Map.Entry<K,V>, ? extends U> transformer,
                                   BiFunction<? super U, ? super U, ? extends U> reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceEntriesTask<K,V,U>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, reducer).invoke();
        }

        /**
         * @since 1.8
         */
        public double reduceEntriesToDouble(long parallelismThreshold,
                                            ToDoubleFunction<Map.Entry<K,V>> transformer,
                                            double basis,
                                            DoubleBinaryOperator reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceEntriesToDoubleTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, basis, reducer).invoke();
        }

        /**
         * @since 1.8
         */
        public long reduceEntriesToLong(long parallelismThreshold,
                                        ToLongFunction<Map.Entry<K,V>> transformer,
                                        long basis,
                                        LongBinaryOperator reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceEntriesToLongTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, basis, reducer).invoke();
        }

        /**
         * @since 1.8
         */
        public int reduceEntriesToInt(long parallelismThreshold,
                                      ToIntFunction<Map.Entry<K,V>> transformer,
                                      int basis,
                                      IntBinaryOperator reducer) {
            if (transformer == null || reducer == null)
                throw new NullPointerException();
            return new MapReduceEntriesToIntTask<K,V>
                    (null, batchFor(parallelismThreshold), 0, 0, table,
                            null, transformer, basis, reducer).invoke();
        }
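entry 系列操作同时拿到 key 和 value,适合做跨键值的查找与聚合,用法草图如下:

```java
import java.util.concurrent.ConcurrentHashMap;

public class EntryOpsDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> m = new ConcurrentHashMap<>();
        m.put("a", 3); m.put("b", 1); m.put("c", 2);

        // searchEntries: find the entry whose value is 1 and return its key
        String k = m.searchEntries(Long.MAX_VALUE,
                e -> e.getValue() == 1 ? e.getKey() : null);
        assert "b".equals(k);

        // reduceEntriesToInt: sum key lengths plus values, starting from basis 0
        int n = m.reduceEntriesToInt(Long.MAX_VALUE,
                e -> e.getKey().length() + e.getValue(), 0, Integer::sum);
        assert n == 9; // (1+3) + (1+1) + (1+2)
    }
}
```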


        /* ---------------- Views ---------------- */

        /**
         * Base class for views.
         */
        abstract static class CollectionView<K,V,E>
                implements Collection<E>, java.io.Serializable {
            private static final long serialVersionUID = 7249069246763182397L;
            final ConcurrentHashMap<K,V> map;
            CollectionView(ConcurrentHashMap<K,V> map)  { this.map = map; }

            /**
             * Returns the map backing this view.
             *
             * @return the map backing this view
             */
            public ConcurrentHashMap<K,V> getMap() { return map; }

            /**
             * Removes all of the elements from this view, by removing all
             * the mappings from the map backing this view.
             */
            public final void clear()      { map.clear(); }
            public final int size()        { return map.size(); }
            public final boolean isEmpty() { return map.isEmpty(); }

            // implementations below rely on concrete classes supplying these
            // abstract methods
            /**
             * Returns an iterator over the elements in this collection.
             * The returned iterator is weakly consistent.
             *
             * @return an iterator over the elements in this collection
             */
            public abstract Iterator<E> iterator();
            public abstract boolean contains(Object o);
            public abstract boolean remove(Object o);

            private static final String oomeMsg = "Required array size too large";

            public final Object[] toArray() {
                long sz = map.mappingCount();
                if (sz > MAX_ARRAY_SIZE)
                    throw new OutOfMemoryError(oomeMsg);
                int n = (int)sz;
                Object[] r = new Object[n];
                int i = 0;
                for (E e : this) {
                    if (i == n) {
                        if (n >= MAX_ARRAY_SIZE)
                            throw new OutOfMemoryError(oomeMsg);
                        if (n >= MAX_ARRAY_SIZE - (MAX_ARRAY_SIZE >>> 1) - 1)
                            n = MAX_ARRAY_SIZE;
                        else
                            n += (n >>> 1) + 1;
                        r = Arrays.copyOf(r, n);
                    }
                    r[i++] = e;
                }
                return (i == n) ? r : Arrays.copyOf(r, i);
            }

            @SuppressWarnings("unchecked")
            public final <T> T[] toArray(T[] a) {
                long sz = map.mappingCount();
                if (sz > MAX_ARRAY_SIZE)
                    throw new OutOfMemoryError(oomeMsg);
                int m = (int)sz;
                T[] r = (a.length >= m) ? a :
                        (T[])java.lang.reflect.Array
                                .newInstance(a.getClass().getComponentType(), m);
                int n = r.length;
                int i = 0;
                for (E e : this) {
                    if (i == n) {
                        if (n >= MAX_ARRAY_SIZE)
                            throw new OutOfMemoryError(oomeMsg);
                        if (n >= MAX_ARRAY_SIZE - (MAX_ARRAY_SIZE >>> 1) - 1)
                            n = MAX_ARRAY_SIZE;
                        else
                            n += (n >>> 1) + 1;
                        r = Arrays.copyOf(r, n);
                    }
                    r[i++] = (T)e;
                }
                if (a == r && i < n) {
                    r[i] = null; // null-terminate
                    return r;
                }
                return (i == n) ? r : Arrays.copyOf(r, i);
            }

            public final String toString() {
                StringBuilder sb = new StringBuilder();
                sb.append('[');
                Iterator<E> it = iterator();
                if (it.hasNext()) {
                    for (;;) {
                        Object e = it.next();
                        sb.append(e == this ? "(this Collection)" : e);
                        if (!it.hasNext())
                            break;
                        sb.append(',').append(' ');
                    }
                }
                return sb.append(']').toString();
            }

            public final boolean containsAll(Collection<?> c) {
                if (c != this) {
                    for (Object e : c) {
                        if (e == null || !contains(e))
                            return false;
                    }
                }
                return true;
            }

            public final boolean removeAll(Collection<?> c) {
                if (c == null) throw new NullPointerException();
                boolean modified = false;
                for (Iterator<E> it = iterator(); it.hasNext();) {
                    if (c.contains(it.next())) {
                        it.remove();
                        modified = true;
                    }
                }
                return modified;
            }

            public final boolean retainAll(Collection<?> c) {
                if (c == null) throw new NullPointerException();
                boolean modified = false;
                for (Iterator<E> it = iterator(); it.hasNext();) {
                    if (!c.contains(it.next())) {
                        it.remove();
                        modified = true;
                    }
                }
                return modified;
            }
        }

        /**
         * A view of a ConcurrentHashMap as a Set of keys, in which additions
         * may optionally be enabled by mapping to a common value. This class
         * cannot be directly instantiated.
         * @since 1.8
         */
        public static class KeySetView<K,V> extends CollectionView<K,V,K>
                implements Set<K>, java.io.Serializable {
            private static final long serialVersionUID = 7249069246763182397L;
            private final V value;
            KeySetView(ConcurrentHashMap<K,V> map, V value) {  // non-public
                super(map);
                this.value = value;
            }

            /**
             * Returns the default mapped value for additions,
             * or {@code null} if additions are not supported.
             *
             * @return the default mapped value for additions, or {@code null}
             * if not supported
             */
            public V getMappedValue() { return value; }

            /**
             * {@inheritDoc}
             * @throws NullPointerException if the specified key is null
             */
            public boolean contains(Object o) { return map.containsKey(o); }

            /**
             * Removes the key from this map view, by removing the key (and its
             * corresponding value) from the backing map. This method does
             * nothing if the key is not in the map.
             *
             * @param o the key to be removed from the backing map
             * @return {@code true} if the backing map contained the specified key
             * @throws NullPointerException if the specified key is null
             */
            public boolean remove(Object o) { return map.remove(o) != null; }

            /**
             * @return an iterator over the keys of the backing map
             */
            public Iterator<K> iterator() {
                Node<K,V>[] t;
                ConcurrentHashMap<K,V> m = map;
                int f = (t = m.table) == null ? 0 : t.length;
                return new KeyIterator<K,V>(t, f, 0, f, m);
            }

            // Adds the specified key, mapped to the predefined default value
            public boolean add(K e) {
                V v;
                if ((v = value) == null)
                    throw new UnsupportedOperationException();
                return map.putVal(e, v, true) == null;
            }

            public boolean addAll(Collection<? extends K> c) {
                boolean added = false;
                V v;
                if ((v = value) == null)
                    throw new UnsupportedOperationException();
                for (K e : c) {
                    if (map.putVal(e, v, true) == null)
                        added = true;
                }
                return added;
            }

            public int hashCode() {
                int h = 0;
                for (K e : this)
                    h += e.hashCode();
                return h;
            }

            public boolean equals(Object o) {
                Set<?> c;
                return ((o instanceof Set) &&
                        ((c = (Set<?>)o) == this ||
                         (containsAll(c) && c.containsAll(this))));
            }

            public Spliterator<K> spliterator() {
                Node<K,V>[] t;
                ConcurrentHashMap<K,V> m = map;
                long n = m.sumCount();
                int f = (t = m.table) == null ? 0 : t.length;
                return new KeySpliterator<K,V>(t, f, 0, f, n < 0L ? 0L : n);
            }

            public void forEach(Consumer<? super K> action) {
                if (action == null) throw new NullPointerException();
                Node<K,V>[] t;
                if ((t = map.table) != null) {
                    Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
                    for (Node<K,V> p; (p = it.advance()) != null; )
                        action.accept(p.key);
                }
            }
        }

        /**
         * A view of a ConcurrentHashMap as a {@link Collection} of
         * values, in which additions are disabled. This class cannot be
         * directly instantiated. See {@link #values()}.
         */
        static final class ValuesView<K,V> extends CollectionView<K,V,V>
                implements Collection<V>, java.io.Serializable {
            private static final long serialVersionUID = 2249069246763182397L;
            ValuesView(ConcurrentHashMap<K,V> map) { super(map); }

            public final boolean contains(Object o) {
                return map.containsValue(o);
            }

            public final boolean remove(Object o) {
                if (o != null) {
                    for (Iterator<V> it = iterator(); it.hasNext();) {
                        if (o.equals(it.next())) {
                            it.remove();
                            return true;
                        }
                    }
                }
                return false;
            }

            public final Iterator<V> iterator() {
                ConcurrentHashMap<K,V> m = map;
                Node<K,V>[] t;
                int f = (t = m.table) == null ? 0 : t.length;
                return new ValueIterator<K,V>(t, f, 0, f, m);
            }

            public final boolean add(V e) {
                throw new UnsupportedOperationException();
            }
            public final boolean addAll(Collection<? extends V> c) {
                throw new UnsupportedOperationException();
            }

            public Spliterator<V> spliterator() {
                Node<K,V>[] t;
                ConcurrentHashMap<K,V> m = map;
                long n = m.sumCount();
                int f = (t = m.table) == null ? 0 : t.length;
                return new ValueSpliterator<K,V>(t, f, 0, f, n < 0L ? 0L : n);
            }

            public void forEach(Consumer<? super V> action) {
                if (action == null) throw new NullPointerException();
                Node<K,V>[] t;
                if ((t = map.table) != null) {
                    Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
                    for (Node<K,V> p; (p = it.advance()) != null; )
                        action.accept(p.val);
                }
            }
        }

        /**
         * A view of a ConcurrentHashMap as a {@link Set} of (key, value)
         * entries. This class cannot be directly instantiated. See
         * {@link #entrySet()}.
         */
        static final class EntrySetView<K,V> extends CollectionView<K,V,Map.Entry<K,V>>
                implements Set<Map.Entry<K,V>>, java.io.Serializable {
            private static final long serialVersionUID = 2249069246763182397L;
            EntrySetView(ConcurrentHashMap<K,V> map) { super(map); }

            public boolean contains(Object o) {
                Object k, v, r; Map.Entry<?,?> e;
                return ((o instanceof Map.Entry) &&
                        (k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
                        (r = map.get(k)) != null &&
                        (v = e.getValue()) != null &&
                        (v == r || v.equals(r)));
            }

            public boolean remove(Object o) {
                Object k, v; Map.Entry<?,?> e;
                return ((o instanceof Map.Entry) &&
                        (k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
                        (v = e.getValue()) != null &&
                        map.remove(k, v));
            }

            /**
             * @return an iterator over the entries of the backing map
             */
            public Iterator<Map.Entry<K,V>> iterator() {
                ConcurrentHashMap<K,V> m = map;
                Node<K,V>[] t;
                int f = (t = m.table) == null ? 0 : t.length;
                return new EntryIterator<K,V>(t, f, 0, f, m);
            }

            public boolean add(Entry<K,V> e) {
                return map.putVal(e.getKey(), e.getValue(), false) == null;
            }

            public boolean addAll(Collection<? extends Entry<K,V>> c) {
                boolean added = false;
                for (Entry<K,V> e : c) {
                    if (add(e))
                        added = true;
                }
                return added;
            }

            public final int hashCode() {
                int h = 0;
                Node<K,V>[] t;
                if ((t = map.table) != null) {
                    Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
                    for (Node<K,V> p; (p = it.advance()) != null; ) {
                        h += p.hashCode();
                    }
                }
                return h;
            }

            public final boolean equals(Object o) {
                Set<?> c;
                return ((o instanceof Set) &&
                        ((c = (Set<?>)o) == this ||
                         (containsAll(c) && c.containsAll(this))));
            }

            public Spliterator<Map.Entry<K,V>> spliterator() {
                Node<K,V>[] t;
                ConcurrentHashMap<K,V> m = map;
                long n = m.sumCount();
                int f = (t = m.table) == null ? 0 : t.length;
                return new EntrySpliterator<K,V>(t, f, 0, f, n < 0L ? 0L : n, m);
            }

            public void forEach(Consumer<? super Map.Entry<K,V>> action) {
                if (action == null) throw new NullPointerException();
                Node<K,V>[] t;
                if ((t = map.table) != null) {
                    Traverser<K,V> it = new Traverser<K,V>(t, t.length, 0, t.length);
                    for (Node<K,V> p; (p = it.advance()) != null; )
                        action.accept(new MapEntry<K,V>(p.key, p.val, map));
                }
            }
        }

        // -------------------------------------------------------

        /**
         * Base class for bulk tasks. Repeats some fields and code from
         * class Traverser, because we need to subclass CountedCompleter.
         */
        @SuppressWarnings("serial")
        abstract static class BulkTask<K,V,R> extends CountedCompleter<R> {
            Node<K,V>[] tab;        // same as Traverser
            Node<K,V> next;
            TableStack<K,V> stack, spare;
            int index;
            int baseIndex;
            int baseLimit;
            final int baseSize;
            int batch;              // split control

            BulkTask(BulkTask<K,V,?> par, int b, int i, int f, Node<K,V>[] t) {
                super(par);
                this.batch = b;
                this.index = this.baseIndex = i;
                if ((this.tab = t) == null)
                    this.baseSize = this.baseLimit = 0;
                else if (par == null)
                    this.baseSize = this.baseLimit = t.length;
                else {
                    this.baseLimit = f;
                    this.baseSize = par.baseSize;
                }
            }

            /**
             * Same as Traverser version
             */
            final Node<K,V> advance() {
                Node<K,V> e;
                if ((e = next) != null)
                    e = e.next;
                for (;;) {
                    Node<K,V>[] t; int i, n;
                    if (e != null)
                        return next = e;
                    if (baseIndex >= baseLimit || (t = tab) == null ||
                        (n = t.length) <= (i = index) || i < 0)
                        return next = null;
                    if ((e = tabAt(t, i)) != null && e.hash < 0) {
                        if (e instanceof ForwardingNode) {
                            tab = ((ForwardingNode<K,V>)e).nextTable;
                            e = null;
                            pushState(t, i, n);
                            continue;
                        }
                        else if (e instanceof TreeBin)
                            e = ((TreeBin<K,V>)e).first;
                        else
                            e = null;
                    }
                    if (stack != null)
                        recoverState(n);
                    else if ((index = i + baseSize) >= n)
                        index = ++baseIndex;
                }
            }

            private void pushState(Node<K,V>[] t, int i, int n) {
                TableStack<K,V> s = spare;
                if (s != null)
                    spare = s.next;
                else
                    s = new TableStack<K,V>();
                s.tab = t;
                s.length = n;
                s.index = i;
                s.next = stack;
                stack = s;
            }

            private void recoverState(int n) {
                TableStack<K,V> s; int len;
                while ((s = stack) != null && (index += (len = s.length)) >= n) {
                    n = len;
                    index = s.index;
                    tab = s.tab;
                    s.tab = null;
                    TableStack<K,V> next = s.next;
                    s.next = spare; // save for reuse
                    stack = next;
                    spare = s;
                }
                if (s == null && (index += baseSize) >= n)
                    index = ++baseIndex;
            }
        }

        /*
         * Task classes, coded in a regular format. Because the compiler
         * cannot tell that we have already null-checked the task arguments,
         * we force the simplest hoisted bypass to help avoid convoluted
         * checks.
         */
        @SuppressWarnings("serial")
        static final class ForEachKeyTask<K,V>
                extends BulkTask<K,V,Void> {
            final Consumer<? super K> action;
            ForEachKeyTask
                (BulkTask<K,V,?> p, int b, int i, int f, Node<K,V>[] t,
                 Consumer<? super K> action) {
                super(p, b, i, f, t);
                this.action = action;
            }
            public final void compute() {
                final Consumer<? super K> action;
                if ((action = this.action) != null) {
                    for (int i = baseIndex, f, h; batch > 0 &&
                             (h = ((f = baseLimit) + i) >>> 1) > i;) {
                        addToPendingCount(1);
                        new ForEachKeyTask<K,V>
                            (this, batch >>>= 1, baseLimit = h, f, tab,
                             action).fork();
                    }
                    for (Node<K,V> p; (p = advance()) != null;)
                        action.accept(p.key);
                    propagateCompletion();
                }
            }
        }

        @SuppressWarnings("serial")
        static final class ForEachValueTask<K,V>
                extends BulkTask<K,V,Void> {
            final Consumer<? super V> action;
            ForEachValueTask
                (BulkTask<K,V,?> p, int b, int i, int f, Node<K,V>[] t,
                 Consumer<? super V> action) {
                super(p, b, i, f, t);
                this.action = action;
            }
            public final void compute() {
                final Consumer<? super V> action;
                if ((action = this.action) != null) {
                    for (int i = baseIndex, f, h; batch > 0 &&
                             (h = ((f = baseLimit) + i) >>> 1) > i;) {
                        addToPendingCount(1);
                        new ForEachValueTask<K,V>
                            (this, batch >>>= 1, baseLimit = h, f, tab,
                             action).fork();
                    }
                    for (Node<K,V> p; (p = advance()) != null;)
                        action.accept(p.val);
                    propagateCompletion();
                }
            }
        }

        @SuppressWarnings("serial")
        static final class ForEachEntryTask<K,V>
                extends BulkTask<K,V,Void> {
            final Consumer<? super Entry<K,V>> action;
            ForEachEntryTask
                (BulkTask<K,V,?> p, int b, int i, int f, Node<K,V>[] t,
                 Consumer<? super Entry<K,V>> action) {
                super(p, b, i, f, t);
                this.action = action;
            }
            public final void compute() {
                final Consumer<? super Entry<K,V>> action;
                if ((action = this.action) != null) {
                    for (int i = baseIndex, f, h; batch > 0 &&
                             (h = ((f = baseLimit) + i) >>> 1) > i;) {
                        addToPendingCount(1);
                        new ForEachEntryTask<K,V>
                            (this, batch >>>= 1, baseLimit = h, f, tab,
                             action).fork();
                    }
                    for (Node<K,V> p; (p = advance()) != null; )
                        action.accept(p);
                    propagateCompletion();
                }
            }
        }

        @SuppressWarnings("serial")
        static final class ForEachMappingTask<K,V>
                extends BulkTask<K,V,Void> {
            final BiConsumer<? super K, ? super V> action;
            ForEachMappingTask
                (BulkTask<K,V,?> p, int b, int i, int f, Node<K,V>[] t,
                 BiConsumer<? super K, ? super V> action) {
                super(p, b, i, f, t);
                this.action = action;
            }
            public final void compute() {
                final BiConsumer<? super K, ? super V> action;
                if ((action = this.action) != null) {
                    for (int i = baseIndex, f, h; batch > 0 &&
                             (h = ((f = baseLimit) + i) >>> 1) > i;) {
                        addToPendingCount(1);
                        new ForEachMappingTask<K,V>
                            (this, batch >>>= 1, baseLimit = h, f, tab,
                             action).fork();
                    }
                    for (Node<K,V> p; (p = advance()) != null; )
                        action.accept(p.key, p.val);
                    propagateCompletion();
                }
            }
        }

        @SuppressWarnings("serial")
        static final class ForEachTransformedKeyTask<K,V,U>
                extends BulkTask<K,V,Void> {
            final Function<? super K, ? extends U> transformer;
            final Consumer<? super U> action;
            ForEachTransformedKeyTask
                (BulkTask<K,V,?> p, int b, int i, int f, Node<K,V>[] t,
                 Function<? super K, ? extends U> transformer,
                 Consumer<? super U> action) {
                super(p, b, i, f, t);
                this.transformer = transformer;
                this.action = action;
            }
            public final void compute() {
                final Function<? super K, ? extends U> transformer;
                final Consumer<? super U> action;
                if ((transformer = this.transformer) != null &&
                    (action = this.action) != null) {
                    for (int i = baseIndex, f, h; batch > 0 &&
                             (h = ((f = baseLimit) + i) >>> 1) > i;) {
                        addToPendingCount(1);
                        new ForEachTransformedKeyTask<K,V,U>
                            (this, batch >>>= 1, baseLimit = h, f, tab,
                             transformer, action).fork();
                    }
                    for (Node<K,V> p; (p = advance()) != null; ) {
                        U u;
                        if ((u = transformer.apply(p.key)) != null)
                            action.accept(u);
                    }
                    propagateCompletion();
                }
            }
        }

        @SuppressWarnings("serial")
        static final class ForEachTransformedValueTask<K,V,U>
                extends BulkTask<K,V,Void> {
            final Function<? super V, ? extends U> transformer;
            final Consumer<? super U> action;
            ForEachTransformedValueTask
                (BulkTask<K,V,?> p, int b, int i, int f, Node<K,V>[] t,
                 Function<? super V, ? extends U> transformer,
                 Consumer<? super U> action) {
                super(p, b, i, f, t);
                this.transformer = transformer;
                this.action = action;
            }
            public final void compute() {
                final Function<? super V, ? extends U> transformer;
                final Consumer<? super U> action;
                if ((transformer = this.transformer) != null &&
                    (action = this.action) != null) {
                    for (int i = baseIndex, f, h; batch > 0 &&
                             (h = ((f = baseLimit) + i) >>> 1) > i;) {
                        addToPendingCount(1);
                        new ForEachTransformedValueTask<K,V,U>
                            (this, batch >>>= 1, baseLimit = h, f, tab,
                             transformer, action).fork();
                    }
                    for (Node<K,V> p; (p = advance()) != null; ) {
                        U u;
                        if ((u = transformer.apply(p.val)) != null)
                            action.accept(u);
                    }
                    propagateCompletion();
                }
            }
        }

        @SuppressWarnings("serial")
        static final class ForEachTransformedEntryTask<K,V,U>
                extends BulkTask<K,V,Void> {
            final Function<Map.Entry<K,V>, ? extends U> transformer;
            final Consumer<? super U> action;
            ForEachTransformedEntryTask
                (BulkTask<K,V,?> p, int b, int i, int f, Node<K,V>[] t,
                 Function<Map.Entry<K,V>, ? extends U> transformer,
                 Consumer<? super U> action) {
                super(p, b, i, f, t);
                this.transformer = transformer;
                this.action = action;
            }
            public final void compute() {
                final Function<Map.Entry<K,V>, ? extends U> transformer;
                final Consumer<? super U> action;
                if ((transformer = this.transformer) != null &&
                    (action = this.action) != null) {
                    for (int i = baseIndex, f, h; batch > 0 &&
                             (h = ((f = baseLimit) + i) >>> 1) > i;) {
                        addToPendingCount(1);
                        new ForEachTransformedEntryTask<K,V,U>
                            (this, batch >>>= 1, baseLimit = h, f, tab,
                             transformer, action).fork();
                    }
                    for (Node<K,V> p; (p = advance()) != null; ) {
                        U u;
                        if ((u = transformer.apply(p)) != null)
                            action.accept(u);
                    }
                    propagateCompletion();
                }
            }
        }

        @SuppressWarnings("serial")
        static final class ForEachTransformedMappingTask<K,V,U>
                extends BulkTask<K,V,Void> {
            final BiFunction<? super K, ? super V, ? extends U> transformer;
            final Consumer<? super U> action;
            ForEachTransformedMappingTask
                (BulkTask<K,V,?> p, int b, int i, int f, Node<K,V>[] t,
                 BiFunction<? super K, ? super V, ? extends U> transformer,
                 Consumer<? super U> action) {
                super(p, b, i, f, t);
                this.transformer = transformer;
                this.action = action;
            }
            public final void compute() {
                final BiFunction<? super K, ? super V, ? extends U> transformer;
                final Consumer<? super U> action;
                if ((transformer = this.transformer) != null &&
                    (action = this.action) != null) {
                    for (int i = baseIndex, f, h; batch > 0 &&
                             (h = ((f = baseLimit) + i) >>> 1) > i;) {
                        addToPendingCount(1);
                        new ForEachTransformedMappingTask<K,V,U>
                            (this, batch >>>= 1, baseLimit = h, f, tab,
                             transformer, action).fork();
                    }
                    for (Node<K,V> p; (p = advance()) != null; ) {
                        U u;
                        if ((u = transformer.apply(p.key, p.val)) != null)
                            action.accept(u);
                    }
                    propagateCompletion();
                }
            }
        }

        @SuppressWarnings("serial")
        static final class SearchKeysTask<K,V,U>
                extends BulkTask<K,V,U> {
            final Function<? super K, ? extends U> searchFunction;
            final AtomicReference<U> result;
            SearchKeysTask
                (BulkTask<K,V,?> p, int b, int i, int f, Node<K,V>[] t,
                 Function<? super K, ? extends U> searchFunction,
                 AtomicReference<U> result) {
                super(p, b, i, f, t);
                this.searchFunction = searchFunction;
                this.result = result;
            }
            public final U getRawResult() { return result.get(); }
            public final void compute() {
                final Function<? super K, ? extends U> searchFunction;
                final AtomicReference<U> result;
                if ((searchFunction = this.searchFunction) != null &&
                    (result = this.result) != null) {
                    for (int i = baseIndex, f, h; batch > 0 &&
                             (h = ((f = baseLimit) + i) >>> 1) > i;) {
                        if (result.get() != null)
                            return;
                        addToPendingCount(1);
                        new SearchKeysTask<K,V,U>
                            (this, batch >>>= 1, baseLimit = h, f, tab,
                             searchFunction, result).fork();
                    }
                    while (result.get() == null) {
                        U u; Node<K,V> p;
                        if ((p = advance()) == null) {
                            propagateCompletion();
                            break;
                        }
                        if ((u = searchFunction.apply(p.key)) != null) {
                            if (result.compareAndSet(null, u))
                                quietlyCompleteRoot();
                            break;
                        }
                    }
                }
            }
        }

        @SuppressWarnings("serial")
        static final class SearchValuesTask<K,V,U>
                extends BulkTask<K,V,U> {
            final Function<? super V, ? extends U> searchFunction;
            final AtomicReference<U> result;
            SearchValuesTask
                (BulkTask<K,V,?> p, int b, int i, int f, Node<K,V>[] t,
                 Function<? super V, ? extends U> searchFunction,
                 AtomicReference<U> result) {
                super(p, b, i, f, t);
                this.searchFunction = searchFunction;
                this.result = result;
            }
            public final U getRawResult() { return result.get(); }
            public final void compute() {
                final Function<? super V, ? extends U> searchFunction;
                final AtomicReference<U> result;
                if ((searchFunction = this.searchFunction) != null &&
                    (result = this.result) != null) {
                    for (int i = baseIndex, f, h; batch > 0 &&
                             (h = ((f = baseLimit) + i) >>> 1) > i;) {
                        if (result.get() != null)
                            return;
                        addToPendingCount(1);
                        new SearchValuesTask<K,V,U>
                            (this, batch >>>= 1, baseLimit = h, f, tab,
                             searchFunction, result).fork();
                    }
                    while (result.get() == null) {
                        U u; Node<K,V> p;
                        if ((p =
advance()) == null) { propagateCompletion(); break; } if ((u = searchFunction.apply(p.val)) != null) { if (result.compareAndSet(null, u)) quietlyCompleteRoot(); break; } } } } } @SuppressWarnings("serial") static final class SearchEntriesTask extends BulkTask { final Function, ? extends U> searchFunction; final AtomicReference result; SearchEntriesTask (BulkTask p, int b, int i, int f, Node[] t, Function, ? extends U> searchFunction, AtomicReference result) { super(p, b, i, f, t); this.searchFunction = searchFunction; this.result = result; } public final U getRawResult() { return result.get(); } public final void compute() { final Function, ? extends U> searchFunction; final AtomicReference result; if ((searchFunction = this.searchFunction) != null && (result = this.result) != null) { for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { if (result.get() != null) return; addToPendingCount(1); new SearchEntriesTask (this, batch >>>= 1, baseLimit = h, f, tab, searchFunction, result).fork(); } while (result.get() == null) { U u; Node p; if ((p = advance()) == null) { propagateCompletion(); break; } if ((u = searchFunction.apply(p)) != null) { if (result.compareAndSet(null, u)) quietlyCompleteRoot(); return; } } } } } @SuppressWarnings("serial") static final class SearchMappingsTask extends BulkTask { final BiFunction searchFunction; final AtomicReference result; SearchMappingsTask (BulkTask p, int b, int i, int f, Node[] t, BiFunction searchFunction, AtomicReference result) { super(p, b, i, f, t); this.searchFunction = searchFunction; this.result = result; } public final U getRawResult() { return result.get(); } public final void compute() { final BiFunction searchFunction; final AtomicReference result; if ((searchFunction = this.searchFunction) != null && (result = this.result) != null) { for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { if (result.get() != null) return; addToPendingCount(1); new 
SearchMappingsTask (this, batch >>>= 1, baseLimit = h, f, tab, searchFunction, result).fork(); } while (result.get() == null) { U u; Node p; if ((p = advance()) == null) { propagateCompletion(); break; } if ((u = searchFunction.apply(p.key, p.val)) != null) { if (result.compareAndSet(null, u)) quietlyCompleteRoot(); break; } } } } } @SuppressWarnings("serial") static final class ReduceKeysTask extends BulkTask { final BiFunction reducer; K result; ReduceKeysTask rights, nextRight; ReduceKeysTask (BulkTask p, int b, int i, int f, Node[] t, ReduceKeysTask nextRight, BiFunction reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.reducer = reducer; } public final K getRawResult() { return result; } public final void compute() { final BiFunction reducer; if ((reducer = this.reducer) != null) { for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new ReduceKeysTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, reducer)).fork(); } K r = null; for (Node p; (p = advance()) != null; ) { K u = p.key; r = (r == null) ? u : u == null ? r : reducer.apply(r, u); } result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") ReduceKeysTask t = (ReduceKeysTask)c, s = t.rights; while (s != null) { K tr, sr; if ((sr = s.result) != null) t.result = (((tr = t.result) == null) ? 
sr : reducer.apply(tr, sr)); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class ReduceValuesTask extends BulkTask { final BiFunction reducer; V result; ReduceValuesTask rights, nextRight; ReduceValuesTask (BulkTask p, int b, int i, int f, Node[] t, ReduceValuesTask nextRight, BiFunction reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.reducer = reducer; } public final V getRawResult() { return result; } public final void compute() { final BiFunction reducer; if ((reducer = this.reducer) != null) { for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new ReduceValuesTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, reducer)).fork(); } V r = null; for (Node p; (p = advance()) != null; ) { V v = p.val; r = (r == null) ? v : reducer.apply(r, v); } result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") ReduceValuesTask t = (ReduceValuesTask)c, s = t.rights; while (s != null) { V tr, sr; if ((sr = s.result) != null) t.result = (((tr = t.result) == null) ? sr : reducer.apply(tr, sr)); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class ReduceEntriesTask extends BulkTask> { final BiFunction, Map.Entry, ? extends Map.Entry> reducer; Map.Entry result; ReduceEntriesTask rights, nextRight; ReduceEntriesTask (BulkTask p, int b, int i, int f, Node[] t, ReduceEntriesTask nextRight, BiFunction, Map.Entry, ? extends Map.Entry> reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.reducer = reducer; } public final Map.Entry getRawResult() { return result; } public final void compute() { final BiFunction, Map.Entry, ? 
extends Map.Entry> reducer; if ((reducer = this.reducer) != null) { for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new ReduceEntriesTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, reducer)).fork(); } Map.Entry r = null; for (Node p; (p = advance()) != null; ) r = (r == null) ? p : reducer.apply(r, p); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") ReduceEntriesTask t = (ReduceEntriesTask)c, s = t.rights; while (s != null) { Map.Entry tr, sr; if ((sr = s.result) != null) t.result = (((tr = t.result) == null) ? sr : reducer.apply(tr, sr)); s = t.rights = s.nextRight; } } } } } //MapReduce @SuppressWarnings("serial") static final class MapReduceKeysTask extends BulkTask { final Function transformer; final BiFunction reducer; U result; MapReduceKeysTask rights, nextRight; MapReduceKeysTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceKeysTask nextRight, Function transformer, BiFunction reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.reducer = reducer; } public final U getRawResult() { return result; } public final void compute() { final Function transformer; final BiFunction reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceKeysTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, reducer)).fork(); } U r = null; for (Node p; (p = advance()) != null; ) { U u; if ((u = transformer.apply(p.key)) != null) r = (r == null) ? 
u : reducer.apply(r, u); } result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceKeysTask t = (MapReduceKeysTask)c, s = t.rights; while (s != null) { U tr, sr; if ((sr = s.result) != null) t.result = (((tr = t.result) == null) ? sr : reducer.apply(tr, sr)); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceValuesTask extends BulkTask { final Function transformer; final BiFunction reducer; U result; MapReduceValuesTask rights, nextRight; MapReduceValuesTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceValuesTask nextRight, Function transformer, BiFunction reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.reducer = reducer; } public final U getRawResult() { return result; } public final void compute() { final Function transformer; final BiFunction reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceValuesTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, reducer)).fork(); } U r = null; for (Node p; (p = advance()) != null; ) { U u; if ((u = transformer.apply(p.val)) != null) r = (r == null) ? u : reducer.apply(r, u); } result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceValuesTask t = (MapReduceValuesTask)c, s = t.rights; while (s != null) { U tr, sr; if ((sr = s.result) != null) t.result = (((tr = t.result) == null) ? sr : reducer.apply(tr, sr)); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceEntriesTask extends BulkTask { final Function, ? 
extends U> transformer; final BiFunction reducer; U result; MapReduceEntriesTask rights, nextRight; MapReduceEntriesTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceEntriesTask nextRight, Function, ? extends U> transformer, BiFunction reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.reducer = reducer; } public final U getRawResult() { return result; } public final void compute() { final Function, ? extends U> transformer; final BiFunction reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceEntriesTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, reducer)).fork(); } U r = null; for (Node p; (p = advance()) != null; ) { U u; if ((u = transformer.apply(p)) != null) r = (r == null) ? u : reducer.apply(r, u); } result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceEntriesTask t = (MapReduceEntriesTask)c, s = t.rights; while (s != null) { U tr, sr; if ((sr = s.result) != null) t.result = (((tr = t.result) == null) ? 
sr : reducer.apply(tr, sr)); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceMappingsTask extends BulkTask { final BiFunction transformer; final BiFunction reducer; U result; MapReduceMappingsTask rights, nextRight; MapReduceMappingsTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceMappingsTask nextRight, BiFunction transformer, BiFunction reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.reducer = reducer; } public final U getRawResult() { return result; } public final void compute() { final BiFunction transformer; final BiFunction reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceMappingsTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, reducer)).fork(); } U r = null; for (Node p; (p = advance()) != null; ) { U u; if ((u = transformer.apply(p.key, p.val)) != null) r = (r == null) ? u : reducer.apply(r, u); } result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceMappingsTask t = (MapReduceMappingsTask)c, s = t.rights; while (s != null) { U tr, sr; if ((sr = s.result) != null) t.result = (((tr = t.result) == null) ? 
sr : reducer.apply(tr, sr)); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceKeysToDoubleTask extends BulkTask { final ToDoubleFunction transformer; final DoubleBinaryOperator reducer; final double basis; double result; MapReduceKeysToDoubleTask rights, nextRight; MapReduceKeysToDoubleTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceKeysToDoubleTask nextRight, ToDoubleFunction transformer, double basis, DoubleBinaryOperator reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.basis = basis; this.reducer = reducer; } public final Double getRawResult() { return result; } public final void compute() { final ToDoubleFunction transformer; final DoubleBinaryOperator reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { double r = this.basis; for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceKeysToDoubleTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, r, reducer)).fork(); } for (Node p; (p = advance()) != null; ) r = reducer.applyAsDouble(r, transformer.applyAsDouble(p.key)); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceKeysToDoubleTask t = (MapReduceKeysToDoubleTask)c, s = t.rights; while (s != null) { t.result = reducer.applyAsDouble(t.result, s.result); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceValuesToDoubleTask extends BulkTask { final ToDoubleFunction transformer; final DoubleBinaryOperator reducer; final double basis; double result; MapReduceValuesToDoubleTask rights, nextRight; MapReduceValuesToDoubleTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceValuesToDoubleTask nextRight, ToDoubleFunction transformer, double basis, DoubleBinaryOperator reducer) { super(p, b, i, f, t); 
this.nextRight = nextRight; this.transformer = transformer; this.basis = basis; this.reducer = reducer; } public final Double getRawResult() { return result; } public final void compute() { final ToDoubleFunction transformer; final DoubleBinaryOperator reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { double r = this.basis; for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceValuesToDoubleTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, r, reducer)).fork(); } for (Node p; (p = advance()) != null; ) r = reducer.applyAsDouble(r, transformer.applyAsDouble(p.val)); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceValuesToDoubleTask t = (MapReduceValuesToDoubleTask)c, s = t.rights; while (s != null) { t.result = reducer.applyAsDouble(t.result, s.result); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceEntriesToDoubleTask extends BulkTask { final ToDoubleFunction> transformer; final DoubleBinaryOperator reducer; final double basis; double result; MapReduceEntriesToDoubleTask rights, nextRight; MapReduceEntriesToDoubleTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceEntriesToDoubleTask nextRight, ToDoubleFunction> transformer, double basis, DoubleBinaryOperator reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.basis = basis; this.reducer = reducer; } public final Double getRawResult() { return result; } public final void compute() { final ToDoubleFunction> transformer; final DoubleBinaryOperator reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { double r = this.basis; for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new 
MapReduceEntriesToDoubleTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, r, reducer)).fork(); } for (Node p; (p = advance()) != null; ) r = reducer.applyAsDouble(r, transformer.applyAsDouble(p)); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceEntriesToDoubleTask t = (MapReduceEntriesToDoubleTask)c, s = t.rights; while (s != null) { t.result = reducer.applyAsDouble(t.result, s.result); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceMappingsToDoubleTask extends BulkTask { final ToDoubleBiFunction transformer; final DoubleBinaryOperator reducer; final double basis; double result; MapReduceMappingsToDoubleTask rights, nextRight; MapReduceMappingsToDoubleTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceMappingsToDoubleTask nextRight, ToDoubleBiFunction transformer, double basis, DoubleBinaryOperator reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.basis = basis; this.reducer = reducer; } public final Double getRawResult() { return result; } public final void compute() { final ToDoubleBiFunction transformer; final DoubleBinaryOperator reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { double r = this.basis; for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceMappingsToDoubleTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, r, reducer)).fork(); } for (Node p; (p = advance()) != null; ) r = reducer.applyAsDouble(r, transformer.applyAsDouble(p.key, p.val)); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceMappingsToDoubleTask t = (MapReduceMappingsToDoubleTask)c, s = t.rights; while (s != null) { t.result = reducer.applyAsDouble(t.result, 
s.result); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceKeysToLongTask extends BulkTask { final ToLongFunction transformer; final LongBinaryOperator reducer; final long basis; long result; MapReduceKeysToLongTask rights, nextRight; MapReduceKeysToLongTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceKeysToLongTask nextRight, ToLongFunction transformer, long basis, LongBinaryOperator reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.basis = basis; this.reducer = reducer; } public final Long getRawResult() { return result; } public final void compute() { final ToLongFunction transformer; final LongBinaryOperator reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { long r = this.basis; for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceKeysToLongTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, r, reducer)).fork(); } for (Node p; (p = advance()) != null; ) r = reducer.applyAsLong(r, transformer.applyAsLong(p.key)); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceKeysToLongTask t = (MapReduceKeysToLongTask)c, s = t.rights; while (s != null) { t.result = reducer.applyAsLong(t.result, s.result); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceValuesToLongTask extends BulkTask { final ToLongFunction transformer; final LongBinaryOperator reducer; final long basis; long result; MapReduceValuesToLongTask rights, nextRight; MapReduceValuesToLongTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceValuesToLongTask nextRight, ToLongFunction transformer, long basis, LongBinaryOperator reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.basis = basis; 
this.reducer = reducer; } public final Long getRawResult() { return result; } public final void compute() { final ToLongFunction transformer; final LongBinaryOperator reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { long r = this.basis; for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceValuesToLongTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, r, reducer)).fork(); } for (Node p; (p = advance()) != null; ) r = reducer.applyAsLong(r, transformer.applyAsLong(p.val)); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceValuesToLongTask t = (MapReduceValuesToLongTask)c, s = t.rights; while (s != null) { t.result = reducer.applyAsLong(t.result, s.result); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceEntriesToLongTask extends BulkTask { final ToLongFunction> transformer; final LongBinaryOperator reducer; final long basis; long result; MapReduceEntriesToLongTask rights, nextRight; MapReduceEntriesToLongTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceEntriesToLongTask nextRight, ToLongFunction> transformer, long basis, LongBinaryOperator reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.basis = basis; this.reducer = reducer; } public final Long getRawResult() { return result; } public final void compute() { final ToLongFunction> transformer; final LongBinaryOperator reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { long r = this.basis; for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceEntriesToLongTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, r, reducer)).fork(); } for (Node p; (p = advance()) != 
null; ) r = reducer.applyAsLong(r, transformer.applyAsLong(p)); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceEntriesToLongTask t = (MapReduceEntriesToLongTask)c, s = t.rights; while (s != null) { t.result = reducer.applyAsLong(t.result, s.result); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceMappingsToLongTask extends BulkTask { final ToLongBiFunction transformer; final LongBinaryOperator reducer; final long basis; long result; MapReduceMappingsToLongTask rights, nextRight; MapReduceMappingsToLongTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceMappingsToLongTask nextRight, ToLongBiFunction transformer, long basis, LongBinaryOperator reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.basis = basis; this.reducer = reducer; } public final Long getRawResult() { return result; } public final void compute() { final ToLongBiFunction transformer; final LongBinaryOperator reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { long r = this.basis; for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceMappingsToLongTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, r, reducer)).fork(); } for (Node p; (p = advance()) != null; ) r = reducer.applyAsLong(r, transformer.applyAsLong(p.key, p.val)); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceMappingsToLongTask t = (MapReduceMappingsToLongTask)c, s = t.rights; while (s != null) { t.result = reducer.applyAsLong(t.result, s.result); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceKeysToIntTask extends BulkTask { final ToIntFunction transformer; final IntBinaryOperator reducer; 
final int basis; int result; MapReduceKeysToIntTask rights, nextRight; MapReduceKeysToIntTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceKeysToIntTask nextRight, ToIntFunction transformer, int basis, IntBinaryOperator reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.basis = basis; this.reducer = reducer; } public final Integer getRawResult() { return result; } public final void compute() { final ToIntFunction transformer; final IntBinaryOperator reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { int r = this.basis; for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceKeysToIntTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, r, reducer)).fork(); } for (Node p; (p = advance()) != null; ) r = reducer.applyAsInt(r, transformer.applyAsInt(p.key)); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceKeysToIntTask t = (MapReduceKeysToIntTask)c, s = t.rights; while (s != null) { t.result = reducer.applyAsInt(t.result, s.result); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceValuesToIntTask extends BulkTask { final ToIntFunction transformer; final IntBinaryOperator reducer; final int basis; int result; MapReduceValuesToIntTask rights, nextRight; MapReduceValuesToIntTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceValuesToIntTask nextRight, ToIntFunction transformer, int basis, IntBinaryOperator reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.basis = basis; this.reducer = reducer; } public final Integer getRawResult() { return result; } public final void compute() { final ToIntFunction transformer; final IntBinaryOperator reducer; if ((transformer = this.transformer) != null && (reducer = 
this.reducer) != null) { int r = this.basis; for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceValuesToIntTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, r, reducer)).fork(); } for (Node p; (p = advance()) != null; ) r = reducer.applyAsInt(r, transformer.applyAsInt(p.val)); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceValuesToIntTask t = (MapReduceValuesToIntTask)c, s = t.rights; while (s != null) { t.result = reducer.applyAsInt(t.result, s.result); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceEntriesToIntTask extends BulkTask { final ToIntFunction> transformer; final IntBinaryOperator reducer; final int basis; int result; MapReduceEntriesToIntTask rights, nextRight; MapReduceEntriesToIntTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceEntriesToIntTask nextRight, ToIntFunction> transformer, int basis, IntBinaryOperator reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.basis = basis; this.reducer = reducer; } public final Integer getRawResult() { return result; } public final void compute() { final ToIntFunction> transformer; final IntBinaryOperator reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { int r = this.basis; for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceEntriesToIntTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, r, reducer)).fork(); } for (Node p; (p = advance()) != null; ) r = reducer.applyAsInt(r, transformer.applyAsInt(p)); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceEntriesToIntTask t = (MapReduceEntriesToIntTask)c, s = 
t.rights; while (s != null) { t.result = reducer.applyAsInt(t.result, s.result); s = t.rights = s.nextRight; } } } } } @SuppressWarnings("serial") static final class MapReduceMappingsToIntTask extends BulkTask { final ToIntBiFunction transformer; final IntBinaryOperator reducer; final int basis; int result; MapReduceMappingsToIntTask rights, nextRight; MapReduceMappingsToIntTask (BulkTask p, int b, int i, int f, Node[] t, MapReduceMappingsToIntTask nextRight, ToIntBiFunction transformer, int basis, IntBinaryOperator reducer) { super(p, b, i, f, t); this.nextRight = nextRight; this.transformer = transformer; this.basis = basis; this.reducer = reducer; } public final Integer getRawResult() { return result; } public final void compute() { final ToIntBiFunction transformer; final IntBinaryOperator reducer; if ((transformer = this.transformer) != null && (reducer = this.reducer) != null) { int r = this.basis; for (int i = baseIndex, f, h; batch > 0 && (h = ((f = baseLimit) + i) >>> 1) > i;) { addToPendingCount(1); (rights = new MapReduceMappingsToIntTask (this, batch >>>= 1, baseLimit = h, f, tab, rights, transformer, r, reducer)).fork(); } for (Node p; (p = advance()) != null; ) r = reducer.applyAsInt(r, transformer.applyAsInt(p.key, p.val)); result = r; CountedCompleter c; for (c = firstComplete(); c != null; c = c.nextComplete()) { @SuppressWarnings("unchecked") MapReduceMappingsToIntTask t = (MapReduceMappingsToIntTask)c, s = t.rights; while (s != null) { t.result = reducer.applyAsInt(t.result, s.result); s = t.rights = s.nextRight; } } } } } /*-------Unsafe mechanics------*/ /** * unsafe代码块控制了一些属性的修改工作,比如最常用的SIZECTL 。 * 在这一版本的concurrentHashMap中,大量应用来的CAS方法进行变量、属性的修改工作。利用CAS进行无锁操作,可以大大提高性能。 * static代码块中:大量使用了反射 */ private static final sun.misc.Unsafe U; private static final long SIZECTL; private static final long TRANSFERINDEX; private static final long BASECOUNT; private static final long CELLSBUSY; private static final long CELLVALUE; private static final long 
ABASE; private static final int ASHIFT; static { try { U = sun.misc.Unsafe.getUnsafe(); Class k = ConcurrentHashMap.class; SIZECTL = U.objectFieldOffset (k.getDeclaredField("sizeCtl")); TRANSFERINDEX = U.objectFieldOffset (k.getDeclaredField("transferIndex")); BASECOUNT = U.objectFieldOffset (k.getDeclaredField("baseCount")); CELLSBUSY = U.objectFieldOffset (k.getDeclaredField("cellsBusy")); Class ck = CounterCell.class; CELLVALUE = U.objectFieldOffset (ck.getDeclaredField("value")); Class ak = Node[].class; ABASE = U.arrayBaseOffset(ak); int scale = U.arrayIndexScale(ak); if ((scale & (scale - 1)) != 0) throw new Error("data type scale not a power of two"); ASHIFT = 31 - Integer.numberOfLeadingZeros(scale); } catch (Exception e) { throw new Error(e); } } }
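The task classes above are the workers behind CHM's public bulk methods (forEach, search, reduce and their key/value/entry variants). A minimal sketch of how the `parallelismThreshold` parameter is used from application code (the class name and sample data are illustrative):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class BulkOpsDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1); map.put("b", 2); map.put("c", 3);

        // reduceValues: with threshold Long.MAX_VALUE the reduction runs sequentially
        Integer sum = map.reduceValues(Long.MAX_VALUE, Integer::sum);

        // search: with threshold 1 subtasks are forked onto ForkJoinPool.commonPool();
        // the first non-null result wins (result.compareAndSet in SearchValuesTask)
        String found = map.search(1L, (k, v) -> v == 2 ? k : null);

        // forEachValue with a transformer: null transformed values are skipped,
        // so the transformer doubles as a filter
        Set<Integer> big = ConcurrentHashMap.newKeySet();
        map.forEachValue(1L, v -> v > 1 ? v : null, big::add);

        System.out.println(sum);
        System.out.println(found);
        System.out.println(big.size());
    }
}
```

With the three sample entries this prints 6, b, and 2; note that only the reduction result is deterministic in general, since parallel traversal order over bins is unspecified.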

Reference: https://blog.csdn.net/u010723709/article/details/48007881
