Concurrent Map containers - ConcurrentSkipListMap

A skip list adds multiple levels of indexes on top of an ordered linked list; the indexes are what make fast lookup possible.
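Before digging into the source, a quick usage sketch shows what the structure buys you at the API level (all classes and methods below are standard `java.util.concurrent` API): keys stay sorted, and navigation methods such as ceilingKey become cheap.

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListDemo {
    public static void main(String[] args) {
        // Unlike ConcurrentHashMap, keys are kept in sorted order.
        ConcurrentSkipListMap<Integer, String> map = new ConcurrentSkipListMap<>();
        map.put(5, "five");
        map.put(1, "one");
        map.put(9, "nine");

        System.out.println(map.firstKey());    // 1: smallest key
        System.out.println(map.ceilingKey(6)); // 9: least key >= 6
        System.out.println(map.keySet());      // [1, 5, 9]: iteration is ordered
    }
}
```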

1. Official documentation

This class implements a tree-like two-dimensionally linked skip
list in which the index levels are represented in separate
nodes from the base nodes holding data.  There are two reasons
for taking this approach instead of the usual array-based
structure: 1) Array based implementations seem to encounter
more complexity and overhead 2) We can use cheaper algorithms
for the heavily-traversed index lists than can be used for the
base lists.  Here's a picture of some of the basics for a
possible list with 2 levels of index:

     * Head nodes          Index nodes
     * +-+    right        +-+                      +-+
     * |2|---------------->| |--------------------->| |->null
     * +-+                 +-+                      +-+
     *  | down              |                        |
     *  v                   v                        v
     * +-+            +-+  +-+       +-+            +-+       +-+
     * |1|----------->| |->| |------>| |----------->| |------>| |->null
     * +-+            +-+  +-+       +-+            +-+       +-+
     *  v              |    |         |              |         |
     * Nodes  next     v    v         v              v         v
     * +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+
     * | |->|A|->|B|->|C|->|D|->|E|->|F|->|G|->|H|->|I|->|J|->|K|->null
     * +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+

The class implements a tree-like, two-dimensionally linked skip list in which the index levels are represented by nodes separate from the base nodes that hold the data. There are two reasons for this approach instead of the usual array-based structure:

  • 1) array-based implementations tend to be more complex and carry more overhead;
  • 2) cheaper algorithms can be used for the heavily traversed index lists than for the base lists (the bottom-level lists that hold the data).
The base lists use a variant of the HM linked ordered set
algorithm. See Tim Harris, "A pragmatic implementation of
non-blocking linked lists"
http://www.cl.cam.ac.uk/~tlh20/publications.html and Maged
Michael "High Performance Dynamic Lock-Free Hash Tables and
List-Based Sets"
http://www.research.ibm.com/people/m/michael/pubs.htm.  The
basic idea in these lists is to mark the "next" pointers of
deleted nodes when deleting to avoid conflicts with concurrent
insertions, and when traversing to keep track of triples
(predecessor, node, successor) in order to detect when and how
to unlink these deleted nodes.

The base lists use a variant of the HM linked ordered set algorithm; see Tim Harris, "A pragmatic implementation of non-blocking linked lists", and Maged Michael, "High Performance Dynamic Lock-Free Hash Tables and List-Based Sets". The basic idea in these lists is to mark the next pointers of deleted nodes at deletion time to avoid conflicts with concurrent insertions, and, while traversing, to keep track of the triple (predecessor, node, successor) so as to detect when and how to unlink the deleted nodes.

Rather than using mark-bits to mark list deletions (which can
be slow and space-intensive using AtomicMarkedReference), nodes
use direct CAS'able next pointers.  On deletion, instead of
marking a pointer, they splice in another node that can be
thought of as standing for a marked pointer (indicating this by
using otherwise impossible field values).  Using plain nodes
acts roughly like "boxed" implementations of marked pointers,
but uses new nodes only when nodes are deleted, not for every
link.  This requires less space and supports faster
traversal. Even if marked references were better supported by
JVMs, traversal using this technique might still be faster
because any search need only read ahead one more node than
otherwise required (to check for trailing marker) rather than
unmasking mark bits or whatever on each read.

Instead of using mark bits to flag list deletions (which can be slow and space-intensive with AtomicMarkedReference), nodes use directly CAS-able next pointers. On deletion, rather than marking a pointer, another node is spliced in that stands for a marked pointer (indicated by otherwise impossible field values). Using plain nodes acts roughly like a "boxed" implementation of marked pointers, but allocates a new node only when a node is deleted, not for every link, which needs less space and supports faster traversal. Even if JVMs supported marked references better, traversal with this technique might still be faster, because a search only has to read one extra node (to check for a trailing marker) rather than unmasking mark bits on every read.
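For contrast, here is a minimal sketch of the mark-bit alternative the comment rejects, using Java's AtomicMarkableReference (the class the comment calls "AtomicMarkedReference"). The Node type here is a toy stand-in for illustration, not the real ConcurrentSkipListMap node:

```java
import java.util.concurrent.atomic.AtomicMarkableReference;

public class MarkBitDemo {
    // Toy node: the next link carries a (reference, mark) pair atomically.
    static class Node {
        final int key;
        final AtomicMarkableReference<Node> next;
        Node(int key, Node next) {
            this.key = key;
            this.next = new AtomicMarkableReference<>(next, false);
        }
    }

    public static void main(String[] args) {
        Node c = new Node(3, null);
        Node b = new Node(2, c);

        // Logical deletion: set the mark bit on b's next link.
        boolean marked = b.next.compareAndSet(c, c, false, true);
        System.out.println(marked);            // true
        System.out.println(b.next.isMarked()); // true

        // A competing insertion after b fails because the expected mark changed.
        Node d = new Node(5, c);
        System.out.println(b.next.compareAndSet(c, d, false, false)); // false
    }
}
```

The CSLM approach gets the same "any other CAS on this link must fail" guarantee without boxing every link: a marker node is allocated only when a deletion actually happens.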

This approach maintains the essential property needed in the HM
algorithm of changing the next-pointer of a deleted node so
that any other CAS of it will fail, but implements the idea by
changing the pointer to point to a different node, not by
marking it.  While it would be possible to further squeeze
space by defining marker nodes not to have key/value fields, it
isn't worth the extra type-testing overhead.  The deletion
markers are rarely encountered during traversal and are
normally quickly garbage collected. (Note that this technique
would not work well in systems without garbage collection.)

This approach preserves the essential property needed by the HM algorithm (changing the next pointer of a deleted node so that any other CAS on it will fail), but implements it by pointing the pointer at a different node rather than by marking it. Space could be squeezed further by defining marker nodes without key/value fields, but it is not worth the extra type-testing overhead. Deletion markers are rarely encountered during traversal and are normally garbage-collected quickly. (Note that this technique would not work well in systems without garbage collection.)

In addition to using deletion markers, the lists also use
nullness of value fields to indicate deletion, in a style
similar to typical lazy-deletion schemes.  If a node's value is
null, then it is considered logically deleted and ignored even
though it is still reachable. This maintains proper control of
concurrent replace vs delete operations -- an attempted replace
must fail if a delete beat it by nulling field, and a delete
must return the last non-null value held in the field. (Note:
Null, rather than some special marker, is used for value fields
here because it just so happens to mesh with the Map API
requirement that method get returns null if there is no
mapping, which allows nodes to remain concurrently readable
even when deleted. Using any other marker value here would be
messy at best.)

Besides deletion markers, the lists also use null value fields to indicate deletion, in the style of typical lazy-deletion schemes. If a node's value is null, the node is considered logically deleted and is ignored even though it is still reachable. This keeps concurrent replace and delete operations properly controlled: an attempted replace must fail if a delete beat it by nulling the field, and a delete must return the last non-null value the field held. (null is used for value fields, rather than some special marker, because it happens to mesh with the Map API requirement that get return null when there is no mapping, which lets nodes remain concurrently readable even when deleted.)

Here's the sequence of events for a deletion of node n with
predecessor b and successor f, initially:

     *        +------+       +------+      +------+
     *   ...  |   b  |------>|   n  |----->|   f  | ...
     *        +------+       +------+      +------+

1. CAS n's value field from non-null to null.
   From this point on, no public operations encountering
   the node consider this mapping to exist. However, other
   ongoing insertions and deletions might still modify
   n's next pointer.

2. CAS n's next pointer to point to a new marker node.
   From this point on, no other nodes can be appended to n.
   which avoids deletion errors in CAS-based linked lists.
     *        +------+       +------+      +------+       +------+
     *   ...  |   b  |------>|   n  |----->|marker|------>|   f  | ...
     *        +------+       +------+      +------+       +------+

3. CAS b's next pointer over both n and its marker.
   From this point on, no new traversals will encounter n,
   and it can eventually be GCed.
     *        +------+                                    +------+
     *   ...  |   b  |----------------------------------->|   f  | ...
     *        +------+                                    +------+

A failure at step 1 leads to simple retry due to a lost race
with another operation. Steps 2-3 can fail because some other
thread noticed during a traversal a node with null value and
helped out by marking and/or unlinking.  This helping-out
ensures that no thread can become stuck waiting for progress of
the deleting thread.  The use of marker nodes slightly
complicates helping-out code because traversals must track
consistent reads of up to four nodes (b, n, marker, f), not
just (b, n, f), although the next field of a marker is
immutable, and once a next field is CAS'ed to point to a
marker, it never again changes, so this requires less care.

To delete node n with predecessor b and successor f:

  • 1. CAS n's value field from non-null to null. From this point on, no public operation considers the mapping to exist; however, ongoing insertions and deletions may still modify n's next pointer.
  • 2. CAS n's next pointer to a new marker node. From this point on, no node can be appended after n, which avoids the deletion errors of CAS-based linked lists.
  • 3. CAS b's next pointer over both n and the marker. From this point on, no new traversal encounters n, and it can eventually be GCed.

A failure at step 1 leads to a simple retry, having lost a race with another operation. Steps 2-3 can fail because some other thread noticed the null-valued node during a traversal and helped out by marking and/or unlinking it. This helping-out guarantees that no thread gets stuck waiting for the deleting thread to make progress. Marker nodes slightly complicate the helping-out code, because traversals must track consistent reads of up to four nodes (b, n, marker, f) rather than just (b, n, f); however, a marker's next field is immutable, and once a next field has been CAS'ed to point at a marker it never changes again, so less care is needed there.
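The three CAS steps can be replayed single-threaded with a toy node type (an assumption for illustration; the real Node couples this with helpDelete and retry loops):

```java
import java.util.concurrent.atomic.AtomicReference;

public class DeleteSteps {
    // Toy stand-in for the real Node: a marker is a node whose value is itself.
    static class Node {
        final String key; // null for markers
        final AtomicReference<Object> value = new AtomicReference<>();
        final AtomicReference<Node> next = new AtomicReference<>();
        Node(String key, Object v, Node next) { this.key = key; value.set(v); this.next.set(next); }
        Node(Node next) { this.key = null; value.set(this); this.next.set(next); } // marker
    }

    public static void main(String[] args) {
        Node f = new Node("f", "F", null);
        Node n = new Node("n", "N", f);
        Node b = new Node("b", "B", n);

        // Step 1: null out n's value -- n is now logically deleted.
        boolean s1 = n.value.compareAndSet("N", null);
        // Step 2: splice a marker after n -- nothing can be appended to n anymore.
        Node marker = new Node(f);
        boolean s2 = n.next.compareAndSet(f, marker);
        // Step 3: one CAS on b.next skips over both n and its marker.
        boolean s3 = b.next.compareAndSet(n, f);

        System.out.println(s1 && s2 && s3);    // true
        System.out.println(b.next.get() == f); // true: b now points straight at f
    }
}
```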

Skip lists add indexing to this scheme, so that the base-level
traversals start close to the locations being found, inserted
or deleted -- usually base level traversals only traverse a few
nodes. This doesn't change the basic algorithm except for the
need to make sure base traversals start at predecessors (here,
b) that are not (structurally) deleted, otherwise retrying
after processing the deletion.

Skip lists add indexing to this scheme (the marker mechanism above), so that base-level traversals start close to the location being found, inserted, or deleted; usually a base-level traversal only visits a few nodes. This does not change the basic algorithm, except that base traversals must start at a predecessor (here, b) that is not structurally deleted; otherwise the deletion is processed and the traversal retried.

Index levels are maintained as lists with volatile next fields,
using CAS to link and unlink.  Races are allowed in index-list
operations that can (rarely) fail to link in a new index node
or delete one. (We can't do this of course for data nodes.)
However, even when this happens, the index lists remain sorted,
so correctly serve as indices.  This can impact performance,
but since skip lists are probabilistic anyway, the net result
is that under contention, the effective "p" value may be lower
than its nominal value. And race windows are kept small enough
that in practice these failures are rare, even under a lot of
contention.

The index levels are maintained as lists with volatile next fields, using CAS to link and unlink. Races are allowed in index-list operations, which can (rarely) fail to link in a new index node or to delete one (this of course cannot be allowed for the base data nodes). Even when that happens, the index lists remain sorted and thus still serve correctly as indices. This can hurt performance, but since skip lists are probabilistic anyway, the net effect is that under contention the effective "p" value is lower than its nominal value; the race windows are kept small enough that in practice these failures are rare, even under heavy contention.

The fact that retries (for both base and index lists) are
relatively cheap due to indexing allows some minor
simplifications of retry logic. Traversal restarts are
performed after most "helping-out" CASes. This isn't always
strictly necessary, but the implicit backoffs tend to help
reduce other downstream failed CAS's enough to outweigh restart
cost.  This worsens the worst case, but seems to improve even
highly contended cases.

Because indexing makes retries (for both the base and index lists) relatively cheap, some minor simplifications of the retry logic are possible. Traversals restart after most helping-out CASes. This is not always strictly necessary, but the implicit backoff tends to reduce enough downstream CAS failures to outweigh the restart cost. This worsens the worst case but seems to improve even highly contended cases.

Unlike most skip-list implementations, index insertion and
deletion here require a separate traversal pass occurring after
the base-level action, to add or remove index nodes.  This adds
to single-threaded overhead, but improves contended
multithreaded performance by narrowing interference windows,
and allows deletion to ensure that all index nodes will be made
unreachable upon return from a public remove operation, thus
avoiding unwanted garbage retention. This is more important
here than in some other data structures because we cannot null
out node fields referencing user keys since they might still be
read by other ongoing traversals.

Unlike most skip-list implementations, index insertion and deletion here require a separate traversal pass after the base-level action, to add or remove the index nodes. This adds single-threaded overhead, but improves contended multithreaded performance by narrowing the interference windows, and lets deletion guarantee that all index nodes are unreachable by the time a public remove operation returns, avoiding unwanted garbage retention. That matters more here than in some other data structures, because node fields referencing user keys cannot be nulled out while other ongoing traversals might still read them.

Indexing uses skip list parameters that maintain good search
performance while using sparser-than-usual indices: The
hardwired parameters k=1, p=0.5 (see method doPut) mean
that about one-quarter of the nodes have indices. Of those that
do, half have one level, a quarter have two, and so on (see
Pugh's Skip List Cookbook, sec 3.4).  The expected total space
requirement for a map is slightly less than for the current
implementation of java.util.TreeMap.

The indexing uses skip-list parameters that maintain good search performance with sparser-than-usual indices: the hardwired parameters k = 1, p = 0.5 (see doPut) mean that about one quarter of the nodes carry indices. Of those that do, half have one level, a quarter have two, and so on (see Pugh's Skip List Cookbook, sec. 3.4). The expected total space for a map is slightly less than for the current java.util.TreeMap implementation.
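A quick simulation confirms the one-quarter figure. The doPut test (rnd & 0x80000001) == 0 requires the highest and lowest bits of the random value to both be zero, which happens about a quarter of the time. java.util.Random is used here as a stand-in for the package-private ThreadLocalRandom.nextSecondarySeed():

```java
import java.util.Random;

public class LevelStats {
    public static void main(String[] args) {
        Random rng = new Random(42); // stand-in for nextSecondarySeed()
        int trials = 1_000_000, indexed = 0;
        for (int i = 0; i < trials; i++) {
            int rnd = rng.nextInt();
            // Same test as doPut: highest and lowest bits both zero -> build an index.
            if ((rnd & 0x80000001) == 0)
                indexed++;
        }
        // Expect roughly 25% of nodes to get at least one index level.
        System.out.printf("indexed fraction: %.3f%n", indexed / (double) trials);
    }
}
```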

Changing the level of the index (i.e, the height of the
tree-like structure) also uses CAS. The head index has initial
level/height of one. Creation of an index with height greater
than the current level adds a level to the head index by
CAS'ing on a new top-most head. To maintain good performance
after a lot of removals, deletion methods heuristically try to
reduce the height if the topmost levels appear to be empty.
This may encounter races in which it possible (but rare) to
reduce and "lose" a level just as it is about to contain an
index (that will then never be encountered). This does no
structural harm, and in practice appears to be a better option
than allowing unrestrained growth of levels.

Changing the index level (the height of the tree-like structure) also uses CAS. The head index starts at level/height 1. Creating an index taller than the current level adds a level to the head index by CAS'ing in a new topmost head. To keep performance good after many removals, the deletion methods heuristically try to reduce the height when the topmost levels appear empty. This can race: it is possible (though rare) to reduce and "lose" a level just as it is about to contain an index (which will then never be encountered). This does no structural harm, and in practice it is a better option than letting the levels grow without bound.

The code for all this is more verbose than you'd like. Most
operations entail locating an element (or position to insert an
element). The code to do this can't be nicely factored out
because subsequent uses require a snapshot of predecessor
and/or successor and/or value fields which can't be returned
all at once, at least not without creating yet another object
to hold them -- creating such little objects is an especially
bad idea for basic internal search operations because it adds
to GC overhead.  (This is one of the few times I've wished Java
had macros.) Instead, some traversal code is interleaved within
insertion and removal operations.  The control logic to handle
all the retry conditions is sometimes twisty. Most search is
broken into 2 parts. findPredecessor() searches index nodes
only, returning a base-level predecessor of the key. findNode()
finishes out the base-level search. Even with this factoring,
there is a fair amount of near-duplication of code to handle
variants.

The code for all this is more verbose than one would like. Most operations entail locating an element (or the position to insert one). That code cannot be factored out nicely, because later steps need a snapshot of predecessor and/or successor and/or value fields that cannot all be returned at once, at least not without creating yet another object to hold them; creating such little objects is a particularly bad idea for the basic internal search operations, since it adds GC overhead. (This is one of the few places where the author wished Java had macros.) Instead, some traversal code is interleaved into the insertion and removal operations, and the control logic handling all the retry conditions is sometimes twisty. Most searches break into two parts:

  • findPredecessor() searches index nodes only and returns a base-level predecessor of the key;
  • findNode() finishes the base-level search.
To produce random values without interference across threads,
we use within-JDK thread local random support (via the
"secondary seed", to avoid interference with user-level
ThreadLocalRandom.)

A previous version of this class wrapped non-comparable keys
with their comparators to emulate Comparables when using
comparators vs Comparables.  However, JVMs now appear to better
handle infusing comparator-vs-comparable choice into search
loops. Static method cpr(comparator, x, y) is used for all
comparisons, which works well as long as the comparator
argument is set up outside of loops (thus sometimes passed as
an argument to internal methods) to avoid field re-reads.

To produce random values without cross-thread interference, the within-JDK thread-local random support is used (via the "secondary seed", to avoid interfering with the user-level ThreadLocalRandom).

A previous version of this class wrapped non-comparable keys with their comparators to emulate Comparables. However, JVMs now appear to handle infusing the comparator-vs-comparable choice into the search loops well, so the static method cpr(comparator, x, y) is used for all comparisons; this works well as long as the comparator argument is set up outside loops (and thus is sometimes passed as an argument to internal methods) to avoid re-reading the field.

For explanation of algorithms sharing at least a couple of
features with this one, see Mikhail Fomitchev's thesis
(http://www.cs.yorku.ca/~mikhail/), Keir Fraser's thesis
(http://www.cl.cam.ac.uk/users/kaf24/), and Hakan Sundell's
thesis (http://www.cs.chalmers.se/~phs/).

Given the use of tree-like index nodes, you might wonder why
this doesn't use some kind of search tree instead, which would
support somewhat faster search operations. The reason is that
there are no known efficient lock-free insertion and deletion
algorithms for search trees. The immutability of the "down"
links of index nodes (as opposed to mutable "left" fields in
true trees) makes this tractable using only CAS operations.

For explanations of algorithms sharing at least a couple of features with this one, see the theses of Mikhail Fomitchev, Keir Fraser, and Hakan Sundell (linked in the quote above).

Given the tree-like index nodes, one might wonder why some kind of search tree is not used instead, which would support somewhat faster search operations. The reason is that no efficient lock-free insertion and deletion algorithms are known for search trees. The immutability of the index nodes' down links (as opposed to the mutable "left" fields of real trees) is what makes this tractable using only CAS operations.

2. Node, Index and HeadIndex

  • Node: the data node that actually stores the key/value pairs, forming a singly linked list;
  • Index: the index node; its node field points to the corresponding data Node, and it carries down and right index pointers;
  • HeadIndex: the head index node; it extends Index with a level field recording the index level.


    static final class Node<K,V> {
        final K key;
        volatile Object value;
        volatile Node<K,V> next;

        /**
         * Creates a new regular node.
         */
        Node(K key, Object value, Node<K,V> next) {
            this.key = key;
            this.value = value;
            this.next = next;
        }

        /**
         * Creates a new marker node. A marker is distinguished by
         * having its value field point to itself.  Marker nodes also
         * have null keys, a fact that is exploited in a few places,
         * but this doesn't distinguish markers from the base-level
         * header node (head.node), which also has a null key.
         */
        Node(Node<K,V> next) {
            this.key = null;
            this.value = this;
            this.next = next;
        }
    }

    static class Index<K,V> {
        final Node<K,V> node;
        final Index<K,V> down;
        volatile Index<K,V> right;

        /**
         * Creates index node with given values.
         */
        Index(Node<K,V> node, Index<K,V> down, Index<K,V> right) {
            this.node = node;
            this.down = down;
            this.right = right;
        }

    }

    static final class HeadIndex<K,V> extends Index<K,V> {
        final int level;
        HeadIndex(Node<K,V> node, Index<K,V> down, Index<K,V> right, int level) {
            super(node, down, right);
            this.level = level;
        }
    }

3. Constructors and initialization

    public ConcurrentSkipListMap() {
        this.comparator = null;
        initialize();
    }


    public ConcurrentSkipListMap(Comparator<? super K> comparator) {
        this.comparator = comparator;
        initialize();
    }


    public ConcurrentSkipListMap(Map<? extends K, ? extends V> m) {
        this.comparator = null;
        initialize();
        putAll(m);
    }
    private void initialize() {
        keySet = null;
        entrySet = null;
        values = null;
        descendingMap = null;
        head = new HeadIndex<K,V>(new Node<K,V>(null, BASE_HEADER, null),
                                  null, null, 1);
    }

The initial state is as follows:


4. The put operation

    public V put(K key, V value) {
        if (value == null)
            throw new NullPointerException();
        return doPut(key, value, false);
    }
    private V doPut(K key, V value, boolean onlyIfAbsent) {
        Node<K,V> z;             // added node
        if (key == null)
            throw new NullPointerException();
        Comparator<? super K> cmp = comparator;
        outer: for (;;) {
            for (Node<K,V> b = findPredecessor(key, cmp), n = b.next;;) {
                if (n != null) {
                    Object v; int c;
                    Node<K,V> f = n.next;
                    if (n != b.next)               // inconsistent read
                        break;
                    if ((v = n.value) == null) {   // n is deleted
                        n.helpDelete(b, f);
                        break;
                    }
                    if (b.value == null || v == n) // b is deleted
                        break;
                    if ((c = cpr(cmp, key, n.key)) > 0) {
                        b = n;
                        n = f;
                        continue;
                    }
                    if (c == 0) {
                        if (onlyIfAbsent || n.casValue(v, value)) {
                            @SuppressWarnings("unchecked") V vv = (V)v;
                            return vv;
                        }
                        break; // restart if lost race to replace value
                    }
                    // else c < 0; fall through
                }

                z = new Node<K,V>(key, value, n);
                if (!b.casNext(n, z))
                    break;         // restart if lost race to append to b
                break outer;
            }
        }

        int rnd = ThreadLocalRandom.nextSecondarySeed();
        if ((rnd & 0x80000001) == 0) { // test highest and lowest bits
            int level = 1, max;
            while (((rnd >>>= 1) & 1) != 0)
                ++level;
            Index<K,V> idx = null;
            HeadIndex<K,V> h = head;
            if (level <= (max = h.level)) {
                for (int i = 1; i <= level; ++i)
                    idx = new Index<K,V>(z, idx, null);
            }
            else { // try to grow by one level
                level = max + 1; // hold in array and later pick the one to use
                @SuppressWarnings("unchecked")Index<K,V>[] idxs =
                    (Index<K,V>[])new Index<?,?>[level+1];
                for (int i = 1; i <= level; ++i)
                    idxs[i] = idx = new Index<K,V>(z, idx, null);
                for (;;) {
                    h = head;
                    int oldLevel = h.level;
                    if (level <= oldLevel) // lost race to add level
                        break;
                    HeadIndex<K,V> newh = h;
                    Node<K,V> oldbase = h.node;
                    for (int j = oldLevel+1; j <= level; ++j)
                        newh = new HeadIndex<K,V>(oldbase, newh, idxs[j], j);
                    if (casHead(h, newh)) {
                        h = newh;
                        idx = idxs[level = oldLevel];
                        break;
                    }
                }
            }
            // find insertion points and splice in
            splice: for (int insertionLevel = level;;) {
                int j = h.level;
                for (Index<K,V> q = h, r = q.right, t = idx;;) {
                    if (q == null || t == null)
                        break splice;
                    if (r != null) {
                        Node<K,V> n = r.node;
                        // compare before deletion check avoids needing recheck
                        int c = cpr(cmp, key, n.key);
                        if (n.value == null) {
                            if (!q.unlink(r))
                                break;
                            r = q.right;
                            continue;
                        }
                        if (c > 0) {
                            q = r;
                            r = r.right;
                            continue;
                        }
                    }

                    if (j == insertionLevel) {
                        if (!q.link(r, t))
                            break; // restart
                        if (t.node.value == null) {
                            findNode(key);
                            break splice;
                        }
                        if (--insertionLevel == 0)
                            break splice;
                    }

                    if (--j >= insertionLevel && j < level)
                        t = t.down;
                    q = q.down;
                    r = q.right;
                }
            }
        }
        return null;
    }

Setting aside the concurrency handling in between, put has two overall steps:

  • step 1. Insert the node into the base-level data list
    1) findPredecessor() uses the indexes to find the nearest base-level predecessor;
    2) traverse forward from that predecessor to the insertion point: if the key already exists, update it; otherwise insert a new node.
  • step 2. Probabilistically build index levels
    1) ThreadLocalRandom.nextSecondarySeed() decides whether an index is needed; if not (the highest or lowest bit is 1), the insertion is complete;
    2) rnd determines the index level; if it is no higher than the current top level, build a vertical chain of index nodes: idx = new Index<K,V>(z, idx, null);
    3) if it is higher, the new level can only exceed the current top by one; likewise build a vertical chain idxs[i] = idx = new Index<K,V>(z, idx, null); and grow the head index to the matching height.
    After this step: h points at the new head, idx at the topmost new index node awaiting splicing, and level is the topmost level to splice at.
    4) splice: link the newly built index nodes into the existing index levels.
    insertionLevel = level marks the target insertion level;
    j = h.level is the cursor moving down the HeadIndex column;
    q = h, r = q.right: t is inserted between q and r;
    t = idx is the index node waiting to be inserted.
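At the API level, the onlyIfAbsent flag of doPut is what separates put from putIfAbsent:

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class PutDemo {
    public static void main(String[] args) {
        ConcurrentSkipListMap<String, Integer> m = new ConcurrentSkipListMap<>();
        System.out.println(m.put("a", 1));         // null: no previous mapping
        System.out.println(m.put("a", 2));         // 1: old value returned, replaced
        // putIfAbsent calls doPut(key, value, true): an existing value wins.
        System.out.println(m.putIfAbsent("a", 3)); // 2: mapping unchanged
        System.out.println(m.get("a"));            // 2
    }
}
```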

Example: inserting 9 into the following skip list:



findPredecessor() finds the base-list starting point 5; searching from 5, the node is inserted between 8 and 12:



Build the vertical index chain and the new head nodes:

Splice the newly built index chain into the existing index levels:


5. The remove operation

    public boolean remove(Object key, Object value) {
        if (key == null)
            throw new NullPointerException();
        return value != null && doRemove(key, value) != null;
    }
    final V doRemove(Object key, Object value) {
        if (key == null)
            throw new NullPointerException();
        Comparator<? super K> cmp = comparator;
        outer: for (;;) {
            for (Node<K,V> b = findPredecessor(key, cmp), n = b.next;;) {
                Object v; int c;
                if (n == null)
                    break outer;
                Node<K,V> f = n.next;
                if (n != b.next)                    // inconsistent read
                    break;
                if ((v = n.value) == null) {        // n is deleted
                    n.helpDelete(b, f);
                    break;
                }
                if (b.value == null || v == n)      // b is deleted
                    break;
                if ((c = cpr(cmp, key, n.key)) < 0)
                    break outer;
                if (c > 0) {
                    b = n;
                    n = f;
                    continue;
                }
                if (value != null && !value.equals(v))
                    break outer;
                if (!n.casValue(v, null))
                    break;
                if (!n.appendMarker(f) || !b.casNext(n, f))
                    findNode(key);                  // retry via findNode
                else {
                    findPredecessor(key, cmp);      // clean index
                    if (head.right == null)
                        tryReduceLevel();
                }
                @SuppressWarnings("unchecked") V vv = (V)v;
                return vv;
            }
        }
        return null;
    }

Setting aside the concurrency details, removal also has two main steps:

  • step 1. Delete the node from the base data list.
  • step 2. findPredecessor(key, cmp) removes the corresponding index nodes; if head's right pointer is null, try to reduce the index level.

Deleting the base-list node takes three key steps:

  • 1) n.casValue(v, null): set n's value to null;
  • 2) n.appendMarker(f): append a marker node after n;
  • 3) b.casNext(n, f): point b's next at f.
     *        +------+       +------+      +------+
     *   ...  |   b  |------>|   n  |----->|   f  | ...
     *        +------+       +------+      +------+

     *        +------+       +------+      +------+       +------+
     *   ...  |   b  |------>|   n  |----->|marker|------>|   f  | ...
     *        +------+       +------+      +------+       +------+

     *        +------+                                    +------+
     *   ...  |   b  |----------------------------------->|   f  | ...
     *        +------+                                    +------+
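At the API level, the value parameter of doRemove distinguishes the conditional two-argument remove from the plain one:

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class RemoveDemo {
    public static void main(String[] args) {
        ConcurrentSkipListMap<Integer, String> m = new ConcurrentSkipListMap<>();
        m.put(9, "nine");
        // Two-arg remove only removes when the current value matches.
        System.out.println(m.remove(9, "wrong")); // false: still mapped
        System.out.println(m.remove(9, "nine"));  // true: removed
        // One-arg remove (doRemove with value == null) returns the old value.
        m.put(9, "nine");
        System.out.println(m.remove(9));          // nine
        System.out.println(m.containsKey(9));     // false
    }
}
```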

Example: deleting 9 from the following skip list:



First delete the data node, then delete the index nodes, and finally reduce the index level:


6. What is the marker for?

Its core purpose is to keep concurrent deletion and insertion thread-safe: once the marker is spliced in, a competing CAS fails because the current next pointer (now the marker) differs from the value the other thread read earlier (C), forcing that thread to retry.

6.1 Without a marker, concurrent operations go wrong

Suppose the current base-level data list is A->B->C, Thread 1 wants to delete B, and Thread 2 wants to insert node D after B. Consider what happens without a marker:



The final result:

            +------+      +------+
            |   B  |----->|   D  | 
            +------+      +------+
                              |
                              V
+------+                   +------+
|   A  |------>----->----->|   C  | 
+------+                   +------+

6.2 With a marker, concurrent operations stay correct

After Thread 1 deletes B:

            +------+      +-----------+
            |   B  |----->|   marker  | 
            +------+      +-----------+
                              |
                              V
+------+                   +------+
|   A  |------>----->----->|   C  | 
+------+                   +------+

The possible outcomes when Thread 2 calls B.casNext(C, D):

  • 1) Thread 2 succeeds before Thread 1's marker CAS; Thread 1 then either succeeds or retries;
  • 2) after Thread 1 sets the marker but before B and the marker are unlinked, Thread 2's CAS fails and Thread 2 retries;
  • 3) after Thread 1 has unlinked B and the marker, B.casNext(C, D) fails and Thread 2 retries.
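The two scenarios can be replayed deterministically with a toy node type (hypothetical, for illustration only) by executing the interleavings in program order:

```java
import java.util.concurrent.atomic.AtomicReference;

public class MarkerRace {
    static class Node {
        final String name;
        final AtomicReference<Node> next = new AtomicReference<>();
        Node(String name, Node next) { this.name = name; this.next.set(next); }
    }

    public static void main(String[] args) {
        // List: A -> B -> C. Thread 1 deletes B; Thread 2 inserts D after B.
        Node c = new Node("C", null);
        Node b = new Node("B", c);
        Node a = new Node("A", b);

        // Without a marker this interleaving silently loses D:
        a.next.compareAndSet(b, c);            // T1 unlinks B (CAS a.next: B -> C)
        Node d = new Node("D", c);
        b.next.compareAndSet(c, d);            // T2 appends D after the dead B: succeeds!
        System.out.println(a.next.get().name); // C: D is unreachable from A

        // With a marker spliced in first, T2's CAS on b.next must fail,
        // forcing it to retry and discover that B is deleted.
        Node c2 = new Node("C", null);
        Node b2 = new Node("B", c2);
        Node marker = new Node("MARKER", c2);
        b2.next.compareAndSet(c2, marker);     // T1 marks B
        Node d2 = new Node("D", c2);
        System.out.println(b2.next.compareAndSet(c2, d2)); // false: T2 retries
    }
}
```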

References

  • The JDK source-code comments
  • 死磕 java集合之ConcurrentSkipListMap源码分析——发现个bug
  • ConcurrentSkipListMap 源码分析 (基于Java 8)
