Java Multithreading Series -- [JUC Collections 05] - ConcurrentSkipListMap

Reference: http://www.cnblogs.com/skywang12345/p/3498556.html

Overview

This chapter gives a detailed introduction to the ConcurrentSkipListMap class in the java.util.concurrent package. It covers:
Introduction to ConcurrentSkipListMap
ConcurrentSkipListMap: Principle and Data Structure
ConcurrentSkipListMap Method List
ConcurrentSkipListMap Source Code Analysis (JDK 1.7.0_40)
ConcurrentSkipListMap Example

Please credit the original source when reposting: http://www.cnblogs.com/skywang12345/p/3498556.html

 

Introduction to ConcurrentSkipListMap

ConcurrentSkipListMap is a thread-safe sorted map, well suited to highly concurrent scenarios.
ConcurrentSkipListMap and TreeMap are both sorted maps, but they differ in two ways. First, their thread-safety differs: TreeMap is not thread-safe, whereas ConcurrentSkipListMap is. Second, ConcurrentSkipListMap is implemented with a skip list, whereas TreeMap is implemented with a red-black tree.
As for the skip list (Skip List): it is an alternative to balanced trees. Unlike a red-black tree, a skip list maintains its balance through a randomized algorithm, which keeps its insertion and deletion operations comparatively simple.
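As a quick illustration of both points (thread safety and key ordering), here is a small self-contained example; the class name, key range and values are arbitrary choices made for this sketch:

import java.util.concurrent.ConcurrentSkipListMap;

public class ConcurrentSkipListMapDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentSkipListMap<Integer, String> map = new ConcurrentSkipListMap<>();

        // Two threads insert interleaved keys concurrently; no external locking is needed.
        Thread even = new Thread(() -> { for (int i = 0; i < 100; i += 2) map.put(i, "even"); });
        Thread odd  = new Thread(() -> { for (int i = 1; i < 100; i += 2) map.put(i, "odd"); });
        even.start(); odd.start();
        even.join();  odd.join();

        // Regardless of insertion order, keys are kept in ascending order.
        System.out.println(map.size());     // 100
        System.out.println(map.firstKey()); // 0
        System.out.println(map.lastKey());  // 99
    }
}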

 

ConcurrentSkipListMap: Principle and Data Structure

The data structure of ConcurrentSkipListMap is shown in the figure below:

Explanation

Let's first use the data sequence "7, 14, 21, 32, 37, 71, 85" as an example to briefly explain skip lists.

A skip list consists of many levels, and each level can be viewed as an index over the data; the purpose of these indexes is to speed up lookups. The data in each level is sorted, the data in an upper level is a subset of the data in the level below, and the first level (level 1) contains all of the data. The higher the level, the larger the jumps and the fewer elements it contains.
A skip list has a head node, and a lookup proceeds from top to bottom and from left to right. Using "find the node with value 32" as an example, let's compare a skip list with an ordinary linked list.

Case 1: searching for node "32" in an ordinary linked list
The path is shown in Figure 1-02:

It takes 4 steps (the red portion marks the path).

 

Case 2: searching for node "32" in a skip list
The path is shown in Figure 1-03:

Ignoring the vertical moves down the index columns, only 2 steps are needed (the red portion marks the path).
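To make the search path concrete, here is a minimal sketch of how a skip-list lookup moves right along a level and drops down a level once the next key would overshoot the target. This is not the JDK implementation; SkipNode is a made-up type for illustration only:

public class SkipListSearchSketch {

    // One node per (key, level): 'right' links along the same level,
    // 'down' links to the same key one level below.
    static final class SkipNode {
        final int key;
        SkipNode right, down;
        SkipNode(int key) { this.key = key; }
    }

    // head: top-left sentinel (e.g. key = Integer.MIN_VALUE).
    static SkipNode find(SkipNode head, int target) {
        SkipNode cur = head;
        while (cur != null) {
            while (cur.right != null && cur.right.key <= target)
                cur = cur.right;            // move right on the current level
            if (cur.key == target)
                return cur;                 // found (possibly on an index level)
            cur = cur.down;                 // drop one level and keep searching
        }
        return null;                        // fell below level 1: not present
    }
}

With the example sequence above, a lookup of 32 first travels along the sparse upper level and only then drops into the base level, which is why it needs far fewer horizontal steps than the plain linked-list scan in Case 1.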


Now let's look at the data structure of ConcurrentSkipListMap in Java.
(01) ConcurrentSkipListMap extends AbstractMap, which means it is a map (a key-value mapping); it also implements the ConcurrentNavigableMap interface, so it is a concurrent, navigable, sorted map.
(02) Index is an inner class of ConcurrentSkipListMap and corresponds to the index nodes of the skip list. HeadIndex extends Index, and ConcurrentSkipListMap holds a HeadIndex object named head, which is the head of the skip list.
(03) An Index node contains a pointer to the index node on its right (right), a pointer to the index node below it (down), and the data node it indexes (node). node is a Node object, and Node, also an inner class of ConcurrentSkipListMap, is the base-level node that actually stores a key-value pair.
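Their relationship can be seen directly from the field declarations in the JDK source (a trimmed excerpt; the full definitions appear in the source listing later in this chapter):

static final class Node<K,V> {       // base-level node that actually stores a key-value pair
    final K key;
    volatile Object value;           // the value, or special marker/BASE_HEADER objects
    volatile Node<K,V> next;         // next node on the base level
    ...
}

static class Index<K,V> {            // index node on one level of the skip list
    final Node<K,V> node;            // the data node this index refers to
    final Index<K,V> down;           // the index one level below
    volatile Index<K,V> right;       // the next index on the same level
    ...
}

static final class HeadIndex<K,V> extends Index<K,V> {
    final int level;                 // the level this head index sits on
    ...
}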

 

ConcurrentSkipListMap Method List

// Constructs a new, empty map, sorted according to the natural ordering of the keys.
ConcurrentSkipListMap()
// Constructs a new, empty map, sorted according to the specified comparator.
ConcurrentSkipListMap(Comparator<? super K> comparator)
// Constructs a new map containing the same mappings as the given map, sorted according to the natural ordering of the keys.
ConcurrentSkipListMap(Map<? extends K, ? extends V> m)
// Constructs a new map containing the same mappings and using the same ordering as the specified sorted map.
ConcurrentSkipListMap(SortedMap<K, ? extends V> m)

// Returns a key-value mapping associated with the least key greater than or equal to the given key, or null if there is no such entry.
Map.Entry<K,V> ceilingEntry(K key)
// Returns the least key greater than or equal to the given key, or null if there is no such key.
K ceilingKey(K key)
// Removes all of the mappings from this map.
void clear()
// Returns a shallow copy of this ConcurrentSkipListMap instance.
ConcurrentSkipListMap<K,V> clone()
// Returns the comparator used to order the keys in this map, or null if this map uses the natural ordering of its keys.
Comparator<? super K> comparator()
// Returns true if this map contains a mapping for the specified key.
boolean containsKey(Object key)
// Returns true if this map maps one or more keys to the specified value.
boolean containsValue(Object value)
// Returns a reverse order NavigableSet view of the keys contained in this map.
NavigableSet<K> descendingKeySet()
// Returns a reverse order view of the mappings contained in this map.
ConcurrentNavigableMap<K,V> descendingMap()
// Returns a Set view of the mappings contained in this map.
Set<Map.Entry<K,V>> entrySet()
// Compares the specified object with this map for equality.
boolean equals(Object o)
// Returns a key-value mapping associated with the least key in this map, or null if the map is empty.
Map.Entry<K,V> firstEntry()
// Returns the first (lowest) key currently in this map.
K firstKey()
// Returns a key-value mapping associated with the greatest key less than or equal to the given key, or null if there is no such key.
Map.Entry<K,V> floorEntry(K key)
// Returns the greatest key less than or equal to the given key, or null if there is no such key.
K floorKey(K key)
// Returns the value to which the specified key is mapped, or null if this map contains no mapping for the key.
V get(Object key)
// Returns a view of the portion of this map whose keys are strictly less than toKey.
ConcurrentNavigableMap<K,V> headMap(K toKey)
// Returns a view of the portion of this map whose keys are less than (or equal to, if inclusive is true) toKey.
ConcurrentNavigableMap<K,V> headMap(K toKey, boolean inclusive)
// Returns a key-value mapping associated with the least key strictly greater than the given key, or null if there is no such key.
Map.Entry<K,V> higherEntry(K key)
// Returns the least key strictly greater than the given key, or null if there is no such key.
K higherKey(K key)
// Returns true if this map contains no key-value mappings.
boolean isEmpty()
// Returns a NavigableSet view of the keys contained in this map.
NavigableSet<K> keySet()
// Returns a key-value mapping associated with the greatest key in this map, or null if the map is empty.
Map.Entry<K,V> lastEntry()
// Returns the last (highest) key currently in this map.
K lastKey()
// Returns a key-value mapping associated with the greatest key strictly less than the given key, or null if there is no such key.
Map.Entry<K,V> lowerEntry(K key)
// Returns the greatest key strictly less than the given key, or null if there is no such key.
K lowerKey(K key)
// Returns a NavigableSet view of the keys contained in this map.
NavigableSet<K> navigableKeySet()
// Removes and returns a key-value mapping associated with the least key in this map, or null if the map is empty.
Map.Entry<K,V> pollFirstEntry()
// Removes and returns a key-value mapping associated with the greatest key in this map, or null if the map is empty.
Map.Entry<K,V> pollLastEntry()
// Associates the specified value with the specified key in this map.
V put(K key, V value)
// If the specified key is not already associated with a value, associates it with the given value.
V putIfAbsent(K key, V value)
// Removes the mapping for the specified key from this map if present.
V remove(Object key)
// Removes the entry for a key only if it is currently mapped to the given value.
boolean remove(Object key, Object value)
// Replaces the entry for a key only if it is currently mapped to some value.
V replace(K key, V value)
// Replaces the entry for a key only if it is currently mapped to the given value.
boolean replace(K key, V oldValue, V newValue)
// Returns the number of key-value mappings in this map.
int size()
// Returns a view of the portion of this map whose keys range from fromKey to toKey.
ConcurrentNavigableMap<K,V> subMap(K fromKey, boolean fromInclusive, K toKey, boolean toInclusive)
// Returns a view of the portion of this map whose keys range from fromKey (inclusive) to toKey (exclusive).
ConcurrentNavigableMap<K,V> subMap(K fromKey, K toKey)
// Returns a view of the portion of this map whose keys are greater than or equal to fromKey.
ConcurrentNavigableMap<K,V> tailMap(K fromKey)
// Returns a view of the portion of this map whose keys are greater than (or equal to, if inclusive is true) fromKey.
ConcurrentNavigableMap<K,V> tailMap(K fromKey, boolean inclusive)
// Returns a Collection view of the values contained in this map.
Collection<V> values()
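To get a feel for the navigation methods listed above, here is a small self-contained example; the keys and values are arbitrary:

import java.util.concurrent.ConcurrentSkipListMap;

public class NavigationDemo {
    public static void main(String[] args) {
        ConcurrentSkipListMap<Integer, String> map = new ConcurrentSkipListMap<>();
        map.put(7, "seven");
        map.put(14, "fourteen");
        map.put(21, "twenty-one");
        map.put(32, "thirty-two");

        System.out.println(map.firstKey());       // 7                      (smallest key)
        System.out.println(map.ceilingKey(15));   // 21                     (least key >= 15)
        System.out.println(map.floorEntry(30));   // 21=twenty-one          (greatest entry with key <= 30)
        System.out.println(map.headMap(21));      // {7=seven, 14=fourteen} (keys strictly less than 21)
        System.out.println(map.pollFirstEntry()); // 7=seven                (removed and returned)
    }
}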

 

ConcurrentSkipListMap Source Code Analysis (JDK 1.7.0_40)

The class-level Javadoc of ConcurrentSkipListMap.java makes several points worth keeping in mind: containsKey, get, put and remove have expected average log(n) time cost; iterators are weakly consistent and never throw ConcurrentModificationException; the Map.Entry objects returned by its methods are snapshots that do not support Entry.setValue; size is not a constant-time operation, and bulk operations such as putAll, equals, toArray, containsValue and clear are not guaranteed to be atomic; and, like most concurrent collections, it does not permit null keys or values.

The long implementation comment in the source describes a lock-free variant of the Harris-Michael (HM) linked ordered set algorithm. Deleting a node n with predecessor b and successor f takes three CAS steps: (1) CAS n's value field from non-null to null, after which the mapping is no longer considered to exist; (2) CAS n's next pointer to a new marker node, so nothing else can be appended after n; (3) CAS b's next pointer over both n and its marker, so n becomes unreachable and can eventually be garbage collected. Threads that encounter a node with a null value during traversal help out by performing the marking and unlinking themselves.

The core of the source (JDK 1.7.0_40; Javadoc trimmed, helper methods and the view/iterator classes omitted) is as follows:

package java.util.concurrent;
import java.util.*;
import java.util.concurrent.atomic.*;

/**
 * A scalable concurrent ConcurrentNavigableMap implementation, sorted
 * according to the natural ordering of its keys or by a Comparator
 * provided at map creation time.
 *
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain.
 */
public class ConcurrentSkipListMap<K,V> extends AbstractMap<K,V>
    implements ConcurrentNavigableMap<K,V>, Cloneable, java.io.Serializable {

    private static final long serialVersionUID = -8627078645895051609L;

    /** Generates the initial random seed for the per-instance generators used in randomLevel. */
    private static final Random seedGenerator = new Random();

    /** Special value used to identify the base-level header. */
    private static final Object BASE_HEADER = new Object();

    /** The topmost head index of the skiplist. */
    private transient volatile HeadIndex<K,V> head;

    /** The comparator used to maintain order in this map, or null if using natural ordering. */
    private final Comparator<? super K> comparator;

    /** Seed for simple random number generator. */
    private transient int randomSeed;

    /** Lazily initialized views (class definitions omitted from this excerpt). */
    private transient KeySet keySet;
    private transient EntrySet entrySet;
    private transient Values values;
    private transient ConcurrentNavigableMap<K,V> descendingMap;

    /** Initializes or resets state. Needed by constructors, clone, clear, readObject. */
    final void initialize() {
        keySet = null;
        entrySet = null;
        values = null;
        descendingMap = null;
        randomSeed = seedGenerator.nextInt() | 0x0100; // ensure nonzero
        head = new HeadIndex<K,V>(new Node<K,V>(null, BASE_HEADER, null),
                                  null, null, 1);
    }

    /** compareAndSet head node (UNSAFE field-offset setup omitted). */
    private boolean casHead(HeadIndex<K,V> cmp, HeadIndex<K,V> val) {
        return UNSAFE.compareAndSwapObject(this, headOffset, cmp, val);
    }

    /* ---------------- Nodes -------------- */

    /**
     * Nodes hold keys and values, and are singly linked in sorted order,
     * possibly with some intervening marker nodes. The list is headed by a
     * dummy node accessible as head.node. The value field is declared as
     * Object because it takes special non-V values for marker and header nodes.
     */
    static final class Node<K,V> {
        final K key;
        volatile Object value;
        volatile Node<K,V> next;

        /** Creates a new regular node. */
        Node(K key, Object value, Node<K,V> next) {
            this.key = key;
            this.value = value;
            this.next = next;
        }

        /** Creates a new marker node; a marker has its value field point to itself. */
        Node(Node<K,V> next) {
            this.key = null;
            this.value = this;
            this.next = next;
        }

        boolean casValue(Object cmp, Object val) {
            return UNSAFE.compareAndSwapObject(this, valueOffset, cmp, val);
        }

        boolean casNext(Node<K,V> cmp, Node<K,V> val) {
            return UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);
        }

        /** Tries to append a deletion marker to this node. */
        boolean appendMarker(Node<K,V> f) {
            return casNext(f, new Node<K,V>(f));
        }

        /**
         * Helps out a deletion by appending a marker or unlinking from the
         * predecessor. Called during traversals when a null value field is seen.
         */
        void helpDelete(Node<K,V> b, Node<K,V> f) {
            if (f == next && this == b.next) {
                if (f == null || f.value != f) // not already marked
                    appendMarker(f);
                else
                    b.casNext(this, f.next);
            }
        }

        // isMarker, isBaseHeader, getValidValue, createSnapshot and the
        // UNSAFE field-offset mechanics are omitted here.
    }

    /* ---------------- Indexing -------------- */

    /** Index nodes represent the levels of the skip list. */
    static class Index<K,V> {
        final Node<K,V> node;
        final Index<K,V> down;
        volatile Index<K,V> right;

        Index(Node<K,V> node, Index<K,V> down, Index<K,V> right) {
            this.node = node;
            this.down = down;
            this.right = right;
        }

        final boolean casRight(Index<K,V> cmp, Index<K,V> val) {
            return UNSAFE.compareAndSwapObject(this, rightOffset, cmp, val);
        }

        /** Returns true if the node this index refers to has been deleted. */
        final boolean indexesDeletedNode() {
            return node.value == null;
        }

        /** Tries to CAS newSucc as successor; refuses if the indexed node is deleted. */
        final boolean link(Index<K,V> succ, Index<K,V> newSucc) {
            Node<K,V> n = node;
            newSucc.right = succ;
            return n.value != null && casRight(succ, newSucc);
        }

        /** Tries to CAS the right field to skip over the apparent successor succ. */
        final boolean unlink(Index<K,V> succ) {
            return !indexesDeletedNode() && casRight(succ, succ.right);
        }

        // UNSAFE field-offset mechanics omitted.
    }

    /* ---------------- Head nodes -------------- */

    /** Nodes heading each level keep track of their level. */
    static final class HeadIndex<K,V> extends Index<K,V> {
        final int level;
        HeadIndex(Node<K,V> node, Index<K,V> down, Index<K,V> right, int level) {
            super(node, down, right);
            this.level = level;
        }
    }

    /* ---------------- Comparison utilities -------------- */

    /** Represents a key with a comparator as a Comparable. */
    static final class ComparableUsingComparator<K> implements Comparable<K> {
        final K actualKey;
        final Comparator<? super K> cmp;
        ComparableUsingComparator(K key, Comparator<? super K> cmp) {
            this.actualKey = key;
            this.cmp = cmp;
        }
        public int compareTo(K k2) {
            return cmp.compare(actualKey, k2);
        }
    }

    /** If using a comparator, wrap the key; else cast it to Comparable. */
    private Comparable<? super K> comparable(Object key) throws ClassCastException {
        if (key == null)
            throw new NullPointerException();
        if (comparator != null)
            return new ComparableUsingComparator<K>((K)key, comparator);
        else
            return (Comparable<? super K>)key;
    }

    /* ---------------- Traversal -------------- */

    /**
     * Returns a base-level node with key strictly less than the given key,
     * or the base-level header if there is no such node. Also unlinks
     * indexes to deleted nodes found along the way.
     */
    private Node<K,V> findPredecessor(Comparable<? super K> key) {
        if (key == null)
            throw new NullPointerException(); // don't postpone errors
        for (;;) {
            Index<K,V> q = head;
            Index<K,V> r = q.right;
            for (;;) {
                if (r != null) {
                    Node<K,V> n = r.node;
                    K k = n.key;
                    if (n.value == null) {
                        if (!q.unlink(r))
                            break;           // restart
                        r = q.right;         // reread r
                        continue;
                    }
                    if (key.compareTo(k) > 0) {
                        q = r;
                        r = r.right;
                        continue;
                    }
                }
                Index<K,V> d = q.down;
                if (d != null) {
                    q = d;
                    r = d.right;
                } else
                    return q.node;
            }
        }
    }

    /**
     * Returns the node holding key, or null if there is none, clearing out
     * any deleted nodes seen along the way.
     */
    private Node<K,V> findNode(Comparable<? super K> key) {
        for (;;) {
            Node<K,V> b = findPredecessor(key);
            Node<K,V> n = b.next;
            for (;;) {
                if (n == null)
                    return null;
                Node<K,V> f = n.next;
                if (n != b.next)                // inconsistent read
                    break;
                Object v = n.value;
                if (v == null) {                // n is deleted
                    n.helpDelete(b, f);
                    break;
                }
                if (v == n || b.value == null)  // b is deleted
                    break;
                int c = key.compareTo(n.key);
                if (c == 0)
                    return n;
                if (c < 0)
                    return null;
                b = n;
                n = f;
            }
        }
    }

    /** Gets the value for key using findNode; loops in case the value is nulled by a racing deletion. */
    private V doGet(Object okey) {
        Comparable<? super K> key = comparable(okey);
        for (;;) {
            Node<K,V> n = findNode(key);
            if (n == null)
                return null;
            Object v = n.value;
            if (v != null)
                return (V)v;
        }
    }

    /* ---------------- Insertion -------------- */

    /**
     * Main insertion method. Adds the element if not present, or replaces
     * the value if present and onlyIfAbsent is false.
     */
    private V doPut(K kkey, V value, boolean onlyIfAbsent) {
        Comparable<? super K> key = comparable(kkey);
        for (;;) {
            Node<K,V> b = findPredecessor(key);
            Node<K,V> n = b.next;
            for (;;) {
                if (n != null) {
                    Node<K,V> f = n.next;
                    if (n != b.next)               // inconsistent read
                        break;
                    Object v = n.value;
                    if (v == null) {               // n is deleted
                        n.helpDelete(b, f);
                        break;
                    }
                    if (v == n || b.value == null) // b is deleted
                        break;
                    int c = key.compareTo(n.key);
                    if (c > 0) {
                        b = n;
                        n = f;
                        continue;
                    }
                    if (c == 0) {
                        if (onlyIfAbsent || n.casValue(v, value))
                            return (V)v;
                        else
                            break;                 // restart if lost race to replace value
                    }
                    // else c < 0; fall through
                }

                Node<K,V> z = new Node<K,V>(kkey, value, n);
                if (!b.casNext(n, z))
                    break;                         // restart if lost race to append to b
                int level = randomLevel();
                if (level > 0)
                    insertIndex(z, level);         // insertIndex/addIndex omitted here
                return null;
            }
        }
    }

    /**
     * Returns a random level for inserting a new node. Hardwired to k=1,
     * p=0.5, max 31, using a simple Marsaglia xorshift generator.
     */
    private int randomLevel() {
        int x = randomSeed;
        x ^= x << 13;
        x ^= x >>> 17;
        randomSeed = x ^= x << 5;
        if ((x & 0x80000001) != 0) // test highest and lowest bits
            return 0;
        int level = 1;
        while (((x >>>= 1) & 1) != 0) ++level;
        return level;
    }

    /* ---------------- Deletion -------------- */

    /**
     * Main deletion method. Locates the node, nulls its value, appends a
     * deletion marker, unlinks the predecessor, removes associated index
     * nodes, and possibly reduces the head index level.
     */
    final V doRemove(Object okey, Object value) {
        Comparable<? super K> key = comparable(okey);
        for (;;) {
            Node<K,V> b = findPredecessor(key);
            Node<K,V> n = b.next;
            for (;;) {
                if (n == null)
                    return null;
                Node<K,V> f = n.next;
                if (n != b.next)                   // inconsistent read
                    break;
                Object v = n.value;
                if (v == null) {                   // n is deleted
                    n.helpDelete(b, f);
                    break;
                }
                if (v == n || b.value == null)     // b is deleted
                    break;
                int c = key.compareTo(n.key);
                if (c < 0)
                    return null;
                if (c > 0) {
                    b = n;
                    n = f;
                    continue;
                }
                if (value != null && !value.equals(v))
                    return null;
                if (!n.casValue(v, null))
                    break;
                if (!n.appendMarker(f) || !b.casNext(n, f))
                    findNode(key);                 // retry via findNode
                else {
                    findPredecessor(key);          // clean index
                    if (head.right == null)
                        tryReduceLevel();          // tryReduceLevel omitted here
                }
                return (V)v;
            }
        }
    }

    /* ------ Map API methods (selected) ------ */

    public boolean containsKey(Object key) {
        return doGet(key) != null;
    }

    public V get(Object key) {
        return doGet(key);
    }

    public V put(K key, V value) {
        if (value == null)
            throw new NullPointerException();
        return doPut(key, value, false);
    }

    public V remove(Object key) {
        return doRemove(key, null);
    }

    // Constructors, insertIndex, addIndex, tryReduceLevel, findFirst/findLast,
    // findNear, serialization, the view classes (KeySet, EntrySet, Values,
    // SubMap) and their iterators are omitted from this excerpt.
}

The view's {@code iterator} is a "weakly consistent" iterator 1748 * that will never throw {@link ConcurrentModificationException}, 1749 * and guarantees to traverse elements as they existed upon 1750 * construction of the iterator, and may (but is not guaranteed to) 1751 * reflect any modifications subsequent to construction. 1752 * 1753 *

This method is equivalent to method {@code navigableKeySet}. 1754 * 1755 * @return a navigable set view of the keys in this map 1756 */ 1757 public NavigableSet keySet() { 1758 KeySet ks = keySet; 1759 return (ks != null) ? ks : (keySet = new KeySet(this)); 1760 } 1761 1762 public NavigableSet navigableKeySet() { 1763 KeySet ks = keySet; 1764 return (ks != null) ? ks : (keySet = new KeySet(this)); 1765 } 1766 1767 /** 1768 * Returns a {@link Collection} view of the values contained in this map. 1769 * The collection's iterator returns the values in ascending order 1770 * of the corresponding keys. 1771 * The collection is backed by the map, so changes to the map are 1772 * reflected in the collection, and vice-versa. The collection 1773 * supports element removal, which removes the corresponding 1774 * mapping from the map, via the Iterator.remove, 1775 * Collection.remove, removeAll, 1776 * retainAll and clear operations. It does not 1777 * support the add or addAll operations. 1778 * 1779 *

The view's iterator is a "weakly consistent" iterator 1780 * that will never throw {@link ConcurrentModificationException}, 1781 * and guarantees to traverse elements as they existed upon 1782 * construction of the iterator, and may (but is not guaranteed to) 1783 * reflect any modifications subsequent to construction. 1784 */ 1785 public Collection values() { 1786 Values vs = values; 1787 return (vs != null) ? vs : (values = new Values(this)); 1788 } 1789 1790 /** 1791 * Returns a {@link Set} view of the mappings contained in this map. 1792 * The set's iterator returns the entries in ascending key order. 1793 * The set is backed by the map, so changes to the map are 1794 * reflected in the set, and vice-versa. The set supports element 1795 * removal, which removes the corresponding mapping from the map, 1796 * via the Iterator.remove, Set.remove, 1797 * removeAll, retainAll and clear 1798 * operations. It does not support the add or 1799 * addAll operations. 1800 * 1801 *

The view's iterator is a "weakly consistent" iterator 1802 * that will never throw {@link ConcurrentModificationException}, 1803 * and guarantees to traverse elements as they existed upon 1804 * construction of the iterator, and may (but is not guaranteed to) 1805 * reflect any modifications subsequent to construction. 1806 * 1807 *

The Map.Entry elements returned by 1808 * iterator.next() do not support the 1809 * setValue operation. 1810 * 1811 * @return a set view of the mappings contained in this map, 1812 * sorted in ascending key order 1813 */ 1814 public Set> entrySet() { 1815 EntrySet es = entrySet; 1816 return (es != null) ? es : (entrySet = new EntrySet(this)); 1817 } 1818 1819 public ConcurrentNavigableMap descendingMap() { 1820 ConcurrentNavigableMap dm = descendingMap; 1821 return (dm != null) ? dm : (descendingMap = new SubMap 1822 (this, null, false, null, false, true)); 1823 } 1824 1825 public NavigableSet descendingKeySet() { 1826 return descendingMap().navigableKeySet(); 1827 } 1828 1829 /* ---------------- AbstractMap Overrides -------------- */ 1830 1831 /** 1832 * Compares the specified object with this map for equality. 1833 * Returns true if the given object is also a map and the 1834 * two maps represent the same mappings. More formally, two maps 1835 * m1 and m2 represent the same mappings if 1836 * m1.entrySet().equals(m2.entrySet()). This 1837 * operation may return misleading results if either map is 1838 * concurrently modified during execution of this method. 1839 * 1840 * @param o object to be compared for equality with this map 1841 * @return true if the specified object is equal to this map 1842 */ 1843 public boolean equals(Object o) { 1844 if (o == this) 1845 return true; 1846 if (!(o instanceof Map)) 1847 return false; 1848 Map m = (Map) o; 1849 try { 1850 for (Map.Entry e : this.entrySet()) 1851 if (! e.getValue().equals(m.get(e.getKey()))) 1852 return false; 1853 for (Map.Entry e : m.entrySet()) { 1854 Object k = e.getKey(); 1855 Object v = e.getValue(); 1856 if (k == null || v == null || !v.equals(get(k))) 1857 return false; 1858 } 1859 return true; 1860 } catch (ClassCastException unused) { 1861 return false; 1862 } catch (NullPointerException unused) { 1863 return false; 1864 } 1865 } 1866 1867 /* ------ ConcurrentMap API methods ------ */ 1868 1869 /** 1870 * {@inheritDoc} 1871 * 1872 * @return the previous value associated with the specified key, 1873 * or null if there was no mapping for the key 1874 * @throws ClassCastException if the specified key cannot be compared 1875 * with the keys currently in the map 1876 * @throws NullPointerException if the specified key or value is null 1877 */ 1878 public V putIfAbsent(K key, V value) { 1879 if (value == null) 1880 throw new NullPointerException(); 1881 return doPut(key, value, true); 1882 } 1883 1884 /** 1885 * {@inheritDoc} 1886 * 1887 * @throws ClassCastException if the specified key cannot be compared 1888 * with the keys currently in the map 1889 * @throws NullPointerException if the specified key is null 1890 */ 1891 public boolean remove(Object key, Object value) { 1892 if (key == null) 1893 throw new NullPointerException(); 1894 if (value == null) 1895 return false; 1896 return doRemove(key, value) != null; 1897 } 1898 1899 /** 1900 * {@inheritDoc} 1901 * 1902 * @throws ClassCastException if the specified key cannot be compared 1903 * with the keys currently in the map 1904 * @throws NullPointerException if any of the arguments are null 1905 */ 1906 public boolean replace(K key, V oldValue, V newValue) { 1907 if (oldValue == null || newValue == null) 1908 throw new NullPointerException(); 1909 Comparablesuper K> k = comparable(key); 1910 for (;;) { 1911 Node n = findNode(k); 1912 if (n == null) 1913 return false; 1914 Object v = n.value; 1915 if (v != null) { 1916 if (!oldValue.equals(v)) 1917 return false; 1918 if 
(n.casValue(v, newValue)) 1919 return true; 1920 } 1921 } 1922 } 1923 1924 /** 1925 * {@inheritDoc} 1926 * 1927 * @return the previous value associated with the specified key, 1928 * or null if there was no mapping for the key 1929 * @throws ClassCastException if the specified key cannot be compared 1930 * with the keys currently in the map 1931 * @throws NullPointerException if the specified key or value is null 1932 */ 1933 public V replace(K key, V value) { 1934 if (value == null) 1935 throw new NullPointerException(); 1936 Comparablesuper K> k = comparable(key); 1937 for (;;) { 1938 Node n = findNode(k); 1939 if (n == null) 1940 return null; 1941 Object v = n.value; 1942 if (v != null && n.casValue(v, value)) 1943 return (V)v; 1944 } 1945 } 1946 1947 /* ------ SortedMap API methods ------ */ 1948 1949 public Comparatorsuper K> comparator() { 1950 return comparator; 1951 } 1952 1953 /** 1954 * @throws NoSuchElementException {@inheritDoc} 1955 */ 1956 public K firstKey() { 1957 Node n = findFirst(); 1958 if (n == null) 1959 throw new NoSuchElementException(); 1960 return n.key; 1961 } 1962 1963 /** 1964 * @throws NoSuchElementException {@inheritDoc} 1965 */ 1966 public K lastKey() { 1967 Node n = findLast(); 1968 if (n == null) 1969 throw new NoSuchElementException(); 1970 return n.key; 1971 } 1972 1973 /** 1974 * @throws ClassCastException {@inheritDoc} 1975 * @throws NullPointerException if {@code fromKey} or {@code toKey} is null 1976 * @throws IllegalArgumentException {@inheritDoc} 1977 */ 1978 public ConcurrentNavigableMap subMap(K fromKey, 1979 boolean fromInclusive, 1980 K toKey, 1981 boolean toInclusive) { 1982 if (fromKey == null || toKey == null) 1983 throw new NullPointerException(); 1984 return new SubMap 1985 (this, fromKey, fromInclusive, toKey, toInclusive, false); 1986 } 1987 1988 /** 1989 * @throws ClassCastException {@inheritDoc} 1990 * @throws NullPointerException if {@code toKey} is null 1991 * @throws IllegalArgumentException {@inheritDoc} 1992 */ 1993 public ConcurrentNavigableMap headMap(K toKey, 1994 boolean inclusive) { 1995 if (toKey == null) 1996 throw new NullPointerException(); 1997 return new SubMap 1998 (this, null, false, toKey, inclusive, false); 1999 } 2000 2001 /** 2002 * @throws ClassCastException {@inheritDoc} 2003 * @throws NullPointerException if {@code fromKey} is null 2004 * @throws IllegalArgumentException {@inheritDoc} 2005 */ 2006 public ConcurrentNavigableMap tailMap(K fromKey, 2007 boolean inclusive) { 2008 if (fromKey == null) 2009 throw new NullPointerException(); 2010 return new SubMap 2011 (this, fromKey, inclusive, null, false, false); 2012 } 2013 2014 /** 2015 * @throws ClassCastException {@inheritDoc} 2016 * @throws NullPointerException if {@code fromKey} or {@code toKey} is null 2017 * @throws IllegalArgumentException {@inheritDoc} 2018 */ 2019 public ConcurrentNavigableMap subMap(K fromKey, K toKey) { 2020 return subMap(fromKey, true, toKey, false); 2021 } 2022 2023 /** 2024 * @throws ClassCastException {@inheritDoc} 2025 * @throws NullPointerException if {@code toKey} is null 2026 * @throws IllegalArgumentException {@inheritDoc} 2027 */ 2028 public ConcurrentNavigableMap headMap(K toKey) { 2029 return headMap(toKey, false); 2030 } 2031 2032 /** 2033 * @throws ClassCastException {@inheritDoc} 2034 * @throws NullPointerException if {@code fromKey} is null 2035 * @throws IllegalArgumentException {@inheritDoc} 2036 */ 2037 public ConcurrentNavigableMap tailMap(K fromKey) { 2038 return tailMap(fromKey, true); 2039 } 2040 2041 /* 
---------------- Relational operations -------------- */ 2042 2043 /** 2044 * Returns a key-value mapping associated with the greatest key 2045 * strictly less than the given key, or null if there is 2046 * no such key. The returned entry does not support the 2047 * Entry.setValue method. 2048 * 2049 * @throws ClassCastException {@inheritDoc} 2050 * @throws NullPointerException if the specified key is null 2051 */ 2052 public Map.Entry lowerEntry(K key) { 2053 return getNear(key, LT); 2054 } 2055 2056 /** 2057 * @throws ClassCastException {@inheritDoc} 2058 * @throws NullPointerException if the specified key is null 2059 */ 2060 public K lowerKey(K key) { 2061 Node n = findNear(key, LT); 2062 return (n == null) ? null : n.key; 2063 } 2064 2065 /** 2066 * Returns a key-value mapping associated with the greatest key 2067 * less than or equal to the given key, or null if there 2068 * is no such key. The returned entry does not support 2069 * the Entry.setValue method. 2070 * 2071 * @param key the key 2072 * @throws ClassCastException {@inheritDoc} 2073 * @throws NullPointerException if the specified key is null 2074 */ 2075 public Map.Entry floorEntry(K key) { 2076 return getNear(key, LT|EQ); 2077 } 2078 2079 /** 2080 * @param key the key 2081 * @throws ClassCastException {@inheritDoc} 2082 * @throws NullPointerException if the specified key is null 2083 */ 2084 public K floorKey(K key) { 2085 Node n = findNear(key, LT|EQ); 2086 return (n == null) ? null : n.key; 2087 } 2088 2089 /** 2090 * Returns a key-value mapping associated with the least key 2091 * greater than or equal to the given key, or null if 2092 * there is no such entry. The returned entry does not 2093 * support the Entry.setValue method. 2094 * 2095 * @throws ClassCastException {@inheritDoc} 2096 * @throws NullPointerException if the specified key is null 2097 */ 2098 public Map.Entry ceilingEntry(K key) { 2099 return getNear(key, GT|EQ); 2100 } 2101 2102 /** 2103 * @throws ClassCastException {@inheritDoc} 2104 * @throws NullPointerException if the specified key is null 2105 */ 2106 public K ceilingKey(K key) { 2107 Node n = findNear(key, GT|EQ); 2108 return (n == null) ? null : n.key; 2109 } 2110 2111 /** 2112 * Returns a key-value mapping associated with the least key 2113 * strictly greater than the given key, or null if there 2114 * is no such key. The returned entry does not support 2115 * the Entry.setValue method. 2116 * 2117 * @param key the key 2118 * @throws ClassCastException {@inheritDoc} 2119 * @throws NullPointerException if the specified key is null 2120 */ 2121 public Map.Entry higherEntry(K key) { 2122 return getNear(key, GT); 2123 } 2124 2125 /** 2126 * @param key the key 2127 * @throws ClassCastException {@inheritDoc} 2128 * @throws NullPointerException if the specified key is null 2129 */ 2130 public K higherKey(K key) { 2131 Node n = findNear(key, GT); 2132 return (n == null) ? null : n.key; 2133 } 2134 2135 /** 2136 * Returns a key-value mapping associated with the least 2137 * key in this map, or null if the map is empty. 2138 * The returned entry does not support 2139 * the Entry.setValue method. 2140 */ 2141 public Map.Entry firstEntry() { 2142 for (;;) { 2143 Node n = findFirst(); 2144 if (n == null) 2145 return null; 2146 AbstractMap.SimpleImmutableEntry e = n.createSnapshot(); 2147 if (e != null) 2148 return e; 2149 } 2150 } 2151 2152 /** 2153 * Returns a key-value mapping associated with the greatest 2154 * key in this map, or null if the map is empty. 
2155 * The returned entry does not support 2156 * the Entry.setValue method. 2157 */ 2158 public Map.Entry lastEntry() { 2159 for (;;) { 2160 Node n = findLast(); 2161 if (n == null) 2162 return null; 2163 AbstractMap.SimpleImmutableEntry e = n.createSnapshot(); 2164 if (e != null) 2165 return e; 2166 } 2167 } 2168 2169 /** 2170 * Removes and returns a key-value mapping associated with 2171 * the least key in this map, or null if the map is empty. 2172 * The returned entry does not support 2173 * the Entry.setValue method. 2174 */ 2175 public Map.Entry pollFirstEntry() { 2176 return doRemoveFirstEntry(); 2177 } 2178 2179 /** 2180 * Removes and returns a key-value mapping associated with 2181 * the greatest key in this map, or null if the map is empty. 2182 * The returned entry does not support 2183 * the Entry.setValue method. 2184 */ 2185 public Map.Entry pollLastEntry() { 2186 return doRemoveLastEntry(); 2187 } 2188 2189 2190 /* ---------------- Iterators -------------- */ 2191 2192 /** 2193 * Base of iterator classes: 2194 */ 2195 abstract class Iter implements Iterator { 2196 /** the last node returned by next() */ 2197 Node lastReturned; 2198 /** the next node to return from next(); */ 2199 Node next; 2200 /** Cache of next value field to maintain weak consistency */ 2201 V nextValue; 2202 2203 /** Initializes ascending iterator for entire range. */ 2204 Iter() { 2205 for (;;) { 2206 next = findFirst(); 2207 if (next == null) 2208 break; 2209 Object x = next.value; 2210 if (x != null && x != next) { 2211 nextValue = (V) x; 2212 break; 2213 } 2214 } 2215 } 2216 2217 public final boolean hasNext() { 2218 return next != null; 2219 } 2220 2221 /** Advances next to higher entry. */ 2222 final void advance() { 2223 if (next == null) 2224 throw new NoSuchElementException(); 2225 lastReturned = next; 2226 for (;;) { 2227 next = next.next; 2228 if (next == null) 2229 break; 2230 Object x = next.value; 2231 if (x != null && x != next) { 2232 nextValue = (V) x; 2233 break; 2234 } 2235 } 2236 } 2237 2238 public void remove() { 2239 Node l = lastReturned; 2240 if (l == null) 2241 throw new IllegalStateException(); 2242 // It would not be worth all of the overhead to directly 2243 // unlink from here. Using remove is fast enough. 2244 ConcurrentSkipListMap.this.remove(l.key); 2245 lastReturned = null; 2246 } 2247 2248 } 2249 2250 final class ValueIterator extends Iter { 2251 public V next() { 2252 V v = nextValue; 2253 advance(); 2254 return v; 2255 } 2256 } 2257 2258 final class KeyIterator extends Iter { 2259 public K next() { 2260 Node n = next; 2261 advance(); 2262 return n.key; 2263 } 2264 } 2265 2266 final class EntryIterator extends Iter> { 2267 public Map.Entry next() { 2268 Node n = next; 2269 V v = nextValue; 2270 advance(); 2271 return new AbstractMap.SimpleImmutableEntry(n.key, v); 2272 } 2273 } 2274 2275 // Factory methods for iterators needed by ConcurrentSkipListSet etc 2276 2277 Iterator keyIterator() { 2278 return new KeyIterator(); 2279 } 2280 2281 Iterator valueIterator() { 2282 return new ValueIterator(); 2283 } 2284 2285 Iterator> entryIterator() { 2286 return new EntryIterator(); 2287 } 2288 2289 /* ---------------- View Classes -------------- */ 2290 2291 /* 2292 * View classes are static, delegating to a ConcurrentNavigableMap 2293 * to allow use by SubMaps, which outweighs the ugliness of 2294 * needing type-tests for Iterator methods. 2295 */ 2296 2297 static final List toList(Collection c) { 2298 // Using size() here would be a pessimization. 
2299 List list = new ArrayList(); 2300 for (E e : c) 2301 list.add(e); 2302 return list; 2303 } 2304 2305 static final class KeySet 2306 extends AbstractSet implements NavigableSet { 2307 private final ConcurrentNavigableMap m; 2308 KeySet(ConcurrentNavigableMap map) { m = map; } 2309 public int size() { return m.size(); } 2310 public boolean isEmpty() { return m.isEmpty(); } 2311 public boolean contains(Object o) { return m.containsKey(o); } 2312 public boolean remove(Object o) { return m.remove(o) != null; } 2313 public void clear() { m.clear(); } 2314 public E lower(E e) { return m.lowerKey(e); } 2315 public E floor(E e) { return m.floorKey(e); } 2316 public E ceiling(E e) { return m.ceilingKey(e); } 2317 public E higher(E e) { return m.higherKey(e); } 2318 public Comparatorsuper E> comparator() { return m.comparator(); } 2319 public E first() { return m.firstKey(); } 2320 public E last() { return m.lastKey(); } 2321 public E pollFirst() { 2322 Map.Entry e = m.pollFirstEntry(); 2323 return (e == null) ? null : e.getKey(); 2324 } 2325 public E pollLast() { 2326 Map.Entry e = m.pollLastEntry(); 2327 return (e == null) ? null : e.getKey(); 2328 } 2329 public Iterator iterator() { 2330 if (m instanceof ConcurrentSkipListMap) 2331 return ((ConcurrentSkipListMap)m).keyIterator(); 2332 else 2333 return ((ConcurrentSkipListMap.SubMap)m).keyIterator(); 2334 } 2335 public boolean equals(Object o) { 2336 if (o == this) 2337 return true; 2338 if (!(o instanceof Set)) 2339 return false; 2340 Collection c = (Collection) o; 2341 try { 2342 return containsAll(c) && c.containsAll(this); 2343 } catch (ClassCastException unused) { 2344 return false; 2345 } catch (NullPointerException unused) { 2346 return false; 2347 } 2348 } 2349 public Object[] toArray() { return toList(this).toArray(); } 2350 public T[] toArray(T[] a) { return toList(this).toArray(a); } 2351 public Iterator descendingIterator() { 2352 return descendingSet().iterator(); 2353 } 2354 public NavigableSet subSet(E fromElement, 2355 boolean fromInclusive, 2356 E toElement, 2357 boolean toInclusive) { 2358 return new KeySet(m.subMap(fromElement, fromInclusive, 2359 toElement, toInclusive)); 2360 } 2361 public NavigableSet headSet(E toElement, boolean inclusive) { 2362 return new KeySet(m.headMap(toElement, inclusive)); 2363 } 2364 public NavigableSet tailSet(E fromElement, boolean inclusive) { 2365 return new KeySet(m.tailMap(fromElement, inclusive)); 2366 } 2367 public NavigableSet subSet(E fromElement, E toElement) { 2368 return subSet(fromElement, true, toElement, false); 2369 } 2370 public NavigableSet headSet(E toElement) { 2371 return headSet(toElement, false); 2372 } 2373 public NavigableSet tailSet(E fromElement) { 2374 return tailSet(fromElement, true); 2375 } 2376 public NavigableSet descendingSet() { 2377 return new KeySet(m.descendingMap()); 2378 } 2379 } 2380 2381 static final class Values extends AbstractCollection { 2382 private final ConcurrentNavigableMap m; 2383 Values(ConcurrentNavigableMap map) { 2384 m = map; 2385 } 2386 public Iterator iterator() { 2387 if (m instanceof ConcurrentSkipListMap) 2388 return ((ConcurrentSkipListMap)m).valueIterator(); 2389 else 2390 return ((SubMap)m).valueIterator(); 2391 } 2392 public boolean isEmpty() { 2393 return m.isEmpty(); 2394 } 2395 public int size() { 2396 return m.size(); 2397 } 2398 public boolean contains(Object o) { 2399 return m.containsValue(o); 2400 } 2401 public void clear() { 2402 m.clear(); 2403 } 2404 public Object[] toArray() { return toList(this).toArray(); } 2405 
public T[] toArray(T[] a) { return toList(this).toArray(a); } 2406 } 2407 2408 static final class EntrySet extends AbstractSet> { 2409 private final ConcurrentNavigableMap m; 2410 EntrySet(ConcurrentNavigableMap map) { 2411 m = map; 2412 } 2413 2414 public Iterator> iterator() { 2415 if (m instanceof ConcurrentSkipListMap) 2416 return ((ConcurrentSkipListMap)m).entryIterator(); 2417 else 2418 return ((SubMap)m).entryIterator(); 2419 } 2420 2421 public boolean contains(Object o) { 2422 if (!(o instanceof Map.Entry)) 2423 return false; 2424 Map.Entry e = (Map.Entry)o; 2425 V1 v = m.get(e.getKey()); 2426 return v != null && v.equals(e.getValue()); 2427 } 2428 public boolean remove(Object o) { 2429 if (!(o instanceof Map.Entry)) 2430 return false; 2431 Map.Entry e = (Map.Entry)o; 2432 return m.remove(e.getKey(), 2433 e.getValue()); 2434 } 2435 public boolean isEmpty() { 2436 return m.isEmpty(); 2437 } 2438 public int size() { 2439 return m.size(); 2440 } 2441 public void clear() { 2442 m.clear(); 2443 } 2444 public boolean equals(Object o) { 2445 if (o == this) 2446 return true; 2447 if (!(o instanceof Set)) 2448 return false; 2449 Collection c = (Collection) o; 2450 try { 2451 return containsAll(c) && c.containsAll(this); 2452 } catch (ClassCastException unused) { 2453 return false; 2454 } catch (NullPointerException unused) { 2455 return false; 2456 } 2457 } 2458 public Object[] toArray() { return toList(this).toArray(); } 2459 public T[] toArray(T[] a) { return toList(this).toArray(a); } 2460 } 2461 2462 /** 2463 * Submaps returned by {@link ConcurrentSkipListMap} submap operations 2464 * represent a subrange of mappings of their underlying 2465 * maps. Instances of this class support all methods of their 2466 * underlying maps, differing in that mappings outside their range are 2467 * ignored, and attempts to add mappings outside their ranges result 2468 * in {@link IllegalArgumentException}. Instances of this class are 2469 * constructed only using the subMap, headMap, and 2470 * tailMap methods of their underlying maps. 
2471 * 2472 * @serial include 2473 */ 2474 static final class SubMap extends AbstractMap 2475 implements ConcurrentNavigableMap, Cloneable, 2476 java.io.Serializable { 2477 private static final long serialVersionUID = -7647078645895051609L; 2478 2479 /** Underlying map */ 2480 private final ConcurrentSkipListMap m; 2481 /** lower bound key, or null if from start */ 2482 private final K lo; 2483 /** upper bound key, or null if to end */ 2484 private final K hi; 2485 /** inclusion flag for lo */ 2486 private final boolean loInclusive; 2487 /** inclusion flag for hi */ 2488 private final boolean hiInclusive; 2489 /** direction */ 2490 private final boolean isDescending; 2491 2492 // Lazily initialized view holders 2493 private transient KeySet keySetView; 2494 private transient Set> entrySetView; 2495 private transient Collection valuesView; 2496 2497 /** 2498 * Creates a new submap, initializing all fields 2499 */ 2500 SubMap(ConcurrentSkipListMap map, 2501 K fromKey, boolean fromInclusive, 2502 K toKey, boolean toInclusive, 2503 boolean isDescending) { 2504 if (fromKey != null && toKey != null && 2505 map.compare(fromKey, toKey) > 0) 2506 throw new IllegalArgumentException("inconsistent range"); 2507 this.m = map; 2508 this.lo = fromKey; 2509 this.hi = toKey; 2510 this.loInclusive = fromInclusive; 2511 this.hiInclusive = toInclusive; 2512 this.isDescending = isDescending; 2513 } 2514 2515 /* ---------------- Utilities -------------- */ 2516 2517 private boolean tooLow(K key) { 2518 if (lo != null) { 2519 int c = m.compare(key, lo); 2520 if (c < 0 || (c == 0 && !loInclusive)) 2521 return true; 2522 } 2523 return false; 2524 } 2525 2526 private boolean tooHigh(K key) { 2527 if (hi != null) { 2528 int c = m.compare(key, hi); 2529 if (c > 0 || (c == 0 && !hiInclusive)) 2530 return true; 2531 } 2532 return false; 2533 } 2534 2535 private boolean inBounds(K key) { 2536 return !tooLow(key) && !tooHigh(key); 2537 } 2538 2539 private void checkKeyBounds(K key) throws IllegalArgumentException { 2540 if (key == null) 2541 throw new NullPointerException(); 2542 if (!inBounds(key)) 2543 throw new IllegalArgumentException("key out of range"); 2544 } 2545 2546 /** 2547 * Returns true if node key is less than upper bound of range 2548 */ 2549 private boolean isBeforeEnd(ConcurrentSkipListMap.Node n) { 2550 if (n == null) 2551 return false; 2552 if (hi == null) 2553 return true; 2554 K k = n.key; 2555 if (k == null) // pass by markers and headers 2556 return true; 2557 int c = m.compare(k, hi); 2558 if (c > 0 || (c == 0 && !hiInclusive)) 2559 return false; 2560 return true; 2561 } 2562 2563 /** 2564 * Returns lowest node. This node might not be in range, so 2565 * most usages need to check bounds 2566 */ 2567 private ConcurrentSkipListMap.Node loNode() { 2568 if (lo == null) 2569 return m.findFirst(); 2570 else if (loInclusive) 2571 return m.findNear(lo, m.GT|m.EQ); 2572 else 2573 return m.findNear(lo, m.GT); 2574 } 2575 2576 /** 2577 * Returns highest node. 
This node might not be in range, so 2578 * most usages need to check bounds 2579 */ 2580 private ConcurrentSkipListMap.Node hiNode() { 2581 if (hi == null) 2582 return m.findLast(); 2583 else if (hiInclusive) 2584 return m.findNear(hi, m.LT|m.EQ); 2585 else 2586 return m.findNear(hi, m.LT); 2587 } 2588 2589 /** 2590 * Returns lowest absolute key (ignoring directonality) 2591 */ 2592 private K lowestKey() { 2593 ConcurrentSkipListMap.Node n = loNode(); 2594 if (isBeforeEnd(n)) 2595 return n.key; 2596 else 2597 throw new NoSuchElementException(); 2598 } 2599 2600 /** 2601 * Returns highest absolute key (ignoring directonality) 2602 */ 2603 private K highestKey() { 2604 ConcurrentSkipListMap.Node n = hiNode(); 2605 if (n != null) { 2606 K last = n.key; 2607 if (inBounds(last)) 2608 return last; 2609 } 2610 throw new NoSuchElementException(); 2611 } 2612 2613 private Map.Entry lowestEntry() { 2614 for (;;) { 2615 ConcurrentSkipListMap.Node n = loNode(); 2616 if (!isBeforeEnd(n)) 2617 return null; 2618 Map.Entry e = n.createSnapshot(); 2619 if (e != null) 2620 return e; 2621 } 2622 } 2623 2624 private Map.Entry highestEntry() { 2625 for (;;) { 2626 ConcurrentSkipListMap.Node n = hiNode(); 2627 if (n == null || !inBounds(n.key)) 2628 return null; 2629 Map.Entry e = n.createSnapshot(); 2630 if (e != null) 2631 return e; 2632 } 2633 } 2634 2635 private Map.Entry removeLowest() { 2636 for (;;) { 2637 Node n = loNode(); 2638 if (n == null) 2639 return null; 2640 K k = n.key; 2641 if (!inBounds(k)) 2642 return null; 2643 V v = m.doRemove(k, null); 2644 if (v != null) 2645 return new AbstractMap.SimpleImmutableEntry(k, v); 2646 } 2647 } 2648 2649 private Map.Entry removeHighest() { 2650 for (;;) { 2651 Node n = hiNode(); 2652 if (n == null) 2653 return null; 2654 K k = n.key; 2655 if (!inBounds(k)) 2656 return null; 2657 V v = m.doRemove(k, null); 2658 if (v != null) 2659 return new AbstractMap.SimpleImmutableEntry(k, v); 2660 } 2661 } 2662 2663 /** 2664 * Submap version of ConcurrentSkipListMap.getNearEntry 2665 */ 2666 private Map.Entry getNearEntry(K key, int rel) { 2667 if (isDescending) { // adjust relation for direction 2668 if ((rel & m.LT) == 0) 2669 rel |= m.LT; 2670 else 2671 rel &= ~m.LT; 2672 } 2673 if (tooLow(key)) 2674 return ((rel & m.LT) != 0) ? null : lowestEntry(); 2675 if (tooHigh(key)) 2676 return ((rel & m.LT) != 0) ? 
highestEntry() : null; 2677 for (;;) { 2678 Node n = m.findNear(key, rel); 2679 if (n == null || !inBounds(n.key)) 2680 return null; 2681 K k = n.key; 2682 V v = n.getValidValue(); 2683 if (v != null) 2684 return new AbstractMap.SimpleImmutableEntry(k, v); 2685 } 2686 } 2687 2688 // Almost the same as getNearEntry, except for keys 2689 private K getNearKey(K key, int rel) { 2690 if (isDescending) { // adjust relation for direction 2691 if ((rel & m.LT) == 0) 2692 rel |= m.LT; 2693 else 2694 rel &= ~m.LT; 2695 } 2696 if (tooLow(key)) { 2697 if ((rel & m.LT) == 0) { 2698 ConcurrentSkipListMap.Node n = loNode(); 2699 if (isBeforeEnd(n)) 2700 return n.key; 2701 } 2702 return null; 2703 } 2704 if (tooHigh(key)) { 2705 if ((rel & m.LT) != 0) { 2706 ConcurrentSkipListMap.Node n = hiNode(); 2707 if (n != null) { 2708 K last = n.key; 2709 if (inBounds(last)) 2710 return last; 2711 } 2712 } 2713 return null; 2714 } 2715 for (;;) { 2716 Node n = m.findNear(key, rel); 2717 if (n == null || !inBounds(n.key)) 2718 return null; 2719 K k = n.key; 2720 V v = n.getValidValue(); 2721 if (v != null) 2722 return k; 2723 } 2724 } 2725 2726 /* ---------------- Map API methods -------------- */ 2727 2728 public boolean containsKey(Object key) { 2729 if (key == null) throw new NullPointerException(); 2730 K k = (K)key; 2731 return inBounds(k) && m.containsKey(k); 2732 } 2733 2734 public V get(Object key) { 2735 if (key == null) throw new NullPointerException(); 2736 K k = (K)key; 2737 return ((!inBounds(k)) ? null : m.get(k)); 2738 } 2739 2740 public V put(K key, V value) { 2741 checkKeyBounds(key); 2742 return m.put(key, value); 2743 } 2744 2745 public V remove(Object key) { 2746 K k = (K)key; 2747 return (!inBounds(k)) ? null : m.remove(k); 2748 } 2749 2750 public int size() { 2751 long count = 0; 2752 for (ConcurrentSkipListMap.Node n = loNode(); 2753 isBeforeEnd(n); 2754 n = n.next) { 2755 if (n.getValidValue() != null) 2756 ++count; 2757 } 2758 return count >= Integer.MAX_VALUE ? 
Integer.MAX_VALUE : (int)count; 2759 } 2760 2761 public boolean isEmpty() { 2762 return !isBeforeEnd(loNode()); 2763 } 2764 2765 public boolean containsValue(Object value) { 2766 if (value == null) 2767 throw new NullPointerException(); 2768 for (ConcurrentSkipListMap.Node n = loNode(); 2769 isBeforeEnd(n); 2770 n = n.next) { 2771 V v = n.getValidValue(); 2772 if (v != null && value.equals(v)) 2773 return true; 2774 } 2775 return false; 2776 } 2777 2778 public void clear() { 2779 for (ConcurrentSkipListMap.Node n = loNode(); 2780 isBeforeEnd(n); 2781 n = n.next) { 2782 if (n.getValidValue() != null) 2783 m.remove(n.key); 2784 } 2785 } 2786 2787 /* ---------------- ConcurrentMap API methods -------------- */ 2788 2789 public V putIfAbsent(K key, V value) { 2790 checkKeyBounds(key); 2791 return m.putIfAbsent(key, value); 2792 } 2793 2794 public boolean remove(Object key, Object value) { 2795 K k = (K)key; 2796 return inBounds(k) && m.remove(k, value); 2797 } 2798 2799 public boolean replace(K key, V oldValue, V newValue) { 2800 checkKeyBounds(key); 2801 return m.replace(key, oldValue, newValue); 2802 } 2803 2804 public V replace(K key, V value) { 2805 checkKeyBounds(key); 2806 return m.replace(key, value); 2807 } 2808 2809 /* ---------------- SortedMap API methods -------------- */ 2810 2811 public Comparatorsuper K> comparator() { 2812 Comparatorsuper K> cmp = m.comparator(); 2813 if (isDescending) 2814 return Collections.reverseOrder(cmp); 2815 else 2816 return cmp; 2817 } 2818 2819 /** 2820 * Utility to create submaps, where given bounds override 2821 * unbounded(null) ones and/or are checked against bounded ones. 2822 */ 2823 private SubMap newSubMap(K fromKey, 2824 boolean fromInclusive, 2825 K toKey, 2826 boolean toInclusive) { 2827 if (isDescending) { // flip senses 2828 K tk = fromKey; 2829 fromKey = toKey; 2830 toKey = tk; 2831 boolean ti = fromInclusive; 2832 fromInclusive = toInclusive; 2833 toInclusive = ti; 2834 } 2835 if (lo != null) { 2836 if (fromKey == null) { 2837 fromKey = lo; 2838 fromInclusive = loInclusive; 2839 } 2840 else { 2841 int c = m.compare(fromKey, lo); 2842 if (c < 0 || (c == 0 && !loInclusive && fromInclusive)) 2843 throw new IllegalArgumentException("key out of range"); 2844 } 2845 } 2846 if (hi != null) { 2847 if (toKey == null) { 2848 toKey = hi; 2849 toInclusive = hiInclusive; 2850 } 2851 else { 2852 int c = m.compare(toKey, hi); 2853 if (c > 0 || (c == 0 && !hiInclusive && toInclusive)) 2854 throw new IllegalArgumentException("key out of range"); 2855 } 2856 } 2857 return new SubMap(m, fromKey, fromInclusive, 2858 toKey, toInclusive, isDescending); 2859 } 2860 2861 public SubMap subMap(K fromKey, 2862 boolean fromInclusive, 2863 K toKey, 2864 boolean toInclusive) { 2865 if (fromKey == null || toKey == null) 2866 throw new NullPointerException(); 2867 return newSubMap(fromKey, fromInclusive, toKey, toInclusive); 2868 } 2869 2870 public SubMap headMap(K toKey, 2871 boolean inclusive) { 2872 if (toKey == null) 2873 throw new NullPointerException(); 2874 return newSubMap(null, false, toKey, inclusive); 2875 } 2876 2877 public SubMap tailMap(K fromKey, 2878 boolean inclusive) { 2879 if (fromKey == null) 2880 throw new NullPointerException(); 2881 return newSubMap(fromKey, inclusive, null, false); 2882 } 2883 2884 public SubMap subMap(K fromKey, K toKey) { 2885 return subMap(fromKey, true, toKey, false); 2886 } 2887 2888 public SubMap headMap(K toKey) { 2889 return headMap(toKey, false); 2890 } 2891 2892 public SubMap tailMap(K fromKey) { 2893 return 
tailMap(fromKey, true); 2894 } 2895 2896 public SubMap descendingMap() { 2897 return new SubMap(m, lo, loInclusive, 2898 hi, hiInclusive, !isDescending); 2899 } 2900 2901 /* ---------------- Relational methods -------------- */ 2902 2903 public Map.Entry ceilingEntry(K key) { 2904 return getNearEntry(key, (m.GT|m.EQ)); 2905 } 2906 2907 public K ceilingKey(K key) { 2908 return getNearKey(key, (m.GT|m.EQ)); 2909 } 2910 2911 public Map.Entry lowerEntry(K key) { 2912 return getNearEntry(key, (m.LT)); 2913 } 2914 2915 public K lowerKey(K key) { 2916 return getNearKey(key, (m.LT)); 2917 } 2918 2919 public Map.Entry floorEntry(K key) { 2920 return getNearEntry(key, (m.LT|m.EQ)); 2921 } 2922 2923 public K floorKey(K key) { 2924 return getNearKey(key, (m.LT|m.EQ)); 2925 } 2926 2927 public Map.Entry higherEntry(K key) { 2928 return getNearEntry(key, (m.GT)); 2929 } 2930 2931 public K higherKey(K key) { 2932 return getNearKey(key, (m.GT)); 2933 } 2934 2935 public K firstKey() { 2936 return isDescending ? highestKey() : lowestKey(); 2937 } 2938 2939 public K lastKey() { 2940 return isDescending ? lowestKey() : highestKey(); 2941 } 2942 2943 public Map.Entry firstEntry() { 2944 return isDescending ? highestEntry() : lowestEntry(); 2945 } 2946 2947 public Map.Entry lastEntry() { 2948 return isDescending ? lowestEntry() : highestEntry(); 2949 } 2950 2951 public Map.Entry pollFirstEntry() { 2952 return isDescending ? removeHighest() : removeLowest(); 2953 } 2954 2955 public Map.Entry pollLastEntry() { 2956 return isDescending ? removeLowest() : removeHighest(); 2957 } 2958 2959 /* ---------------- Submap Views -------------- */ 2960 2961 public NavigableSet keySet() { 2962 KeySet ks = keySetView; 2963 return (ks != null) ? ks : (keySetView = new KeySet(this)); 2964 } 2965 2966 public NavigableSet navigableKeySet() { 2967 KeySet ks = keySetView; 2968 return (ks != null) ? ks : (keySetView = new KeySet(this)); 2969 } 2970 2971 public Collection values() { 2972 Collection vs = valuesView; 2973 return (vs != null) ? vs : (valuesView = new Values(this)); 2974 } 2975 2976 public Set> entrySet() { 2977 Set> es = entrySetView; 2978 return (es != null) ? es : (entrySetView = new EntrySet(this)); 2979 } 2980 2981 public NavigableSet descendingKeySet() { 2982 return descendingMap().navigableKeySet(); 2983 } 2984 2985 Iterator keyIterator() { 2986 return new SubMapKeyIterator(); 2987 } 2988 2989 Iterator valueIterator() { 2990 return new SubMapValueIterator(); 2991 } 2992 2993 Iterator> entryIterator() { 2994 return new SubMapEntryIterator(); 2995 } 2996 2997 /** 2998 * Variant of main Iter class to traverse through submaps. 2999 */ 3000 abstract class SubMapIter implements Iterator { 3001 /** the last node returned by next() */ 3002 Node lastReturned; 3003 /** the next node to return from next(); */ 3004 Node next; 3005 /** Cache of next value field to maintain weak consistency */ 3006 V nextValue; 3007 3008 SubMapIter() { 3009 for (;;) { 3010 next = isDescending ? hiNode() : loNode(); 3011 if (next == null) 3012 break; 3013 Object x = next.value; 3014 if (x != null && x != next) { 3015 if (! 
inBounds(next.key)) 3016 next = null; 3017 else 3018 nextValue = (V) x; 3019 break; 3020 } 3021 } 3022 } 3023 3024 public final boolean hasNext() { 3025 return next != null; 3026 } 3027 3028 final void advance() { 3029 if (next == null) 3030 throw new NoSuchElementException(); 3031 lastReturned = next; 3032 if (isDescending) 3033 descend(); 3034 else 3035 ascend(); 3036 } 3037 3038 private void ascend() { 3039 for (;;) { 3040 next = next.next; 3041 if (next == null) 3042 break; 3043 Object x = next.value; 3044 if (x != null && x != next) { 3045 if (tooHigh(next.key)) 3046 next = null; 3047 else 3048 nextValue = (V) x; 3049 break; 3050 } 3051 } 3052 } 3053 3054 private void descend() { 3055 for (;;) { 3056 next = m.findNear(lastReturned.key, LT); 3057 if (next == null) 3058 break; 3059 Object x = next.value; 3060 if (x != null && x != next) { 3061 if (tooLow(next.key)) 3062 next = null; 3063 else 3064 nextValue = (V) x; 3065 break; 3066 } 3067 } 3068 } 3069 3070 public void remove() { 3071 Node l = lastReturned; 3072 if (l == null) 3073 throw new IllegalStateException(); 3074 m.remove(l.key); 3075 lastReturned = null; 3076 } 3077 3078 } 3079 3080 final class SubMapValueIterator extends SubMapIter { 3081 public V next() { 3082 V v = nextValue; 3083 advance(); 3084 return v; 3085 } 3086 } 3087 3088 final class SubMapKeyIterator extends SubMapIter { 3089 public K next() { 3090 Node n = next; 3091 advance(); 3092 return n.key; 3093 } 3094 } 3095 3096 final class SubMapEntryIterator extends SubMapIter> { 3097 public Map.Entry next() { 3098 Node n = next; 3099 V v = nextValue; 3100 advance(); 3101 return new AbstractMap.SimpleImmutableEntry(n.key, v); 3102 } 3103 } 3104 } 3105 3106 // Unsafe mechanics 3107 private static final sun.misc.Unsafe UNSAFE; 3108 private static final long headOffset; 3109 static { 3110 try { 3111 UNSAFE = sun.misc.Unsafe.getUnsafe(); 3112 Class k = ConcurrentSkipListMap.class; 3113 headOffset = UNSAFE.objectFieldOffset 3114 (k.getDeclaredField("head")); 3115 } catch (Exception e) { 3116 throw new Error(e); 3117 } 3118 } 3119 }

复制代码

 

Below, ConcurrentSkipListMap is analyzed from three angles: adding, removing, and retrieving entries.
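Before looking at the internals, here is a minimal usage sketch (the class name and sample values are illustrative) showing the basic behavior: insertion, lookup, removal, and iteration in ascending key order.

复制代码
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

public class BasicSkipListMapUsage {
    public static void main(String[] args) {
        ConcurrentSkipListMap<Integer, String> map = new ConcurrentSkipListMap<Integer, String>();
        // Insert keys out of order; the map keeps them sorted by key.
        map.put(37, "thirty-seven");
        map.put(7, "seven");
        map.put(21, "twenty-one");

        // Iteration follows ascending key order: 7, 21, 37.
        for (Map.Entry<Integer, String> e : map.entrySet())
            System.out.println(e.getKey() + " -> " + e.getValue());

        System.out.println(map.get(21));        // "twenty-one"
        System.out.println(map.remove(7));      // "seven"
        System.out.println(map.containsKey(7)); // false
    }
}
复制代码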

1. Adding

Taking put(K key, V value) as an example, here is how ConcurrentSkipListMap adds an entry.

public V put(K key, V value) {
    if (value == null)
        throw new NullPointerException();
    return doPut(key, value, false);
}

In fact, put() simply delegates to doPut(), which inserts the key-value pair into the ConcurrentSkipListMap.

The source of doPut() is as follows:

复制代码
private V doPut(K kkey, V value, boolean onlyIfAbsent) {
    Comparable<? super K> key = comparable(kkey);
    for (;;) {
        // Find the predecessor node of key.
        Node<K,V> b = findPredecessor(key);
        // n is b's successor, i.e. the node that should follow the inserted node.
        Node<K,V> n = b.next;
        for (;;) {
            if (n != null) {
                Node<K,V> f = n.next;
                // If b.next changed between the two reads, restart from the outer loop
                // to re-read b and n.
                if (n != b.next)
                    break;
                // v is n's value.
                Object v = n.value;
                // A null value means another thread deleted n; help unlink it from b,
                // then restart from the outer loop.
                if (v == null) {               // n is deleted
                    n.helpDelete(b, f);
                    break;
                }
                // If b itself was deleted by another thread, restart from the outer loop.
                if (v == n || b.value == null) // b is deleted
                    break;
                // Compare key with n.key.
                int c = key.compareTo(n.key);
                if (c > 0) {
                    b = n;
                    n = f;
                    continue;
                }
                if (c == 0) {
                    if (onlyIfAbsent || n.casValue(v, value))
                        return (V)v;
                    else
                        break; // restart if lost race to replace value
                }
                // else c < 0; fall through
            }

            // Create the new node for the key-value pair being inserted.
            Node<K,V> z = new Node<K,V>(kkey, value, n);
            // CAS b's successor to z.
            if (!b.casNext(n, z))
                break;         // only happens when another thread modified b concurrently
            // Pick a random level, then insert index nodes for z on levels 1..level.
            int level = randomLevel();
            if (level > 0)
                insertIndex(z, level);
            return null;
        }
    }
}
复制代码

Explanation: doPut() inserts the key-value pair into the skip list.
To understand doPut(), first look at its main flow. Consider only the single-threaded case of adding a key-value pair to the skip list, ignoring everything related to concurrency. The flow is:
Step 1: find the insertion point.
That is, find the predecessor node (b) of key and its successor node (n); key is the key of the node to insert.
Step 2: create and insert the node.
That is, create node z (the node for key) and splice it into the skip list: set b's successor to z, and z's successor to n.
Step 3: update the index levels.
That is, pick a random level and insert an index node for z on each level from 1 up to level; no index nodes are created above that. If level exceeds the current height of the skip list, a new level is added.
The simplified doPut() corresponding to this main flow is shown below (for reference only):

复制代码
private V doPut(K kkey, V value, boolean onlyIfAbsent) {
    Comparable<? super K> key = comparable(kkey);
    for (;;) {
        // Find the predecessor node of key.
        Node<K,V> b = findPredecessor(key);
        // n is key's successor node.
        Node<K,V> n = b.next;
        for (;;) {

            // Create the new node for the key-value pair being inserted.
            Node<K,V> z = new Node<K,V>(kkey, value, n);
            // Set b's successor to z.
            b.casNext(n, z);

            // Pick a random level, then insert index nodes for z
            // on levels 1..level.
            int level = randomLevel();
            if (level > 0)
                insertIndex(z, level);
            return null;
        }
    }
}
复制代码

Once the main flow is clear, the rest is relatively straightforward: it is the concrete implementation of the steps above, plus the handling of the multi-threaded cases.
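To make step 3 more concrete, below is an illustrative coin-flip level generator. It is only a sketch of the idea, not the JDK's actual randomLevel() implementation: the point is that a node is promoted to index level L with probability of roughly 1/2^L, which keeps higher index levels sparse. Class and constant names are illustrative.

复制代码
import java.util.concurrent.ThreadLocalRandom;

public class RandomLevelSketch {
    static final int MAX_LEVEL = 31;

    // Illustrative only: keep promoting the node to the next index level
    // while a fair coin comes up heads, so level L is reached with
    // probability about 1/2^L.
    static int randomLevel() {
        int level = 0;
        while (level < MAX_LEVEL && ThreadLocalRandom.current().nextBoolean())
            ++level;
        return level;
    }

    public static void main(String[] args) {
        int[] counts = new int[6];
        for (int i = 0; i < 100000; i++)
            counts[Math.min(randomLevel(), 5)]++;
        // Expect roughly half of the samples at level 0, a quarter at level 1, and so on;
        // the last bucket aggregates everything at level 5 or above.
        for (int i = 0; i < counts.length; i++)
            System.out.println("level " + i + (i == 5 ? "+" : "") + ": " + counts[i]);
    }
}
复制代码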

 

2. Removing

Taking remove(Object key) as an example, here is how ConcurrentSkipListMap removes an entry.

public V remove(Object key) {
    return doRemove(key, null);
}

In fact, remove() delegates to doRemove(), which removes the entry for key from the ConcurrentSkipListMap.

The source of doRemove() is as follows:

 

复制代码
final V doRemove(Object okey, Object value) {
    Comparable<? super K> key = comparable(okey);
    for (;;) {
        // Find the predecessor node of key.
        Node<K,V> b = findPredecessor(key);
        // n is b's successor (if key exists in the skip list, n is its node).
        Node<K,V> n = b.next;
        for (;;) {
            if (n == null)
                return null;
            // f is the successor of the current node n.
            Node<K,V> f = n.next;
            // If b's successor changed between the two reads (another thread modified
            // the skip list), restart from the outer loop.
            if (n != b.next)                    // inconsistent read
                break;
            // If n's value became null (another thread deleted n), help unlink it,
            // then restart from the outer loop.
            Object v = n.value;
            if (v == null) {                    // n is deleted
                n.helpDelete(b, f);
                break;
            }
            // If the predecessor b was deleted by another thread, restart from the outer loop.
            if (v == n || b.value == null)      // b is deleted
                break;
            int c = key.compareTo(n.key);
            if (c < 0)
                return null;
            if (c > 0) {
                b = n;
                n = f;
                continue;
            }

            // From here on, c == 0.
            if (value != null && !value.equals(v))
                return null;
            // CAS n's value to null (logical deletion).
            if (!n.casValue(v, null))
                break;
            // Unlink n: append a marker after n, then set b's successor to f.
            if (!n.appendMarker(f) || !b.casNext(n, f))
                findNode(key);                  // Retry via findNode
            else {
                // Clean up the index nodes for key on every level.
                findPredecessor(key);           // Clean index
                // If the head's right index is now empty, reduce the skip list's level by one.
                if (head.right == null)
                    tryReduceLevel();
            }
            return (V)v;
        }
    }
}
复制代码

Explanation: doRemove() removes a node from the skip list.
As with doPut(), focus on the main flow first; once that is clear, the rest is easy to follow. In the single-threaded case, removing a key-value pair from the skip list works as follows:
Step 1: locate the node to delete.
That is, find the predecessor node (b) of key, the node (n) for key itself, and n's successor (f); key is the key of the node to delete.
Step 2: remove the node.
That is, unlink node n from the skip list by setting b's successor to f.
Step 3: update the index levels.
That is, walk the skip list and remove the index node for key on every level where one exists. If, after the removal, the skip list's level needs to drop by one, do so.
The simplified doRemove() corresponding to this main flow is shown below (for reference only):

 

复制代码
final V doRemove(Object okey, Object value) {
    Comparable<? super K> key = comparable(okey);
    for (;;) {
        // Find the predecessor node of key.
        Node<K,V> b = findPredecessor(key);
        // n is b's successor (if key exists in the skip list, n is its node).
        Node<K,V> n = b.next;
        for (;;) {
            // f is the successor of the current node n.
            Node<K,V> f = n.next;
            Object v = n.value;

            // Set n's value to null (logical deletion).
            n.casValue(v, null);

            // Set b's successor to f.
            b.casNext(n, f);
            // Clean up the index nodes for key on every level.
            findPredecessor(key);
            // If the head's right index is now empty, reduce the skip list's level by one.
            if (head.right == null)
                tryReduceLevel();
            return (V)v;
        }
    }
}
复制代码
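Note that when doRemove() is called with a non-null value argument it removes the node only if the current value equals that argument; this is what backs the conditional remove(key, value) of the ConcurrentMap interface (see remove(Object, Object) in the listing above). A small usage sketch (the class name, key, and values are illustrative):

复制代码
import java.util.concurrent.ConcurrentSkipListMap;

public class ConditionalRemoveDemo {
    public static void main(String[] args) {
        ConcurrentSkipListMap<String, String> map = new ConcurrentSkipListMap<String, String>();
        map.put("k", "v1");

        // Fails: the current value is "v1", not "v0", so nothing is removed.
        System.out.println(map.remove("k", "v0")); // false
        // Succeeds: the current value matches, so the entry is removed atomically.
        System.out.println(map.remove("k", "v1")); // true
        System.out.println(map.containsKey("k"));  // false
    }
}
复制代码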

 

3. Retrieving

Taking get(Object key) as an example, here is how ConcurrentSkipListMap looks up an entry.

public V get(Object key) {
    return doGet(key);
}

The source of doGet() is as follows:

复制代码
private V doGet(Object okey) {
    Comparable<? super K> key = comparable(okey);
    for (;;) {
        // Find the node for key.
        Node<K,V> n = findNode(key);
        if (n == null)
            return null;
        Object v = n.value;
        if (v != null)
            return (V)v;
    }
}
复制代码

Explanation: doGet() locates and returns the node via findNode().

复制代码
private Node<K,V> findNode(Comparable<? super K> key) {
    for (;;) {
        // Find the predecessor node of key.
        Node<K,V> b = findPredecessor(key);
        // n is b's successor (if key exists in the skip list, n is its node).
        Node<K,V> n = b.next;
        for (;;) {
            // If n is null, the skip list contains no node for key; return null.
            if (n == null)
                return null;
            Node<K,V> f = n.next;
            // If b's successor changed between the two reads (another thread modified
            // the skip list), restart from the outer loop.
            if (n != b.next)                // inconsistent read
                break;
            Object v = n.value;
            // If n's value became null (another thread deleted n), help unlink it,
            // then restart from the outer loop.
            if (v == null) {                // n is deleted
                n.helpDelete(b, f);
                break;
            }
            if (v == n || b.value == null)  // b is deleted
                break;
            // If n's key equals key, n is the node we are looking for; return it.
            int c = key.compareTo(n.key);
            if (c == 0)
                return n;
            // If key is less than n's key, the skip list contains no node for key; return null.
            if (c < 0)
                return null;
            // Otherwise key is greater than n's key; advance b and n and keep searching.
            b = n;
            n = f;
        }
    }
}
复制代码

Explanation: findNode(key) returns the node for key in the skip list if it exists, and null otherwise.
Again, look at the main flow first: ignore the multi-threaded handling and consider how a node is located in the single-threaded case.
Step 1: locate the search position.
Use findPredecessor() to descend the index levels and find key's predecessor node (b), then take b's successor n.
Step 2: use the predecessor (b) and its successor (n) to identify the node for key.
Concretely, compare n's key with key: if they are equal, n is the node being looked for; if key is smaller, the key is not in the skip list; if key is larger, advance along the bottom level and repeat.
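The same findNear/getNear machinery also backs the relational lookups (lowerKey, floorKey, ceilingKey, higherKey and their *Entry variants), as shown in the listing above. A quick usage sketch reusing the sample key sequence from earlier (the class name and values are illustrative):

复制代码
import java.util.concurrent.ConcurrentSkipListMap;

public class NavigationDemo {
    public static void main(String[] args) {
        ConcurrentSkipListMap<Integer, String> map = new ConcurrentSkipListMap<Integer, String>();
        for (int k : new int[] {7, 14, 21, 32, 37, 71, 85})
            map.put(k, "v" + k);

        System.out.println(map.floorKey(30));    // 21  (greatest key <= 30)
        System.out.println(map.ceilingKey(30));  // 32  (least key >= 30)
        System.out.println(map.lowerKey(32));    // 21  (greatest key < 32)
        System.out.println(map.higherKey(32));   // 37  (least key > 32)
        System.out.println(map.firstEntry());    // 7=v7
        System.out.println(map.lastKey());       // 85
    }
}
复制代码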

 

ConcurrentSkipListMap example

复制代码
import java.util.*;
import java.util.concurrent.*;

/*
 *   ConcurrentSkipListMap is a thread-safe sorted map, while TreeMap is not thread-safe.
 *
 *   The demo below has multiple threads modifying and iterating over the map at the same time.
 *   (01) When map is a ConcurrentSkipListMap, the program runs normally.
 *   (02) When map is a TreeMap, the program throws ConcurrentModificationException.
 *
 * @author skywang
 */
public class ConcurrentSkipListMapDemo1 {

    // TODO: with a TreeMap here, the program fails.
    //private static Map map = new TreeMap();
    private static Map map = new ConcurrentSkipListMap();
    public static void main(String[] args) {

        // Start two threads that operate on the map concurrently.
        new MyThread("a").start();
        new MyThread("b").start();
    }

    private static void printAll() {
        String key, value;
        Iterator iter = map.entrySet().iterator();
        while(iter.hasNext()) {
            Map.Entry entry = (Map.Entry)iter.next();
            key = (String)entry.getKey();
            value = (String)entry.getValue();
            System.out.print("("+key+", "+value+"), ");
        }
        System.out.println();
    }

    private static class MyThread extends Thread {
        MyThread(String name) {
            super(name);
        }
        @Override
        public void run() {
            int i = 0;
            while (i++ < 6) {
                // key is "thread name" + "sequence number"
                String val = Thread.currentThread().getName()+i;
                map.put(val, "0");
                // Iterate over the map via an Iterator.
                printAll();
            }
        }
    }
}
复制代码

Output from one run:

 

复制代码
(a1, 0), (a1, 0), (b1, 0), (b1, 0),

(a1, 0), (b1, 0), (b2, 0), 
(a1, 0), (a1, 0), (a2, 0), (a2, 0), (b1, 0), (b1, 0), (b2, 0), (b2, 0), (b3, 0), 
(b3, 0), (a1, 0), 
(a2, 0), (a3, 0), (a1, 0), (b1, 0), (a2, 0), (b2, 0), (a3, 0), (b3, 0), (b1, 0), (b4, 0), 
(b2, 0), (a1, 0), (b3, 0), (a2, 0), (b4, 0), 
(a3, 0), (a1, 0), (a4, 0), (a2, 0), (b1, 0), (a3, 0), (b2, 0), (a4, 0), (b3, 0), (b1, 0), (b4, 0), (b2, 0), (b5, 0), 
(b3, 0), (a1, 0), (b4, 0), (a2, 0), (b5, 0), 
(a3, 0), (a1, 0), (a4, 0), (a2, 0), (a5, 0), (a3, 0), (b1, 0), (a4, 0), (b2, 0), (a5, 0), (b3, 0), (b1, 0), (b4, 0), (b2, 0), (b5, 0), (b3, 0), (b6, 0), 
(b4, 0), (a1, 0), (b5, 0), (a2, 0), (b6, 0), 
(a3, 0), (a4, 0), (a5, 0), (a6, 0), (b1, 0), (b2, 0), (b3, 0), (b4, 0), (b5, 0), (b6, 0), 
复制代码

Notes on the result:
The demo starts two threads (thread a and thread b), each operating on the same ConcurrentSkipListMap. Thread a builds a key from its thread name plus a sequence number, puts that key with the value "0" into the map, and then iterates over and prints all entries; thread b does the same, only with a different thread name. Because both threads print concurrently, their output lines interleave.
With a ConcurrentSkipListMap the program runs normally. If map is changed to a TreeMap, the program throws a ConcurrentModificationException.
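For comparison, a TreeMap can be made safe for concurrent modification by wrapping it with Collections.synchronizedSortedMap, but iteration over the wrapper must still be synchronized manually, whereas ConcurrentSkipListMap's weakly consistent iterators need no external locking. A sketch of the wrapped-TreeMap variant (class and method names are illustrative):

复制代码
import java.util.Collections;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class SynchronizedTreeMapSketch {
    private static final SortedMap<String, String> map =
            Collections.synchronizedSortedMap(new TreeMap<String, String>());

    static void putAndPrint(String key) {
        map.put(key, "0");
        // Unlike ConcurrentSkipListMap's weakly consistent iterators, iterating the
        // synchronized wrapper requires holding the wrapper's lock for the whole loop.
        synchronized (map) {
            for (Map.Entry<String, String> e : map.entrySet())
                System.out.print("(" + e.getKey() + ", " + e.getValue() + "), ");
        }
        System.out.println();
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = new Runnable() {
            public void run() {
                for (int i = 1; i <= 6; i++)
                    putAndPrint(Thread.currentThread().getName() + i);
            }
        };
        Thread a = new Thread(task, "a");
        Thread b = new Thread(task, "b");
        a.start(); b.start();
        a.join(); b.join();
    }
}
复制代码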

 


 
