JUC Package (4): Concurrent Containers and Frameworks

Preface

After the barrage of AQS and CAS internals in the previous chapters, this chapter surveys the common concurrent containers in the JUC package. For each kind of container, we pick one representative and examine it in detail.

  • Map containers (ConcurrentHashMap)
  • Queue containers (ConcurrentLinkedQueue)
  • Blocking-queue containers (LinkedBlockingQueue)

Main Text

Before going into detail, let us first list the common containers the JUC package provides.

List

  • CopyOnWriteArrayList

Set

  • CopyOnWriteArraySet
  • ConcurrentSkipListSet

Map

  • ConcurrentHashMap
  • ConcurrentSkipListMap

Queues

  • ConcurrentLinkedQueue
  • ConcurrentLinkedDeque

Blocking queues

  • ArrayBlockingQueue
  • LinkedBlockingQueue
  • PriorityBlockingQueue
  • DelayQueue
  • SynchronousQueue
  • LinkedTransferQueue
  • LinkedBlockingDeque

ConcurrentHashMap: Usage and Internals

Earlier we discussed HashMap's thread-safety problems, along with the two traditional remedies, Hashtable and Collections.synchronizedMap. Both of them lock the entire object on every access, which is very inefficient. ConcurrentHashMap was designed to handle multithreaded access without paying that cost.

  • Using ConcurrentHashMap
    Using ConcurrentHashMap is very simple and essentially identical to HashMap.
public class ConcurrentHashMapTest {
	public static void main(String[] args) {
		ConcurrentHashMap<String, String> concurrentHashMap = new ConcurrentHashMap<>();
		concurrentHashMap.put("abc", "abc");
		concurrentHashMap.get("abc");
		concurrentHashMap.size();
	}
}
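Plain put/get adds nothing over HashMap; the real benefit shows in atomic compound operations, which on a plain HashMap would need external locking. A small sketch (putIfAbsent has always been part of the class; merge requires JDK 8+; the class name here is made up):

```java
import java.util.concurrent.ConcurrentHashMap;

public class AtomicUpdateTest {
	public static void main(String[] args) {
		ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
		// Each call below is a single atomic operation; a separate
		// get-check-put sequence would have a race window.
		counts.putIfAbsent("hits", 0);
		counts.merge("hits", 1, Integer::sum); // atomic read-modify-write
		counts.merge("hits", 1, Integer::sum);
		System.out.println(counts.get("hits")); // 2
	}
}
```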
  • ConcurrentHashMap structure
    In JDK 1.7, ConcurrentHashMap is composed as ConcurrentHashMap -> Segment -> HashEntry. The benefit of this design is that each access only needs to lock a single Segment instead of the entire map.
    [Figure 1: the ConcurrentHashMap -> Segment -> HashEntry structure]
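The Segment design is an instance of lock striping. A minimal sketch of the idea, independent of the JDK source (the class and method names below are made up for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class StripedCounters {
	// One lock per stripe: writers to different stripes never contend.
	private final ReentrantLock[] locks = new ReentrantLock[16];
	private final long[] counts = new long[16];

	public StripedCounters() {
		for (int i = 0; i < locks.length; i++)
			locks[i] = new ReentrantLock();
	}

	private int stripe(Object key) {
		return (key.hashCode() & 0x7fffffff) % locks.length;
	}

	public void increment(Object key) {
		int i = stripe(key);
		locks[i].lock(); // lock only this stripe, not the whole structure
		try {
			counts[i]++;
		} finally {
			locks[i].unlock();
		}
	}

	public long total() { // like size(): aggregate across all stripes
		long sum = 0;
		for (int i = 0; i < counts.length; i++) {
			locks[i].lock();
			try {
				sum += counts[i];
			} finally {
				locks[i].unlock();
			}
		}
		return sum;
	}
}
```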

  • Initialization (ConcurrentHashMap & Segment & HashEntry)

  • Initialization (JDK 1.7.0_79)
    In later JDK 7 releases the constructor no longer eagerly initializes every Segment; it only creates segments[0], and the rest are initialized lazily via ensureSegment().

    public ConcurrentHashMap(int initialCapacity,
                             float loadFactor, int concurrencyLevel) {
        if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
            throw new IllegalArgumentException();
        if (concurrencyLevel > MAX_SEGMENTS)
            concurrencyLevel = MAX_SEGMENTS;
        // Find power-of-two sizes best matching arguments
        int sshift = 0;
        int ssize = 1;
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        this.segmentShift = 32 - sshift;
        this.segmentMask = ssize - 1;
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        int c = initialCapacity / ssize;
        if (c * ssize < initialCapacity)
            ++c;
        int cap = MIN_SEGMENT_TABLE_CAPACITY;
        while (cap < c)
            cap <<= 1;
        // create segments and segments[0]
        Segment<K,V> s0 =
            new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                             (HashEntry<K,V>[])new HashEntry[cap]);
        // Create the segments array and initialize only segments[0];
        // the remaining Segments are initialized lazily (see ensureSegment).
        Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
        UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
        this.segments = ss;
    }
// Ensure the Segment at index k exists, creating it with CAS if necessary
    private Segment<K,V> ensureSegment(int k) {
        final Segment<K,V>[] ss = this.segments;
        long u = (k << SSHIFT) + SBASE; // raw offset
        Segment<K,V> seg;
        if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
            Segment<K,V> proto = ss[0]; // use segment 0 as prototype
            int cap = proto.table.length;
            float lf = proto.loadFactor;
            int threshold = (int)(cap * lf);
            HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
            if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                == null) { // recheck
                Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);
                while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                       == null) {
                    if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
                        break;
                }
            }
        }
        return seg;
    }
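The recheck-then-CAS pattern in ensureSegment can be reproduced without Unsafe, using AtomicReferenceArray for the volatile reads and the CAS (the class and method names below are hypothetical):

```java
import java.util.concurrent.atomic.AtomicReferenceArray;
import java.util.function.Supplier;

public class LazySlots<T> {
	private final AtomicReferenceArray<T> slots;

	public LazySlots(int size) {
		slots = new AtomicReferenceArray<>(size);
	}

	// Mirrors ensureSegment: volatile read, build outside the CAS,
	// then loop until someone (possibly us) publishes a value.
	public T ensure(int k, Supplier<T> factory) {
		T v = slots.get(k);              // like UNSAFE.getObjectVolatile
		if (v == null) {
			T created = factory.get();   // speculative creation, may be discarded
			while ((v = slots.get(k)) == null) {
				if (slots.compareAndSet(k, null, created)) { // like compareAndSwapObject
					v = created;
					break;
				}
			}
		}
		return v; // all racing threads end up with the same instance
	}
}
```

Losing the race just means the speculatively created object is discarded; no thread ever blocks.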
  • get (lookup)
public V get(Object key) {
        Segment<K,V> s; // manually integrate access methods to reduce overhead
        HashEntry<K,V>[] tab;
        int h = hash(key);
        long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
        if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
            (tab = s.table) != null) {
            for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                     (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
                 e != null; e = e.next) {
                K k;
                if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                    return e.value;
            }
        }
        return null;
    }
  • put (insertion)
# ConcurrentHashMap.put
public V put(K key, V value) {
        Segment<K,V> s;
        if (value == null)
            throw new NullPointerException();
        // compute the hash of the key
        int hash = hash(key);
        // select the segment index from the high bits of the hash
        int j = (hash >>> segmentShift) & segmentMask;
        if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
             (segments, (j << SSHIFT) + SBASE)) == null) //  in ensureSegment
            // make sure the target Segment exists
            s = ensureSegment(j);
        return s.put(key, hash, value, false);
    }
# Segment.put
        final V put(K key, int hash, V value, boolean onlyIfAbsent) {
            // acquire the exclusive Segment lock
            HashEntry<K,V> node = tryLock() ? null :
                scanAndLockForPut(key, hash, value);
            V oldValue;
            try {
                HashEntry<K,V>[] tab = table;
                int index = (tab.length - 1) & hash;
                HashEntry<K,V> first = entryAt(tab, index);
                for (HashEntry<K,V> e = first;;) {
                    if (e != null) {
                        K k;
                        if ((k = e.key) == key ||
                            (e.hash == hash && key.equals(k))) {
                            oldValue = e.value;
                            if (!onlyIfAbsent) {
                                e.value = value;
                                ++modCount;
                            }
                            break;
                        }
                        e = e.next;
                    }
                    else {
                        if (node != null)
                            node.setNext(first);
                        else
                            node = new HashEntry<K,V>(hash, key, value, first);
                        int c = count + 1;
                        if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                            rehash(node);
                        else
                            setEntryAt(tab, index, node);
                        ++modCount;
                        count = c;
                        oldValue = null;
                        break;
                    }
                }
            } finally {
                // release the lock
                unlock();
            }
            return oldValue;
        }

This is why ConcurrentHashMap is commonly described as both thread-safe and efficient: it never locks the whole map, only the single Segment being written to.
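The segment-selection math used by the constructor and put() can be checked in isolation. For the default concurrencyLevel of 16, the top four bits of the hash pick the Segment:

```java
public class SegmentIndexDemo {
	public static void main(String[] args) {
		int concurrencyLevel = 16;
		// Same loop as the constructor: smallest power of two >= concurrencyLevel.
		int sshift = 0, ssize = 1;
		while (ssize < concurrencyLevel) {
			++sshift;
			ssize <<= 1;
		}
		int segmentShift = 32 - sshift; // 28
		int segmentMask = ssize - 1;    // 15
		// put() selects the segment from the TOP bits of the hash:
		int hash = 0xABCD1234;
		int j = (hash >>> segmentShift) & segmentMask;
		System.out.println(segmentShift + " " + segmentMask + " " + j); // 28 15 10
	}
}
```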

  • Hash method
    The hash is re-spread before use: the high bits select the Segment, and the low bits select the bucket inside that Segment's table.

  • Size & Count
    Each Segment's count field is volatile. size() first tries to compute the total by summing every Segment's count without locking, retrying while the modCounts keep changing; only if that fails to stabilize does it lock all Segments.


ConcurrentLinkedQueue: Usage and Internals

  • Initialization
# No-arg constructor
public ConcurrentLinkedQueue() {
        head = tail = new Node<E>(null);
    }
# Constructor from an existing Collection

 public ConcurrentLinkedQueue(Collection<? extends E> c) {
        Node<E> h = null, t = null;
        for (E e : c) {
            checkNotNull(e);
            Node<E> newNode = new Node<E>(e);
            if (h == null)
                h = t = newNode;
            else {
                t.lazySetNext(newNode);
                t = newNode;
            }
        }
        if (h == null)
            h = t = new Node<E>(null);
        head = h;
        tail = t;
    }
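Basic usage first, before stepping through offer() and poll():

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class ConcurrentLinkedQueueTest {
	public static void main(String[] args) {
		ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
		queue.offer("a"); // lock-free CAS insertion at the tail
		queue.offer("b");
		System.out.println(queue.poll()); // "a" - FIFO; returns null when empty
		System.out.println(queue.size()); // 1 - note size() traverses the list, O(n)
	}
}
```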
  • Insertion (offer)
    The code looks complicated, but the underlying idea is quite simple.
    public boolean offer(E e) {
        checkNotNull(e);
        final Node<E> newNode = new Node<E>(e);

        for (Node<E> t = tail, p = t;;) {
            Node<E> q = p.next;
            // with no interference from other threads, q is null here
            if (q == null) {
                // p is last node
                if (p.casNext(null, newNode)) {
                    // fast, efficient CAS-based insertion
                    if (p != t) // hop two nodes at a time
                        // update the tail node (another thread may also do this)
                        casTail(t, newNode);  // Failure is OK.
                    return true;
                }
                // Lost CAS race to another thread; re-read next
            }
            else if (p == q)
                p = (t != (t = tail)) ? t : head;
            else
                // Check for tail updates after two hops.
                p = (p != t && t != (t = tail)) ? t : q;
        }
    }

The loop breaks down into three cases:
Case 1: on the first pass, q == null, so we attempt casNext(). On success we return, but the tail node is not necessarily updated; it is only swung every second insertion. On failure we fall through to case 2.
Case 2: the CAS failed, meaning another thread appended a node first. Now q != null, so we run p = (p != t && t != (t = tail)) ? t : q;. This dense expression simply re-reads tail if it has moved and otherwise steps p forward to q; conceptually it is close to t = tail; p = t;, written this way to tolerate concurrent interference. We then retry case 1 from the new position.
Case 3: p == q. This happens when p has been removed from the queue: poll() self-links dequeued nodes (their next pointer is set to point at themselves), so p == q means p is off the list and we must restart from the current tail, or from head if tail has not moved.

# Simplified view: with no contention, the whole loop collapses to two steps
public boolean offer(E e) {
        checkNotNull(e);
        final Node<E> newNode = new Node<E>(e);
        Node<E> t = tail;
        t.casNext(null, newNode);  // 1. link the new node after the old tail
        casTail(t, newNode);       // 2. swing tail to the new node
        return true;
    }
  • Removal: poll()
public E poll() {
        restartFromHead:
        for (;;) {
            for (Node<E> h = head, p = h, q;;) {
                // p and h start at the head node, the candidate for dequeue
                E item = p.item;
                // read p's element; if non-null, CAS it to null to claim the node
                if (item != null && p.casItem(item, null)) {
                    // Successful CAS is the linearization point
                    // for item to be removed from this queue.
                    if (p != h) // hop two nodes at a time
                        // head is only swung after two hops, i.e. after
                        // a first attempt already stepped past h
                        updateHead(h, ((q = p.next) != null) ? q : p);
                    return item;
                }
                else if ((q = p.next) == null) {
                    // p's successor is also null: the queue is empty
                    updateHead(h, p);
                    return null;
                }
                else if (p == q)
                    continue restartFromHead;
                else
                    p = q; // q was already assigned p.next in the check above
            }
        }
    }

The loop again breaks down into three cases:
Case 1: on the first pass, item != null && p.casItem(item, null) claims the element with a single CAS. Head is deliberately not updated on every poll: like tail in offer(), it is only swung every two hops, which halves the number of CAS operations on head.
Case 2: the CAS failed (or item was already null), meaning another thread dequeued first. We enter else if ((q = p.next) == null): q is assigned p.next, and if it is null the queue has been drained, so we park head at p and return null. Otherwise p = q (i.e. p = p.next) steps forward and we retry from case 1.
Case 3: p == q. As in offer(), updateHead self-links the old head (its next pointer ends up pointing at itself), so p == q means p has left the queue and we must restart the scan from the current head.
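The two CAS loops above can be condensed into a minimal Michael-Scott style queue built on AtomicReference. This sketch drops the two-hop optimization and the self-linking (so the p == q case never arises); it is a teaching aid, not the JDK implementation, and all names are made up:

```java
import java.util.concurrent.atomic.AtomicReference;

public class MiniQueue<E> {
	private static final class Node<E> {
		final E item;
		final AtomicReference<Node<E>> next = new AtomicReference<>();
		Node(E item) { this.item = item; }
	}

	private final AtomicReference<Node<E>> head;
	private final AtomicReference<Node<E>> tail;

	public MiniQueue() {
		Node<E> dummy = new Node<>(null); // sentinel, like new Node(null) in the JDK
		head = new AtomicReference<>(dummy);
		tail = new AtomicReference<>(dummy);
	}

	public void offer(E e) {
		Node<E> newNode = new Node<>(e);
		for (;;) {
			Node<E> t = tail.get();
			Node<E> q = t.next.get();
			if (q == null) {
				if (t.next.compareAndSet(null, newNode)) { // step 1: link the node
					tail.compareAndSet(t, newNode);        // step 2: swing tail; failure is OK
					return;
				}
				// lost the race; loop and re-read
			} else {
				tail.compareAndSet(t, q); // help a stalled offer finish step 2
			}
		}
	}

	public E poll() {
		for (;;) {
			Node<E> h = head.get();
			Node<E> first = h.next.get();
			if (first == null)
				return null;                  // empty
			if (head.compareAndSet(h, first)) // the claimed node becomes the new sentinel
				return first.item;
		}
	}
}
```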


Blocking Queues: Usage and Internals

Of the blocking queues, LinkedBlockingQueue is the one we use most. A blocking queue blocks the consuming thread when the queue is empty and blocks the producing thread when the queue is full. This behavior is easy to reproduce with a Lock and two Condition objects.


import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// A toy array-backed blocking queue. Note the two signal() calls: without
// them, threads blocked in await() would never wake up.
public class SimpleBlockingQueue {
	private final int[] array;
	private volatile int count;
	private final Lock lock;
	private final Condition notFull;
	private final Condition notEmpty;

	public SimpleBlockingQueue() {
		lock = new ReentrantLock();
		notFull = lock.newCondition();
		notEmpty = lock.newCondition();
		count = 0;
		array = new int[10];
	}

	public void put(int a) throws InterruptedException {
		lock.lockInterruptibly();
		try {
			while (count == array.length) {
				notFull.await();      // block while the queue is full
			}
			array[count++] = a;
			notEmpty.signal();        // wake a waiting consumer
		} finally {
			lock.unlock();
		}
	}

	public int get() throws InterruptedException {
		lock.lockInterruptibly();
		try {
			while (count == 0) {
				notEmpty.await();     // block while the queue is empty
			}
			int value = array[--count];
			notFull.signal();         // wake a waiting producer
			return value;
		} finally {
			lock.unlock();
		}
	}
}

Note: the difference from an ordinary queue lies entirely in the two while loops guarding the conditions. As the code shows, the notFull and notEmpty Condition objects handle the two blocking cases respectively: queue full on put, queue empty on get. (Using while rather than if also guards against spurious wakeups.)

PS: as explained in the AQS chapter, await() ultimately parks the thread via LockSupport.park().
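The real java.util.concurrent.LinkedBlockingQueue behaves the same way; a small producer/consumer example showing put() and take() blocking on a bounded queue:

```java
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingQueueDemo {
	public static void main(String[] args) throws InterruptedException {
		// Capacity 2: put() blocks when full, take() blocks when empty.
		LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(2);

		Thread producer = new Thread(() -> {
			try {
				for (int i = 0; i < 5; i++)
					queue.put(i); // blocks whenever the consumer falls behind
			} catch (InterruptedException e) {
				Thread.currentThread().interrupt();
			}
		});
		producer.start();

		int sum = 0;
		for (int i = 0; i < 5; i++)
			sum += queue.take(); // blocks until the producer supplies an element
		producer.join();
		System.out.println(sum); // 0+1+2+3+4 = 10
	}
}
```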



