Multithreading: the producer-consumer pattern with a blocking queue

Queue-related references:
http://chenjumin.iteye.com/blog/2182322
http://blog.csdn.net/luohuacanyue/article/details/16359777
Queue  
------------  
1. ArrayDeque (array-based double-ended queue)  
2. PriorityQueue (priority queue)  
3. ConcurrentLinkedQueue (linked-list-based concurrent queue)  
4. DelayQueue (delayed blocking queue; the blocking queues implement the BlockingQueue interface)  
5. ArrayBlockingQueue (array-based bounded blocking queue)  
6. LinkedBlockingQueue (linked-list-based FIFO blocking queue)  
7. LinkedBlockingDeque (linked-list-based double-ended blocking queue)  
8. PriorityBlockingQueue (unbounded blocking queue with priority ordering)  
9. SynchronousQueue (synchronous hand-off blocking queue with no internal capacity)  
-----------------------------------------------------  
ArrayBlockingQueue  
A bounded blocking queue backed by an array. This queue orders elements FIFO (first in, first out): the head of the queue is the element that has been in the queue the longest, and the tail is the element that has been in the queue the shortest time. New elements are inserted at the tail of the queue, and retrieval operations obtain elements from the head.  
This is a classic "bounded buffer", in which a fixed-size array holds the elements inserted by producers and extracted by consumers. Once such a buffer has been created, its capacity cannot be increased. Attempting to put an element into a full queue blocks the operation; attempting to take an element from an empty queue blocks in the same way.  
This class supports an optional fairness policy for ordering waiting producer and consumer threads. By default this ordering is not guaranteed; however, a queue constructed with fairness set to true grants threads access in FIFO order. Fairness generally lowers throughput but reduces variability and avoids starvation.
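As a minimal illustrative sketch (the class name and element values are invented for this example), a fair, fixed-capacity ArrayBlockingQueue can be used as a bounded buffer like this:

import java.util.concurrent.ArrayBlockingQueue;

public class BoundedBufferSketch {
    public static void main(String[] args) throws InterruptedException {
        // capacity 3, fairness = true: waiting producers and consumers are served in FIFO order
        ArrayBlockingQueue<String> buffer = new ArrayBlockingQueue<>(3, true);

        buffer.put("a");
        buffer.put("b");
        buffer.put("c");
        // buffer.put("d") would now block until a consumer takes an element

        System.out.println(buffer.take()); // prints "a": FIFO, the oldest element comes out first
    }
}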

===========

The BlockingQueue interface defines a blocking FIFO queue. Every BlockingQueue has a capacity: when the queue is full, adding an element blocks; when the queue is empty, taking an element blocks.

 

ArrayBlockingQueue is a bounded blocking queue backed by an array. Both read and write operations lock the whole container, so its throughput is similar to that of a straightforward synchronized implementation. It is well suited to the producer-consumer pattern.

 

LinkedBlockingQueue is a blocking queue based on a linked list. Like ArrayBlockingQueue, it maintains an internal data buffer (here built from a linked list). When a producer puts an element into the queue, the queue takes the data from the producer, caches it internally, and the producer returns immediately; only when the buffer reaches its maximum capacity (which can be set via the LinkedBlockingQueue constructor) is the producer blocked, until a consumer removes an element from the queue and the producer thread is woken up again. The consumer side works on the same principle in reverse. LinkedBlockingQueue can also handle concurrent access efficiently because it uses separate locks for the producer side and the consumer side, which means that under high concurrency producers and consumers can operate on the queue in parallel, improving the overall throughput of the queue.
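A rough sketch of this behavior (class name and values invented for the example): a producer thread and the consuming main thread operate on the same capacity-bounded LinkedBlockingQueue concurrently, and the producer only blocks once the buffer is full:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class LinkedQueueSketch {
    public static void main(String[] args) throws InterruptedException {
        // bounded to 100 elements via the constructor argument
        final BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(100);

        new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 0; i < 1000; i++) {
                        queue.put(i); // blocks only when 100 elements are already buffered
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();

        // the main thread consumes in parallel; take() blocks while the queue is empty
        for (int i = 0; i < 1000; i++) {
            queue.take();
        }
        System.out.println("all elements consumed");
    }
}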

 

Differences between ArrayBlockingQueue and LinkedBlockingQueue:

1. Lock implementation

    ArrayBlockingQueue uses a single, non-separated lock: production and consumption share the same lock.

    LinkedBlockingQueue uses separate locks: production uses putLock and consumption uses takeLock.

2. Behavior when producing or consuming

    ArrayBlockingQueue inserts or removes the element object directly.

    LinkedBlockingQueue wraps each element in a Node<E> before inserting or removing it, which adds some overhead.

3. Queue size initialization (see the sketch after this list)

    ArrayBlockingQueue requires the queue capacity to be specified.

    LinkedBlockingQueue does not require a capacity; the default is Integer.MAX_VALUE.
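A minimal sketch of the size-initialization difference in point 3 (class and variable names are invented for the example):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ConstructionSketch {
    public static void main(String[] args) {
        // ArrayBlockingQueue: the capacity is mandatory
        BlockingQueue<String> bounded = new ArrayBlockingQueue<>(1024);

        // LinkedBlockingQueue: capacity is optional; without it the bound is Integer.MAX_VALUE
        BlockingQueue<String> effectivelyUnbounded = new LinkedBlockingQueue<>();
        BlockingQueue<String> boundedLinked = new LinkedBlockingQueue<>(1024);
    }
}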

   Key points when using BlockingQueue: 
    1. Commonly used BlockingQueue methods (see the sketch after this list): 
        1) add(anObject): adds anObject to the BlockingQueue; returns true if the queue can accept it, otherwise throws an exception. 
        2) offer(anObject): adds anObject to the BlockingQueue if possible; returns true if the queue can accept it, otherwise returns false. 
        3) put(anObject): adds anObject to the BlockingQueue; if the queue has no space, the calling thread blocks until space becomes available. 
        4) poll(time): takes the object at the head of the BlockingQueue; if none can be taken immediately, waits up to the given time and returns null if nothing becomes available. 
        5) take(): takes the object at the head of the BlockingQueue; if the queue is empty, blocks until a new object is added. 
    2. Four commonly used concrete implementations of BlockingQueue; choose one according to your needs: 
        1) ArrayBlockingQueue: a BlockingQueue of fixed size; its constructor must take an int capacity. Its elements are ordered FIFO (first in, first out). 
        2) LinkedBlockingQueue: a BlockingQueue of variable size; if its constructor is given a capacity, the resulting BlockingQueue is bounded, otherwise its size defaults to Integer.MAX_VALUE. Its elements are ordered FIFO. 
        3) PriorityBlockingQueue: similar to LinkedBlockingQueue, but its elements are ordered not FIFO but by their natural ordering or by the Comparator passed to the constructor. 
        4) SynchronousQueue: a special BlockingQueue in which puts and takes must alternate: each insert waits for a corresponding remove. 
    3. Compared with ArrayBlockingQueue, LinkedBlockingQueue uses a different underlying data structure, which gives it higher throughput, but when the number of threads is very large its performance is less predictable than ArrayBlockingQueue's.    
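A minimal, illustrative sketch of the methods in point 1, using a queue of capacity 1 (the class name is made up for this example):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class MethodSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);

        System.out.println(queue.add("a"));   // true: there is room
        System.out.println(queue.offer("b")); // false: the queue is full, no exception
        // queue.add("b") would throw IllegalStateException because the queue is full
        // queue.put("b") would block until a consumer removes an element

        System.out.println(queue.take());                    // "a": removes the head, blocking if empty
        System.out.println(queue.poll(1, TimeUnit.SECONDS)); // null after waiting up to 1 second on the now-empty queue
    }
}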

A demo for reference:

package cn.com.gome.digger.index.service;

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import cn.com.gome.digger.index.core.service.ShopIndexReaderWrapper;
import cn.com.gome.digger.index.core.service.ShopIndexWriterWrapper;
import cn.com.gome.digger.solrbean.ShopBean;
import cn.com.gome.digger.solrserver.SolrServerWrapper;

@Service
public class ShopIndexService extends BaseFullIndexService{
    private static final Logger logger = LoggerFactory.getLogger(ShopIndexService.class);

    @Autowired
    private SolrServerWrapper solrServerWrapper;
    @Autowired
    private ShopIndexReaderWrapper shopIndexReaderWrapper;
    @Autowired
    private ShopIndexWriterWrapper shopIndexWriterWrapper;


    /**
     * Rebuild the full index, overwriting all existing index data
     * @createTime: 2016-05-25
     * @author: gaojiabao
     */
    public synchronized void fullIndex(){
        int readerThreadNum=3;
        int writerThreadNum=1;

        clearSolrIndex();

        ArrayBlockingQueue<ShopBean> queue=new ArrayBlockingQueue<>(2000);
        runReaders(queue, readerThreadNum);
        runWriters(queue, writerThreadNum);

        logger.info("*******************shop index: full indexing complete!***********************");
    }

    /**
     * Clear all index data
     * @createTime: 2016-05-25
     * @author: gaojiabao
     */
    private void clearSolrIndex() {
        CloudSolrClient solrClient = solrServerWrapper.getByCollection(getCollectionName());
        try {
            solrClient.deleteByQuery("*:*");
            logger.info("{} :all docs have been cleared", getCollectionName());
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        } 
    }

    /**
     * The main thread must not block while the readers read, so run readerWrapper in a separate thread
     * @createTime: 2016-05-25
     * @author: gaojiabao
     */
    private void runReaders(final BlockingQueue<ShopBean> queue,final int readerThreadNum){
        Runnable runnable=new Runnable() {
            public void run() {
                shopIndexReaderWrapper.startThreads(queue, readerThreadNum);
            }
        };
        new Thread(runnable).start();
    }

    /**
     * The main thread is allowed to block while the writers write, so run writerWrapper directly in the main thread
     * @createTime: 2016-05-25
     * @author: gaojiabao
     */
    private void runWriters(final BlockingQueue<ShopBean> queue,final int writerThreadNum){
        shopIndexWriterWrapper.startThreads(queue, writerThreadNum);
    }

    @Override
    String getCollectionName() {
        return "meixin_shop";
    }

}
package cn.com.gome.digger.index.core.service;

import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import cn.com.gome.digger.index.core.provider.ARangeDataProvider;
import cn.com.gome.digger.index.core.provider.IDataProvider;
import cn.com.gome.digger.solrbean.ShopBean;

@Component
public class ShopIndexReaderWrapper {
    private static final Logger logger = LoggerFactory.getLogger(ShopIndexReaderWrapper.class);
    //poison pill marking the end of the queue
    public static final ShopBean POISON=new ShopBean();

    @Autowired
    private ARangeDataProvider<ShopBean> dataProvider;

    /**
     * Blocking method; will not return until all data has been read and put into the queue
     * @createTime: 2016-05-25
     * @author: gaojiabao
     */
    public synchronized void startThreads(BlockingQueue<ShopBean> queue,int threadNum){
        try {
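            // One reader task per thread; the CountDownLatch is released once every reader
            // has drained the data provider, after which the executor is shut down and a
            // single poison pill is appended so the writers know no more data is coming.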
            ThreadPoolExecutor executor=new ThreadPoolExecutor(threadNum, threadNum,1, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(threadNum));
            CountDownLatch latch=new CountDownLatch(threadNum);

            //reset the internal cursor of the dataProvider
            dataProvider.resetCursor();
            for (int i = 0; i < threadNum; i++) {
                ShopIndexReader reader=new ShopIndexReader(queue, dataProvider,latch);
                executor.execute(reader);
            }

            latch.await();
            executor.shutdown();
            executor.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
            //put the poison pill to signal that all data has been queued
            queue.add(POISON);
            logger.info("shopIndexReader all data put in queue complete!");
        } catch (Exception e) {
            logger.error("shopIndexReader start error", e);
        }
    }

    private class ShopIndexReader implements Runnable{
        private BlockingQueue<ShopBean> queue=null;
        private IDataProvider<ShopBean> dataProvider=null;
        private CountDownLatch latch=null;

        public ShopIndexReader(BlockingQueue<ShopBean> queue,IDataProvider<ShopBean> dataProvider,CountDownLatch latch) {
            this.queue=queue;
            this.dataProvider=dataProvider;
            this.latch=latch;
        }
        @Override
        public void run() {
            while (true) {
                List<ShopBean> list=dataProvider.getBatchData();
                if (list!=null) {
                    putAll(queue, list);
                }else {
                    latch.countDown();
                    return;
                }
            }
        }

        private void putAll(BlockingQueue queue,List<ShopBean> list){
            for (ShopBean shopBean : list) {
                try {
                    queue.put(shopBean);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }
}


package cn.com.gome.digger.index.core.service;

import java.io.IOException;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import cn.com.gome.digger.solrbean.ShopBean;
import cn.com.gome.digger.solrserver.SolrServerWrapper;

@Component
public class ShopIndexWriterWrapper {
    private static final Logger logger = LoggerFactory.getLogger(ShopIndexWriterWrapper.class);

    @Autowired
    private SolrServerWrapper solrServerWrapper;

    /**
     * Blocking method; will not return until the POISON marker is received from the queue
     * @createTime: 2016-05-25
     * @author: gaojiabao
     */
    public synchronized void startThreads(BlockingQueue<ShopBean> queue,int threadNum){
        ExecutorService executor=Executors.newFixedThreadPool(threadNum);
        CountDownLatch latch=new CountDownLatch(threadNum);
        for (int i = 0; i < threadNum; i++) {
            ShopIndexWriter writer=new ShopIndexWriter(queue, 10,latch);
            executor.submit(writer);
        }
        try {
            latch.await();
            executor.shutdown();
            executor.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
            logger.info("shopIndexWriter: all data has been written to solr!");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private class ShopIndexWriter implements Runnable{
        private final BlockingQueue<ShopBean> queue;
        private final Integer submitThreshold;
        private final CountDownLatch latch;
        private final CloudSolrClient client;

        public ShopIndexWriter(BlockingQueue<ShopBean> queue, Integer submitThreshold,CountDownLatch latch) {
            this.queue = queue;
            this.submitThreshold = submitThreshold;
            this.latch=latch;
            this.client=solrServerWrapper.getByCollection("meixin_shop");
        }

        public void run() {
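            // Batch beans taken from the queue and flush them to Solr once the batch
            // reaches submitThreshold; the POISON marker means the readers have finished,
            // so the remaining batch is flushed and this thread stops.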
            List<ShopBean> holder=new LinkedList<>();
            while (!Thread.currentThread().isInterrupted()) {
                if (holder.size()<submitThreshold) {
                    try {
                        ShopBean queueElement=queue.take();
                        if (queueElement!=ShopIndexReaderWrapper.POISON) {
                            holder.add(queueElement);
                        }else{
                            stopThread(holder);
                        }

                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }else {
                    submit2Solr(holder);
                    holder.clear();
                }
            }
        }

        private void stopThread(List<ShopBean> holder){
            submit2Solr(holder);
            latch.countDown();
            //put the poison pill back into the queue so the other writer threads can also see it
            try {
                queue.put(ShopIndexReaderWrapper.POISON);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            Thread.currentThread().interrupt();
        }

        private void submit2Solr(List<ShopBean> holder){
            try {
                if (holder.size()>0) {
                    client.addBeans(holder);
                }
            } catch (SolrServerException | IOException e) {
                logger.error("solr client add beans error, data:{}", holder, e);
            }
        }

    }

}


package cn.com.gome.digger.index.core.provider;

import java.util.LinkedList;
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.stereotype.Component;

@Component
public abstract class ARangeDataProvider<T> implements IDataProvider<T>, InitializingBean {
    private static final Logger logger = LoggerFactory.getLogger(ARangeDataProvider.class);

    private long begin=0L;
    private Long max=null;
    private Integer rowsPerBatch=null;
    @Override
    public void afterPropertiesSet() throws Exception {
        this.max=getMax();
        this.rowsPerBatch=getRowsPerBatch();
    }

    @Override
    public List<T> getBatchData() {
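        // Atomically claim the next [begin, begin + rowsPerBatch - 1] range so that
        // multiple reader threads never fetch the same rows; null signals that the
        // cursor has moved past max and there is no more data.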
        Long currentBegin=null;
        synchronized (this) {
            if (begin>max) {
                return null;
            }
            currentBegin=begin;
            begin+=(rowsPerBatch);
        }

        List<T> result=getDataByRange(currentBegin, currentBegin+rowsPerBatch-1);
        return result==null?new LinkedList<T>():result;
    }

    public synchronized void resetCursor(){
        begin=0L;
    }

    protected abstract List<T> getDataByRange(long begin,long end);
    protected abstract long getMax();
    protected abstract int getRowsPerBatch();


}

Finally, here is a tool for inspecting threads:

It can be found in the jdk/bin directory.