Using a dynamic thread pool to maximize server resource utilization

Why a dynamic thread pool? Two reasons. First, I got burned: after optimizing a service I underestimated production traffic, and as soon as the first machine went live it started throwing interface-timeout errors. The cause was a core pool size set too small combined with an oversized task queue, so too many requests sat blocked in the queue; the only fix was to change the parameters and redeploy. Second, I want to push server resource utilization as high as possible. Meituan already runs dynamic thread pools internally, but their framework is not open source, so I implemented my own.
Meituan's article explains in detail why this approach works: https://tech.meituan.com/2020/04/02/java-pooling-pratice-in-meituan.html

If your company already runs a distributed configuration-center middleware, you can use this code directly. If not, look at the open-source frameworks Cubic or Hippo on Gitee or GitHub and pick whichever suits you better. Enough talk; here is the code.



1. Custom dynamic thread pool

import java.util.Map;
import java.util.concurrent.*;

/**
 * @Description Dynamic thread pool
 * @Date 2021/12/4 4:26 PM
 * @Author chenzhibin
 */
public class DynamicThreadPoolExecutor extends ThreadPoolExecutor {
    /**
     * Thread pool name
     */
    private String threadPoolName;
    /**
     * Default task name
     */
    private String defaultTaskName = "defaultTask";

    /**
     * Rejection policy: by default, throw RejectedExecutionException
     */
    private static final RejectedExecutionHandler defaultHandler = new AbortPolicy();

    private final Map<String, String> runnableNameMap = new ConcurrentHashMap<>();

    public DynamicThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
    }

    public DynamicThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit,
                                     BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, defaultHandler);
    }

    public DynamicThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit,
                                     BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler, String threadPoolName) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, handler);
        this.threadPoolName = threadPoolName;
    }

    @Override
    public void execute(Runnable command) {
        runnableNameMap.putIfAbsent(command.getClass().getSimpleName(), defaultTaskName);
        super.execute(command);
    }

    public void execute(Runnable command, String taskName) {
        runnableNameMap.putIfAbsent(command.getClass().getSimpleName(), taskName);
        super.execute(command);
    }

    public Future<?> submit(Runnable task, String taskName) {
        runnableNameMap.putIfAbsent(task.getClass().getSimpleName(), taskName);
        return super.submit(task);
    }

    public <T> Future<T> submit(Callable<T> task, String taskName) {
        runnableNameMap.putIfAbsent(task.getClass().getSimpleName(), taskName);
        return super.submit(task);
    }

    public <T> Future<T> submit(Runnable task, T result, String taskName) {
        runnableNameMap.putIfAbsent(task.getClass().getSimpleName(), taskName);
        return super.submit(task, result);
    }

    @Override
    public Future<?> submit(Runnable task) {
        runnableNameMap.putIfAbsent(task.getClass().getSimpleName(), defaultTaskName);
        return super.submit(task);
    }

    @Override
    public <T> Future<T> submit(Callable<T> task) {
        runnableNameMap.putIfAbsent(task.getClass().getSimpleName(), defaultTaskName);
        return super.submit(task);
    }

    @Override
    public <T> Future<T> submit(Runnable task, T result) {
        runnableNameMap.putIfAbsent(task.getClass().getSimpleName(), defaultTaskName);
        return super.submit(task, result);
    }

    public String getThreadPoolName() {
        return threadPoolName;
    }

    public void setThreadPoolName(String threadPoolName) {
        this.threadPoolName = threadPoolName;
    }
}
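Before wiring in a configuration center, it helps to see that the JDK's own `ThreadPoolExecutor` already allows its core parameters to be changed on a live pool; the class above only adds naming and task bookkeeping on top. A minimal, self-contained sketch using only the JDK (no classes from this article):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RuntimeTuningDemo {
    public static void main(String[] args) {
        // Plain JDK pool: 2 core threads, 4 max, bounded queue of 100
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));

        // These setters are part of the JDK API and take effect immediately;
        // a config-center listener would call them when a parameter push arrives.
        pool.setMaximumPoolSize(16);   // raise the ceiling first when growing
        pool.setCorePoolSize(8);
        pool.setKeepAliveTime(30L, TimeUnit.SECONDS);

        System.out.println(pool.getCorePoolSize());    // 8
        System.out.println(pool.getMaximumPoolSize()); // 16
        pool.shutdown();
    }
}
```

What the JDK does not offer is a setter for the work queue's capacity, which is exactly what the next section works around.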

2. A task queue with resizable capacity. Essentially this is a copy of LinkedBlockingQueue whose capacity field is no longer final, plus a setter for it.

import java.util.AbstractQueue;
import java.util.Collection;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Consumer;

/**
 * @Description A ResizableCapacityLinkedBlockIngQueue whose capacity can be changed at runtime
 * @Date 2021/12/4 4:32 PM
 * @Author chenzhibin
 */
public class ResizableCapacityLinkedBlockIngQueue<E> extends AbstractQueue<E>
        implements BlockingQueue<E>, java.io.Serializable {
    private static final long serialVersionUID = -6903933977591709194L;

    /*
     * A variant of the "two lock queue" algorithm.  The putLock gates
     * entry to put (and offer), and has an associated condition for
     * waiting puts.  Similarly for the takeLock.  The "count" field
     * that they both rely on is maintained as an atomic to avoid
     * needing to get both locks in most cases. Also, to minimize need
     * for puts to get takeLock and vice-versa, cascading notifies are
     * used. When a put notices that it has enabled at least one take,
     * it signals taker. That taker in turn signals others if more
     * items have been entered since the signal. And symmetrically for
     * takes signalling puts. Operations such as remove(Object) and
     * iterators acquire both locks.
     *
     * Visibility between writers and readers is provided as follows:
     *
     * Whenever an element is enqueued, the putLock is acquired and
     * count updated.  A subsequent reader guarantees visibility to the
     * enqueued Node by either acquiring the putLock (via fullyLock)
     * or by acquiring the takeLock, and then reading n = count.get();
     * this gives visibility to the first n items.
     *
     * To implement weakly consistent iterators, it appears we need to
     * keep all Nodes GC-reachable from a predecessor dequeued Node.
     * That would cause two problems:
     * - allow a rogue Iterator to cause unbounded memory retention
     * - cause cross-generational linking of old Nodes to new Nodes if
     *   a Node was tenured while live, which generational GCs have a
     *   hard time dealing with, causing repeated major collections.
     * However, only non-deleted Nodes need to be reachable from
     * dequeued Nodes, and reachability does not necessarily have to
     * be of the kind understood by the GC.  We use the trick of
     * linking a Node that has just been dequeued to itself.  Such a
     * self-link implicitly means to advance to head.next.
     */

    /**
     * Linked list node class
     */
    static class Node<E> {
        E item;

        /**
         * One of:
         * - the real successor Node
         * - this Node, meaning the successor is head.next
         * - null, meaning there is no successor (this is the last node)
         */
        Node<E> next;

        Node(E x) {
            item = x;
        }
    }

    /**
     * The capacity bound, or Integer.MAX_VALUE if none.
     * Unlike in LinkedBlockingQueue this field is deliberately NOT final,
     * so it can be changed at runtime; volatile so a resized value is
     * visible to threads that read it outside the locks.
     */
    private volatile int capacity;

    /**
     * Current number of elements
     */
    private final AtomicInteger count = new AtomicInteger();

    /**
     * Head of linked list.
     * Invariant: head.item == null
     */
    transient Node<E> head;

    /**
     * Tail of linked list.
     * Invariant: last.next == null
     */
    private transient Node<E> last;

    /**
     * Lock held by take, poll, etc
     */
    private final ReentrantLock takeLock = new ReentrantLock();

    /**
     * Wait queue for waiting takes
     */
    private final Condition notEmpty = takeLock.newCondition();

    /**
     * Lock held by put, offer, etc
     */
    private final ReentrantLock putLock = new ReentrantLock();

    /**
     * Wait queue for waiting puts
     */
    private final Condition notFull = putLock.newCondition();

    /**
     * Signals a waiting take. Called only from put/offer (which do not
     * otherwise ordinarily lock takeLock.)
     */
    private void signalNotEmpty() {
        final ReentrantLock takeLock = this.takeLock;
        takeLock.lock();
        try {
            notEmpty.signal();
        } finally {
            takeLock.unlock();
        }
    }

    /**
     * Signals a waiting put. Called only from take/poll.
     */
    private void signalNotFull() {
        final ReentrantLock putLock = this.putLock;
        putLock.lock();
        try {
            notFull.signal();
        } finally {
            putLock.unlock();
        }
    }

    /**
     * Links node at end of queue.
     *
     * @param node the node
     */
    private void enqueue(Node<E> node) {
        // assert putLock.isHeldByCurrentThread();
        // assert last.next == null;
        last = last.next = node;
    }

    /**
     * Removes a node from head of queue.
     *
     * @return the node
     */
    private E dequeue() {
        // assert takeLock.isHeldByCurrentThread();
        // assert head.item == null;
        Node<E> h = head;
        Node<E> first = h.next;
        h.next = h; // help GC
        head = first;
        E x = first.item;
        first.item = null;
        return x;
    }

    /**
     * Locks to prevent both puts and takes.
     */
    void fullyLock() {
        putLock.lock();
        takeLock.lock();
    }

    /**
     * Unlocks to allow both puts and takes.
     */
    void fullyUnlock() {
        takeLock.unlock();
        putLock.unlock();
    }

//     /**
//      * Tells whether both locks are held by current thread.
//      */
//     boolean isFullyLocked() {
//         return (putLock.isHeldByCurrentThread() &&
//                 takeLock.isHeldByCurrentThread());
//     }

    /**
     * Creates a {@code ResizableCapacityLinkedBlockIngQueue} with a capacity of
     * {@link Integer#MAX_VALUE}.
     */
    public ResizableCapacityLinkedBlockIngQueue() {
        this(Integer.MAX_VALUE);
    }

    /**
     * Creates a {@code ResizableCapacityLinkedBlockIngQueue} with the given
     * initial capacity (changeable later via {@link #setCapacity(int)}).
     *
     * @param capacity the initial capacity of this queue
     * @throws IllegalArgumentException if {@code capacity} is not greater
     *                                  than zero
     */
    public ResizableCapacityLinkedBlockIngQueue(int capacity) {
        if (capacity <= 0) {
            throw new IllegalArgumentException();
        }
        this.capacity = capacity;
        last = head = new Node<E>(null);
    }

    /**
     * Creates a {@code ResizableCapacityLinkedBlockIngQueue} with a capacity of
     * {@link Integer#MAX_VALUE}, initially containing the elements of the
     * given collection,
     * added in traversal order of the collection's iterator.
     *
     * @param c the collection of elements to initially contain
     * @throws NullPointerException if the specified collection or any
     *                              of its elements are null
     */
    public ResizableCapacityLinkedBlockIngQueue(Collection<? extends E> c) {
        this(Integer.MAX_VALUE);
        final ReentrantLock putLock = this.putLock;
        putLock.lock(); // Never contended, but necessary for visibility
        try {
            int n = 0;
            for (E e : c) {
                if (e == null) {
                    throw new NullPointerException();
                }
                if (n == capacity) {
                    throw new IllegalStateException("Queue full");
                }
                enqueue(new Node<E>(e));
                ++n;
            }
            count.set(n);
        } finally {
            putLock.unlock();
        }
    }

    // this doc comment is overridden to remove the reference to collections
    // greater in size than Integer.MAX_VALUE

    /**
     * Returns the number of elements in this queue.
     *
     * @return the number of elements in this queue
     */
    @Override
    public int size() {
        return count.get();
    }

    // this doc comment is a modified copy of the inherited doc comment,
    // without the reference to unlimited queues.

    /**
     * Returns the number of additional elements that this queue can ideally
     * (in the absence of memory or resource constraints) accept without
     * blocking. This is always equal to the current capacity of this queue
     * less the current {@code size} of this queue.
     *
     * <p>Note that you cannot always tell if an attempt to insert an element
     * will succeed by inspecting {@code remainingCapacity}, because another
     * thread may be about to insert or remove an element.
     */
    @Override
    public int remainingCapacity() {
        return capacity - count.get();
    }

    /**
     * Inserts the specified element at the tail of this queue, waiting if
     * necessary for space to become available.
     */
    @Override
    public void put(E e) throws InterruptedException {
        if (e == null) {
            throw new NullPointerException();
        }
        // Note: convention in all put/take/etc is to preset local var
        // holding count negative to indicate failure unless set.
        int c = -1;
        Node<E> node = new Node<E>(e);
        final ReentrantLock putLock = this.putLock;
        final AtomicInteger count = this.count;
        putLock.lockInterruptibly();
        try {
            /*
             * Note that count is used in wait guard even though it is
             * not protected by lock. This works because count can
             * only decrease at this point (all other puts are shut
             * out by lock), and we (or some other waiting put) are
             * signalled if it ever changes from capacity. Similarly
             * for all other uses of count in other wait guards.
             */
            while (count.get() == capacity) {
                notFull.await();
            }
            enqueue(node);
            c = count.getAndIncrement();
            if (c + 1 < capacity) {
                notFull.signal();
            }
        } finally {
            putLock.unlock();
        }
        if (c == 0) {
            signalNotEmpty();
        }
    }

    /**
     * Inserts the specified element at the tail of this queue, waiting up to
     * the specified time for space to become available.
     *
     * @return {@code true} if successful, or {@code false} if the specified
     *         waiting time elapses before space is available
     */
    @Override
    public boolean offer(E e, long timeout, TimeUnit unit) throws InterruptedException {
        if (e == null) {
            throw new NullPointerException();
        }
        long nanos = unit.toNanos(timeout);
        int c = -1;
        final ReentrantLock putLock = this.putLock;
        final AtomicInteger count = this.count;
        putLock.lockInterruptibly();
        try {
            while (count.get() == capacity) {
                if (nanos <= 0) {
                    return false;
                }
                nanos = notFull.awaitNanos(nanos);
            }
            enqueue(new Node<E>(e));
            c = count.getAndIncrement();
            if (c + 1 < capacity) {
                notFull.signal();
            }
        } finally {
            putLock.unlock();
        }
        if (c == 0) {
            signalNotEmpty();
        }
        return true;
    }

    /**
     * Inserts the specified element at the tail of this queue if it is
     * possible to do so immediately without exceeding the queue's capacity,
     * returning {@code true} upon success and {@code false} if this queue
     * is full.
     */
    @Override
    public boolean offer(E e) {
        if (e == null) {
            throw new NullPointerException();
        }
        final AtomicInteger count = this.count;
        if (count.get() == capacity) {
            return false;
        }
        int c = -1;
        Node<E> node = new Node<E>(e);
        final ReentrantLock putLock = this.putLock;
        putLock.lock();
        try {
            if (count.get() < capacity) {
                enqueue(node);
                c = count.getAndIncrement();
                if (c + 1 < capacity) {
                    notFull.signal();
                }
            }
        } finally {
            putLock.unlock();
        }
        if (c == 0) {
            signalNotEmpty();
        }
        return c >= 0;
    }

    @Override
    public E take() throws InterruptedException {
        E x;
        int c = -1;
        final AtomicInteger count = this.count;
        final ReentrantLock takeLock = this.takeLock;
        takeLock.lockInterruptibly();
        try {
            while (count.get() == 0) {
                notEmpty.await();
            }
            x = dequeue();
            c = count.getAndDecrement();
            if (c > 1) {
                notEmpty.signal();
            }
        } finally {
            takeLock.unlock();
        }
        if (c == capacity) {
            signalNotFull();
        }
        return x;
    }

    @Override
    public E poll(long timeout, TimeUnit unit) throws InterruptedException {
        E x = null;
        int c = -1;
        long nanos = unit.toNanos(timeout);
        final AtomicInteger count = this.count;
        final ReentrantLock takeLock = this.takeLock;
        takeLock.lockInterruptibly();
        try {
            while (count.get() == 0) {
                if (nanos <= 0) {
                    return null;
                }
                nanos = notEmpty.awaitNanos(nanos);
            }
            x = dequeue();
            c = count.getAndDecrement();
            if (c > 1) {
                notEmpty.signal();
            }
        } finally {
            takeLock.unlock();
        }
        if (c == capacity) {
            signalNotFull();
        }
        return x;
    }

    @Override
    public E poll() {
        final AtomicInteger count = this.count;
        if (count.get() == 0) {
            return null;
        }
        E x = null;
        int c = -1;
        final ReentrantLock takeLock = this.takeLock;
        takeLock.lock();
        try {
            if (count.get() > 0) {
                x = dequeue();
                c = count.getAndDecrement();
                if (c > 1) {
                    notEmpty.signal();
                }
            }
        } finally {
            takeLock.unlock();
        }
        if (c == capacity) {
            signalNotFull();
        }
        return x;
    }

    @Override
    public E peek() {
        if (count.get() == 0) {
            return null;
        }
        final ReentrantLock takeLock = this.takeLock;
        takeLock.lock();
        try {
            Node<E> first = head.next;
            if (first == null) {
                return null;
            } else {
                return first.item;
            }
        } finally {
            takeLock.unlock();
        }
    }

    /**
     * Unlinks interior Node p with predecessor trail.
     */
    void unlink(Node<E> p, Node<E> trail) {
        // assert isFullyLocked();
        // p.next is not changed, to allow iterators that are
        // traversing p to maintain their weak-consistency guarantee.
        p.item = null;
        trail.next = p.next;
        if (last == p) {
            last = trail;
        }
        if (count.getAndDecrement() == capacity) {
            notFull.signal();
        }
    }

    /**
     * Removes a single instance of the specified element from this queue,
     * if it is present.
     *
     * @param o element to be removed from this queue, if present
     * @return {@code true} if this queue changed as a result of the call
     */
    @Override
    public boolean remove(Object o) {
        if (o == null) {
            return false;
        }
        fullyLock();
        try {
            for (Node<E> trail = head, p = trail.next;
                 p != null;
                 trail = p, p = p.next) {
                if (o.equals(p.item)) {
                    unlink(p, trail);
                    return true;
                }
            }
            return false;
        } finally {
            fullyUnlock();
        }
    }

    /**
     * Returns {@code true} if this queue contains the specified element.
     */
    @Override
    public boolean contains(Object o) {
        if (o == null) {
            return false;
        }
        fullyLock();
        try {
            for (Node<E> p = head.next; p != null; p = p.next) {
                if (o.equals(p.item)) {
                    return true;
                }
            }
            return false;
        } finally {
            fullyUnlock();
        }
    }

    /**
     * Returns an array containing all of the elements in this queue, in
     * proper sequence. The returned array is "safe": no references to it
     * are maintained by this queue, so the caller is free to modify it.
     */
    @Override
    public Object[] toArray() {
        fullyLock();
        try {
            int size = count.get();
            Object[] a = new Object[size];
            int k = 0;
            for (Node<E> p = head.next; p != null; p = p.next) {
                a[k++] = p.item;
            }
            return a;
        } finally {
            fullyUnlock();
        }
    }

    /**
     * Returns an array containing all of the elements in this queue, in
     * proper sequence; the runtime type of the returned array is that of
     * the specified array. If the queue fits in the specified array it is
     * returned therein; otherwise a new array of the same runtime type and
     * the size of this queue is allocated.
     *
     * @throws ArrayStoreException  if the runtime type of the specified array
     *                              is not a supertype of the runtime type of
     *                              every element in this queue
     * @throws NullPointerException if the specified array is null
     */
    @Override
    @SuppressWarnings("unchecked")
    public <T> T[] toArray(T[] a) {
        fullyLock();
        try {
            int size = count.get();
            if (a.length < size) {
                a = (T[]) java.lang.reflect.Array.newInstance
                        (a.getClass().getComponentType(), size);
            }
            int k = 0;
            for (Node<E> p = head.next; p != null; p = p.next) {
                a[k++] = (T) p.item;
            }
            if (a.length > k) {
                a[k] = null;
            }
            return a;
        } finally {
            fullyUnlock();
        }
    }

    @Override
    public String toString() {
        fullyLock();
        try {
            Node<E> p = head.next;
            if (p == null) {
                return "[]";
            }
            StringBuilder sb = new StringBuilder();
            sb.append('[');
            for (; ; ) {
                E e = p.item;
                sb.append(e == this ? "(this Collection)" : e);
                p = p.next;
                if (p == null) {
                    return sb.append(']').toString();
                }
                sb.append(',').append(' ');
            }
        } finally {
            fullyUnlock();
        }
    }

    /**
     * Atomically removes all of the elements from this queue.
     * The queue will be empty after this call returns.
     */
    @Override
    public void clear() {
        fullyLock();
        try {
            for (Node<E> p, h = head; (p = h.next) != null; h = p) {
                h.next = h;
                p.item = null;
            }
            head = last;
            // assert head.item == null && head.next == null;
            if (count.getAndSet(0) == capacity) {
                notFull.signal();
            }
        } finally {
            fullyUnlock();
        }
    }

    @Override
    public int drainTo(Collection<? super E> c) {
        return drainTo(c, Integer.MAX_VALUE);
    }

    @Override
    public int drainTo(Collection<? super E> c, int maxElements) {
        if (c == null) {
            throw new NullPointerException();
        }
        if (c == this) {
            throw new IllegalArgumentException();
        }
        if (maxElements <= 0) {
            return 0;
        }
        boolean signalNotFull = false;
        final ReentrantLock takeLock = this.takeLock;
        takeLock.lock();
        try {
            int n = Math.min(maxElements, count.get());
            // count.get provides visibility to first n Nodes
            Node<E> h = head;
            int i = 0;
            try {
                while (i < n) {
                    Node<E> p = h.next;
                    c.add(p.item);
                    p.item = null;
                    h.next = h;
                    h = p;
                    ++i;
                }
                return n;
            } finally {
                // Restore invariants even if c.add() threw
                if (i > 0) {
                    // assert h.item == null;
                    head = h;
                    signalNotFull = (count.getAndAdd(-i) == capacity);
                }
            }
        } finally {
            takeLock.unlock();
            if (signalNotFull) {
                signalNotFull();
            }
        }
    }

    /**
     * Returns a weakly consistent iterator over the elements in this queue,
     * in proper sequence from first (head) to last (tail).
     */
    @Override
    public Iterator<E> iterator() {
        return new Itr();
    }

    private class Itr implements Iterator<E> {
        /*
         * Basic weakly-consistent iterator. At all times hold the next
         * item to hand out so that if hasNext() reports true, we will
         * still have it to return even if lost race with a take etc.
         */
        private Node<E> current;
        private Node<E> lastRet;
        private E currentElement;

        Itr() {
            fullyLock();
            try {
                current = head.next;
                if (current != null) {
                    currentElement = current.item;
                }
            } finally {
                fullyUnlock();
            }
        }

        @Override
        public boolean hasNext() {
            return current != null;
        }

        /**
         * Returns the next live successor of p, or null if no such.
         *
         * Unlike other traversal methods, iterators need to handle both:
         * - dequeued nodes (p.next == p)
         * - (possibly multiple) interior removed nodes (p.item == null)
         */
        private Node<E> nextNode(Node<E> p) {
            for (; ; ) {
                Node<E> s = p.next;
                if (s == p) {
                    return head.next;
                }
                if (s == null || s.item != null) {
                    return s;
                }
                p = s;
            }
        }

        @Override
        public E next() {
            fullyLock();
            try {
                if (current == null) {
                    throw new NoSuchElementException();
                }
                E x = currentElement;
                lastRet = current;
                current = nextNode(current);
                currentElement = (current == null) ? null : current.item;
                return x;
            } finally {
                fullyUnlock();
            }
        }

        @Override
        public void remove() {
            if (lastRet == null) {
                throw new IllegalStateException();
            }
            fullyLock();
            try {
                Node<E> node = lastRet;
                lastRet = null;
                for (Node<E> trail = head, p = trail.next;
                     p != null;
                     trail = p, p = p.next) {
                    if (p == node) {
                        unlink(p, trail);
                        break;
                    }
                }
            } finally {
                fullyUnlock();
            }
        }
    }

    /**
     * A customized variant of Spliterators.IteratorSpliterator
     */
    static final class LBQSpliterator<E> implements Spliterator<E> {
        static final int MAX_BATCH = 1 << 25;  // max batch array size
        final ResizableCapacityLinkedBlockIngQueue<E> queue;
        Node<E> current;    // current node; null until initialized
        int batch;          // batch size for splits
        boolean exhausted;  // true when no more nodes
        long est;           // size estimate

        LBQSpliterator(ResizableCapacityLinkedBlockIngQueue<E> queue) {
            this.queue = queue;
            this.est = queue.size();
        }

        @Override
        public long estimateSize() {
            return est;
        }

        @Override
        public Spliterator<E> trySplit() {
            Node<E> h;
            final ResizableCapacityLinkedBlockIngQueue<E> q = this.queue;
            int b = batch;
            int n = (b <= 0) ? 1 : (b >= MAX_BATCH) ? MAX_BATCH : b + 1;
            if (!exhausted &&
                    ((h = current) != null || (h = q.head.next) != null) &&
                    h.next != null) {
                Object[] a = new Object[n];
                int i = 0;
                Node<E> p = current;
                q.fullyLock();
                try {
                    if (p != null || (p = q.head.next) != null) {
                        do {
                            if ((a[i] = p.item) != null) {
                                ++i;
                            }
                        } while ((p = p.next) != null && i < n);
                    }
                } finally {
                    q.fullyUnlock();
                }
                if ((current = p) == null) {
                    est = 0L;
                    exhausted = true;
                } else if ((est -= i) < 0L) {
                    est = 0L;
                }
                if (i > 0) {
                    batch = i;
                    return Spliterators.spliterator
                            (a, 0, i, Spliterator.ORDERED | Spliterator.NONNULL |
                                    Spliterator.CONCURRENT);
                }
            }
            return null;
        }

        @Override
        public void forEachRemaining(Consumer<? super E> action) {
            if (action == null) {
                throw new NullPointerException();
            }
            final ResizableCapacityLinkedBlockIngQueue<E> q = this.queue;
            if (!exhausted) {
                exhausted = true;
                Node<E> p = current;
                do {
                    E e = null;
                    q.fullyLock();
                    try {
                        if (p == null) {
                            p = q.head.next;
                        }
                        while (p != null) {
                            e = p.item;
                            p = p.next;
                            if (e != null) {
                                break;
                            }
                        }
                    } finally {
                        q.fullyUnlock();
                    }
                    if (e != null) {
                        action.accept(e);
                    }
                } while (p != null);
            }
        }

        @Override
        public boolean tryAdvance(Consumer<? super E> action) {
            if (action == null) {
                throw new NullPointerException();
            }
            final ResizableCapacityLinkedBlockIngQueue<E> q = this.queue;
            if (!exhausted) {
                E e = null;
                q.fullyLock();
                try {
                    if (current == null) {
                        current = q.head.next;
                    }
                    while (current != null) {
                        e = current.item;
                        current = current.next;
                        if (e != null) {
                            break;
                        }
                    }
                } finally {
                    q.fullyUnlock();
                }
                if (current == null) {
                    exhausted = true;
                }
                if (e != null) {
                    action.accept(e);
                    return true;
                }
            }
            return false;
        }

        @Override
        public int characteristics() {
            return Spliterator.ORDERED | Spliterator.NONNULL |
                    Spliterator.CONCURRENT;
        }
    }

    /**
     * Returns a weakly consistent {@link Spliterator} over the elements in
     * this queue. The {@code Spliterator} reports
     * {@link Spliterator#CONCURRENT}, {@link Spliterator#ORDERED}, and
     * {@link Spliterator#NONNULL}, and implements {@code trySplit} to
     * permit limited parallelism.
     *
     * @since 1.8
     */
    @Override
    public Spliterator<E> spliterator() {
        return new LBQSpliterator<E>(this);
    }

    /**
     * Saves this queue to a stream (that is, serializes it).
     *
     * @serialData The capacity is emitted (int), followed by all of
     * its elements (each an {@code Object}) in the proper order,
     * followed by a null
     */
    private void writeObject(java.io.ObjectOutputStream s)
            throws java.io.IOException {
        fullyLock();
        try {
            // Write out any hidden stuff, plus capacity
            s.defaultWriteObject();
            // Write out all elements in the proper order.
            for (Node<E> p = head.next; p != null; p = p.next) {
                s.writeObject(p.item);
            }
            // Use trailing null as sentinel
            s.writeObject(null);
        } finally {
            fullyUnlock();
        }
    }

    /**
     * Reconstitutes this queue from a stream (that is, deserializes it).
     */
    private void readObject(java.io.ObjectInputStream s)
            throws java.io.IOException, ClassNotFoundException {
        // Read in capacity, and any hidden stuff
        s.defaultReadObject();
        count.set(0);
        last = head = new Node<E>(null);
        // Read in all elements and place in queue
        for (; ; ) {
            @SuppressWarnings("unchecked")
            E item = (E) s.readObject();
            if (item == null) {
                break;
            }
            add(item);
        }
    }

    /**
     * The whole point of this class: changes the capacity bound at runtime.
     * If the capacity grows, wake any producers parked on notFull so they
     * re-check the new bound instead of waiting for the next take.
     */
    public void setCapacity(int capacity) {
        if (capacity <= 0) {
            throw new IllegalArgumentException();
        }
        final int oldCapacity = this.capacity;
        this.capacity = capacity;
        if (capacity > oldCapacity) {
            final ReentrantLock putLock = this.putLock;
            putLock.lock();
            try {
                notFull.signalAll();
            } finally {
                putLock.unlock();
            }
        }
    }
}
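One practical detail when a config push changes both the core and the maximum pool size: `setMaximumPoolSize` rejects a value smaller than the current core size, and on newer JDKs `setCorePoolSize` rejects a value larger than the current maximum, so the order of the two calls matters. A small self-contained sketch of a safe ordering (the `PoolResizer` name and `resize` method are illustrative, not part of the framework above):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolResizer {
    /**
     * Applies a new (core, max) pair in a safe order: when growing, raise
     * the maximum first; when shrinking, lower the core first. Otherwise the
     * pool could transiently see core > max, which the JDK setters reject.
     */
    public static void resize(ThreadPoolExecutor pool, int core, int max) {
        if (core > max) {
            throw new IllegalArgumentException("core > max");
        }
        if (max >= pool.getMaximumPoolSize()) {
            pool.setMaximumPoolSize(max);
            pool.setCorePoolSize(core);
        } else {
            pool.setCorePoolSize(core);
            pool.setMaximumPoolSize(max);
        }
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(8));
        resize(pool, 8, 16);  // grow: max first, then core
        resize(pool, 1, 2);   // shrink: core first, then max
        System.out.println(pool.getCorePoolSize() + "/" + pool.getMaximumPoolSize()); // 1/2
        pool.shutdown();
    }
}
```

A config-center listener would call `resize(...)` together with `setCapacity(...)` on the queue above when a parameter change arrives.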

3. Dynamic thread pool manager


import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * @Description Dynamic thread pool manager
 * @Date 2021/12/4 4:20 PM
 * @Author chenzhibin
 */
public class DynamicThreadPoolManager {
    /**
     * Thread pools, keyed by pool name. ConcurrentHashMap so that
     * config-listener threads can create pools concurrently.
     */
    private final Map<String, DynamicThreadPoolExecutor> threadPoolExecutorMap = new ConcurrentHashMap<>();

    /**
     * Rejection counts per pool, keyed by pool name
     */
    private static final Map<String, AtomicInteger> threadPoolExecutorRejectCountMap = new ConcurrentHashMap<>();


    public void createThreadPool(ThreadPoolParam param) {
        createThreadPoolExecutor(param);
    }

    /**
     * Creates a thread pool from the given parameters; a no-op if a pool
     * with the same name already exists.
     *
     * @param threadPoolParam pool parameters
     */
    private void createThreadPoolExecutor(ThreadPoolParam threadPoolParam) {
        if (!threadPoolExecutorMap.containsKey(threadPoolParam.getThreadPoolName())) {
            DynamicThreadPoolExecutor threadPoolExecutor = new DynamicThreadPoolExecutor(
                    threadPoolParam.getCorePoolSize(),
                    threadPoolParam.getMaximumPoolSize(),
                    threadPoolParam.getKeepAliveTime(),
                    threadPoolParam.getUnit(),
                    getBlockingQueue(threadPoolParam.getQueueType(), threadPoolParam.getQueueCapacity(), threadPoolParam.isFair()),
                    new DefaultThreadFactory(threadPoolParam.getThreadPoolName()),
                    getRejectedExecutionHandler(threadPoolParam.getRejectedExecutionType(), threadPoolParam.getThreadPoolName()), threadPoolParam.getThreadPoolName());

            threadPoolExecutorMap.put(threadPoolParam.getThreadPoolName(), threadPoolExecutor);
        }
    }


    /**
     * Looks up a thread pool by name.
     *
     * @param threadPoolName pool name
     * @return the pool, or {@code null} if none is registered under that name
     */
    public DynamicThreadPoolExecutor getThreadPoolExecutor(String threadPoolName) {
        return threadPoolExecutorMap.get(threadPoolName);
    }

    /**
     * Creates the work queue for a pool.
     *
     * @param queueType     queue type, see {@link QueueTypeEnum}
     * @param queueCapacity queue capacity
     * @param fair          fairness flag (only used by {@code SynchronousQueue})
     * @return the blocking queue
     */
    private BlockingQueue getBlockingQueue(String queueType, int queueCapacity, boolean fair) {
        if (!QueueTypeEnum.exists(queueType)) {
            throw new IllegalArgumentException("Unknown queue type: " + queueType);
        }
        if (QueueTypeEnum.ARRAY_BLOCKING_QUEUE.getType().equals(queueType)) {
            return new ArrayBlockingQueue(queueCapacity);
        }
        if (QueueTypeEnum.SYNCHRONOUS_QUEUE.getType().equals(queueType)) {
            return new SynchronousQueue(fair);
        }
        if (QueueTypeEnum.PRIORITY_BLOCKING_QUEUE.getType().equals(queueType)) {
            return new PriorityBlockingQueue(queueCapacity);
        }
        if (QueueTypeEnum.DELAY_QUEUE.getType().equals(queueType)) {
            return new DelayQueue();
        }
        if (QueueTypeEnum.LINKED_BLOCKING_DEQUE.getType().equals(queueType)) {
            return new LinkedBlockingDeque(queueCapacity);
        }
        if (QueueTypeEnum.LINKED_TRANSFER_DEQUE.getType().equals(queueType)) {
            return new LinkedTransferQueue();
        }
        return new ResizableCapacityLinkedBlockIngQueue(queueCapacity);
    }

    /**
     * Resolves the rejection policy, falling back to a {@code ServiceLoader}
     * lookup for custom handlers and finally to {@link DefaultAbortPolicy}.
     *
     * @param rejectedExecutionType policy name, see {@link RejectedExecutionHandlerEnum}
     * @param threadPoolName        pool name (used by the default policy for reject counting)
     * @return the rejection handler
     */
    private RejectedExecutionHandler getRejectedExecutionHandler(String rejectedExecutionType, String threadPoolName) {
        if (RejectedExecutionHandlerEnum.CALLER_RUNS_POLICY.getType().equals(rejectedExecutionType)) {
            return new ThreadPoolExecutor.CallerRunsPolicy();
        }
        if (RejectedExecutionHandlerEnum.DISCARD_OLDEST_POLICY.getType().equals(rejectedExecutionType)) {
            return new ThreadPoolExecutor.DiscardOldestPolicy();
        }
        if (RejectedExecutionHandlerEnum.DISCARD_POLICY.getType().equals(rejectedExecutionType)) {
            return new ThreadPoolExecutor.DiscardPolicy();
        }
        ServiceLoader<RejectedExecutionHandler> serviceLoader = ServiceLoader.load(RejectedExecutionHandler.class);
        Iterator<RejectedExecutionHandler> iterator = serviceLoader.iterator();
        while (iterator.hasNext()) {
            RejectedExecutionHandler rejectedExecutionHandler = iterator.next();
            String rejectedExecutionHandlerName = rejectedExecutionHandler.getClass().getSimpleName();
            if (rejectedExecutionType.equals(rejectedExecutionHandlerName)) {
                return rejectedExecutionHandler;
            }
        }
        return new DefaultAbortPolicy(threadPoolName);
    }

    static class DefaultThreadFactory implements ThreadFactory {
        private static final AtomicInteger poolNumber = new AtomicInteger(1);
        private final ThreadGroup group;
        private final AtomicInteger threadNumber = new AtomicInteger(1);
        private final String namePrefix;

        DefaultThreadFactory(String threadPoolName) {
            SecurityManager s = System.getSecurityManager();
            group = (s != null) ? s.getThreadGroup() :
                    Thread.currentThread().getThreadGroup();
            namePrefix = threadPoolName + "-" + poolNumber.getAndIncrement() + "-thread-";
        }

        @Override
        public Thread newThread(Runnable r) {
            Thread t = new Thread(group, r,
                    namePrefix + threadNumber.getAndIncrement(),
                    0);
            if (t.isDaemon()) {
                t.setDaemon(false);
            }
            if (t.getPriority() != Thread.NORM_PRIORITY) {
                t.setPriority(Thread.NORM_PRIORITY);
            }
            return t;
        }
    }

    static class DefaultAbortPolicy implements RejectedExecutionHandler {

        private String threadPoolName;

        /**
         * Creates an {@code AbortPolicy}.
         */
        public DefaultAbortPolicy() {
        }

        public DefaultAbortPolicy(String threadPoolName) {
            this.threadPoolName = threadPoolName;
        }

        /**
         * Always throws RejectedExecutionException.
         *
         * @param r the runnable task requested to be executed
         * @param e the executor attempting to execute this task
         * @throws RejectedExecutionException always
         */
        @Override
        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
            AtomicLong atomicLong = threadPoolExecutorRejectCountMap.putIfAbsent(threadPoolName, new AtomicLong(1));
            if (atomicLong != null) {
                atomicLong.incrementAndGet();
            }
            throw new RejectedExecutionException("Task " + r.toString() + " rejected from " + e.toString());
        }
    }

    /**
     * Refreshes registered pools with new parameters (typically pushed from
     * the distributed config center).
     */
    public synchronized void refreshThreadPoolExecutor(List<ThreadPoolParam> params) {
        if (params == null || params.isEmpty()) {
            return;
        }
        params.forEach(param -> {
            if (threadPoolExecutorMap.containsKey(param.getThreadPoolName())) {
                if (param.getCorePoolSize() <= 0) {
                    throw new IllegalArgumentException(param.getThreadPoolName() + " corePoolSize must be greater than zero");
                }
                if (param.getMaximumPoolSize() <= 0) {
                    throw new IllegalArgumentException(param.getThreadPoolName() + " maximumPoolSize must be greater than zero");
                }
                // A core size larger than the max size is invalid
                if (param.getCorePoolSize() > param.getMaximumPoolSize()) {
                    throw new IllegalArgumentException(param.getThreadPoolName() + " corePoolSize must not be bigger than maximumPoolSize");
                }
                if (param.getQueueCapacity() <= 0) {
                    throw new IllegalArgumentException("Dynamic thread pool [" + param.getThreadPoolName() + "] update error, queueCapacity must be greater than zero.");
                }
                DynamicThreadPoolExecutor threadPoolExecutor = threadPoolExecutorMap.get(param.getThreadPoolName());
                // Only touch the pool if something actually changed
                if (checkUpdate(param, threadPoolExecutor)) {
                    // Apply core/max in size order; otherwise setting the new max below the
                    // old core size would fail with IllegalArgumentException
                    setThreadSize(param, threadPoolExecutor);
                    threadPoolExecutor.setKeepAliveTime(param.getKeepAliveTime(), param.getUnit());
                    threadPoolExecutor.setRejectedExecutionHandler(getRejectedExecutionHandler(param.getRejectedExecutionType(), param.getThreadPoolName()));
                }
                BlockingQueue<Runnable> queue = threadPoolExecutor.getQueue();
                if (queue instanceof ResizableCapacityLinkedBlockIngQueue) {
                    ((ResizableCapacityLinkedBlockIngQueue) queue).setCapacity(param.getQueueCapacity());
                }
            }
        });
    }

    /**
     * Updates core and maximum pool size in the safe order: when growing the
     * core size, raise the maximum first; when shrinking, lower the core first.
     *
     * @param param              new parameters
     * @param threadPoolExecutor pool to update
     */
    private void setThreadSize(ThreadPoolParam param, DynamicThreadPoolExecutor threadPoolExecutor) {
        if (param.getCorePoolSize() <= threadPoolExecutor.getCorePoolSize()) {
            threadPoolExecutor.setCorePoolSize(param.getCorePoolSize());
            threadPoolExecutor.setMaximumPoolSize(param.getMaximumPoolSize());
        } else {
            threadPoolExecutor.setMaximumPoolSize(param.getMaximumPoolSize());
            threadPoolExecutor.setCorePoolSize(param.getCorePoolSize());
        }
    }

    /**
     * Checks whether any updatable setting differs from the live pool.
     *
     * @param param              new parameters
     * @param threadPoolExecutor live pool
     * @return {@code true} if at least one setting changed
     */
    private boolean checkUpdate(ThreadPoolParam param, DynamicThreadPoolExecutor threadPoolExecutor) {
        if (param.getCorePoolSize() != threadPoolExecutor.getCorePoolSize()) {
            return true;
        }
        if (param.getMaximumPoolSize() != threadPoolExecutor.getMaximumPoolSize()) {
            return true;
        }
        if (param.getKeepAliveTime() != threadPoolExecutor.getKeepAliveTime(TimeUnit.SECONDS)) {
            return true;
        }
        if (!param.getRejectedExecutionType().equalsIgnoreCase(threadPoolExecutor.getRejectedExecutionHandler().getClass().getSimpleName())) {
            return true;
        }
        return false;
    }

    /**
     * Returns every thread pool registered in the current application.
     *
     * @return list of pools
     */
    public List<DynamicThreadPoolExecutor> getCurrentAppThreadPoolExecutors() {
        List<DynamicThreadPoolExecutor> dynamicThreadPoolExecutors = new ArrayList<>();
        for (DynamicThreadPoolExecutor executor : threadPoolExecutorMap.values()) {
            if (executor != null) {
                dynamicThreadPoolExecutors.add(executor);
            }
        }
        return dynamicThreadPoolExecutors;
    }

    /**
     * Returns the rejection count for a pool.
     *
     * @param threadPoolName pool name
     * @return number of rejected tasks, 0 if unknown
     */
    public Long getRejectCount(String threadPoolName) {
        if (StringUtils.isBlank(threadPoolName)) {
            return 0L;
        }
        AtomicLong val = threadPoolExecutorRejectCountMap.get(threadPoolName);
        if (val == null) {
            return 0L;
        }
        return val.get();
    }
}
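The ordering inside `setThreadSize` matters because `ThreadPoolExecutor` validates `corePoolSize <= maximumPoolSize` on each setter call: `setMaximumPoolSize` always rejects a value below the current core size, and on JDK 9+ `setCorePoolSize` rejects a value above the current max. A standalone sketch of the same update rule:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ResizeOrderDemo {

    // Applies new core/max sizes in the safe order, mirroring setThreadSize:
    // shrinking lowers the core first, growing raises the maximum first.
    static void resize(ThreadPoolExecutor pool, int core, int max) {
        if (core <= pool.getCorePoolSize()) {
            pool.setCorePoolSize(core);
            pool.setMaximumPoolSize(max);
        } else {
            pool.setMaximumPoolSize(max);
            pool.setCorePoolSize(core);
        }
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 15, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        resize(pool, 8, 16);  // grow: max first, then core
        resize(pool, 2, 4);   // shrink: core first, then max

        // Calling the setters in the opposite order, e.g. setMaximumPoolSize(4)
        // while core is still 8, would throw IllegalArgumentException.
        System.out.println(pool.getCorePoolSize() + "/" + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```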

4. Default parameters for creating a thread pool

/**
 * @Description Thread pool parameters
 * @Date 2021/12/4 5:06 PM
 * @Author chenzhibin
 */
@Data
@AllArgsConstructor
@NoArgsConstructor
@Builder
public class ThreadPoolParam {

    /**
     * Thread pool name
     */
    private String threadPoolName = "DefaultThreadPool";

    /**
     * Core pool size
     */
    private Integer corePoolSize = 1;

    /**
     * Maximum pool size, defaults to the number of CPU cores
     */
    private Integer maximumPoolSize = Runtime.getRuntime().availableProcessors();

    /**
     * Maximum queue capacity, defaults to 1000; avoid Integer.MAX_VALUE to prevent OOM
     */
    private Integer queueCapacity = 1000;

    /**
     * Queue type
     *
     * @see QueueTypeEnum
     */
    private String queueType = QueueTypeEnum.LINKED_BLOCKING_QUEUE.getType();

    /**
     * Fairness policy for SynchronousQueue
     */
    private boolean fair;

    /**
     * Rejection policy
     *
     * @see RejectedExecutionHandlerEnum
     */
    private String rejectedExecutionType = RejectedExecutionHandlerEnum.ABORT_POLICY.getType();

    /**
     * Idle thread keep-alive time, defaults to 15s
     */
    private Long keepAliveTime = 15L;

    /**
     * Keep-alive time unit, defaults to seconds
     */
    private TimeUnit unit = TimeUnit.SECONDS;

    /**
     * Queue capacity alert threshold, defaults to the queue capacity
     */
    private Integer queueCapacityThreshold = queueCapacity;
}

With all this code in place, how do you use it?
In Spring Boot:
First, hand the dynamic thread pool manager over to Spring, and size the scheduler pool that runs the monitoring job; a pool size of 1 is enough.

/**
 * @Description Application configuration
 * @Date 2021/12/6 4:25 PM
 * @Author chenzhibin
 */
@Configuration
public class AppConfig {

    @Bean
    public DynamicThreadPoolManager dynamicThreadPoolManager() {
        return new DynamicThreadPoolManager();
    }


    /**
     * Sizes the scheduled-task thread pool used for monitoring.
     */
    @Bean
    public TaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler taskScheduler = new ThreadPoolTaskScheduler();
        // One scheduler thread is enough for the monitoring job
        taskScheduler.setPoolSize(1);
        return taskScheduler;
    }
}

Next, on application startup, subscribe to the distributed config center and create the thread pools; create as many as your business needs.

/**
 * @Description Distributed config center integration
 * @Date 2021/12/6 3:52 PM
 * @Author chenzhibin
 */
@Component
@Slf4j
public class ConfigurationService implements InitializingBean {

    public static class Key {
        /**
         * Dynamic thread pool config key
         */
        public static final String THREAD_POOL_KEY = "dynamicThreadPool";

    }

    private final Map<String, String> proper = new ConcurrentHashMap<>();

    private final String GROUP_ID = "xx";
    private final String SERVICE_ID = "xxx";

    @Resource
    private DynamicThreadPoolManager dynamicThreadPoolManager;

    @Override
    public void afterPropertiesSet() throws Exception {
        initVitamin();
        initThreadPool();
    }

    private void initVitamin() {
        log.info("before proper={}", JSON.toJSONString(proper));
        // Load remote config into the local cache and register a change listener
        Map<String, String> nodeMap = lookup(GROUP_ID, SERVICE_ID, nodes -> handlerNode(nodes));

        if (MapUtils.isNotEmpty(nodeMap)) {
            proper.putAll(nodeMap);
        }
        log.info("after proper={}", JSON.toJSONString(proper));
    }

    /**
     * Initializes the thread pools: prefer parameters pushed from the config
     * center, fall back to local defaults when a pool is not configured remotely.
     * (The original loop skipped pool creation entirely when the remote list
     * was empty and called createThreadPool redundantly on every iteration.)
     */
    private void initThreadPool() {
        List<ThreadPoolParam> threadPoolParams = new ArrayList<>();
        try {
            threadPoolParams = jsonStrToList(Key.THREAD_POOL_KEY, ThreadPoolParam.class);
        } catch (Exception e) {
            log.error("threadPool param illegal", e);
        }
        createWithFallback(threadPoolParams, ExecutorUtils.getMsgPoolParam());
        createWithFallback(threadPoolParams, ExecutorUtils.getDbPoolParam());
    }

    private void createWithFallback(List<ThreadPoolParam> remoteParams, ThreadPoolParam defaultParam) {
        if (remoteParams != null) {
            for (ThreadPoolParam param : remoteParams) {
                if (param.getThreadPoolName().equals(defaultParam.getThreadPoolName())) {
                    dynamicThreadPoolManager.createThreadPool(param);
                    return;
                }
            }
        }
        dynamicThreadPoolManager.createThreadPool(defaultParam);
    }


    private void handlerNode(List<NodeDO> nodes) {
        if (CollectionUtils.isEmpty(nodes)) {
            return;
        }
        for (NodeDO nodeDO : nodes) {
            if (nodeDO == null) {
                continue;
            }
            if (nodeDO.getOperateType() == NodeDO.OperateType.DELETE) {
                removeValue(nodeDO.getNodeKey());
                // A deleted key must not be re-added below
                continue;
            }
            putValue(nodeDO.getNodeKey(), nodeDO.getNodeValue());
            try {
                if (nodeDO.getNodeKey().equals(Key.THREAD_POOL_KEY)) {
                    List<ThreadPoolParam> threadPoolParams = jsonStrToList(Key.THREAD_POOL_KEY, ThreadPoolParam.class);
                    dynamicThreadPoolManager.refreshThreadPoolExecutor(threadPoolParams);
                    log.info("refresh threadPool success! new threadPoolParams={}", nodeDO.getNodeValue());
                }
            } catch (Exception e) {
                log.error("refreshThreadPoolExecutor fail, param illegal", e);
            }
        }
    }

    private void removeValue(String nodeKey) {
        proper.remove(nodeKey);
    }

    private void putValue(String key, String value) {
        proper.put(key, value);
        log.info("put key ={},value ={}", key, value);
    }

    public <T> T getValue(String key, TypeReference<T> type) {
        String value = getValue(key);
        T t = JSON.parseObject(value, type);
        log.info("get value key={},value={}", key, t);
        return t;
    }

    private String getValue(String key) {
        return proper.get(key);
    }

    public <T> List<T> jsonStrToList(String key, Class<T> clazz) {
        return JSONArray.parseArray(proper.get(key), clazz);
    }
}

Set the key and value in the distributed config center (the key must match Key.THREAD_POOL_KEY exactly):
key = "dynamicThreadPool"
val =
[{"threadPoolName":"default","corePoolSize":1500,"maximumPoolSize":3000,"keepAliveTime":30,"queueType":"LinkedBlockingQueue","rejectedExecutionType":"AbortPolicy","queueCapacity":1000}]

Finally, the scheduled task that monitors thread pool state:

/**
 * @Description Scheduled monitoring task
 * Note: when adding more scheduled tasks, check in AppConfig that the
 * scheduler pool size is still sufficient, and adjust it if not
 * @Date 2021/12/28 5:35 PM
 * @Author chenzhibin
 */
@Component
@Slf4j(topic = "monitorLogger")
public class TimeTask {

    @Resource
    private DynamicThreadPoolManager dynamicThreadPoolManager;

    /**
     * Monitors thread pool state, logging every 3 seconds.
     */
    @Scheduled(cron = "0/3 * * * * *")
    public void monitorThreadPool() {
        List<DynamicThreadPoolExecutor> executors = dynamicThreadPoolManager.getCurrentAppThreadPoolExecutors();
        if (CollectionUtils.isNotEmpty(executors)) {
            executors.forEach(dynamicThreadPoolExecutor -> {
                String threadPoolName = dynamicThreadPoolExecutor.getThreadPoolName();
                // Current pool size
                int currentThreadSize = dynamicThreadPoolExecutor.getPoolSize();
                // Core pool size
                int corePoolSize = dynamicThreadPoolExecutor.getCorePoolSize();
                // Maximum pool size
                int maximumPoolSize = dynamicThreadPoolExecutor.getMaximumPoolSize();
                // Queue type
                String queueType = dynamicThreadPoolExecutor.getQueue().getClass().getSimpleName();
                // Configured queue capacity (current size + remaining capacity)
                int queueCapacity = dynamicThreadPoolExecutor.getQueue().size() + dynamicThreadPoolExecutor.getQueue().remainingCapacity();
                // Remaining queue capacity
                int remainingCapacity = dynamicThreadPoolExecutor.getQueue().remainingCapacity();
                // Active thread count
                int activeCount = dynamicThreadPoolExecutor.getActiveCount();
                // Keep-alive time
                long keepAliveTime = dynamicThreadPoolExecutor.getKeepAliveTime(TimeUnit.MILLISECONDS);
                // Rejection policy
                String rejectType = dynamicThreadPoolExecutor.getRejectedExecutionHandler().getClass().getSimpleName();
                // Scheduled task count
                long taskCount = dynamicThreadPoolExecutor.getTaskCount();
                // Completed task count
                long completeTaskCount = dynamicThreadPoolExecutor.getCompletedTaskCount();
                // Largest pool size ever reached
                int largestPoolSize = dynamicThreadPoolExecutor.getLargestPoolSize();
                // Current load
                String currentLoad = divide(activeCount, maximumPoolSize) + "%";
                // Peak load
                String peakLoad = divide(largestPoolSize, maximumPoolSize) + "%";
                // Rejection count
                long rejectCount = dynamicThreadPoolManager.getRejectCount(threadPoolName);

                log.info("pool name:{} " +
                                "current threads:{}, core size:{}, max size:{}, queue type:{}, queue capacity:{}, " +
                                "queue remaining:{}, active threads:{}, keep-alive:{}ms, reject policy:{}, task count:{}, completed tasks:{}, " +
                                "largest pool size:{}, current load:{}, peak load:{}, reject count:{}",
                        threadPoolName, currentThreadSize, corePoolSize, maximumPoolSize, queueType, queueCapacity,
                        remainingCapacity, activeCount, keepAliveTime, rejectType, taskCount, completeTaskCount,
                        largestPoolSize, currentLoad, peakLoad, rejectCount);
            });
        }
    }

    /**
     * Integer percentage of num1 over num2.
     */
    public static int divide(int num1, int num2) {
        return (int) ((double) num1 / num2 * 100);
    }
}
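The rejectCount metric logged above comes from the `putIfAbsent` counter in `DefaultAbortPolicy`. The pattern in isolation — `RejectCounter` is an illustrative stand-in, not part of the framework:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class RejectCounter {
    private static final Map<String, AtomicLong> COUNTS = new ConcurrentHashMap<>();

    // Same idiom as DefaultAbortPolicy.rejectedExecution: the first rejection
    // seeds the counter at 1; later rejections bump the existing AtomicLong.
    static long recordRejection(String poolName) {
        AtomicLong existing = COUNTS.putIfAbsent(poolName, new AtomicLong(1));
        return existing == null ? 1 : existing.incrementAndGet();
    }

    // Same shape as DynamicThreadPoolManager.getRejectCount: 0 when unknown.
    static long rejectCount(String poolName) {
        AtomicLong val = COUNTS.get(poolName);
        return val == null ? 0 : val.get();
    }

    public static void main(String[] args) {
        recordRejection("dbPool");
        recordRejection("dbPool");
        System.out.println(rejectCount("dbPool"));  // 2
        System.out.println(rejectCount("msgPool")); // 0
    }
}
```

`putIfAbsent` makes the seed-then-increment race-free without a lock: two threads rejecting at once either both see the freshly seeded counter or one seeds it and the other increments it.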

That completes the dynamic thread pool. Now for what it buys you.
1. You can watch each business thread pool's load in real time: current load, peak load, rejection count, and more.


2. When an alert fires in production:


Check the current server load first; if CPU usage and load average are both low, increase the pool sizes appropriately to make the most of the machine.
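How far to raise the numbers can be estimated with the classic sizing heuristic from "Java Concurrency in Practice": threads ≈ cores × target utilization × (1 + wait time / compute time). The `PoolSizing` class and its inputs below are illustrative only, a starting point to refine against real measurements:

```java
public class PoolSizing {

    // threads ≈ cores * targetUtilization * (1 + waitTime / computeTime)
    // For CPU-bound tasks (waitMs = 0) this collapses to roughly the core count;
    // the more time tasks spend blocked, the more threads the machine can keep busy.
    static int suggestedThreads(int cores, double utilization, double waitMs, double computeMs) {
        return (int) Math.round(cores * utilization * (1 + waitMs / computeMs));
    }

    public static void main(String[] args) {
        // Hypothetical numbers: 8 cores, 90% target CPU utilization, tasks that
        // wait 90ms on I/O for every 10ms of CPU work
        System.out.println(suggestedThreads(8, 0.9, 90, 10)); // 72

        // Purely CPU-bound workload on 4 cores at full utilization
        System.out.println(suggestedThreads(4, 1.0, 0, 10));  // 4
    }
}
```

Treat the result as an upper-bound hint to set maximumPoolSize against, then let the monitoring above (current load, queue depth, reject count) drive the fine-tuning through the config center.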
Questions? Contact: [email protected]
