StackExchange.Redis TimeOut Notes

Preface

Recently, while working on business modules in .NET Core, I kept running into TimeOut errors with Redis. The official explanation is that, starting with version 2.0, the library maintains its own dedicated thread pool, so I decided to read the source code and see how that thread pool is implemented.

Source Code

In the StackExchange.Redis source you can see that it uses a thread pool named DedicatedThreadPoolPipeScheduler, which comes from a relatively obscure third-party open-source library, Pipelines.Sockets.Unofficial. My first impression was that a library with so few users probably has some pitfalls.

Let's start with the implementation of DedicatedThreadPoolPipeScheduler:

    /// <summary>
    /// An implementation of a pipe-scheduler that uses a dedicated pool of threads, deferring to
    /// the thread-pool if that becomes too backlogged
    /// </summary>
    public sealed class DedicatedThreadPoolPipeScheduler : PipeScheduler, IDisposable
    {
        /// <summary>
        /// Reusable shared scheduler instance
        /// </summary>
        public static DedicatedThreadPoolPipeScheduler Default => StaticContext.Instance;

        private static class StaticContext
        {   // locating here rather than as a static field on DedicatedThreadPoolPipeScheduler so that it isn't instantiated too eagerly
            internal static readonly DedicatedThreadPoolPipeScheduler Instance = new DedicatedThreadPoolPipeScheduler(nameof(Default));
        }

        /// <summary>
        /// The name of the pool
        /// </summary>
        public override string ToString() => Name;

        /// <summary>
        /// The number of workers associated with this pool
        /// </summary>
        public int WorkerCount { get; }

        private int UseThreadPoolQueueLength { get; }

        private ThreadPriority Priority { get; }

        private string Name { get; }

        /// <summary>
        /// Create a new dedicated thread-pool
        /// </summary>
        public DedicatedThreadPoolPipeScheduler(string name = null, int workerCount = 5, int useThreadPoolQueueLength = 10,
            ThreadPriority priority = ThreadPriority.Normal)
        {
            if (workerCount < 0) throw new ArgumentNullException(nameof(workerCount));

            WorkerCount = workerCount;
            UseThreadPoolQueueLength = useThreadPoolQueueLength;
            if (string.IsNullOrWhiteSpace(name)) name = GetType().Name;
            Name = name.Trim();
            Priority = priority;
            for (int i = 0; i < workerCount; i++)
            {
                StartWorker(i);
            }
        }

        private long _totalServicedByQueue, _totalServicedByPool;

        /// <summary>
        /// The total number of operations serviced by the queue
        /// </summary>
        public long TotalServicedByQueue => Volatile.Read(ref _totalServicedByQueue);

        /// <summary>
        /// The total number of operations that could not be serviced by the queue, but which were sent to the thread-pool instead
        /// </summary>
        public long TotalServicedByPool => Volatile.Read(ref _totalServicedByPool);

        private readonly struct WorkItem
        {
            public readonly Action<object> Action;
            public readonly object State;
            public WorkItem(Action<object> action, object state)
            {
                Action = action;
                State = state;
            }
        }

        private volatile bool _disposed;

        private readonly Queue<WorkItem> _queue = new Queue<WorkItem>();
        private void StartWorker(int id)
        {
            var thread = new Thread(ThreadRunWorkLoop)
            {
                Name = $"{Name}:{id}",
                Priority = Priority,
                IsBackground = true
            };
            thread.Start(this);
            Helpers.Incr(Counter.ThreadPoolWorkerStarted);
        }

        /// <summary>
        /// Requests <paramref name="action"/> to be run on scheduler with <paramref name="state"/> being passed in
        /// </summary>
        public override void Schedule(Action<object> action, object state)
        {
            if (action == null) return; // nothing to do
            int queueLength;
            lock (_queue)
            {
                _queue.Enqueue(new WorkItem(action, state));
                if (_availableCount != 0)
                {
                    Monitor.Pulse(_queue); // wake up someone
                }
                queueLength = _queue.Count;
            }

            if (_disposed || queueLength > UseThreadPoolQueueLength)
            {
                Helpers.Incr(Counter.ThreadPoolPushedToMainThreadPool);
                System.Threading.ThreadPool.QueueUserWorkItem(ThreadPoolRunSingleItem, this);
            }
            else
            {
                Helpers.Incr(Counter.ThreadPoolScheduled);
            }
        }

        private static readonly ParameterizedThreadStart ThreadRunWorkLoop = state => ((DedicatedThreadPoolPipeScheduler)state).RunWorkLoop();
        private static readonly WaitCallback ThreadPoolRunSingleItem = state => ((DedicatedThreadPoolPipeScheduler)state).RunSingleItem();

        private int _availableCount;
        /// <summary>
        /// The number of workers currently actively engaged in work
        /// </summary>
        public int AvailableCount => Thread.VolatileRead(ref _availableCount);

        [MethodImpl(MethodImplOptions.AggressiveInlining)]
        private void Execute(Action<object> action, object state)
        {
            try
            {
                action(state);
                Helpers.Incr(Counter.ThreadPoolExecuted);
                Helpers.Incr(action == SocketAwaitableEventArgs.InvokeStateAsAction ? ((Action)state).Method : action.Method);
            }
            catch (Exception ex)
            {
                Helpers.DebugLog(Name, ex.Message);
            }
        }

        private void RunSingleItem()
        {
            WorkItem next;
            lock (_queue)
            {
                if (_queue.Count == 0) return;
                next = _queue.Dequeue();
            }
            Interlocked.Increment(ref _totalServicedByPool);
            Execute(next.Action, next.State);
        }
        private void RunWorkLoop()
        {
            while (true)
            {
                WorkItem next;
                lock (_queue)
                {
                    while (_queue.Count == 0)
                    {
                        if (_disposed) break;
                        _availableCount++;
                        Monitor.Wait(_queue);
                        _availableCount--;
                    }
                    if (_queue.Count == 0)
                    {
                        if (_disposed) break;
                        else continue;
                    }
                    next = _queue.Dequeue();
                }
                Interlocked.Increment(ref _totalServicedByQueue);
                Execute(next.Action, next.State);
            }
        }
        /// <summary>
        /// Release the threads associated with this pool; if additional work is requested, it will
        /// be sent to the main thread-pool
        /// </summary>
        public void Dispose()
        {
            _disposed = true;
            lock (_queue)
            {
                Monitor.PulseAll(_queue);
            }
        }
    }

As you can see, the scheduler maintains an internal work queue and five worker threads by default. The most important part is the Schedule method:

        /// <summary>
        /// Requests <paramref name="action"/> to be run on scheduler with <paramref name="state"/> being passed in
        /// </summary>
        public override void Schedule(Action<object> action, object state)
        {
            if (action == null) return; // nothing to do
            int queueLength;
            lock (_queue)
            {
                _queue.Enqueue(new WorkItem(action, state));
                if (_availableCount != 0)
                {
                    Monitor.Pulse(_queue); // wake up someone
                }
                queueLength = _queue.Count;
            }

            if (_disposed || queueLength > UseThreadPoolQueueLength)
            {
                Helpers.Incr(Counter.ThreadPoolPushedToMainThreadPool);
                System.Threading.ThreadPool.QueueUserWorkItem(ThreadPoolRunSingleItem, this);
            }
            else
            {
                Helpers.Incr(Counter.ThreadPoolScheduled);
            }
        }

When the work queue grows longer than UseThreadPoolQueueLength (default 10), or when the dedicated pool has been disposed while work is still being scheduled, the .NET global thread pool is used to help drain the queue.
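
Here is a minimal sketch of that overflow behaviour. It assumes Pipelines.Sockets.Unofficial is referenced directly (the scheduler type, its constructor defaults, and its counters are public, as quoted above); the workload numbers are made up for illustration:

    using System;
    using System.Threading;
    using Pipelines.Sockets.Unofficial;

    class OverflowDemo
    {
        static void Main()
        {
            // Same defaults as the quoted constructor: 5 workers, overflow past 10 queued items.
            using var scheduler = new DedicatedThreadPoolPipeScheduler(
                name: "demo", workerCount: 5, useThreadPoolQueueLength: 10);

            using var done = new CountdownEvent(1000);
            for (int i = 0; i < 1000; i++)
            {
                // Each work item sleeps briefly so the internal queue backs up past
                // UseThreadPoolQueueLength and later items spill over to the global ThreadPool.
                scheduler.Schedule(state =>
                {
                    Thread.Sleep(5);
                    ((CountdownEvent)state).Signal();
                }, done);
            }
            done.Wait();

            Console.WriteLine($"Serviced by dedicated workers: {scheduler.TotalServicedByQueue}");
            Console.WriteLine($"Serviced by global pool:       {scheduler.TotalServicedByPool}");
        }
    }

Under load the second counter is usually non-zero, and that is exactly when the 500 ms thread-injection delay of the global pool (discussed below) starts to hurt.
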
The other problem is that the number of dedicated worker threads is not configurable through StackExchange.Redis; it is fixed at 5. So if you use a single ConnectionMultiplexer instance, tasks start timing out as soon as concurrency gets high. The available documentation explains what happens next:

For these .NET-provided global thread pools: once the number of existing (busy) threads hits the "minimum" number of threads, the ThreadPool will throttle the rate at which it injects new threads to one thread per 500 milliseconds. This means that if your system gets a burst of work needing an IOCP thread, it will process that work very quickly. However, if the burst of work is more than the configured "Minimum" setting, there will be some delay in processing some of the work as the ThreadPool waits for one of two things to happen: 1. An existing thread becomes free to process the work. 2. No existing thread becomes free for 500ms, so a new thread is created.
Basically, if you're hitting the global thread pool (rather than the dedicated StackExchange.Redis thread-pool) it means that when the number of Busy threads is greater than Min threads, you are likely paying a 500ms delay before network traffic is processed by the application. Also, it is important to note that when an existing thread stays idle for longer than 15 seconds (based on what I remember), it will be cleaned up and this cycle of growth and shrinkage can repeat.

In other words, once the global .NET thread pool has more busy threads than its configured minimum, it injects new threads at a rate of only one per 500 ms, and a thread that stays idle for more than roughly 15 seconds is reclaimed; the pool grows and shrinks dynamically.
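
You can check the limits the quote refers to with the standard ThreadPool APIs (nothing here is specific to StackExchange.Redis):

    using System;
    using System.Threading;

    class ThreadPoolInfo
    {
        static void Main()
        {
            ThreadPool.GetMinThreads(out int minWorker, out int minIocp);
            ThreadPool.GetMaxThreads(out int maxWorker, out int maxIocp);
            ThreadPool.GetAvailableThreads(out int freeWorker, out int freeIocp);

            // Busy = Max - Available; once Busy exceeds Min, new threads are injected
            // at roughly one per 500 ms, which is where the extra latency comes from.
            Console.WriteLine($"Min:  worker={minWorker}, IOCP={minIocp}");
            Console.WriteLine($"Max:  worker={maxWorker}, IOCP={maxIocp}");
            Console.WriteLine($"Busy: worker={maxWorker - freeWorker}, IOCP={maxIocp - freeIocp}");
        }
    }

The timeout exception message thrown by StackExchange.Redis includes the same Busy/Min statistics for the IOCP and WORKER pools, which is usually the quickest way to confirm that this is what you are hitting.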

To sum up:

  1. StackExchange.Redis maintains a dedicated thread pool, but once its 5 workers are all busy and more than 10 items are queued (the extreme case where every task runs very slowly), work overflows to the .NET global thread pool.
  2. The number of dedicated worker threads cannot be configured through StackExchange.Redis.
  3. When concurrency is high and the global thread pool's minimum thread count is small, TimeOut errors appear.

Possible solutions:

  1. Create multiple ConnectionMultiplexer instances (each one has 5 dedicated worker threads); as a rough theoretical estimate, N instances can handle about N × 5 × 10 concurrent operations (see the sketch after this list).
  2. Compile the source yourself and change the worker-thread count.
  3. Call System.Threading.ThreadPool.SetMinThreads(200, 200); // set the global thread pool's minimum according to your expected concurrency.
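
For option 1, here is a rough sketch of what such a pool could look like; the RedisConnectionPool class, the pool size, and the connection string are my own illustration, not an API provided by StackExchange.Redis:

    using System;
    using System.Linq;
    using System.Threading;
    using StackExchange.Redis;

    public sealed class RedisConnectionPool : IDisposable
    {
        private readonly ConnectionMultiplexer[] _connections;
        private int _next = -1;

        public RedisConnectionPool(string configuration, int size = 4)
        {
            _connections = Enumerable.Range(0, size)
                .Select(_ => ConnectionMultiplexer.Connect(configuration))
                .ToArray();
        }

        // Round-robin over the pool so each multiplexer (and its 5 dedicated workers)
        // only carries a share of the concurrent load.
        public IDatabase GetDatabase()
        {
            var index = (uint)Interlocked.Increment(ref _next) % (uint)_connections.Length;
            return _connections[index].GetDatabase();
        }

        public void Dispose()
        {
            foreach (var conn in _connections) conn.Dispose();
        }
    }

Usage is then just pool.GetDatabase().StringSet("key", "value") and so on; the round-robin keeps each multiplexer's dedicated workers below their overflow threshold for longer.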

None of these options feels great, though. The best approach would probably be to write your own Redis client with a thread pool that can grow and shrink dynamically (a rough sketch of that idea follows below). The code structure of StackExchange.Redis is not particularly easy to follow, either.
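
For what it's worth, here is a rough sketch (entirely my own illustration, nothing from StackExchange.Redis) of the kind of elastic pool I mean: workers are added up to a cap when the queue backs up, and idle workers retire after a timeout, much like the global ThreadPool does:

    using System;
    using System.Collections.Generic;
    using System.Threading;

    public sealed class ElasticScheduler : IDisposable
    {
        private readonly Queue<Action> _queue = new Queue<Action>();
        private readonly int _minWorkers, _maxWorkers;
        private readonly TimeSpan _idleTimeout;
        private int _workers;      // guarded by lock (_queue)
        private bool _disposed;    // guarded by lock (_queue)

        public ElasticScheduler(int minWorkers = 2, int maxWorkers = 32, int idleSeconds = 15)
        {
            _minWorkers = minWorkers;
            _maxWorkers = maxWorkers;
            _idleTimeout = TimeSpan.FromSeconds(idleSeconds);
            lock (_queue)
            {
                for (int i = 0; i < minWorkers; i++) StartWorker();
            }
        }

        public void Schedule(Action work)
        {
            lock (_queue)
            {
                _queue.Enqueue(work);
                // Grow when the backlog is deeper than the current worker count.
                if (_queue.Count > _workers && _workers < _maxWorkers) StartWorker();
                Monitor.Pulse(_queue);
            }
        }

        private void StartWorker()   // caller must hold lock (_queue)
        {
            _workers++;
            new Thread(WorkLoop) { IsBackground = true }.Start();
        }

        private void WorkLoop()
        {
            while (true)
            {
                Action next;
                lock (_queue)
                {
                    while (_queue.Count == 0)
                    {
                        // Wait for work; retire this thread if the pool is disposed, or if it
                        // has been idle past the timeout and we are above the minimum count.
                        bool signalled = !_disposed && Monitor.Wait(_queue, _idleTimeout);
                        if (!signalled && (_disposed || _workers > _minWorkers))
                        {
                            _workers--;
                            return;
                        }
                    }
                    next = _queue.Dequeue();
                }
                // Swallow exceptions from user work, as the dedicated pool above also does
                // (it only debug-logs them).
                try { next(); } catch { }
            }
        }

        public void Dispose()
        {
            // Wake all idle workers so they observe _disposed and exit;
            // busy workers drain the remaining queue first.
            lock (_queue) { _disposed = true; Monitor.PulseAll(_queue); }
        }
    }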
