Rx第四部分--并发

Rx是一个可查询异步动态数据的系统。为了高效的提供开发者需要的异步编程功能,需要一定级别的并发控制。我们需要具有为消费者并发地生成序列数据的能力。

在本系列文章的最后一篇中,我们将研究运行数据查询时必须考虑的各种并发问题。将研究如何避免使用并发,以及如何正确的使用并发。将看到Rx提供的优异的抽象性,以声明的方式使用并发,并可进行单元测试。在我看来这两个特性足够作为引入Rx的理由。我们将研究并发序列查询的复杂问题,并在滑动时间窗内分析数据。

Rx is primarily a system for querying data in motion asynchronously. To effectively provide the level of asynchrony that developers require, some level of concurrency control is required. We need the ability to generate sequence data concurrently to the consumption of the sequence data.

In this fourth and final part of the book, we will look at the various concurrency considerations one must undertake when querying data in motion. We will look at how to avoid concurrency when possible and use it correctly when justifiable. We will look at the excellent abstractions Rx provides, that enable concurrency to become declarative and also unit testable. In my opinion, these two features are enough reason alone to adopt Rx into your code base. We will also look at the complex issue of querying concurrent sequences and analyzing data in sliding windows of time.

调度时序和线程(Scheduling and threading)

至此,我们一直避免显式地使用线程或并发。前面介绍的一些方法为了完成任务隐式地引入了某种程度的并发(如Buffer、Delay、Sample,都需要借助独立的线程/调度器/定时器来实现其功能),不过这些大多已经被友好地封装起来了。本章我们将看到Rx API的优雅之处:它能让我们不再需要WaitHandle类型,也不用显式地使用Thread、ThreadPool和Task。

So far, we have managed to avoid any explicit usage of threading or concurrency. There are some methods that we have covered that implicitly introduce some level of concurrency to perform their jobs (e.g. Buffer, Delay, Sample each require a separate thread/scheduler/timer to work their magic). Most of this however, has been kindly abstracted away from us. This chapter will look at the elegant beauty of the Rx API and its ability to effectively remove the need for WaitHandle types, and any explicit calls to Threads, the ThreadPool or Tasks.

Rx默认是单线程的(Rx is single-threaded by default)

通常误以为Rx模式是多线程的。这是错误的假设,就像将.NET事件误以为是多线程的一样。我们在附录(Appendix)中揭穿这种假设,事件就是基于单线程和同步的。

与事件相似,Rx只是针对特定通知把回调链接在一起的一种方式。虽然Rx是自由线程(free-threaded)模型,但这并不意味着订阅或调用OnNext就会给序列引入多线程。自由线程的意思是,你不必受限于某个具体线程去完成工作。例如,你可以自由选择在哪个线程上执行订阅、观察或产生通知。自由线程模型的替代方案是单线程单元(STA)模型,必须在给定的线程上与系统交互。操作UI和某些COM互操作时通常使用STA模型。因此,总结一下:如果不引入任何调度,回调将在调用OnNext/OnError/OnCompleted的同一个线程上执行。

本例中,我们创建一个Subject并在各种线程中调用OnNext,并输出线程Id。

A popular misconception is that Rx is multithreaded by default. It is perhaps more an idle assumption than a strong belief, much in the same way some assume that standard .NET events are multithreaded until they challenge that notion. We debunk this myth and assert that events are most certainly single threaded and synchronous in the Appendix.

Like events, Rx is just a way of chaining callbacks together for a given notification. While Rx is a free-threaded model, this does not mean that subscribing or calling OnNext will introduce multi-threading to your sequence. Being free-threaded means that you are not restricted to which thread you choose to do your work. For example, you can choose to do your work such as invoking a subscription, observing or producing notifications, on any thread you like. The alternative to a free-threaded model is a Single Threaded Apartment (STA) model where you must interact with the system on a given thread. It is common to use the STA model when working with User Interfaces and some COM interop. So, just as a recap: if you do not introduce any scheduling, your callbacks will be invoked on the same thread that the OnNext/OnError/OnCompleted methods are invoked from.

In this example, we create a subject then call OnNext on various threads and record the threadId in our handler.

Console.WriteLine("Starting on threadId:{0}", Thread.CurrentThread.ManagedThreadId);

var subject = new Subject<object>();

subject.Subscribe(

o => Console.WriteLine("Received {1} on threadId:{0}",

Thread.CurrentThread.ManagedThreadId,

o));

ParameterizedThreadStart notify = obj =>

{

Console.WriteLine("OnNext({1}) on threadId:{0}",

Thread.CurrentThread.ManagedThreadId,

obj);

subject.OnNext(obj);

};

notify(1);

new Thread(notify).Start(2);

new Thread(notify).Start(3);

Output:

Starting on threadId:9

OnNext(1) on threadId:9

Received 1 on threadId:9

OnNext(2) on threadId:10

Received 2 on threadId:10

OnNext(3) on threadId:11

Received 3 on threadId:11

注意每个OnNext的回调都发生在发出通知的那个线程上。这并不总是我们想要的效果。Rx为代码引入并发和多线程提供了非常便利的机制:调度(Scheduling)。

Note that each OnNext was called back on the same thread that it was notified on. This is not always what we are looking for. Rx introduces a very handy mechanism for introducing concurrency and multithreading to your code: Scheduling.

SubscribeOn 和ObserveOn

在Rx中,通常需要控制两个并发模式的要素:

  1. 订阅调用线程
  2. 观察者通知线程

可能你已经猜到了,这是通过IObservable<T>的两个扩展方法SubscribeOn和ObserveOn来实现的。两个方法都有接收IScheduler(或SynchronizationContext)参数的重载,并返回IObservable<T>,以便将方法链接在一起。

In the Rx world, there are generally two things you want to control the concurrency model for:

  1. The invocation of the subscription
  2. The observing of notifications

As you could probably guess, these are exposed via two extension methods to IObservable called SubscribeOn and ObserveOn. Both methods have an overload that take an IScheduler (or SynchronizationContext) and return an IObservable so you can chain methods together.

public static class Observable
{
    public static IObservable<TSource> ObserveOn<TSource>(
        this IObservable<TSource> source,
        IScheduler scheduler)
    {...}

    public static IObservable<TSource> ObserveOn<TSource>(
        this IObservable<TSource> source,
        SynchronizationContext context)
    {...}

    public static IObservable<TSource> SubscribeOn<TSource>(
        this IObservable<TSource> source,
        IScheduler scheduler)
    {...}

    public static IObservable<TSource> SubscribeOn<TSource>(
        this IObservable<TSource> source,
        SynchronizationContext context)
    {...}
}

这里需要指出一个陷阱:我最初几次使用这些重载时,曾对它们的实际作用感到困惑。应该使用SubscribeOn方法来描述希望如何调度预热(warm-up)和后台处理代码。例如,如果对Observable.Create使用SubscribeOn,传递给Create方法的委托将在指定的调度器上执行。

本例中,使用Observable.Create方法创建一个序列,并进行标准的订阅。

One pitfall I want to point out here is, the first few times I used these overloads, I was confused as to what they actually do. You should use the SubscribeOn method to describe how you want any warm-up and background processing code to be scheduled. For example, if you were to use SubscribeOn with Observable.Create, the delegate passed to the Create method would be run on the specified scheduler.

In this example, we have a sequence produced by Observable.Create with a standard subscription.

Console.WriteLine("Starting on threadId:{0}", Thread.CurrentThread.ManagedThreadId);

var source = Observable.Create<int>(

o =>

{

Console.WriteLine("Invoked on threadId:{0}", Thread.CurrentThread.ManagedThreadId);

o.OnNext(1);

o.OnNext(2);

o.OnNext(3);

o.OnCompleted();

Console.WriteLine("Finished on threadId:{0}",

Thread.CurrentThread.ManagedThreadId);

return Disposable.Empty;

});

source

//.SubscribeOn(Scheduler.ThreadPool)

.Subscribe(

o => Console.WriteLine("Received {1} on threadId:{0}",

Thread.CurrentThread.ManagedThreadId,

o),

() => Console.WriteLine("OnCompleted on threadId:{0}",

Thread.CurrentThread.ManagedThreadId));

Console.WriteLine("Subscribed on threadId:{0}", Thread.CurrentThread.ManagedThreadId);

Output:

Starting on threadId:9

Invoked on threadId:9

Received 1 on threadId:9

Received 2 on threadId:9

Received 3 on threadId:9

OnCompleted on threadId:9

Finished on threadId:9

Subscribed on threadId:9

你会注意到所有的动作都在同一个线程上执行,而且一切都是顺序执行的。进行订阅时,Create的委托被调用;调用OnNext(1)时,OnNext的处理函数被调用,依此类推。整个过程保持同步,直到Create的委托执行完毕,Subscribe这一行才结束,最后一行代码才得以执行,输出我们在线程9上完成了订阅。

如果在链上应用SubscribeOn(取消注释),执行顺序将完全不同。

You will notice that all actions were performed on the same thread. Also, note that everything is sequential. When the subscription is made, the Create delegate is called. When OnNext(1) is called, the OnNext handler is called, and so on. This all stays synchronous until the Create delegate is finished, and the Subscribe line can move on to the final line that declares we are subscribed on thread 9.

If we apply SubscribeOn to the chain (i.e. un-comment it), the order of execution is quite different.

Starting on threadId:9

Subscribed on threadId:9

Invoked on threadId:10

Received 1 on threadId:10

Received 2 on threadId:10

Received 3 on threadId:10

OnCompleted on threadId:10

Finished on threadId:10

观察者的订阅调用现在是非阻塞的了。Create的委托在线程池上执行,我们所有的处理函数也一样。

ObserveOn方法用于声明希望在哪里调度通知。我认为ObserveOn方法在STA系统(最常见的是UI应用程序)中最为有用。开发UI应用程序时,SubscribeOn/ObserveOn这对方法非常有用,原因有二:

  1. 不希望阻塞UI线程
  2. 需要在UI线程上更新UI对象

必须避免阻塞UI线程,否则会带来很差的用户体验。Silverlight和WPF的一般准则是:任何阻塞超过150-250ms的工作都不应该在UI线程(Dispatcher)上执行。这大约是用户能够察觉到UI卡顿的时长(鼠标变卡、动画不流畅)。在Windows 8即将推出的Metro风格应用中,允许的最大阻塞时长只有50ms。这种更严格的规则是为了确保各应用程序都具有一致的快速流畅体验。以当前桌面处理器的计算能力,50ms内可以完成大量的处理。然而,随着处理器种类越来越多(单核/多核/众核,高性能桌面处理器与低功耗ARM平板、手机),50ms内能完成的工作量差别很大。一般而言:任何I/O、计算密集型工作或与UI无关的处理,都应该从UI线程中移出。创建响应式UI应用程序的通常模式是:

  • 响应一系列用户动作
  • 在后台线程执行
  • 将计算结果反馈到UI线程
  • 更新UI

Observe that the subscribe call is now non-blocking. The Create delegate is executed on the thread pool and so are all our handlers.

The ObserveOn method is used to declare where you want your notifications to be scheduled to. I would suggest the ObserveOn method is most useful when working with STA systems, most commonly UI applications. When writing UI applications, the SubscribeOn/ObserveOn pair is very useful for two reasons:

  1. you do not want to block the UI thread
  2. but you do need to update UI objects on the UI thread.

It is critical to avoid blocking the UI thread, as doing so leads to a poor user experience. General guidance for Silverlight and WPF is that any work that blocks for longer than 150-250ms should not be performed on the UI thread (Dispatcher). This is approximately the period of time over which a user can notice a lag in the UI (mouse becomes sticky, animations sluggish). In the upcoming Metro style apps for Windows 8, the maximum allowed blocking time is only 50ms. This more stringent rule is to ensure a consistent fast and fluid experience across applications. With the processing power offered by current desktop processors, you can achieve a lot of processing in 50ms. However, as processors become more varied (single/multi/many core, plus high power desktop vs. lower power ARM tablets/phones), how much you can do in 50ms fluctuates widely. In general terms: any I/O, computationally intensive work or any processing unrelated to the UI should be marshaled off the UI thread. The general pattern for creating responsive UI applications is:

  • respond to some sort of user action
  • do work on a background thread
  • pass the result back to the UI thread
  • update the UI

这些很适合使用Rx:响应事件、可组合多个事件、将数据传递给方法链。使用内置的调度时序,我们甚至有能力在UI线程与后台线程间来回切换以满足用户的需求。

考虑一个使用Rx填充ObservableCollection的WPF应用程序。你几乎肯定希望使用SubscribeOn离开Dispatcher线程,然后再用ObserveOn确保在Dispatcher上收到通知。如果没有使用ObserveOn方法,OnNext的处理函数将与发出通知的代码运行在同一个线程。在Silverlight/WPF中,这将引发某种不支持/跨线程访问的异常。本例中,我们订阅一个Customer序列。我们在一个新线程上执行订阅,并确保在接收到Customer通知时,在Dispatcher上把它们添加到Customers集合中。

This is a great fit for Rx: responding to events, potentially composing multiple events, passing data to chained method calls. With the inclusion of scheduling, we even have the power to get off and back onto the UI thread for that responsive application feel that users demand.

Consider a WPF application that used Rx to populate an ObservableCollection. You would almost certainly want to use SubscribeOn to leave the Dispatcher, followed by ObserveOn to ensure you were notified back on the Dispatcher. If you failed to use the ObserveOn method, then your OnNext handlers would be invoked on the same thread that raised the notification. In Silverlight/WPF, this would cause some sort of not-supported/cross-threading exception. In this example, we subscribe to a sequence of Customers. We perform the subscription on a new thread and ensure that as we receive Customer notifications, we add them to the Customers collection on the Dispatcher.

_customerService.GetCustomers()
    .SubscribeOn(Scheduler.NewThread)
    .ObserveOn(DispatcherScheduler.Instance)
    //or .ObserveOnDispatcher()
    .Subscribe(Customers.Add);

Schedulers

SubscribeOn和ObserveOn方法都需要传递一个IScheduler参数。这里我们将深入一点,看看调度器到底是什么,以及有哪些可用的实现。

这里有两个主要类型可以使用:

  • IScheduler接口--所有调度器的通用接口
  • 静态类Scheduler--提供IScheduler的各种实现,以及针对IScheduler接口的有用扩展方法

比起IScheduler接口本身,目前更重要的是实现该接口的那些类型。需要理解的关键概念是:Rx中的IScheduler用于调度某个动作的执行,或者尽快执行,或者在将来某个时间点执行。IScheduler的实现定义了该动作的调用方式,例如通过线程池、新线程或消息泵进行异步调用,或者在当前线程上同步调用。根据使用的平台(Silverlight 4、Silverlight 5、.NET 3.5、.NET 4.0),静态类Scheduler会提供你所需要的大部分实现。

在详细查看IScheduler接口之前,先看看最常用的扩展方法,接着再介绍常见的实现。

这是IScheduler最常用的(扩展)方法。它只是安排一个动作尽快执行。

The SubscribeOn and ObserveOn methods required us to pass in an IScheduler. Here we will dig a little deeper and see what schedulers are, and what implementations are available to us.

There are two main types we use when working with schedulers:

  • The IScheduler interface - a common interface for all schedulers
  • The static Scheduler class - exposes implementations of IScheduler and helpful extension methods to the IScheduler interface

The IScheduler interface is of less importance right now than the types that implement the interface. The key concept to understand is that an IScheduler in Rx is used to schedule some action to be performed, either as soon as possible or at a given point in the future. The implementation of the IScheduler defines how that action will be invoked i.e. asynchronously via a thread pool, a new thread or a message pump, or synchronously on the current thread. Depending on your platform (Silverlight 4, Silverlight 5, .NET 3.5, .NET 4.0), you will be exposed to most of the implementations you will need via the static class Scheduler.

Before we look at the IScheduler interface in detail, let's look at the extension method we will use the most often and then introduce the common implementations.

This is the most commonly used (extension) method for IScheduler. It simply sets an action to be performed as soon as possible.

public static IDisposable Schedule(this IScheduler scheduler, Action action)
{...}

可按如下方式使用这个函数:

You could use the method like this:

IScheduler scheduler = ...;
scheduler.Schedule(() => { Console.WriteLine("Work to be scheduled"); });

在Scheduler类型中有几个静态属性。

Scheduler.Immediate:动作不做调度,而是立即执行。

Scheduler.CurrentThread:确保动作在发起调用的原始线程上执行。与Immediate不同,CurrentThread会将动作加入队列后再执行。稍后我们将用代码示例比较这两者的区别。

Scheduler.NewThread:在新线程上调度执行动作。

Scheduler.ThreadPool:将所有动作调度到线程池上执行。

Scheduler.TaskPool:将动作调度到TaskPool上执行。在Silverlight 4和.NET 3.5版本中不可用。

如果使用WPF或Silverlight,还可以使用DispatcherScheduler.Instance。它允许通过通用接口把任务调度到Dispatcher上,可以立即执行,也可以延后执行。此外,IObservable<T>还提供了SubscribeOnDispatcher和ObserveOnDispatcher扩展方法,便于访问Dispatcher。虽然它们看起来很有用,但在产品代码中应该避免使用这两个方法,具体原因请见Testing Rx章节。

上面列出的大多数调度器的基本用法都很容易理解。本章后面将深入讲解IScheduler的全部实现类。

These are the static properties that you can find on the Scheduler type.

Scheduler.Immediate will ensure the action is not scheduled, but rather executed immediately.

Scheduler.CurrentThread ensures that the actions are performed on the thread that made the original call. This is different from Immediate, as CurrentThread will queue the action to be performed. We will compare these two schedulers using a code example soon.

Scheduler.NewThread will schedule work to be done on a new thread.

Scheduler.ThreadPool will schedule all actions to take place on the Thread Pool.

Scheduler.TaskPool will schedule actions onto the TaskPool. This is not available in Silverlight 4 or .NET 3.5 builds.

If you are using WPF or Silverlight, then you will also have access to DispatcherScheduler.Instance. This allows you to schedule tasks onto the Dispatcher with the common interface, either now or in the future. There are also the SubscribeOnDispatcher() and ObserveOnDispatcher() extension methods to IObservable<T> that help you access the Dispatcher. While they appear useful, you will want to avoid these two methods for production code, and we explain why in the Testing Rx chapter.

Most of the schedulers listed above are quite self explanatory for basic usage. We will take an in-depth look at all of the implementations of IScheduler later in the chapter.

并发陷阱(Concurrency pitfalls)

在应用程序中引入并发会增加其复杂性。如果添加一层并发并没有让应用程序得到明显改善,就应该避免这样做。并发应用程序可能出现可维护性问题,症状体现在调试、测试和重构等方面。

并发带来的常见问题是不可预知的时序。不可预知的时序可能由系统负荷的变化引起,也可能由系统配置的差异引起(例如核心时钟频率和可用处理器数量的不同),最终可能导致竞态条件(race condition)。竞态条件的症状包括乱序执行、死锁(deadlocks)、活锁(livelocks)以及状态损坏。

在我看来,并发给应用程序带来的最大危险是可能引入潜在bug。开发、单元测试、测试过程中可能会遗漏这些缺陷,而在生产环境中出现。

而Rx简化了可观测序列上的并发处理,可以缓解上述问题。还是有可能发生错误的,但如果遵守开发原则,就会更加安全,因为已经大大降低了并发冲突的可能性。

后续章节Testing Rx中,将看到Rx对并发工作流的改进。

Introducing concurrency to your application will increase its complexity. If your application is not noticeably improved by adding a layer of concurrency, then you should avoid doing so. Concurrent applications can exhibit maintenance problems with symptoms surfacing in the areas of debugging, testing and refactoring.

The common problem that concurrency introduces is unpredictable timing. Unpredictable timing can be caused by variable load on a system, as well as variations in system configurations (e.g. varying core clock speed and availability of processors). These can ultimately result in race conditions. Symptoms of race conditions include out-of-order execution, deadlocks, livelocks and corrupted state.

In my opinion, the biggest danger when introducing concurrency haphazardly to an application, is that you can silently introduce bugs. These defects may slip past Development, QA and UAT and only manifest themselves in Production environments.

Rx, however, does such a good job of simplifying the concurrent processing of observable sequences that many of these concerns can be mitigated. You can still create problems, but if you follow the guidelines then you can feel a lot safer in the knowledge that you have heavily reduced the capacity for unwanted race conditions.

In a later chapter, Testing Rx, we will look at how Rx improves your ability to test concurrent workflows.

锁定(Lock-ups)

在我使用Rx开发第一个商业应用程序的过程中,团队吃过苦头才发现Rx代码确实可能死锁。如果考虑到某些调用(如First、Last、Single和ForEach)是阻塞的,而我们又可以把工作调度到将来执行,显然就可能发生竞态条件。下面是我能想到的最简单的阻塞例子。诚然,它相当初级,但可以作为讨论的起点。

When working on my first commercial application that used Rx, the team found out the hard way that Rx code can most certainly deadlock. When you consider that some calls (like First, Last, Single and ForEach) are blocking, and that we can schedule work to be done in the future, it becomes obvious that a race condition can occur. This example is the simplest block I could think of. Admittedly, it is fairly elementary but it will get the ball rolling.

var sequence = new Subject<int>();
Console.WriteLine("Next line should lock the system.");
var value = sequence.First();
sequence.OnNext(1);
Console.WriteLine("I can never execute....");

希望我们永远不会写出这样的代码;即使写了,我们的测试也会快速反馈出了问题。更实际的情况是,竞态条件经常在系统集成点溜进来。下一个例子可能更难发现,但与第一个不切实际的例子相比只是前进了一小步。这里我们在UI元素的构造函数中发生了阻塞,而UI元素总是在Dispatcher上创建的。阻塞调用正在等待一个只能从Dispatcher发出的事件,从而造成死锁。

Hopefully, we won't ever write such code though, and if we did, our tests would give us quick feedback that things went wrong. More realistically, race conditions often slip into the system at integration points. The next example may be a little harder to detect, but is only a small step away from our first, unrealistic example. Here, we block in the constructor of a UI element which will always be created on the dispatcher. The blocking call is waiting for an event that can only be raised from the dispatcher, thus creating a deadlock.

public Window1()
{
    InitializeComponent();
    DataContext = this;
    Value = "Default value";
    //Deadlock!
    //We need the dispatcher to continue to allow me to click the button to produce a value
    Value = _subject.First();
    //This will give same result but will not be blocking (deadlocking).
    _subject.Take(1).Subscribe(value => Value = value);
}

private void MyButton_Click(object sender, RoutedEventArgs e)
{
    _subject.OnNext("New Value");
}

public string Value
{
    get { return _value; }
    set
    {
        _value = value;
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs("Value"));
    }
}

接下来,事情开始变得更隐蔽。按钮的点击事件处理函数将试图从通过接口暴露的可观测序列中获取第一个值。

Next, we start seeing things that can become more sinister. The button's click handler will try to get the first value from an observable sequence exposed via an interface.

public partial class Window1 : INotifyPropertyChanged
{
    //Imagine DI here.
    private readonly IMyService _service = new MyService();
    private int _value2;

    public Window1()
    {
        InitializeComponent();
        DataContext = this;
    }

    public int Value2
    {
        get { return _value2; }
        set
        {
            _value2 = value;
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs("Value2"));
        }
    }

    #region INotifyPropertyChanged Members
    public event PropertyChangedEventHandler PropertyChanged;
    #endregion

    private void MyButton2_Click(object sender, RoutedEventArgs e)
    {
        Value2 = _service.GetTemperature().First();
    }
}

这里只有一个小问题,我们阻塞在了Dispatcher线程上(First调用被阻塞),但是,如果服务代码写错了,就会出现死锁。

There is only one small problem here in that we block on the Dispatcher thread (First is a blocking call), however this manifests itself into a deadlock if the service code is written incorrectly.

class MyService : IMyService
{
    public IObservable<int> GetTemperature()
    {
        return Observable.Create<int>(
            o =>
            {
                o.OnNext(27);
                o.OnNext(26);
                o.OnNext(24);
                return () => { };
            })
            .SubscribeOnDispatcher();
    }
}

这种奇怪的实现带有显式的调度:三个OnNext调用要等到First()调用结束后才会被调度执行;然而First()正等着某个OnNext被调用,于是我们就死锁了。

到目前为止,本章似乎一直聚焦于你可能遇到的问题,让人觉得并发只会带来麻烦;但这并不是本意。仅仅采用Rx并不能神奇地避免经典的并发问题。然而,只要遵循下面两条简单的规则,Rx会让你更容易把并发做对。

  1. 只有最终的订阅者才应该设置调度
  2. 避免使用阻塞调用:如First、Last和Single

最后一个例子栽在了一个简单的问题上:服务层指定了调度方式,而事实上它根本不应该这么做。在我的第一个Rx项目中,在我们弄清楚应该在哪里进行调度之前,各个层都添加了"有用的"调度代码,最终造成了一场线程噩梦。当我们删除所有调度代码,只将其保留在单独的一层(至少在Silverlight客户端中是这样)之后,大多数并发问题都消失了。我建议你也这么做。至少在WPF/Silverlight应用程序中,模式应该很简单:"在后台线程上订阅;在Dispatcher上观察"。

This odd implementation, with explicit scheduling, will cause the three OnNext calls to be scheduled once the First() call has finished; however, that is waiting for an OnNext to be called: we are deadlocked.

So far, this chapter may seem to say that concurrency is all doom and gloom by focusing on the problems you could face; this is not the intent though. We do not magically avoid classic concurrency problems simply by adopting Rx. Rx will however make it easier to get it right, provided you follow these two simple rules.

  1. Only the final subscriber should be setting the scheduling
  2. Avoid using blocking calls: e.g. First, Last and Single

The last example came unstuck with one simple problem; the service was dictating the scheduling paradigm when, really, it had no business doing so. Before we had a clear idea of where we should be doing the scheduling in my first Rx project, we had all sorts of layers adding 'helpful' scheduling code. What it ended up creating was a threading nightmare. When we removed all the scheduling code and then confined it to a single layer (at least in the Silverlight client), most of our concurrency problems went away. I recommend you do the same. At least in WPF/Silverlight applications, the pattern should be simple: "Subscribe on a Background thread; Observe on the Dispatcher".
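As a rough sketch of how the example above could be corrected (this is my illustration, not code from the original text), the service returns a plain sequence without any scheduling, and the final subscriber decides both where to subscribe and where to observe:

//Sketch only: the service no longer dictates the scheduling.
class MyService : IMyService
{
    public IObservable<int> GetTemperature()
    {
        return Observable.Create<int>(o =>
        {
            o.OnNext(27);
            o.OnNext(26);
            o.OnNext(24);
            return () => { };
        });
        //Note: no SubscribeOnDispatcher() here.
    }
}

//The final subscriber chooses the schedulers and avoids the blocking First() call.
_service.GetTemperature()
    .SubscribeOn(Scheduler.ThreadPool)
    .ObserveOn(DispatcherScheduler.Instance)
    .Subscribe(temp => Value2 = temp);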

调度时序的高级特性(Advanced features of schedulers)

至此我们只看到调度时序的简单用法:

  • 调度动作立即执行
  • 调度可观测序列的订阅
  • 调度可观测序列的观察通知

调度时序提供了很多高级特性帮助你解决各种问题。

We have only looked at the most simple usage of schedulers so far:

  • Scheduling an action to be executed as soon as possible
  • Scheduling the subscription of an observable sequence
  • Scheduling the observation of notifications coming from an observable sequence

Schedulers also provide more advanced features that can help you with various problems.

传递状态(Passing state)

已经看到的IScheduler扩展方法中,你只可以提供要执行的动作。这些动作不接受任何参数。如果要给动作传递参数,需要使用闭包:

In the extension method to IScheduler we have looked at, you could only provide an Action to execute. This Action did not accept any parameters. If you want to pass state to the Action, you could use a closure to share the data like this:

var myName = "Lee";

Scheduler.NewThread.Schedule(

() => Console.WriteLine("myName = {0}", myName));

这可能产生问题,因为我们在两个不同的作用域之间共享了状态。我可以修改变量myName,从而得到意料之外的结果。

下例中我们像上面一样使用闭包传递状态,并在调度之后立即修改这个变量,这就产生了竞态条件:我的修改发生在调度器使用该状态之前还是之后呢?

This could create a problem, as you are sharing state across two different scopes. I could modify the variable myName and get unexpected results.

In this example, we use a closure as above to pass state. I immediately modify the closure and this creates a race condition: will my modification happen before or after the state is used by the scheduler?

var myName = "Lee";

scheduler.Schedule(

() => Console.WriteLine("myName = {0}", myName));

myName = "John";//What will get written to the console?

在我的测试中,当调度器是NewThreadScheduler类型的实例时,控制台输出了John。如果是ImmediateScheduler类型则输出Lee。这个问题就是代码的不确定性。

解决这个问题最好的方式是使用Schedule的重载版本,接收一个状态参数。下例使用这个重载,传递特定的状态。

In my tests, "John" is generally written to the console when scheduler is an instance of NewThreadScheduler. If I use the ImmediateScheduler then "Lee" would be written. The problem with this is the non-deterministic nature of the code.

A preferable way to pass state is to use the Schedule overloads that accept state. This example takes advantage of this overload, giving us certainty about our state.

var myName = "Lee";

scheduler.Schedule(myName,

(_, state) =>

{

Console.WriteLine(state);

return Disposable.Empty;

});

myName = "John";

这里我们把myName作为状态(state)传入,同时传递一个接收状态并返回disposable的委托。该disposable用于取消操作,稍后会讲到。委托还有一个IScheduler参数,我们将其命名为"_"(下划线),这是约定俗成的写法,表示忽略该参数。当我们把myName作为状态传入时,调度器内部会保留对该状态的引用。因此即使随后把myName变量更新为"John",调度器内部持有的仍然是对"Lee"的引用。

注意,上例中我们只是让myName变量指向了一个新的字符串实例。如果我们转而直接修改同一个实例,仍然可能得到不可预测的行为。下例中,我们使用List作为状态。在调度一个打印列表元素数量的操作之后,我们修改了这个列表。

Here, we pass myName as the state. We also pass a delegate that will take the state and return a disposable. The disposable is used for cancellation; we will look into that later. The delegate also takes an IScheduler parameter, which we name "_" (underscore). This is the convention to indicate we are ignoring the argument. When we pass myName as the state, a reference to the state is kept internally. So when we update the myName variable to "John", the reference to "Lee" is still maintained by the scheduler's internal workings.

Note that in our previous example, we modify the myName variable to point to a new instance of a string. If we were to instead have an instance that we actually modified, we could still get unpredictable behavior. In the next example, we now use a list for our state. After scheduling an action to print out the element count of the list, we modify that list.

var list = new List<int>();
scheduler.Schedule(list,
    (innerScheduler, state) =>
    {
        Console.WriteLine(state.Count);
        return Disposable.Empty;
    });
list.Add(1);

现在已经修改了共享的状态,得到了不确定结果。本例中,我们不知道调度器的类型,无法预知产生的并发冲突类型。在所有并发软件中,都要避免使用可修改的共享状态(引用类型)。

Now that we are modifying shared state, we can get unpredictable results. In this example, we don't even know what type the scheduler is, so we cannot predict the race conditions we are creating. As with any concurrent software, you should avoid modifying shared state.
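One simple mitigation, shown here as my own sketch rather than something from the original text, is to pass an immutable snapshot of the data as the state, so that later mutations of the live list cannot affect the scheduled action:

var list = new List<int>();
//Pass a snapshot (a new array) as the state instead of the live list.
scheduler.Schedule(list.ToArray(),
    (_, state) =>
    {
        Console.WriteLine(state.Length); //Always 0, regardless of the Add below.
        return Disposable.Empty;
    });
list.Add(1);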

延时调度(Future scheduling)

使用特定的IScheduler实现,可以调度动作延时执行。可以指定调用某个操作的确切时间点来实现,也可以指定等待该操作被调用的时间段。这个功能可以用来实现缓冲区、定时器等。

因此,两种类型的重载使延时调度成为可能,一种是需要指定TimeSpan,另一种是需要指定DateTimeOffset。这是延时执行操作的两个最简单的重载。

As you would expect with a type called "IScheduler", you are able to schedule an action to be executed in the future. You can do so by specifying the exact point in time an action should be invoked, or you can specify the period of time to wait until the action is invoked. This is clearly useful for features such as buffering, timers etc.

Scheduling in the future is thus made possible by two styles of overloads, one that takes a TimeSpan and one that takes a DateTimeOffset. These are the two most simple overloads that execute an action in the future.

public static IDisposable Schedule(
    this IScheduler scheduler,
    TimeSpan dueTime,
    Action action)
{...}

public static IDisposable Schedule(
    this IScheduler scheduler,
    DateTimeOffset dueTime,
    Action action)
{...}

可按如下所示使用TimeSpan重载:

You can use the TimeSpan overload like this:

var delay = TimeSpan.FromSeconds(1);
Console.WriteLine("Before schedule at {0:o}", DateTime.Now);
scheduler.Schedule(delay,
    () => Console.WriteLine("Inside schedule at {0:o}", DateTime.Now));
Console.WriteLine("After schedule at  {0:o}", DateTime.Now);

Output:

Before schedule at 2012-01-01T12:00:00.000000+00:00

After schedule at 2012-01-01T12:00:00.058000+00:00

Inside schedule at 2012-01-01T12:00:01.044000+00:00

因此,我们可以看到,调度是非阻塞的,因为“前”和“后”调用在时间上非常接近。您还可以看到,在调度操作后大约一秒钟,它就被调用了。

你可以使用带有DateTimeOffset参数的重载,指定一个具体的时间点来调度任务。如果由于某种原因,你指定的时间点已经是过去,那么动作将被安排尽快执行。

We can see therefore that scheduling is non-blocking as the 'before' and 'after' calls are very close together in time. You can also see that approximately one second after the action was scheduled, it was invoked.

You can specify a specific point in time to schedule the task with the DateTimeOffset overload. If, for some reason, the point in time you specify is in the past, then the action is scheduled as soon as possible.
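For illustration, a small hedged example of the DateTimeOffset overload (the exact timestamps will of course vary at runtime):

//Schedule an action for a specific point in time, two seconds from now.
var dueTime = DateTimeOffset.Now.AddSeconds(2);
Console.WriteLine("Before schedule at {0:o}", DateTime.Now);
scheduler.Schedule(dueTime,
    () => Console.WriteLine("Inside schedule at {0:o}", DateTime.Now));
Console.WriteLine("After schedule at {0:o}", DateTime.Now);
//A dueTime already in the past (e.g. DateTimeOffset.Now.AddSeconds(-2)) runs as soon as possible.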

取消(Cancelation)

Schedule的每个重载都返回一个IDisposable;通过这种方式,消费者可以取消已调度的工作。在上一个示例中,我们调度的工作将在一秒钟后执行。我们可以通过释放这个取消令牌(即返回值)来取消该工作。

Each of the overloads to Schedule returns an IDisposable; this way, a consumer can cancel the scheduled work. In the previous example, we scheduled work to be invoked in one second. We could cancel that work by disposing of the cancellation token (i.e. the return value).

var delay = TimeSpan.FromSeconds(1);
Console.WriteLine("Before schedule at {0:o}", DateTime.Now);
var token = scheduler.Schedule(delay,
    () => Console.WriteLine("Inside schedule at {0:o}", DateTime.Now));
Console.WriteLine("After schedule at  {0:o}", DateTime.Now);
token.Dispose();

Output:

Before schedule at 2012-01-01T12:00:00.000000+00:00

After schedule at 2012-01-01T12:00:00.058000+00:00

注意调度的动作没有执行,因为我们已经立即取消了。

当用户在调度器执行动作之前取消它时,该动作只是被从工作队列中移除,上面的例子正是如此。如果想取消已经开始运行的调度工作,则可以使用接收Func参数的Schedule重载。这为用户提供了一种途径,可以中途取消可能已经在运行的任务。这类任务可能是某种I/O、繁重的计算,或者是借助Task执行的工作。

这就带来一个问题:如果要取消已经开始的工作,需要释放一个IDisposable实例;可是工作还在进行中,又该如何把这个disposable返回出去呢?你可以启动另一个线程让工作并发进行,但创建线程恰恰是我们想要避免的事情。

本例中,我们有一个方法将作为委托被调度执行。它只是通过自旋等待并向列表参数中添加值来模拟一些工作。这里的关键是,我们通过返回的disposable,让用户可以借助CancellationToken取消执行。

Note that the scheduled action never occurs, as we have cancelled it almost immediately.

When the user cancels the scheduled action method before the scheduler is able to invoke it, that action is just removed from the queue of work. This is what we see in the example above. If you want to cancel scheduled work that is already running, then you can use one of the overloads to the Schedule method that takes a Func. This gives a way for users to cancel out of a job that may already be running. This job could be some sort of I/O, heavy computations or perhaps usage of Task to perform some work.

Now this may create a problem; if you want to cancel work that has already been started, you need to dispose of an instance of IDisposable, but how do you return the disposable if you are still doing the work? You could fire up another thread so the work happens concurrently, but creating threads is something we are trying to steer away from.

In this example, we have a method that we will use as the delegate to be scheduled. It just fakes some work by performing a spin wait and adding values to the list argument. The key here is that we allow the user to cancel with the CancellationToken via the disposable we return.

public IDisposable Work(IScheduler scheduler, List<int> list)
{
    var tokenSource = new CancellationTokenSource();
    var cancelToken = tokenSource.Token;
    var task = new Task(() =>
    {
        Console.WriteLine();
        for (int i = 0; i < 1000; i++)
        {
            var sw = new SpinWait();
            for (int j = 0; j < 3000; j++) sw.SpinOnce();
            Console.Write(".");
            list.Add(i);
            if (cancelToken.IsCancellationRequested)
            {
                Console.WriteLine("Cancelation requested");
                //cancelToken.ThrowIfCancellationRequested();
                return;
            }
        }
    }, cancelToken);
    task.Start();
    return Disposable.Create(tokenSource.Cancel);
}

下面的代码调度上述代码,按下回车键取消正在执行的动作

This code schedules the above code and allows the user to cancel the processing work by pressing Enter

var list = new List<int>();
Console.WriteLine("Enter to quit:");
var token = scheduler.Schedule(list, Work);
Console.ReadLine();
Console.WriteLine("Cancelling...");
token.Dispose();
Console.WriteLine("Cancelled");

Output:

Enter to quit:

........

Cancelling...

Cancelled

Cancelation requested

问题是我们显式地引入了Task类。如果改用Rx的递归调度特性,就可以避免显式地使用某种并发模型。

The problem here is that we have introduced explicit use of Task. We can avoid explicit usage of a concurrency model if we use the Rx recursive scheduler features instead.

递归(Recursion)

Schedule扩展方法的更高级重载接收一些看起来很奇怪的委托作为参数。请特别注意下面每个重载中的最后一个参数。

The more advanced overloads of Schedule extension methods take some strange looking delegates as parameters. Take special note of the final parameter in each of these overloads of the Schedule extension method.

public static IDisposable Schedule(
    this IScheduler scheduler,
    Action<Action> action)
{...}

public static IDisposable Schedule<TState>(
    this IScheduler scheduler,
    TState state,
    Action<TState, Action<TState>> action)
{...}

public static IDisposable Schedule(
    this IScheduler scheduler,
    TimeSpan dueTime,
    Action<Action<TimeSpan>> action)
{...}

public static IDisposable Schedule<TState>(
    this IScheduler scheduler,
    TState state,
    TimeSpan dueTime,
    Action<TState, Action<TState, TimeSpan>> action)
{...}

public static IDisposable Schedule(
    this IScheduler scheduler,
    DateTimeOffset dueTime,
    Action<Action<DateTimeOffset>> action)
{...}

public static IDisposable Schedule<TState>(
    this IScheduler scheduler,
    TState state,
    DateTimeOffset dueTime,
    Action<TState, Action<TState, DateTimeOffset>> action)
{...}

每个重载都接收一个名为action的委托,该委托可以递归地调用自身。这个签名看起来很古怪,却是非常好用的API,实际上允许你创建递归的委托调用。用一个例子来说明最好不过。

下例使用最简单的递归重载。我们有一个可以递归调用自己的Action。

Each of these overloads take a delegate "action" that allows you to call "action" recursively. This may seem a very odd signature, but it makes for a great API. This effectively allows you to create a recursive delegate call. This may be best shown with an example.

This example uses the most simple recursive overload. We have an Action that can be called recursively.

Action<Action> work = (Action self) =>
{
    Console.WriteLine("Running");
    self();
};
var token = s.Schedule(work);
Console.ReadLine();
Console.WriteLine("Cancelling");
token.Dispose();
Console.WriteLine("Cancelled");

Output:

Enter to quit:

Running

Running

Running

Running

Cancelling

Cancelled

Running

注意我们不必在委托中编写任何取消代码。Rx替我们处理了循环并检查取消。漂亮!与C#中简单的递归方法不同,我们还不会发生栈溢出,因为Rx提供了额外的抽象层。实际上,Rx会把我们的递归方法转换为循环结构来执行。

Note that we didn't have to write any cancellation code in our delegate. Rx handled the looping and checked for cancellation on our behalf. Brilliant! Unlike simple recursive methods in C#, we are also protected from stack overflows, as Rx provides an extra level of abstraction. Indeed, Rx takes our recursive method and transforms it to a loop structure instead.

自定义迭代器(Creating your own iterator)

在本书前面,我们已经看过如何将Rx与APM(异步编程模型)结合使用。那个例子中,我们把整个文件读进了内存。我们还引用了Jeffery van Gogh的博客文章,遗憾的是它已经过时了;不过他的思路仍然是合理的。我们可以不用Jeffery文章中的Iterator方法,而是使用调度器来达到同样的效果。

下面的示例的目标是打开一个文件并将其按块流化。这使我们能够处理大于可用内存的文件,因为我们每次只读取和缓存一部分文件。除此之外,我们还可以利用Rx的组合特性对文件应用多个转换,比如加密和压缩。通过按块读取,我们可以在完成读取文件之前启动转换。

首先,我们回忆一下如何在Rx中使用FileStream的APM方法。

Earlier in the book, we looked at how we can use Rx with APM. In our example, we just read the entire file into memory. We also referenced Jeffery van Gogh's blog post, which sadly is now out of date; however, his concepts are still sound. Instead of the Iterator method from Jeffery's post, we can use schedulers to achieve the same result.

The goal of the following sample is to open a file and stream it in chunks. This enables us to work with files that are larger than the memory available to us, as we would only ever read and cache a portion of the file at a time. In addition to this, we can leverage the compositional nature of Rx to apply multiple transformations to the file such as encryption and compression. By reading chunks at a time, we are able to start the other transformations before we have finished reading the file.

First, let us refresh our memory with how to get from the FileStream's APM methods into Rx.

var source = new FileStream(@"C:\Somefile.txt", FileMode.Open, FileAccess.Read);
var factory = Observable.FromAsyncPattern<byte[], int, int, int>(
    source.BeginRead,
    source.EndRead);
var buffer = new byte[source.Length];
IObservable<int> reader = factory(buffer, 0, (int)source.Length);
reader.Subscribe(
    bytesRead =>
        Console.WriteLine("Read {0} bytes from file into buffer", bytesRead));

上例使用FromAsyncPattern创建了一个factory。factory接收一个字节数组(buffer)、一个偏移量(0)和一个长度(source.Length);它实际上以单值序列的形式返回读取到的字节数。当订阅该序列(reader)时,BeginRead会从偏移量开始把数据读入缓冲区。本例中我们会读取整个文件。文件一旦读入缓冲区,序列就会推送这个单值(bytesRead)。

一切都很好,但如果我们想按块读取数据,这就无法应对了。我们需要指定缓冲区大小。比如4KB(4096字节)。

The example above uses FromAsyncPattern to create a factory. The factory will take a byte array (buffer), an offset (0) and a length (source.Length); it effectively returns the count of the bytes read as a single-value sequence. When the sequence (reader) is subscribed to, BeginRead will read values, starting from the offset, into the buffer. In this case, we will read the whole file. Once the file has been read into the buffer, the sequence (reader) will push the single value (bytesRead) in to the sequence.

This is all fine, but if we want to read chunks of data at a time then this is not good enough. We need to specify the buffer size we want to use. Let's start with 4KB (4096 bytes).

var bufferSize = 4096;
var buffer = new byte[bufferSize];
IObservable<int> reader = factory(buffer, 0, bufferSize);
reader.Subscribe(
    bytesRead =>
        Console.WriteLine("Read {0} bytes from file", bytesRead));

这段代码可以工作,但最多只会从文件中读取4KB。如果文件更大,我们希望继续读完整个文件。由于FileStream的Position属性已经前进到上次停止读取的位置,我们可以重用factory再次填充缓冲区。接下来,我们想把读到的字节推送到一个可观测序列中。先从创建扩展方法的签名开始。

This works but will only read a max of 4KB from the file. If the file is larger, we want to keep reading all of it. As the Position of the FileStream will have advanced to the point it stopped reading, we can reuse the factory to reload the buffer. Next, we want to start pushing these bytes into an observable sequence. Let's start by creating the signature of an extension method.

public static IObservable<byte> ToObservable(
    this FileStream source,
    int buffersize,
    IScheduler scheduler)
{...}

我们使用Observable.Create来确保我们的扩展方法是延迟计算的。利用Observable.Using操作符我们可以确保当订阅被取消时FileStream会被关闭。

We can ensure that our extension method is lazily evaluated by using Observable.Create. We can also ensure that the FileStream is closed when the consumer disposes of the subscription by taking advantage of the Observable.Using operator.

public static IObservable<byte> ToObservable(
    this FileStream source,
    int buffersize,
    IScheduler scheduler)
{
    var bytes = Observable.Create<byte>(o =>
    {
        ...
    });
    return Observable.Using(() => source, _ => bytes);
}

接下来,我们希望利用调度器的递归功能来连续读取数据块,同时仍然向用户提供释放/取消的能力。这里制造一点麻烦;我们只能传入一个状态参数,但需要管理多个可变的对象(缓冲区、工厂、文件流)。为此,我们创建了自己的私有助手类:

Next, we want to leverage the scheduler's recursive functionality to continuously read chunks of data while still providing the user with the ability to dispose/cancel when they choose. This creates a bit of a pickle; we can only pass in one state parameter but need to manage multiple moving parts (buffer, factory, filestream). To do this, we create our own private helper class:

private sealed class StreamReaderState
{
    private readonly int _bufferSize;
    private readonly Func<byte[], int, int, IObservable<int>> _factory;

    public StreamReaderState(FileStream source, int bufferSize)
    {
        _bufferSize = bufferSize;
        _factory = Observable.FromAsyncPattern<byte[], int, int, int>(
            source.BeginRead,
            source.EndRead);
        Buffer = new byte[bufferSize];
    }

    public IObservable<int> ReadNext()
    {
        return _factory(Buffer, 0, _bufferSize);
    }

    public byte[] Buffer { get; set; }
}

这个类允许我们将数据读入缓冲区,然后通过调用ReadNext()读取下一个数据块。在我们的Observable.Create委托中,我们实例化我们的助手类,并使用它将缓冲区推入我们的可观察序列。

This class will allow us to read data into a buffer, then read the next chunk by calling ReadNext(). In our Observable.Create delegate, we instantiate our helper class and use it to push the buffer into our observable sequence.

public static IObservable<byte> ToObservable(
    this FileStream source,
    int buffersize,
    IScheduler scheduler)
{
    var bytes = Observable.Create<byte>(o =>
    {
        var initialState = new StreamReaderState(source, buffersize);
        initialState
            .ReadNext()
            .Subscribe(bytesRead =>
            {
                for (int i = 0; i < bytesRead; i++)
                {
                    o.OnNext(initialState.Buffer[i]);
                }
            });
        ...
    });
    return Observable.Using(() => source, _ => bytes);
}

这样我们就可以开始了,但是仍然不支持读取大于缓冲区的文件。现在,我们需要添加递归调度。为此,我们需要一个匹配所需签名的委托。我们需要一个接受StreamReaderState参数并可以递归调用动作的函数。

So this gets us off the ground, but we still do not support reading files larger than the buffer. Now, we need to add recursive scheduling. To do this, we need a delegate to fit the required signature. We will need one that accepts a StreamReaderState and can recursively call an Action<StreamReaderState>.

public static IObservable<byte> ToObservable(
    this FileStream source,
    int buffersize,
    IScheduler scheduler)
{
    var bytes = Observable.Create<byte>(o =>
    {
        var initialState = new StreamReaderState(source, buffersize);
        Action<StreamReaderState, Action<StreamReaderState>> iterator;
        iterator = (state, self) =>
        {
            state.ReadNext()
                .Subscribe(bytesRead =>
                {
                    for (int i = 0; i < bytesRead; i++)
                    {
                        o.OnNext(state.Buffer[i]);
                    }
                    self(state);
                });
        };
        return scheduler.Schedule(initialState, iterator);
    });
    return Observable.Using(() => source, _ => bytes);
}

现在我们已经创建了一个递归动作:

  1. 调用ReadNext()
  2. 订阅返回值
  3. 向可观测序列推送缓冲区
  4. 递归调用自己

我们还把这个递归动作安排到了指定的调度器上执行。接下来,我们希望在读到文件末尾时结束序列。这很简单:只要bytesRead不为0就继续递归,否则调用OnCompleted结束序列。

We now have an iterator action that will:

  1. call ReadNext()
  2. subscribe to the result
  3. push the buffer into the observable sequence
  4. and recursively call itself.

We also schedule this recursive action to be called on the provided scheduler. Next, we want to complete the sequence when we get to the end of the file. This is easy, we maintain the recursion until the bytesRead is 0.

public static IObservable<byte> ToObservable(
    this FileStream source,
    int buffersize,
    IScheduler scheduler)
{
    var bytes = Observable.Create<byte>(o =>
    {
        var initialState = new StreamReaderState(source, buffersize);
        Action<StreamReaderState, Action<StreamReaderState>> iterator;
        iterator = (state, self) =>
        {
            state.ReadNext()
                .Subscribe(bytesRead =>
                {
                    for (int i = 0; i < bytesRead; i++)
                    {
                        o.OnNext(state.Buffer[i]);
                    }
                    if (bytesRead > 0)
                        self(state);
                    else
                        o.OnCompleted();
                });
        };
        return scheduler.Schedule(initialState, iterator);
    });
    return Observable.Using(() => source, _ => bytes);
}

现在,我们有一个扩展方法迭代文件流中的字节数组。最后整理一下代码,以便正确地管理资源和异常,最终方法如下:

At this point, we have an extension method that iterates on the bytes from a file stream. Finally, let us apply some clean up so that we correctly manage our resources and exceptions, and the finished method looks something like this:

public static IObservable<byte> ToObservable(
    this FileStream source,
    int buffersize,
    IScheduler scheduler)
{
    var bytes = Observable.Create<byte>(o =>
    {
        var initialState = new StreamReaderState(source, buffersize);
        var currentStateSubscription = new SerialDisposable();
        Action<StreamReaderState, Action<StreamReaderState>> iterator =
            (state, self) =>
                currentStateSubscription.Disposable = state.ReadNext()
                    .Subscribe(
                        bytesRead =>
                        {
                            for (int i = 0; i < bytesRead; i++)
                            {
                                o.OnNext(state.Buffer[i]);
                            }
                            if (bytesRead > 0)
                                self(state);
                            else
                                o.OnCompleted();
                        },
                        o.OnError);
        var scheduledWork = scheduler.Schedule(initialState, iterator);
        return new CompositeDisposable(currentStateSubscription, scheduledWork);
    });
    return Observable.Using(() => source, _ => bytes);
}

这只是示例代码,实际效果因人而异。我发现增大缓冲区并返回IObservable<IList<byte>>更适合我,不过上面的例子也能很好地工作。这里的目标是提供一个迭代器的示例:它提供并发的I/O访问,支持取消,并以节约资源的方式进行缓冲。

This is example code and your mileage may vary. I find that increasing the buffer size and returning IObservable<IList<byte>> suits me better, but the example above works fine too. The goal here was to provide an example of an iterator that provides concurrent I/O access with cancellation and resource-efficient buffering.
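As a rough sketch of that variation (the method name and details are my own, not from the original text), the recursive body can push the filled portion of the buffer as one chunk instead of individual bytes; a byte[] already implements IList<byte>:

//A sketch of the IObservable<IList<byte>> variation mentioned above (hypothetical helper).
public static IObservable<IList<byte>> ToObservableChunks(
    this FileStream source,
    int buffersize,
    IScheduler scheduler)
{
    var chunks = Observable.Create<IList<byte>>(o =>
    {
        var initialState = new StreamReaderState(source, buffersize);
        var currentStateSubscription = new SerialDisposable();
        Action<StreamReaderState, Action<StreamReaderState>> iterator =
            (state, self) =>
                currentStateSubscription.Disposable = state.ReadNext()
                    .Subscribe(
                        bytesRead =>
                        {
                            if (bytesRead > 0)
                            {
                                //Push only the bytes that were actually read, as one chunk.
                                var chunk = new byte[bytesRead];
                                Array.Copy(state.Buffer, chunk, bytesRead);
                                o.OnNext(chunk);
                                self(state);
                            }
                            else
                            {
                                o.OnCompleted();
                            }
                        },
                        o.OnError);
        var scheduledWork = scheduler.Schedule(initialState, iterator);
        return new CompositeDisposable(currentStateSubscription, scheduledWork);
    });
    return Observable.Using(() => source, _ => chunks);
}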

调度器特性组合(Combinations of scheduler features)

我们已经讨论了许多可以与IScheduler接口一起使用的特性。然而,这些示例大多是通过扩展方法来调用所需的功能。接口本身暴露的重载才是最丰富的。扩展方法实际上是一种权衡:通过减少重载的丰富性来提升可用性/可发现性。如果你想使用传递状态、取消、延时调度和递归,这些都可以直接从接口方法获得。

We have discussed many features that you can use with the IScheduler interface. Most of these examples, however, are actually using extension methods to invoke the functionality that we are looking for. The interface itself exposes the richest overloads. The extension methods are effectively just making a trade-off; improving usability/discoverability by reducing the richness of the overload. If you want access to passing state, cancellation, future scheduling and recursion, it is all available directly from the interface methods.

namespace System.Reactive.Concurrency
{
    public interface IScheduler
    {
        //Gets the scheduler's notion of current time.
        DateTimeOffset Now { get; }

        //Schedules an action to be executed with given state.
        //Returns a disposable object used to cancel the scheduled action (best effort).
        IDisposable Schedule<TState>(
            TState state,
            Func<IScheduler, TState, IDisposable> action);

        //Schedules an action to be executed after dueTime with given state.
        //Returns a disposable object used to cancel the scheduled action (best effort).
        IDisposable Schedule<TState>(
            TState state,
            TimeSpan dueTime,
            Func<IScheduler, TState, IDisposable> action);

        //Schedules an action to be executed at dueTime with given state.
        //Returns a disposable object used to cancel the scheduled action (best effort).
        IDisposable Schedule<TState>(
            TState state,
            DateTimeOffset dueTime,
            Func<IScheduler, TState, IDisposable> action);
    }
}

深入调度器(Schedulers in-depth)

我们主要关注调度器和IScheduler接口的抽象概念。这种抽象允许低层管道的实现对并发模型保持透明。与上面的文件读取示例一样,代码不需要知道传递了IScheduler的哪个实现,因为这是消费代码所关心的问题。

We have largely been concerned with the abstract concept of a scheduler and the IScheduler interface. This abstraction allows low-level plumbing to remain agnostic towards the implementation of the concurrency model. As in the file reader example above, there was no need for the code to know which implementation of IScheduler was passed, as this is a concern of the consuming code.

现在我们深入了解IScheduler的每种实现,考虑每种实现的利弊,以及可用的场景。

Now we take an in-depth look at each implementation of IScheduler, consider the benefits and tradeoffs they each make, and when each is appropriate to use.

ImmediateScheduler

ImmediateScheduler可通过Scheduler.Immediate静态属性获得。这是最简单的调度器,因为它实际上不做任何调度。如果调用Schedule(Action),它会直接执行该动作。如果要调度动作在将来执行,ImmediateScheduler会调用Thread.Sleep等待给定的时间段,然后再执行动作。总之,ImmediateScheduler是同步的。

The ImmediateScheduler is exposed via the Scheduler.Immediate static property. This is the most simple of schedulers as it does not actually schedule anything. If you call Schedule(Action) then it will just invoke the action. If you schedule the action to be invoked in the future, the ImmediateScheduler will invoke a Thread.Sleep for the given period of time and then execute the action. In summary, the ImmediateScheduler is synchronous.
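A small illustration of that synchronous behaviour (the timings are indicative only, and this is my own sketch): scheduling with a dueTime on Scheduler.Immediate blocks the calling thread for that period before running the action.

Console.WriteLine("Before {0:o}", DateTime.Now);
//With ImmediateScheduler this call does not return until the one-second wait
//and the action itself have both completed on the calling thread.
Scheduler.Immediate.Schedule(TimeSpan.FromSeconds(1),
    () => Console.WriteLine("Action {0:o}", DateTime.Now));
Console.WriteLine("After  {0:o}", DateTime.Now);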

CurrentThreadScheduler

与ImmediateScheduler类似,CurrentThreadScheduler也是单线程的,通过Scheduler.CurrentThread静态属性获得。关键区别在于,CurrentThreadScheduler的行为类似于消息队列或蹦床(trampoline)。如果你调度的动作本身又调度了另一个动作,CurrentThreadScheduler会把内部动作放入队列,稍后再执行;相反,ImmediateScheduler则会立即开始执行内部动作。用一个例子来说明最好不过。

Like the ImmediateScheduler, the CurrentThreadScheduler is single-threaded. It is exposed via the Scheduler.CurrentThread static property. The key difference is that the CurrentThreadScheduler acts like a message queue or a trampoline. If you schedule an action that itself schedules an action, the CurrentThreadScheduler will queue the inner action to be performed later; in contrast, the ImmediateScheduler would start working on the inner action straight away. This is probably best explained with an example.

In this example, we analyze how ImmediateScheduler and CurrentThreadScheduler perform nested scheduling differently.

private static void ScheduleTasks(IScheduler scheduler)
{
    Action leafAction = () => Console.WriteLine("----leafAction.");
    Action innerAction = () =>
    {
        Console.WriteLine("--innerAction start.");
        scheduler.Schedule(leafAction);
        Console.WriteLine("--innerAction end.");
    };
    Action outerAction = () =>
    {
        Console.WriteLine("outer start.");
        scheduler.Schedule(innerAction);
        Console.WriteLine("outer end.");
    };
    scheduler.Schedule(outerAction);
}

public void CurrentThreadExample()
{
    ScheduleTasks(Scheduler.CurrentThread);
    /*Output:
    outer start.
    outer end.
    --innerAction start.
    --innerAction end.
    ----leafAction.
    */
}

public void ImmediateExample()
{
    ScheduleTasks(Scheduler.Immediate);
    /*Output:
    outer start.
    --innerAction start.
    ----leafAction.
    --innerAction end.
    outer end.
    */
}

注意ImmediateScheduler实际上根本没有"调度"任何东西,所有工作都是立即(同步)执行的。Schedule一被调用,传入的委托就被执行。而CurrentThreadScheduler则先执行第一个委托;当嵌套的委托被调度时,把它们放入队列稍后执行。初始委托执行完毕后,再检查队列中剩余的委托(即对Schedule的嵌套调用)并依次执行。理解两者的区别很重要:用错了可能导致乱序执行、意外阻塞,甚至死锁。

Note how the ImmediateScheduler does not really "schedule" anything at all, all work is performed immediately (synchronously). As soon as Schedule is called with a delegate, that delegate is invoked. The CurrentThreadScheduler, however, invokes the first delegate, and, when nested delegates are scheduled, queues them to be invoked later. Once the initial delegate is complete, the queue is checked for any remaining delegates (i.e. nested calls to Schedule) and they are invoked. The difference here is quite important as you can potentially get out-of-order execution, unexpected blocking, or even deadlocks by using the wrong one.

DispatcherScheduler

DispatcherScheduler位于System.Reactive.Windows.Threading.dll中(用于WPF、Silverlight 4和Silverlight 5)。当动作使用DispatcherScheduler进行调度时,它们实际上被封送(marshal)到Dispatcher的BeginInvoke方法,即被添加到Dispatcher普通(Normal)优先级工作队列的末尾。对于嵌套的Schedule调用,这提供了与CurrentThreadScheduler类似的排队语义。

当动作被调度为将来执行时,会创建一个具有相应时间间隔的DispatcherTimer。定时器触发(tick)的回调会停止定时器,并把该工作重新调度到DispatcherScheduler上。如果DispatcherScheduler判断dueTime实际上并不在将来,则不会创建定时器,动作会按正常方式直接调度。

我想强调一下使用DispatcherScheduler的风险。您可以通过传递Dispatcher的引用来构造自己的DispatcherScheduler实例。另一种方法是使用静态属性DispatcherScheduler.Instance。如果使用不当,就会引入难以理解的问题。静态属性不返回对静态字段的引用,而是每次都以Dispatcher.CurrentDispatcher作为构造函数参数创建一个新实例。如果在非UI线程上访问Dispatcher.CurrentDispatcher,将会得到一个Dispatcher的新实例,但并不是我们期望的那个实例。

例如,假设有一个WPF应用程序使用了Observable.Create方法。在传递给Observable.Create的委托中,我们想把通知调度到Dispatcher上。我们觉得这是个好主意,因为这样序列的任何消费者都无需额外工作就能在Dispatcher上收到通知。

The DispatcherScheduler is found in System.Reactive.Windows.Threading.dll (for WPF, Silverlight 4 and Silverlight 5). When actions are scheduled using the DispatcherScheduler, they are effectively marshaled to the Dispatcher's BeginInvoke method. This will add the action to the end of the dispatcher's Normal priority queue of work. This provides similar queuing semantics to the CurrentThreadScheduler for nested calls to Schedule.

When an action is scheduled for future work, then a DispatcherTimer is created with a matching interval. The callback for the timer's tick will stop the timer and re-schedule the work onto the DispatcherScheduler. If the DispatcherScheduler determines that the dueTime is actually not in the future then no timer is created, and the action will just be scheduled normally.
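For instance, a hedged sketch of that future scheduling (assumed to run on the UI thread of a WPF application; statusTextBlock is a hypothetical control): the dispatcher stays responsive while the timer runs, and the action is marshaled back onto it when due.

//Assumed to be called from the UI thread of a WPF application.
DispatcherScheduler.Instance.Schedule(
    TimeSpan.FromSeconds(2),
    () => statusTextBlock.Text = "Updated on the dispatcher, two seconds later");
//The UI remains responsive in the meantime; no Thread.Sleep is involved.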

I would like to highlight a hazard of using the DispatcherScheduler. You can construct your own instance of a DispatcherScheduler by passing in a reference to a Dispatcher. The alternative way is to use the static property DispatcherScheduler.Instance. This can introduce hard to understand problems if it is not used properly. The static property does not return a reference to a static field, but creates a new instance each time, with the static property Dispatcher.CurrentDispatcher as the constructor argument. If you access Dispatcher.CurrentDispatcher from a thread that is not the UI thread, it will thus give you a new instance of a Dispatcher, but it will not be the instance you were hoping for.

For example, imagine that we have a WPF application with an Observable.Create method. In the delegate that we pass to Observable.Create, we want to schedule the notifications on the dispatcher. We think this is a good idea because any consumers of the sequence would get the notifications on the dispatcher for free.

var fileLines = Observable.Create<string>(
    o =>
    {
        var dScheduler = DispatcherScheduler.Instance;
        var lines = File.ReadAllLines(filePath);
        foreach (var line in lines)
        {
            var localLine = line;
            dScheduler.Schedule(
                () => o.OnNext(localLine));
        }
        return Disposable.Empty;
    });

从直观上看,这段代码似乎是正确的,但实际上会剥夺序列使用者的权力。当我们订阅序列时,我们发现在UI线程上读取文件是一个坏主意。因此,我们向链中添加一个SubscribeOn(Scheduler.NewThread),如下所示:

This code may intuitively seem correct, but actually takes away power from consumers of the sequence. When we subscribe to the sequence, we decide that reading a file on the UI thread is a bad idea. So we add in a SubscribeOn(Scheduler.NewThread) to the chain as below:

fileLines
    .SubscribeOn(Scheduler.ThreadPool)
    .Subscribe(line => Lines.Add(line));

这导致create的委托在新线程上执行。委托读取文件后,去获取一个DispatcherScheduler实例。DispatcherScheduler试图获取当前线程的Dispatcher,但我们已经不在UI线程上,当前线程并没有Dispatcher。于是它会创建一个新的Dispatcher,并用它来构造这个DispatcherScheduler实例。我们调度了一些工作(通知),但由于底层的Dispatcher从未被运行,所以什么也没有发生;我们甚至得不到异常。我在一个商业项目上见过这种情况,它让不少人挠头。

这带给我们一个关于调度的指导方针:使用SubscribeOn和ObserveOn时只应由最终订阅者调用。如果您在自己的扩展方法或服务方法中引入调度,您应该允许使用者指定自己的调度程序。在下一章中,我们将看到关于这个指导的更多原因。

This causes the create delegate to be executed on a new thread. The delegate will read the file then get an instance of a DispatcherScheduler. The DispatcherScheduler tries to get the Dispatcher for the current thread, but we are no longer on the UI thread, so there isn't one. As such, it creates a new dispatcher that is used for the DispatcherScheduler instance. We schedule some work (the notifications), but, as the underlying Dispatcher has not been run, nothing happens; we do not even get an exception. I have seen this on a commercial project and it left quite a few people scratching their heads.

This takes us to one of our guidelines regarding scheduling: the use of SubscribeOn and ObserveOn should only be invoked by the final subscriber. If you introduce scheduling in your own extension methods or service methods, you should allow the consumer to specify their own scheduler. We will see more reasons for this guidance in the next chapter.
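As a hedged sketch of that guideline (the method and its callers are illustrative, not from the original text), a service or extension method can accept the scheduler as a parameter and leave the final choice to its consumer:

//Let the consumer decide where the work is scheduled.
public IObservable<string> GetFileLines(string filePath, IScheduler scheduler)
{
    return Observable.Create<string>(o =>
    {
        return scheduler.Schedule(() =>
        {
            foreach (var line in File.ReadAllLines(filePath))
            {
                o.OnNext(line);
            }
            o.OnCompleted();
        });
    });
}

//The final subscriber picks the schedulers:
GetFileLines(filePath, Scheduler.ThreadPool)
    .ObserveOn(DispatcherScheduler.Instance)
    .Subscribe(line => Lines.Add(line));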

EventLoopScheduler

EventLoopScheduler允许您为调度器指定一个特定线程。与CurrentThreadScheduler类似,EventLoopScheduler为嵌套的计划动作提供了相同的消息队列机制。不同之处在于,为EventLoopScheduler提供了一个线程,将使用它进行调度(执行委托),而不是使用当前线程。

EventLoopScheduler可以使用空的构造函数创建,也可以传递线程工厂委托。

The EventLoopScheduler allows you to designate a specific thread to a scheduler. Like the CurrentThreadScheduler that acts like a trampoline for nested scheduled actions, the EventLoopScheduler provides the same trampoline mechanism. The difference is that you provide an EventLoopScheduler with the thread you want it to use for scheduling, instead of just picking up the current thread.

The EventLoopScheduler can be created with an empty constructor, or you can pass it a thread factory delegate.

// Creates an object that schedules units of work on a designated thread.
public EventLoopScheduler()
{...}

// Creates an object that schedules units of work on a designated thread created by the
// provided factory function.
public EventLoopScheduler(Func<ThreadStart, Thread> threadFactory)
{...}

允许传入工厂的重载让你能够在把线程交给EventLoopScheduler之前对其进行定制。例如,你可以设置线程名称、优先级、区域性(culture),以及最重要的:该线程是否为后台线程。请记住,如果没有把线程的IsBackground属性设置为true,那么在该线程终止之前,你的应用程序将无法退出。EventLoopScheduler实现了IDisposable,调用Dispose将允许该线程终止。与任何IDisposable实现一样,你应该显式地管理所创建资源的生命周期。

如果愿意的话,配合Observable.Using方法可以很好的工作。这将EventLoopScheduler的生命周期绑定到可观察序列的生命周期--例如,GetPrices方法接受IScheduler作为参数,并返回可观察序列。

The overload that allows you to pass a factory enables you to customize the thread before it is assigned to the EventLoopScheduler. For example, you can set the thread name, priority, culture and, most importantly, whether the thread is a background thread or not. Remember that if you do not set the thread's IsBackground property to true, then your application will not terminate until the thread is terminated. The EventLoopScheduler implements IDisposable, and calling Dispose will allow the thread to terminate. As with any implementation of IDisposable, it is appropriate that you explicitly manage the lifetime of the resources you create.
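A small sketch of such a factory (the thread name is illustrative, and this is my own example rather than the book's): it creates a named background thread for the scheduler and disposes the scheduler when done.

//Create an EventLoopScheduler whose dedicated thread is a named background thread.
var eventLoopScheduler = new EventLoopScheduler(start => new Thread(start)
{
    Name = "PriceFeedScheduler",   //illustrative name
    IsBackground = true            //so this thread will not keep the process alive
});
eventLoopScheduler.Schedule(
    () => Console.WriteLine("Running on {0}", Thread.CurrentThread.Name));
//...
eventLoopScheduler.Dispose();      //allow the dedicated thread to terminate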

This can work nicely with the Observable.Using method, if you are so inclined. This allows you to bind the lifetime of your EventLoopScheduler to that of an observable sequence - for example, this GetPrices method that takes an IScheduler for an argument and returns an observable sequence.

private IObservable<Price> GetPrices(IScheduler scheduler)
{...}

这里,我们将EventLoopScheduler的生命周期绑定到GetPrices方法返回结果的生命周期。

Here we bind the lifetime of the EventLoopScheduler to that of the result from the GetPrices method.

Observable.Using(() => new EventLoopScheduler(), els => GetPrices(els))
    .Subscribe(...);
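A hypothetical body for GetPrices (the one-second interval and the GetLatestPrice helper are invented for illustration) would simply use whatever scheduler it is given, so that the EventLoopScheduler created by Observable.Using above drives the sequence and is disposed along with the subscription:

private IObservable<decimal> GetPrices(IScheduler scheduler)
{
    return Observable.Interval(TimeSpan.FromSeconds(1), scheduler)
                     .Select(_ => GetLatestPrice());   // hypothetical synchronous price lookup
}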

新线程(New Thread)

如果您不希望管理线程或EventLoopScheduler的资源,那么可以使用NewThreadScheduler。您可以创建自己的NewThreadScheduler实例,或者通过静态属性Scheduler.NewThread来访问。与EventLoopScheduler一样,您可以使用无参数的构造函数,或者提供自己的线程工厂函数。如果您提供了自己的工厂,请注意恰当地设置IsBackground属性。

当您在NewThreadScheduler上调用Schedule时,实际上是在幕后创建一个EventLoopScheduler。这样,任何嵌套的调度都将发生在同一个线程上。后续(非嵌套)调用Schedule将创建一个新的EventLoopScheduler,并为新线程调用线程工厂函数。

在这个例子中,我们运行一段让人联想到之前比较ImmediateScheduler和CurrentThreadScheduler时所用的代码。不同之处在于,这里我们会记录执行动作的线程Id(ThreadId)。我们使用了允许将调度器实例传入嵌套委托的Schedule重载,这使我们能够正确地进行嵌套调用。

If you do not wish to manage the resources of a thread or an EventLoopScheduler, then you can use NewThreadScheduler. You can create your own instance of NewThreadScheduler or get access to the static instance via the property Scheduler.NewThread. Like EventLoopScheduler, you can use the parameterless constructor or provide your own thread factory function. If you do provide your own factory, be careful to set the IsBackground property appropriately.

When you call Schedule on the NewThreadScheduler, you are actually creating an EventLoopScheduler under the covers. This way, any nested scheduling will happen on the same thread. Subsequent (non-nested) calls to Schedule will create a new EventLoopScheduler and call the thread factory function for a new thread too.

In this example we run a piece of code reminiscent of our comparison between Immediate and Current schedulers. The difference here, however, is that we track the ThreadId that the action is performed on. We use the Schedule overload that allows us to pass the Scheduler instance into our nested delegates. This allows us to correctly nest calls.

private static IDisposable OuterAction(IScheduler scheduler, string state)
{
    Console.WriteLine("{0} start. ThreadId:{1}", state, Thread.CurrentThread.ManagedThreadId);
    scheduler.Schedule(state + ".inner", InnerAction);
    Console.WriteLine("{0} end. ThreadId:{1}", state, Thread.CurrentThread.ManagedThreadId);
    return Disposable.Empty;
}

private static IDisposable InnerAction(IScheduler scheduler, string state)
{
    Console.WriteLine("{0} start. ThreadId:{1}", state, Thread.CurrentThread.ManagedThreadId);
    scheduler.Schedule(state + ".Leaf", LeafAction);
    Console.WriteLine("{0} end. ThreadId:{1}", state, Thread.CurrentThread.ManagedThreadId);
    return Disposable.Empty;
}

private static IDisposable LeafAction(IScheduler scheduler, string state)
{
    Console.WriteLine("{0}. ThreadId:{1}", state, Thread.CurrentThread.ManagedThreadId);
    return Disposable.Empty;
}

而后使用NewThreadScheduler执行代码:

When executed with the NewThreadScheduler like this:

Console.WriteLine("Starting on thread :{0}", Thread.CurrentThread.ManagedThreadId);

Scheduler.NewThread.Schedule("A", OuterAction);

Output:

Starting on thread :9
A start. ThreadId:10
A end. ThreadId:10
A.inner start. ThreadId:10
A.inner end. ThreadId:10
A.inner.Leaf. ThreadId:10

如您所见,结果与CurrentThreadScheduler非常相似,只是消息队列发生在一个单独的线程上。实际上,如果我们使用EventLoopScheduler,就会得到这样的输出。当我们引入第二个(非嵌套的)调度任务时,EventLoopScheduler和NewThreadScheduler的用法之间的区别开始出现。

As you can see, the results are very similar to the CurrentThreadScheduler, except that the trampoline happens on a separate thread. This is in fact exactly the output we would get if we used an EventLoopScheduler. The differences between usages of the EventLoopScheduler and the NewThreadScheduler start to appear when we introduce a second (non-nested) scheduled task.

Console.WriteLine("Starting on thread :{0}",

Thread.CurrentThread.ManagedThreadId);

Scheduler.NewThread.Schedule("A", OuterAction);

Scheduler.NewThread.Schedule("B", OuterAction);

Output:

Starting on thread :9
A start. ThreadId:10
A end. ThreadId:10
A.inner start. ThreadId:10
A.inner end. ThreadId:10
A.inner.Leaf. ThreadId:10
B start. ThreadId:11
B end. ThreadId:11
B.inner start. ThreadId:11
B.inner end. ThreadId:11
B.inner.Leaf. ThreadId:11

注意,现在这里有三个线程在起作用。线程9是我们启动的线程,而线程10和11正在为我们的两个调用执行调度工作。

Note that there are now three threads at play here. Thread 9 is the thread we started on and threads 10 and 11 are performing the work for our two calls to Schedule.

线程池(Thread Pool)

ThreadPoolScheduler只会将请求传送到ThreadPool。对于尽快安排的请求,操作只发送到ThreadPool.QueueUserWorkItem。对于延时执行的请求,使用System.Threading.Timer。

由于所有的操作都被发送到ThreadPool,因此操作有可能不按顺序执行。与我们前面看到的调度器不同,嵌套调用不能保证被串行处理。我们可以通过运行与上面相同的测试(但使用ThreadPoolScheduler)来看到这一点。

The ThreadPoolScheduler will simply tunnel requests to the ThreadPool. For requests that are scheduled as soon as possible, the action is just sent to ThreadPool.QueueUserWorkItem. For requests that are scheduled in the future, a System.Threading.Timer is used.

As all actions are sent to the ThreadPool, actions can potentially run out of order. Unlike the previous schedulers we have looked at, nested calls are not guaranteed to be processed serially. We can see this by running the same test as above but with the ThreadPoolScheduler.

Console.WriteLine("Starting on thread :{0}",

Thread.CurrentThread.ManagedThreadId);

Scheduler.ThreadPool.Schedule("A", OuterAction);

Scheduler.ThreadPool.Schedule("B", OuterAction);

Output:

Starting on thread :9
A start. ThreadId:10
A end. ThreadId:10
A.inner start. ThreadId:10
A.inner end. ThreadId:10
A.inner.Leaf. ThreadId:10
B start. ThreadId:11
B end. ThreadId:11
B.inner start. ThreadId:10
B.inner end. ThreadId:10
B.inner.Leaf. ThreadId:11

注意,与NewThreadScheduler的测试一样,我们最初在一个线程上启动,但所有的调度工作都发生在另外两个线程上。不同之处在于,我们可以看到第二个调用"B"的一部分在线程11上运行,而另一部分在线程10上运行。

Note that, as per the NewThreadScheduler test, we initially start on one thread but all the scheduling happens on two other threads. The difference is that we can see that part of the second run "B" runs on thread 11, while another part of it runs on thread 10.
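To make the two dispatch paths above concrete, here is a small sketch (the messages are invented): a request without a due time goes straight to ThreadPool.QueueUserWorkItem, while a request with a due time is serviced by a System.Threading.Timer before being handed to the pool.

// Runs as soon as a pool thread is available.
Scheduler.ThreadPool.Schedule(
    () => Console.WriteLine("Now. ThreadId:{0}", Thread.CurrentThread.ManagedThreadId));

// Runs after roughly one second, via an internal timer.
Scheduler.ThreadPool.Schedule(
    TimeSpan.FromSeconds(1),
    () => Console.WriteLine("Later. ThreadId:{0}", Thread.CurrentThread.ManagedThreadId));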

TaskPool

TaskPoolScheduler与ThreadPoolScheduler非常相似;如果可用(取决于您的目标框架),您应该优先选择前者。与ThreadPoolScheduler一样,嵌套的调度操作不能保证在同一个线程上运行。使用TaskPoolScheduler运行相同的测试会得到类似的结果。

The TaskPoolScheduler is very similar to the ThreadPoolScheduler and, when available (depending on your target framework), you should favor it over the latter. Like the ThreadPoolScheduler, nested scheduled actions are not guaranteed to be run on the same thread. Running the same test with the TaskPoolScheduler shows us similar results.

Console.WriteLine("Starting on thread :{0}",

Thread.CurrentThread.ManagedThreadId);

Scheduler.TaskPool.Schedule("A", OuterAction);

Scheduler.TaskPool.Schedule("B", OuterAction);

Output:

Starting on thread :9
A start. ThreadId:10
A end. ThreadId:10
B start. ThreadId:11
B end. ThreadId:11
A.inner start. ThreadId:10
A.inner end. ThreadId:10
A.inner.Leaf. ThreadId:10
B.inner start. ThreadId:11
B.inner end. ThreadId:11
B.inner.Leaf. ThreadId:10

TestScheduler

值得注意的是,还有一个TestScheduler,它的基类是VirtualTimeScheduler和VirtualTimeSchedulerBase。后两者实际上不在Rx介绍的范围内,但前者是。我们将在下一章“Testing Rx”中介绍所有的测试相关内容,包括TestScheduler。

It is worth noting that there is also a TestScheduler accompanied by its base classes VirtualTimeScheduler and VirtualTimeSchedulerBase. The latter two are not really in the scope of an introduction to Rx, but the former is. We will cover all things testing including the TestScheduler in the next chapter, Testing Rx.

选择适当的调度器(Selecting an appropriate scheduler)

有了所有这些选项,就很难知道使用哪个调度器以及何时使用。这里有一个简单的检查列表来帮助你完成这项艰巨的任务:

With all of these options to choose from, it can be hard to know which scheduler to use and when. Here is a simple check list to help you in this daunting task:

UI应用程序(UI Applications)

  • 最终的订阅者通常位于表现层,应由它来控制调度
  • 在DispatcherScheduler上观察(ObserveOn),以便于更新ViewModel
  • 在后台线程上订阅(SubscribeOn),以防止UI失去响应
    • 如果订阅阻塞不会超过50ms
      • 优先使用TaskPoolScheduler(如果可用),或者
      • 使用ThreadPoolScheduler
    • 如果订阅的任何部分可能阻塞超过50ms,则应该使用NewThreadScheduler

 

  • The final subscriber is normally the presentation layer and should control the scheduling.
  • Observe on the DispatcherScheduler to allow updating of ViewModels
  • Subscribe on a background thread to prevent the UI from becoming unresponsive
    • If the subscription will not block for more than 50ms then
      • Use the TaskPoolScheduler if available, or
      • Use the ThreadPoolScheduler
    • If any part of the subscription could block for longer than 50ms, then you should use the NewThreadScheduler.
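Taken together, the UI checklist above tends to look like the following sketch (the price service, the Prices collection and the subscription field are all hypothetical): subscribe on a pool scheduler so the query work leaves the UI thread, then observe on the dispatcher so only the marshalled notifications touch the ViewModel.

_subscription = _priceService.GetPrices()     // hypothetical service call
    .SubscribeOn(Scheduler.TaskPool)          // do the subscription work on a background thread
    .ObserveOnDispatcher()                    // deliver notifications on the UI thread
    .Subscribe(price => Prices.Add(price));   // Prices is an assumed ObservableCollection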

服务层(Service layer)

  • 如果服务正在从某种队列读取数据,请考虑使用专用的EventLoopScheduler。这样,可以保持事件的顺序
  • 如果处理单个数据项的开销很大(>50ms或需要I/O),那么考虑使用NewThreadScheduler
  • 如果需要一个调度器用于定时,如Observable.Interval或Observable.Timer,优先选择TaskPool。如果TaskPool不可用则使用ThreadPool
  • If your service is reading data from a queue of some sort, consider using a dedicated EventLoopScheduler. This way, you can preserve order of events
  • If processing an item is expensive (>50ms or requires I/O), then consider using a NewThreadScheduler
  • If you just need the scheduler for a timer, e.g. for Observable.Interval or Observable.Timer, then favor the TaskPool. Use the ThreadPool if the TaskPool is not available for your platform.
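As a brief sketch of the last two points (the five-second interval and the CheckForNewMessages helper are invented): timers can simply be given the TaskPool, while a dedicated EventLoopScheduler is reserved for the queue-reading case where ordering must be preserved.

// Timer work: no dedicated thread is needed.
var polling = Observable.Interval(TimeSpan.FromSeconds(5), Scheduler.TaskPool)
    .Subscribe(_ => CheckForNewMessages());   // hypothetical helper

// Queue reader: one dedicated thread keeps scheduled work in order.
var queueScheduler = new EventLoopScheduler();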

ThreadPool(以及间接地,TaskPool)在增加其使用的线程数量之前会有一个时间延迟,这个延迟是500ms。考虑在一台双核的电脑上安排四个动作。默认情况下,线程池的大小是CPU核心的数量(2)。如果每个动作需要1000ms,那么有两个动作要在队列中等待500ms,线程池才会扩大。于是,工作并不是全部并行运行、总共只花费1秒,而是因为有两个动作在队列中等了500毫秒,直到1.5秒后才全部完成。出于这个原因,您应该只将执行速度非常快的工作(准则是50ms以内)调度到ThreadPool或TaskPool上。相反,创建一个新线程虽然并不是免费的,但凭借当今处理器的性能,为耗时超过50ms的工作创建一个线程的成本是很小的。

并发性是很难的。我们可以选择利用Rx及其调度特性来让工作更轻松,而只在适当的地方使用Rx则能让情况进一步改善。虽然Rx具有并发特性,但不应将它误认为是一个并发框架。Rx是为查询数据而设计的;正如第一章所讨论的,并行计算或异步方法的组合更适合交给其他框架来处理。

Rx通过ObserveOn/SubscribeOn方法解决了并发地生成数据和消费数据的问题。通过恰当地使用这些功能,我们可以简化代码、提高响应能力,并缩小需要关注并发问题的范围。调度器提供了一个丰富的平台,让我们无需直接接触线程原语就能并发地处理工作。它们还有助于解决并发中常见的麻烦,比如取消、状态传递和递归。通过缩小并发的涉及面,Rx提供了一组(相对)简单而强大的并发特性,为走向成功铺平了道路。

The ThreadPool (and the TaskPool by proxy) have a time delay before they will increase the number of threads that they use. This delay is 500ms. Let us consider a PC with two cores that we will schedule four actions onto. By default, the thread pool size will be the number of cores (2). If each action takes 1000ms, then two actions will be sitting in the queue for 500ms before the thread pool size is increased. Instead of running all four actions in parallel, which would take one second in total, the work is not completed for 1.5 seconds as two of the actions sat in the queue for 500ms. For this reason, you should only schedule work that is very fast to execute (guideline 50ms) onto the ThreadPool or TaskPool. Conversely, creating a new thread is not free, but with the power of processors today the creation of a thread for work over 50ms is a small cost.
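A rough timeline under those stated assumptions (two cores, four 1000ms actions, 500ms pool-growth delay) looks like this:

t=0ms      A and B start on the two pool threads; C and D are queued
t=500ms    the pool grows; C and D start
t=1000ms   A and B complete
t=1500ms   C and D complete (1.5s in total, instead of the ideal 1s)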

Concurrency is hard. We can choose to make our life easier by taking advantage of Rx and its scheduling features. We can improve it even further by only using Rx where appropriate. While Rx has concurrency features, these should not be mistaken for a concurrency framework. Rx is designed for querying data, and as discussed in the first chapter, parallel computations or composition of asynchronous methods is more appropriate for other frameworks.

Rx solves the issues for concurrently generating and consuming data via the ObserveOn/SubscribeOn methods. By using these appropriately, we can simplify our code base, increase responsiveness and reduce the surface area of our concurrency concerns. Schedulers provide a rich platform for processing work concurrently without the need to be exposed directly to threading primitives. They also help with common troublesome areas of concurrency such as cancellation, passing state and recursion. By reducing the concurrency surface area, Rx provides a (relatively) simple yet powerful set of concurrency features paving the way to the pit of success.

 
