Automated Testing at Facebook

 

There was a discussion on Quora recently that asked, in essence: "What kind of automated testing does Facebook do? How do they make sure they aren't breaking things in their weekly pushes?" See the link at the end of this post.

Steven Grimm from Facebook gave a very good answer to this question. I thought it was worth sharing, so I have translated it below in the first person.

  • For our PHP code, we have written a large number of test classes based on the PHPUnit framework. They cover a wide range, from simple true/false unit tests to large-scale integration tests against our backend services. Developers run these PHPUnit tests as part of their everyday workflow, and the same tests are also run continuously on dedicated machines (note: i.e. continuous integration). When a developer makes a sizable change, the automated tooling on the development machine runs these tests and generates code coverage data at the same time; for a diff submitted to the repository, a test report with coverage information is produced automatically during code review. (A minimal sketch of this kind of test, including a privacy check, appears right after this list.)
  • For our front-end code, we use Watir (note: Watir is a browser UI automation framework) for browser-based UI testing. These tests cover the site's functionality and focus especially on privacy: there are a large number of browser-level tests of the form "user X posts item Y, and Y should / should not be visible to user Z". (These privacy rules are of course also tested with lower-level methods, but their implementation must be rock solid and is a very high priority, so it warrants redundant test coverage.)
  • In addition to the fully automated Watir tests, we also have some semi-automated tests. These use Watir as well, so that filling out forms and clicking buttons to walk through a UI flow is less tedious, while we can still clearly examine each step and verify that the flow looks correct and reasonable.
  • We are also starting to use JSSpec (note: a JavaScript unit-testing framework) to unit-test our JavaScript code, but this is still in its very early stages.
  • For backend services, we use a variety of test frameworks and approaches depending on the nature of each service. For projects that are released as open source, we use open-source frameworks such as Boost and JUnit (note: Boost is for C++, JUnit is for Java); for projects that will never be released externally, we use an internally developed C++ test framework that integrates tightly with our build system. A few projects use project-specific test harnesses. Most backend-service tests are tied into a continuous integration / build system that constantly runs the test suites against the source code, stores the results in a database, and feeds them into the notification system.
  • HipHop (note: HipHop for PHP, Facebook's PHP project) has a similar continuous integration system: it runs HipHop's own unit tests as well as all of the PHPUnit tests. The results are compared against those of running the same code under the plain PHP interpreter, so that any behavioral differences between the two PHP runtimes can be detected. (A rough sketch of this comparison idea also follows below.)
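
To make the first two bullets more concrete, here is a minimal, invented sketch of such a PHPUnit test class: a simple true/false unit test plus a lower-level check of a "user X posts Y, should/shouldn't be visible to user Z" privacy rule. The names FeedStory, Privacy and createTestUser are hypothetical, not real Facebook APIs.

```php
<?php
// Illustrative only: FeedStory, Privacy and createTestUser are invented names.
class FeedStoryVisibilityTest extends PHPUnit_Framework_TestCase {

  // A simple true/false style unit test.
  public function testNewStoryStartsUnpublished() {
    $story = new FeedStory('hello world');
    $this->assertFalse($story->isPublished());
  }

  // A lower-level version of the browser-level privacy tests:
  // "user X posts item Y and it should/shouldn't be visible to user Z".
  public function testFriendsOnlyStoryIsHiddenFromNonFriends() {
    $alice   = createTestUser('alice');    // hypothetical fixture helper
    $bob     = createTestUser('bob');      // friend of alice
    $mallory = createTestUser('mallory');  // not a friend

    $alice->addFriend($bob);
    $story = $alice->post('vacation photos', Privacy::FRIENDS_ONLY);

    $this->assertTrue($story->isVisibleTo($bob));
    $this->assertFalse($story->isVisibleTo($mallory));
  }
}
```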

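The HipHop comparison described in the last bullet is essentially differential testing: run the same PHP code under two runtimes and compare the results. The sketch below only illustrates that idea; the binary path, file names and output handling are placeholders, not Facebook's actual harness.

```php
<?php
// Sketch of the runtime-comparison idea; paths and file names are placeholders.
$testFile   = 'SomePHPUnitSuite.php';
$phpOutput  = shell_exec("php $testFile 2>&1");                  // plain PHP interpreter
$hphpOutput = shell_exec("/path/to/hiphop-php $testFile 2>&1");  // HipHop runtime (placeholder path)

if ($phpOutput !== $hphpOutput) {
    // A behavioral difference between the two runtimes: show it as a unified diff.
    file_put_contents('/tmp/php.out',  $phpOutput);
    file_put_contents('/tmp/hphp.out', $hphpOutput);
    echo "Behavior differs between runtimes:\n";
    echo shell_exec('diff -u /tmp/php.out /tmp/hphp.out');
} else {
    echo "Outputs match.\n";
}
```
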
Facebook's test tooling stores test results in a database and at the same time sends out notification emails. The emails include the failure details, and developers can tune the notification sensitivity themselves (for example, you can choose to be notified only after a test has been failing continuously for some period of time, or to be notified the instant a single test case fails). In the browser UI, test results are integrated with the bug/task tracking system, making it easy to associate test failures with development tasks. (A hypothetical example of such a notification policy follows.)
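
As a purely hypothetical illustration of that "developer-tunable sensitivity", a per-test notification policy might look something like the configuration below; every key and value here is invented for this post.

```php
<?php
// Hypothetical notification policy; the keys are invented for illustration.
$notificationPolicy = array(
    'owner'                      => 'developer@example.com',
    'test_suite'                 => 'FeedStoryVisibilityTest',
    'notify_on_first_failure'    => false, // ignore a single, possibly flaky failure...
    'continuous_failure_minutes' => 60,    // ...notify only after an hour of continuous failure
);
```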

A very important property a test can have is being "push-blocking": a failing test can be grounds for holding up a release. (At Facebook, a release engineer evaluates whether the code in question can still be pushed to production, and that engineer is fully empowered to stop the release if necessary.) Blocking a release is treated as a very serious matter, because Facebook takes great pride in its fast release turnaround.

My team, Test Engineering, is responsible for building the common infrastructure used by everything described above, and also for maintaining the test frameworks such as PHPUnit and Watir. Facebook has no dedicated QA team: every engineer is expected to write automated tests for their own code and to keep those tests maintained so that they continue to run correctly as the product code changes.

Facebook's testing is still very much a work in progress; what I have described above is simply how we do things at the moment.


公直   2012/2/27

 

Appendix: the original text,

What kind of automated testing does Facebook do? How do they make sure they aren’t breaking things in their weekly pushes?

 

Steven Grimm, 2005-2012

 

We do several kinds of testing. Some specifics:

  • For our PHP code, we have a suite of a few thousand test classes using the PHPUnit framework. They range in complexity from simple true unit tests to large-scale integration tests that hit our production backend services. The PHPUnit tests are run both by developers as part of their workflow and continuously by an automated test runner on dedicated hardware. Our developer tools automatically use code coverage data to run tests that cover the outstanding edits in a developer sandbox, and a report of test results is automatically included in our code review tool when a patch is submitted for review.
  • For browser-based testing of our Web code, we use the Watir framework. We have Watir tests covering a range of the site’s functionality, particularly focused on privacy—there are tons of “user X posts item Y and it should/shouldn’t be visible to user Z” tests at the browser level. (Those privacy rules are, of course, also tested at a lower level, but the privacy implementation being rock-solid is a critical priority and warrants redundant test coverage.)
  • In addition to the fully automated Watir tests, we have semi-automated tests that use Watir so humans can avoid the drudgery of filling out form fields and pressing buttons to get through UI flows, but can still examine what’s going on and validate that things look reasonable.
  • We’re starting to use JSSpec for unit-testing JavaScript code, though that’s still in its early stages at this point.
  • For backend services, we use a variety of test frameworks depending on the specifics of the services. Projects that we release as open source use open-source frameworks like Boost’s test classes or JUnit. Projects that will never be released to the outside world can use those, or can use an internally-developed C++ test framework that integrates tightly with our build system. A few projects use project-specific test harnesses. Most of the backend services are tied into a continuous integration / build system that constantly runs the test suites against the latest source code and reports the results into the results database and the notification system.
  • HipHop has a similar continuous-integration system with the added twist that it not only runs its own unit tests, but also runs all the PHPUnit tests. These results are compared with the results from the same PHP code base run under the plain PHP interpreter to detect any differences in behavior.


Our test infrastructure records results in a database and sends out email notifications on failure with developer-tunable sensitivity (e.g., you can choose to not get a notification unless a test fails continuously for some amount of time, or to be notified the instant a single failure happens.) The user interface for our test result browser is integrated with our bug/task tracking system, making it really easy to associate test failures with open tasks.

 

A significant fraction of tests are “push-blocking”—that is, a test failure is potential grounds for holding up a release (this is at the discretion of the release engineer who is pushing the code in question out to production, but that person is fully empowered to stop the presses if need be.) Blocking a push is taken very seriously since we pride ourselves on our fast release turnaround time.

 

My team, Test Engineering, is responsible for building the common infrastructure used by all the above stuff, as well as for maintaining PHPUnit and Watir. Facebook has no dedicated QA team; all Facebook engineers are responsible for writing automated tests for their code and keeping the tests maintained as the underlying code changes.

 

Facebook’s test setup is still very much a work in progress, but the above is at least a taste of what we do in that area.

 

From http://www.quora.com/What-kind-of-automated-testing-does-Facebook-do
