
Automating tests vs. test-automation

Wednesday, October 24, 2007 9:18 AM

Posted by Markus Clermont, Test Engineering Manager, Zurich
In the last couple of years the practice of testing has undergone more than superficial changes. We have turned our art into engineering, introduced process-models, come up with best-practices, and developed tools to support our daily work and make each test engineer more productive. Some tools target test execution. They aim to automate the repetitive steps that a tester would take to exercise functions through the user interface of a system in order to verify its functionality. I am sure you have all seen tools like Selenium, WebDriver, Eggplant or other proprietary solutions, and that you learned to love them.
On the downside, we observe problems when we employ these tools:

  • Scripting your manual tests this way takes far longer than just executing them manually.
  • The UI is one of the least stable interfaces of any system, so we cannot start automating until quite late in the development phase.
  • Maintenance of the tests takes a significant amount of time.
  • Execution is slow, and sometimes cumbersome.
  • Tests become flaky.
  • Tests break for the wrong reasons.
Of course, we can argue that none of these problems is particularly bad, and the advantages of automation still outweigh the cost. This might well be true. We learned to accept some of these problems as 'the price of automation', whereas others are met by some common-sense workarounds:
  • It takes a long time to automate a test—Well, let's automate only tests that are important and will be executed again and again in regression testing.
  • Execution might be slow, but it is still faster than manual testing.
  • Tests cannot break for the wrong reason—When they break, we have found a bug.
In the rest of this post I'd like to summarize some experiences I had when I tried to overcome these problems, not by working around them, but by eliminating their causes.
Most of these problems are rooted in the fact that we are just automating manual tests. By doing so, we fail to ask whether the added computational power, access to different interfaces, and faster execution speed should change the way we test systems.
Considering the fact that a system exposes different interfaces to the environment—e.g., the user-interface, an interface between front-end and back-end, an interface to a data-store, and interfaces to other systems—it is obvious that we need to look at each and every interface and test it. More than that, we should not only take each interface into account but also avoid testing the same functionality in too many different places.
Let me introduce the example of a store-administration system which allows you to add items to the store, see the current inventory, and remove items. One straightforward manual test case for adding an item would be to go to the 'Add' dialogue, enter a new item with quantity 1, and then go to the 'Display' dialogue to check that it is there. To automate this test case you would script exactly these steps through the user interface.
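To make this concrete, here is a minimal sketch of what such a UI-driven test might look like with WebDriver; the URL and element IDs are hypothetical, and the real dialogues would of course differ.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class AddItemUiTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            // Step 1: go to the 'Add' dialogue and enter a new item with quantity 1.
            driver.get("http://store.example.com/add");            // hypothetical URL
            driver.findElement(By.id("itemName")).sendKeys("Widget");
            driver.findElement(By.id("quantity")).sendKeys("1");
            driver.findElement(By.id("submit")).click();

            // Step 2: go to the 'Display' dialogue and check that the item is listed.
            driver.get("http://store.example.com/display");
            String inventory = driver.findElement(By.id("inventory")).getText();
            if (!inventory.contains("Widget")) {
                throw new AssertionError("newly added item not shown in inventory");
            }
        } finally {
            driver.quit();
        }
    }
}
```

Every step, including the verification, goes through the browser, which is exactly where the slowness and fragility listed above come from.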
Probably most of the problems I listed above will apply. One way to avoid them in the first place is to figure out what this system looks like inside:
  • Is there a database? If so, the verification should probably not be performed against the UI but against the database.
  • Do we need to interface with a supplier? If so, how should this interaction look?
  • Is the same functionality available via an API? If so, it should be tested through the API, and the UI should only be checked for interacting correctly with the API (see the sketch after this list).
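As a rough sketch of this grey-box alternative, the same 'add item' function can be exercised through a hypothetical API client and verified directly against the data store rather than through the 'Display' dialogue (table and column names are invented for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class AddItemApiTest {
    /** Hypothetical client for the store-administration API; a real one would
     *  wrap whatever remote interface the system actually exposes. */
    interface StoreAdminClient {
        void addItem(String name, int quantity) throws Exception;
    }

    static void addItemIsPersisted(StoreAdminClient client, Connection db) throws Exception {
        // Exercise the function through the API instead of the 'Add' dialogue.
        client.addItem("Widget", 1);

        // Verify directly against the data store: did the row really land in
        // the expected table with the expected quantity?
        try (PreparedStatement stmt = db.prepareStatement(
                "SELECT quantity FROM inventory WHERE item_name = ?")) {
            stmt.setString(1, "Widget");
            try (ResultSet rs = stmt.executeQuery()) {
                if (!rs.next() || rs.getInt(1) != 1) {
                    throw new AssertionError("item not persisted as expected");
                }
            }
        }
    }
}
```

The UI then only needs a much smaller set of tests that check it drives this same API correctly.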
This will probably yield a higher number of tests, some of them being much 'smaller' in their resource requirements and executing far faster than the full end-to-end tests. Applying these simple questions will allow us to:
  • write many more tests through the API, e.g., to cover many boundary conditions (see the sketch after this list),
  • execute multiple threads of tests on the same machine, giving us a chance to spot race-conditions,
  • start earlier with testing the system, as we can test each interface when it becomes 'quasi-stable',
  • make maintenance of tests and debugging easier, as the tests break closer to the source of the problem,
  • require fewer machine resources, and still execute in reasonable time.
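For example, once the API is the primary target, boundary conditions become cheap to enumerate, and the same small tests can be run from several threads to provoke race conditions. A sketch, reusing the hypothetical client from above:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BoundaryAndRaceSketch {
    /** Same hypothetical API client as in the previous sketch. */
    interface StoreAdminClient {
        void addItem(String name, int quantity) throws Exception;
        int quantityOf(String name) throws Exception;
    }

    // Boundary conditions are cheap to cover through the API: no UI navigation needed.
    static void boundaryCases(StoreAdminClient client) throws Exception {
        for (int quantity : new int[] {0, 1, 999999, -1}) {
            try {
                client.addItem("Widget", quantity);
            } catch (IllegalArgumentException expected) {
                // e.g. the system may be expected to reject negative quantities
            }
        }
    }

    // Running the same small test from several threads can surface race conditions
    // (lost updates, double counting) that a single UI walk-through never would.
    static void concurrentAdds(StoreAdminClient client) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 100; i++) {
            pool.submit(() -> { client.addItem("Widget", 1); return null; });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        if (client.quantityOf("Widget") != 100) {
            throw new AssertionError("lost update under concurrent adds");
        }
    }
}
```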
I am not advocating the total absence of UI tests here. The user interface is just another interface, and so it deserves attention too. However, I do think that we currently focus most of our testing efforts on the UI. The common attitude that the UI deserves the most attention because it is what the user sees is flawed. Even a perfect UI will not satisfy a user if the underlying functionality is corrupt.
Neither should we abandon our end-to-end tests. They are valuable, and no system can be considered tested without them. Again, the question we need to ask ourselves is what the right ratio is between full end-to-end tests and smaller integration tests.
Unfortunately, there is no free lunch. In order to change the style of test-automation we will also need to change our approach to testing. Successful test-automation needs to:
  • start early in the development cycle,
  • take the internal structure of the system into account,
  • have a feedback loop to developers to influence the system-design.
Some of these points require quite a change in the way we approach testing. They are only achievable if we work as a single team with our developers. It is crucial that there is an absolutely free flow of information between the different roles in this team.
In previous projects we were able to achieve this by
  • removing any spatial separation between the test engineers and the development engineers. Sitting at the next desk is probably the best way to promote information exchange,
  • using the same tools and methods as the developers,
  • getting involved in daily stand-ups and design-discussions.
This helps not only in getting involved really early (there are projects where test development starts at the same time as development), but it is also a great way to give continuous feedback. Some of the items in the list call for very development-oriented test engineers, as it is easier for them to be recognized as peers by the development teams.
To summarize, I figured out that a successful automation project needs:
  • to take the internal details and exposed interfaces of the system under test into account,
  • to have many fast tests for each interface (including the UI),
  • to verify the functionality at the lowest possible level,
  • to have a set of end-to-end tests,
  • to start at the same time as development,
  • to overcome traditional boundaries between development and testing (spatial, organizational and process boundaries), and
  • to use the same tools as the development team.



15 comments:
Renat Zubairov said...

Very nice post!
I've also blogged on my experience in this area
http://woftime.blogspot.com/2007/10/automated-acceptance-tests.html

October 25, 2007 3:46:00 AM PDT
Return-Path said...

Interesting reading.
I don't really agree when you suggest exposing internal APIs to UAT (user acceptance tests) or coupling tests to the database.
Schemas change almost as much as UIs do. If we allow UAT tests to read arbitrarily from the DB we will effectively be breaking any encapsulation we have put into our persistence components and other framework code.
There is also the fact that end-to-end UAT scripts are theoretically comprehensible by UI designers and customers. Coupling to the database would stop this.
You say not to test boundary conditions/edge cases through end-to-end UAT. I agree, you should use it to look for regressions of normal cases. In effect you are arguing for unit testing of APIs, which everyone should already be doing.
I think looking for races by multithreading tests is a good idea, but it applies to unit tests more readily than UAT.
Perhaps I've got the wrong end of the stick; could we have a more concrete example?

October 25, 2007 4:33:00 AM PDT
Patrick Copeland said...

In Reply to "Return-Path" Markus says...
Thanks for your valuable comments. I think you have a few valid points there, but I do not agree with everything you said, either.
1) A schema change is a far bigger deal to everyone involved in system development than a change to the UI: there is usually a whole lot of code (that the dev teams own) that depends on the database schema. Additionally, those components that deal with the data store are usually done earlier than the ones that manage the UI. Needless to say, you can also encapsulate DB dependencies in a layer of your testing framework (which is something we also need to do for UI automation).
I don't think that our tests should be entirely 'black box'; a 'grey box' approach allows more valuable insights into the system under test. That might sometimes entail breaking encapsulation; however, encapsulation was not introduced to separate test code from the system under test, but to allow modules to keep 'secrets' from the modules that depend on them. This doesn't need to apply to tests. BTW, you can make the same point for the UI. Actions that deal with the UI should be encapsulated (as MVC patterns teach us), and we still need to deal with the UI from the outside... the difference is only the frequency of change.
As you write, the end-to-end scripts are 'theoretically' comprehensible. In practice, what is comprehensible (if at all) is the DSL that is used for scripting. It is the responsibility of the designer of the DSL how the semantics of the DSL commands are implemented, i.e. whether 'check balance' means going to a function in the UI or doing a look-up in the database. That doesn't make a difference to the user of the system.
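A minimal sketch of that idea, with hypothetical names: the DSL command stays the same for the script author, while its implementation decides whether to verify through the UI or through a database look-up.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// The command a script author sees; how it verifies is the DSL designer's choice.
interface BalanceCheck {
    void checkBalance(String accountId, long expectedCents);
}

// Assumed UI driver abstraction; in practice this would wrap WebDriver calls.
interface AccountScreen {
    void open(String accountId);
    long displayedBalanceCents();
}

// Implementation 1: verify through the UI.
class UiBalanceCheck implements BalanceCheck {
    private final AccountScreen screen;
    UiBalanceCheck(AccountScreen screen) { this.screen = screen; }
    public void checkBalance(String accountId, long expectedCents) {
        screen.open(accountId);
        if (screen.displayedBalanceCents() != expectedCents) {
            throw new AssertionError("balance shown in UI does not match");
        }
    }
}

// Implementation 2: verify with a direct look-up in the database.
class DbBalanceCheck implements BalanceCheck {
    private final Connection db;
    DbBalanceCheck(Connection db) { this.db = db; }
    public void checkBalance(String accountId, long expectedCents) {
        try (PreparedStatement stmt = db.prepareStatement(
                "SELECT balance_cents FROM accounts WHERE id = ?")) {
            stmt.setString(1, accountId);
            try (ResultSet rs = stmt.executeQuery()) {
                if (!rs.next() || rs.getLong(1) != expectedCents) {
                    throw new AssertionError("balance in DB does not match");
                }
            }
        } catch (SQLException e) {
            throw new AssertionError("balance look-up failed", e);
        }
    }
}
```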
For the storage system there is also an additional difference (sorry for the bad example; right now I don't have a better one that is fit for publishing). If you go through the UI cycle, how will you ever be sure that the new item has really been written back to the DB? Maybe it is just stored internally in a cache (I have seen that before). Maybe it was written to the DB, but not to the expected table? You might say this is OK, as long as the system reads from the correct table. But what if the same DB is used by different systems? The developer might not have been aware of it, and hence never written a unit test.
In the latter case a change to the DB will break the test. True. But it will also break other systems that depend on the DB... so if encapsulation was not fully adhered to by the dev team (and there is evidence that the older your product, the more likely this is), our tests add important warning signs.
I don't think that you usually 'unit-test' APIs. A unit test is just that: executing an encapsulated unit of code to make sure that it works. To achieve this, we often use techniques like mock objects together with dependency injection, to get rid of external dependencies such as databases, 3rd-party systems, and so on. In an integration-level API test, on the other hand, you will leave some of these dependencies in place, or inject faults into the mock behaviour, or the like. I agree that this is not the classical UAT that you have in mind; still, it is something other than a typical unit test.
Mocking is one reason why it is sometimes hard to spot things like race conditions or memory leaks in unit tests. You make sure that the component works correctly, but you are not investigating whether it is used correctly. A higher-level API test can do that (as a UI test can; the question is only the cost of running and maintaining each of them).
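A rough sketch of that distinction, with hypothetical names: in the unit test the dependency is replaced by a mock, while the integration-level test leaves the real dependency in place.

```java
public class InventoryServiceTests {
    /** The external dependency we may or may not replace. */
    interface InventoryStore {
        void save(String item, int quantity);
        Integer load(String item);
    }

    /** The unit under test, with its dependency injected. */
    static class InventoryService {
        private final InventoryStore store;
        InventoryService(InventoryStore store) { this.store = store; }
        void addItem(String item, int quantity) {
            if (quantity < 0) throw new IllegalArgumentException("negative quantity");
            store.save(item, quantity);
        }
    }

    // Unit test: the store is replaced by a hand-rolled mock, so the test only
    // shows that InventoryService uses its dependency as intended.
    static void unitTestWithMock() {
        final String[] saved = new String[1];
        InventoryStore mock = new InventoryStore() {
            public void save(String item, int quantity) { saved[0] = item + ":" + quantity; }
            public Integer load(String item) { return null; }
        };
        new InventoryService(mock).addItem("Widget", 1);
        if (!"Widget:1".equals(saved[0])) throw new AssertionError("save was not called as expected");
    }

    // Integration-level API test: the real store (e.g. backed by the database)
    // stays in place, so races, leaks, and wrong-table bugs can actually show up.
    static void integrationTest(InventoryStore realStore) {
        new InventoryService(realStore).addItem("Widget", 1);
        if (realStore.load("Widget") == null) throw new AssertionError("item was not persisted");
    }
}
```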

October 25, 2007 11:06:00 AM PDT
Sean said...

I agree with most of what is written here and like the approach to reducing the time to find, fix and ultimately prevent QA issues. We take a similar approach from the developer's perspective. We get developers to write unit tests as close to the actor boundary (for use cases) as possible. Of course, if you have an MVC or similar architecture, this helps with the decoupling but still gets as close as possible to the user boundary. One problem in avoiding the presentation layer is that there may be eventing mechanisms that occur as part of presentation. Additionally, one benefit is that this then also occurs as part of the developer/continuous build and doesn't require explicit steps on the part of the QA team.
Anyway, overall, nice article, I'm in full agreement with you.

October 26, 2007 1:34:00 AM PDT
Shrini Kulkarni said...

>>>>Scripting your manual tests this way takes far longer than just executing them manually.
I am curious to know about a (any) way in which scripting (I mean a machine executable version of a manual test) takes LESS time than manual execution.
I believe that while *typically* it takes more time to script a manual test than to merely execute it, in some cases it may not, for example if manual execution involves a great degree of observation of multiple parameters and analysis.
I am of the opinion that comparing the time and effort taken for scripting versus executing manual tests is highly dependent on the context and the testing mission.
Shrini

October 27, 2007 9:16:00 PM PDT
rmeindl said...

I for my part have a more pragmatic view of user acceptance tests: for one thing, the GUI is not the only target of user acceptance tests; APIs and service interfaces as well as commonly used data formats also have to be tested from this perspective. I also do not agree that it is useful to artificially expose interfaces if they, by themselves, do not provide any value to the customer. This should also be considered for the database as well as for other external resources. If they are not a shared resource (for example a data model which is used by more than one project), they should only be tested through the interfaces which use the resource.
The other thing is, tests are tools: to support and verify development, to validate functionality, or to help simulate exceptional conditions. As tools they have to match their purpose. So unit tests are useful as a safety net or as a design tool (TDD-like), up to component tests. Integration and system tests broaden the scope, and user acceptance tests validate usability, either for people or for other software systems. Every one of these tests should be automated as far as possible. It is true that scripting complex tests is cumbersome, but you have to balance it against the big advantage of automated tests: they are repeatable, which manual tests normally are not, and because of that they can be easily measured.
Functional testing and user acceptance testing should be separated because functionality by itself can be unacceptable for the user.
I can agree with your summary, with the exception that you should not artificially expose internal interfaces, and with the addition that testing runs through the whole project lifecycle, from analysis to deployment, and that different test types each have a unique view on the system (although most of them should be automated to the level that is reasonably possible). And yes, testing has a lot in common with development, so most practices can be adapted.

October 28, 2007 4:28:00 PM PDT
Chris said...

I agree with every idea in the initial post. I recently gave a talk for our internal test organization and emphasized the exact same points.
Right now, our testing teams are really focused on UI testing, so our goal was to lower the cost of UI-script creation.
We were able to dramatically lower our scripting costs for one of our applications by doing two things:
+Implementing base classes for our test scripts (that manage environment info, data retrieval, etc.)
+Creating reusable screen objects to represent our application. This makes scripting incredibly fast and effective.
We created our own IE scripting tool in Java (because our dev teams are using Java) that allows incredible flexibility in how tests are constructed, and also allows very simple access to UI components: e.g. button("id=submit").click.
This approach has allowed us to crank out MANY scripts quickly, and also isolates application changes in our reusable layer, making maintenance of our scripts much easier.
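A minimal sketch of such a reusable screen object, using WebDriver syntax for illustration (the in-house tool described above is not shown, and the URL and element IDs are hypothetical):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

/** Reusable screen object: test scripts talk to the screen in domain terms,
 *  while locators and navigation details stay in one place. */
public class AddItemScreen {
    private final WebDriver driver;

    public AddItemScreen(WebDriver driver) {
        this.driver = driver;
    }

    public AddItemScreen open() {
        driver.get("http://store.example.com/add");   // hypothetical URL
        return this;
    }

    public AddItemScreen addItem(String name, int quantity) {
        driver.findElement(By.id("itemName")).sendKeys(name);
        driver.findElement(By.id("quantity")).sendKeys(Integer.toString(quantity));
        driver.findElement(By.id("submit")).click();
        return this;
    }
}
// Usage in a script: new AddItemScreen(driver).open().addItem("Widget", 1);
// When the UI changes, only this class needs updating, not every script.
```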
Now that we're getting our UI testing under control, our next step will be to work more closely with our dev teams to start "peeking under the covers," and looking for the right places to start testing the API.

October 29, 2007 9:18:00 AM PDT
DiegoCarpintero said...

Great post Markus!
> I figured out that a successful automation project needs:
> [...]
> to start at the same time as development
If possible, and usually for unit tests, I prefer to start with them BEFORE development :-)
Other points I would like to add:
- Good communication between project manager, development and testing. (e.g. new change requests)
> to use the same tools as the development team
- Better, if we also follow the rules and good practices of software development.

October 30, 2007 5:40:00 AM PDT
mguillem said...

Hi Markus,
many good points. Congratulations!
I disagree on a few things (or I think that more explanation is needed):
- "execution is slow":
tests are never fast enough; nevertheless, there are huge differences between tools here. Such a general statement is misleading.
- "test break for the wrong reasons" & "maintenance of the tests takes a significant amount of time"
isn't this a sign of badly written scripts? It's common practice for companies to use the "bad" developers (i.e. the guys who shouldn't touch the production code) to write tests. The result is that you often have bad tests.
- I'm missing something concerning the application's "testability". Good coordination between developers and testers helps to make the application easier to test and therefore the tests easier to maintain.
Cheers,
Marc.

October 30, 2007 7:59:00 AM PDT
OfficeOfTheLaw said...

Couldn't agree more... I've been finding that many tests written against the domain layer run faster, are less prone to fragility, and expose more bugs.
Including the UI or persistence layer in the picture always muddies the waters a bit!

October 30, 2007 8:45:00 PM PDT
Renat Zubairov said...

I think we need to distinguish between automated acceptance tests and integration tests.
When our tests access the domain model/business logic directly, without going through the fragile UI, then they are integration tests.
When tests go through the UI by clicking on buttons and/or links (not to mention AJAX functionality), then they are automated acceptance tests.
Integration tests definitely have some advantages, but we can't replace one with the other; I would rather say that we need both.
It's like unit and acceptance tests: they work on different levels, and only together are they beneficial for application testing in general.

October 31, 2007 12:15:00 AM PDT
Sachin Dhall said...

The whole article is worth reading. Good example, and the points mentioned in the summary of a successful automation project are helpful.
Sachin

December 18, 2007 9:04:00 PM PST
Sachin Dhall said...

The whole article is worth reading, great work; specifically, it is summarized in an excellent manner.
Sachin

December 18, 2007 9:06:00 PM PST
Mandar said...

This is the correct approach. This article clarifies the difference between test automation and automated testing. Although it is not an alternative to UI testing or its automation, it can reduce problems in manual testing and in automating it.
Manual testing is a way to check how the system behaves under human interaction. Automated tests execute a sequence of actions without human intervention. This approach helps to eliminate human error and provides faster results. Since most products require tests to be run many times, automated testing generally leads to significant labor cost savings over time.
API testing is the right approach, specifically in SOA, but when delegates are implemented during a transition or the UI is tightly coupled with the business logic, API testing is all but impossible. In such a case, we have to resort to conventional test methods and automation.
Regards,
Mandar Kulkarni

January 3, 2008 2:14:00 AM PST
Pete Schneider said...

I really appreciate this post.
I run a test automation team, and have been struggling with this very distinction. I want my team to do real test automation, but most of the development organization, including the CTO, expects us to automate the existing tests.
You did an excellent job of laying out the differences. It's much clearer in my mind now what I need to communicate to the rest of the product development organization.
