Posted by Markus Clermont, Test Engineering Manager, Zurich
In the last couple of years the practice of testing has undergone more than superficial changes. We have turned our art into engineering, introduced process-models, come up with best-practices, and developed tools to support our daily work and make each test engineer more productive. Some tools target test execution. They aim to automate the repetitive steps that a tester would take to exercise functions through the user interface of a system in order to verify its functionality. I am sure you have all seen tools like Selenium, WebDriver, Eggplant or other proprietary solutions, and that you learned to love them.
On the downside, we observe problems when we employ these tools:
- tests break for the wrong reasons,
- execution is slow, and
- maintenance of the tests takes a significant amount of time.
Very nice post!
I've also blogged on my experience in this area
http://woftime.blogspot.com/2007/10/automated-acceptance-tests.html
Interesting reading.
I don't really agree when you suggest exposing internal APIs to UAT (user acceptance tests) or coupling tests to the database.
Schemas change almost as much as UIs do. If we allow UAT tests to read arbitrarily from the DB we will effectively be breaking any encapsulation we have put into our persistence components and other framework code.
There is also the fact that end-to-end UAT scripts are theoretically comprehensible by UI designers and customers. Coupling to the database would stop this.
You say not to test boundary conditions/edge cases through end-to-end UAT. I agree, you should use it to look for regressions of normal cases. In effect you are arguing for unit testing of APIs, which everyone should already be doing.
I think looking for races by multithreading tests is a good idea, but it applies to unit tests more readily than UAT.
Perhaps I've got the wrong end of the stick; could we have a more concrete example?
In Reply to "Return-Path" Markus says...
Thanks for your valuable comments. I think you have a few valid points there, but I don't agree with everything you said, either.
1) A schema change is a far bigger deal to everyone involved in system development than a change to the UI: there is usually a whole lot of code (that the dev teams own) that depends on the database schema. Additionally, the components that deal with the data store are usually done earlier than the ones that manage the UI. Needless to say, you can also encapsulate DB dependencies in a layer of your testing framework (which is something we also need to do for UI automation).
I don't think that our tests should be entirely 'black box'; a 'grey box' approach allows more valuable insights into the system under test. That might sometimes entail breaking encapsulation. However, encapsulation was not introduced to separate test code from the system under test, but to allow modules to have 'secrets' from the depending modules. This doesn't need to apply to tests. BTW, you can make the same point for the UI: actions that deal with the UI should be encapsulated (as the MVC pattern teaches us), and we still need to deal with the UI from the outside... the difference is only the frequency of change.
As you write, the end-to-end scripts are 'theoretically' comprehensible. In practice, what is comprehensible (if at all) is the DSL that is used for scripting. It is the responsibility of the designer of the DSL how the semantics of the DSL commands are implemented, i.e. whether 'check balance' means going to a function in the UI or doing a look-up in the database. That doesn't make a difference for the user of the system.
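To make that concrete (all class and method names here are made up for illustration), a sketch of such a DSL binding could look like this: the script-visible command stays the same, and the framework designer decides which implementation backs it.

```java
import java.math.BigDecimal;
import java.util.function.Function;

// Hypothetical sketch: the script-visible command stays the same, but the
// designer of the DSL decides whether its semantics go through the UI or the DB.
class CheckBalanceCommand {
    private final Function<String, BigDecimal> lookup;

    private CheckBalanceCommand(Function<String, BigDecimal> lookup) {
        this.lookup = lookup;
    }

    // Bind the command to a UI-driven lookup...
    static CheckBalanceCommand viaUi(AccountScreen screen) {
        return new CheckBalanceCommand(screen::readBalance);
    }

    // ...or to a direct database lookup; the test script cannot tell the difference.
    static CheckBalanceCommand viaDatabase(AccountStore store) {
        return new CheckBalanceCommand(store::readBalance);
    }

    BigDecimal execute(String accountId) {
        return lookup.apply(accountId);
    }
}

// Both collaborators expose the same read; only their internals differ.
interface AccountScreen { BigDecimal readBalance(String accountId); }
interface AccountStore  { BigDecimal readBalance(String accountId); }
```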
For the storage system there is also an additional difference (sorry for the bad example; right now I don't have a better one that is fit for publishing). If you go through the UI cycle, how will you ever be sure that the new item has really been written back to the DB? Maybe it is just stored internally in a cache (I have seen that before). Maybe it was written to the DB, but not to the expected table? You might say this is OK, as long as the system reads from the correct table. But what if the same DB is used by different systems? The developer might not have been aware of it, and hence never wrote a unit test for that.
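As a rough illustration (table and column names are invented), a grey-box check of that kind might look like this:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical grey-box assertion: after the UI flow reports "item saved",
// look into the expected table directly to confirm the row really got there
// (and not only into some internal cache).
class StorageAssertions {
    static void assertItemPersisted(Connection connection, String itemId) throws SQLException {
        try (PreparedStatement stmt = connection.prepareStatement(
                "SELECT COUNT(*) FROM items WHERE item_id = ?")) {
            stmt.setString(1, itemId);
            try (ResultSet rs = stmt.executeQuery()) {
                rs.next();
                if (rs.getInt(1) == 0) {
                    throw new AssertionError("item " + itemId
                            + " was reported as saved, but is not in the expected table");
                }
            }
        }
    }
}
```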
In the latter case a change to the DB will break the test. True. But it will also break other systems that depend on the DB... so if encapsulation was not fully adhered to by the dev team (and there is evidence that the older your product is, the more likely this becomes), our tests add important warning signs.
I don't think that you usually 'unit-test' APIs. A unit test is just that: executing an encapsulated unit of code to make sure that it works. To achieve this, we often use techniques like mock objects together with dependency injection to get rid of external dependencies like databases, 3rd-party systems, and so on. In an integration-level API test, on the other hand, you will leave some of these dependencies in place, or inject faults into the mock behaviour, or ... I agree that this is not the classical UAT that you have in mind; still, it is something other than a typical unit test.
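To illustrate the difference in scope (the service and repository below are hypothetical, sketched with JUnit and Mockito as example tools):

```java
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Hypothetical service and repository, only to illustrate the difference in scope.
interface AccountRepository { long balanceOf(String accountId); }

class TransferService {
    private final AccountRepository repository;
    TransferService(AccountRepository repository) { this.repository = repository; }
    boolean canWithdraw(String accountId, long amount) {
        return repository.balanceOf(accountId) >= amount;
    }
}

public class TransferServiceTest {

    // Classical unit test: the external dependency is mocked away, so only
    // the unit's own logic is exercised.
    @Test
    public void withdrawalIsAllowedWhenTheBalanceIsSufficient() {
        AccountRepository repository = mock(AccountRepository.class);
        when(repository.balanceOf("A-1")).thenReturn(100L);

        assertTrue(new TransferService(repository).canWithdraw("A-1", 50L));
    }

    // Integration-level flavour (sketched here with a fault injected into the
    // mock behaviour): how does the API react when its dependency misbehaves?
    @Test(expected = RuntimeException.class)
    public void withdrawalFailsVisiblyWhenTheStorageMisbehaves() {
        AccountRepository repository = mock(AccountRepository.class);
        when(repository.balanceOf("A-1")).thenThrow(new RuntimeException("storage unavailable"));

        new TransferService(repository).canWithdraw("A-1", 50L);
    }
}
```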
Mocking is one reason why it is sometimes hard to spot things like race conditions or memory leaks in unit tests. You make sure that the component works correctly, but you are not investigating whether it is used correctly. A higher-level API test can do that (as a UI test can; the question is only the cost of running and maintaining each of them).
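A tiny sketch of what I mean by such a higher-level check (the inventory API is a made-up stand-in for the real system under test):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for the system under test; deliberately unsynchronized,
// so a concurrent test has something to find.
class InventoryApi {
    private int remaining;
    InventoryApi(int remaining) { this.remaining = remaining; }
    void reserveOneItem() { remaining = remaining - 1; }   // not thread-safe on purpose
    int remainingItems() { return remaining; }
}

// Higher-level check: hit the same API from many threads and verify the end state.
// A unit test with all collaborators mocked away would never see this kind of misuse.
class ConcurrentReservationCheck {
    public static void main(String[] args) throws InterruptedException {
        InventoryApi api = new InventoryApi(100);

        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 100; i++) {
            pool.execute(api::reserveOneItem);
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);

        if (api.remainingItems() != 0) {
            throw new AssertionError("lost updates: " + api.remainingItems()
                    + " reservations unaccounted for");
        }
    }
}
```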
I agree with most of what is written here and like the approach to reducing the time to find, fix and ultimately prevent the impact of QA issues. We take a similar approach from the developer's perspective: we get developers to write unit tests as close to the actor boundary (for use cases) as possible. Of course, if you have an MVC or similar architecture, this helps with the decoupling, but the tests still go as close as possible to the user boundary. One problem with avoiding the presentation layer is that there may be eventing mechanisms that occur as part of presentation. An additional benefit is that these tests then run as part of the developer/continuous build and don't require explicit steps on the part of the QA team.
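As a rough sketch of what I mean (the controller here is just a hypothetical stand-in), the test drives the controller directly and never touches the view:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Rough sketch: the test drives a (hypothetical) MVC controller directly,
// staying as close to the actor boundary as possible without touching the view.
public class CheckoutControllerTest {

    static class CheckoutController {              // stand-in for the real controller
        String submitOrder(int quantity) {
            return quantity > 0 ? "confirmation" : "error";
        }
    }

    @Test
    public void aPositiveQuantityLeadsToTheConfirmationView() {
        assertEquals("confirmation", new CheckoutController().submitOrder(2));
    }
}
```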
Anyway, overall, nice article, I'm in full agreement with you.
>>>>Scripting your manual tests this way takes far longer than just executing them manually.
I am curious to know about a (any) way in which scripting (I mean a machine executable version of a manual test) takes LESS time than manual execution.
I believe that while *typically* it takes more time to script a manual test than merely to execute it, in some cases, where manual execution involves a great degree of observation of multiple parameters and analysis, the balance can shift the other way.
I am of the opinion that comparing the time and effort taken for scripting versus executing manual tests is highly dependent on the context and the testing mission.
Shrini
I for my part have a more pragmatic view of user acceptance tests: for one thing, not only the GUI is a target of user acceptance tests; APIs and service interfaces, as well as commonly used data formats, have to be tested from this perspective too. I also do not agree that it is useful to artificially expose interfaces if they, by themselves, do not provide any value to the customer. This should also be considered for the database as well as for other external resources: if they are not a shared resource (a shared resource being, for example, a data model used by more than one project), they should only be tested through the interfaces which use the resource.
The other thing is, tests are tools: either to support and verify development, to validate functionality, or to help simulate exceptional conditions. As a tool they have to match their purpose. So unit tests are useful as a safety net or as a design tool (TDD-like), up to component tests. Integration and system tests broaden the scope, and user acceptance tests validate usability, either for people or for other software systems. Every one of these tests should be automated as far as possible. It is true that scripting complex tests is cumbersome, but you have to balance that against the big advantage of automated tests: they are repeatable (manual tests normally are not), and because of that they can easily be measured.
Functional testing and user acceptance testing should be separated because functionality by itself can be unacceptable for the user.
I can agree with your summary, with the exception that you should not artificially expose internal interfaces, and with the addition that testing goes through the whole project lifecycle, from analysis to deployment, and that different test types have a unique view on the system (although most of them should be automated to the level which is reasonably possible). And yes, testing has a lot in common with development, so most practices can be adapted.
I agree with every idea in the initial post. I recently gave a talk for our internal test organization and emphasized the exact same points.
Right now, our testing teams are really focused on UI testing, so our goal was to lower the cost of UI-script creation.
We were able to dramatically lower our scripting costs for one of our applications by doing two things:
+Implementing base classes for our test scripts (that manage environment info, data retrieval, etc.)
+Creating reusable screen objects to represent our application. This makes scripting incredibly fast and effective.
We created our own IE scripting tool in Java (because our dev teams are using Java) that allows incredible flexibility in how tests are constructed, and also allows very simple access to UI components: e.g. button("id=submit").click.
This approach has allowed us to crank out MANY scripts quickly, and also isolates application changes in our reusable layer, making maintenance of our scripts much easier.
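For illustration, here is roughly what one of our reusable screen objects looks like (sketched against WebDriver rather than our own tool, and with made-up page and element ids): the test says *what* to do, the screen object knows *where* the widgets live, so a UI change only has to be fixed in one place.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Reusable screen object (sketch): locators live here, not in the test scripts.
class LoginScreen {
    private final WebDriver driver;

    LoginScreen(WebDriver driver) {
        this.driver = driver;
    }

    LoginScreen open(String baseUrl) {
        driver.get(baseUrl + "/login");
        return this;
    }

    void loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();
    }
}

// A test script then reads almost like the manual test it replaces:
//   new LoginScreen(driver).open(baseUrl).loginAs("alice", "secret");
```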
Now that we're getting our UI testing under control, our next step will be to work more closely with our dev teams to start "peeking under the covers," and looking for the right places to start testing the API.
Great post Markus!
> I figured out that a successful automation project needs:
> [...]
> to start at the same time as development
If possible, and usually for unit tests, I prefer to start with them BEFORE development :-)
Other points I would like to add:
- Good communication between project manager, development and testing. (e.g. new change requests)
> to use the same tools as the development team
- Better still if we also follow the rules and good practices of software development.
Hi Markus,
many good points. Congratulation!
I disagree on a few things (or I think that more explanation is needed):
- "execution is slow":
Tests are never fast enough; nevertheless, there are huge differences between tools here. Such a general statement is misleading.
- "test break for the wrong reasons" & "maintenance of the tests takes a significant amount of time"
Isn't this a sign of badly written scripts? It's a common practice for companies to use the "bad" developers (i.e. the guys who shouldn't touch the production code) to write tests. The result is that you often end up with bad tests.
- I'm missing something concerning the application's "testability". Good coordination between developers and testers helps to make the application easier to test, and therefore the tests easier to maintain.
Cheers,
Marc.
Couldn't agree more... I've been finding that many tests written against the domain layer run faster, are less prone to fragility, and expose more bugs.
Including the UI or persistence layer in the picture always muddies the waters a bit!
I think we need to distinguish between automated acceptance tests and integration tests.
When our tests access the domain model/business logic directly, without going through the fragile UI, they are integration tests.
When tests go through the UI by clicking on buttons and/or links (not to mention AJAX functionality), they are automated acceptance tests.
Integration tests definitely have some advantages, but we can't replace one with the other; I would rather say that we need both.
It's like unit and acceptance tests: they work at different levels, and only together are they beneficial for testing the application in general.
The whole article is worth reading. Good example, and the points mentioned at the end, in the summary of a successful automation project, are helpful.
Sachin
The whole article is worth reading, great work; in particular, it is summarized in an excellent manner.
Sachin
This is the correct approach. This article clarifies the difference between test automation and automated testing. Although this is not an alternative to UI testing or its automation, it can reduce the problems in manual testing and its automation.
Manual testing is a way to check how the system behaves for human interactions. Automated tests execute a sequence of actions without human intervention. This approach helps to eliminate human error and provides faster results. Since most products require tests to be run many times, automated testing generally leads to significant labor cost savings over time.
API testing is the right approach, specifically in SOA, but if delegates are implemented during a transition, or the UI is tightly coupled with the business logic, API testing becomes practically impossible. In such a case, we have to resort to conventional test methods and automation.
Regards,
Mandar Kulkarni
I really appreciate this post.
I run a test automation team, and have been struggling with this very distinction. I want my team to do real test automation, but most of the development organization, including the CTO, expects us to automate the existing tests.
You did an excellent job of laying out the differences. It's much clearer in my mind now what I need to communicate to the rest of the product development organization.