Give a tester three features A, B, and C, where feature A has 100 test cases, B has 50, and C has 15. Ask him to schedule a testing cycle for all three features, and assume each test case takes the same time to finish. Here's the schedule you'd probably get: 100 test cases -> 10 days, 50 test cases -> 5 days, 15 test cases -> 1.5 days.
What if C is a feature heavily used in the production environment, where even a single failure would impact thousands of users and cause millions of dollars in financial loss? Does the tester get this information when he schedules testing? Does he even try to?
When we treat all features the same way, we're neglecting one thing: risk. A simple definition of risk has three parts: a way the program can fail, how likely the program is to fail in that way, and how serious the impact will be when it does. As Rex Black calls them in his book, technical risk is how likely the program is to fail in a given way, and business risk is how serious the impact will be. Analyze the risks of each feature while planning a testing cycle, because a feature with high technical risk and high business risk should get more testing resources than the others.
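To make the two dimensions concrete, here is a tiny sketch with made-up 1-to-5 ratings that multiplies them into a single score. The scales and the ratings are my own illustration, not a method from the book:

```python
# Minimal sketch: rate technical and business risk on a 1-5 scale
# and multiply them into one score. All ratings here are invented.
features = {
    #      (technical_risk, business_risk)
    "A": (3, 2),  # lots of code, moderate usage
    "B": (2, 2),
    "C": (2, 5),  # heavily used in production: high business risk
}

for name, (technical, business) in sorted(
        features.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{name}: risk score = {technical * business}")
# C comes out on top despite having the fewest test cases.
```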
Now, how do we label the risk levels of a feature? For a new startup project, this might need to involve developers, testers, and sales. Usually, the more complex the code structure and the production environment, the higher the technical risk. Sales or marketing folks can help with business risk, because business risk sometimes depends on the number of stakeholders, and sometimes on who those stakeholders are.
For a mature project that has been running for years, like mine, it's quite straightforward to define risk levels for each feature (a rough scoring sketch follows the list below):
* Lines of Code - count the lines of code behind each feature; more code means higher technical risk
* Bug Density - count the number of bugs for each feature; more bugs means higher technical risk
* Support Cases - count the support cases for each feature; more cases means higher business risk
* User Behavior - count user log entries for each feature; more usage means higher business risk
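Here is a minimal sketch of how those four measures could be folded into two risk levels per feature. The metric values, the max-normalization, and the plain averaging are all assumptions for illustration, not my project's actual report:

```python
# Rough sketch: normalize each metric against the maximum across features,
# then average the technical metrics (LOC, bugs) and the business metrics
# (support cases, usage) into two 0-1 risk levels. All numbers are made up.
features = {
    #       (loc,  bugs, support_cases, usage)
    "A": (20000,  40,  5,   1000),
    "B": ( 8000,  25, 12,   3000),
    "C": ( 3000,  10, 30,  50000),
}

def normalize(values):
    peak = max(values)
    return [v / peak for v in values]

loc, bugs, cases, usage = (normalize(col) for col in zip(*features.values()))

for i, name in enumerate(features):
    technical = (loc[i] + bugs[i]) / 2
    business = (cases[i] + usage[i]) / 2
    print(f"{name}: technical={technical:.2f}, business={business:.2f}")
```

Any normalization scheme would do; the point is simply to get the four raw counts onto a comparable scale before ranking features.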
In my project, our bug density report shows that:
* features that need to interact with the environment usually generate more bugs
* features that need to interact with other components usually generate more bugs
In his book, Rex Black mentions that after defining the technical and business risk levels of a feature, testers can decide the testing effort for it: deep and broad testing, broad testing only, a quick trial, and so on.
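If we use the 0-1 risk levels from the sketch above, that mapping could look something like this. The effort categories come from the text; the 0.5 thresholds are my own assumption:

```python
# Sketch of mapping the two risk levels (0-1) to a testing effort category.
# The 0.5 thresholds are assumptions for illustration.
def testing_effort(technical: float, business: float) -> str:
    if technical >= 0.5 and business >= 0.5:
        return "deep and broad testing"
    if technical >= 0.5 or business >= 0.5:
        return "broad testing"
    return "trial"

print(testing_effort(0.8, 0.9))  # deep and broad testing
print(testing_effort(0.2, 0.7))  # broad testing
print(testing_effort(0.1, 0.2))  # trial
```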
But when we have solid data for all four measures (lines of code, bug density, support cases, user behavior), we can do more than that. With a list of features ranked by technical risk, we can plan our next steps in automation: which features need more automated tests, and which already have enough. With a list ranked by business risk, we can plan our next steps in development; at the very least, we know which bugs affect more users and therefore deserve higher priority in the to-be-fixed list.
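For example, here is a hypothetical sketch of re-ordering a bug backlog by how many users each bug affects; the bug IDs and user counts are invented:

```python
# Hypothetical sketch: sort the to-be-fixed list by affected users,
# as derived from user-behavior logs. Bug IDs and counts are invented.
backlog = [
    {"bug": "BUG-101", "feature": "A", "affected_users": 120},
    {"bug": "BUG-102", "feature": "C", "affected_users": 45000},
    {"bug": "BUG-103", "feature": "B", "affected_users": 800},
]

for bug in sorted(backlog, key=lambda b: b["affected_users"], reverse=True):
    print(f'{bug["bug"]} ({bug["feature"]}): {bug["affected_users"]} users')
```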
With these lists, we can stop setting priorities based on hunches. I am tired of reading emails that say "this is our priority, let's do it" or "this one looks urgent, do it first". The problem with hunches is that people usually choose the easier tasks, intentionally or not. At least, I know I would. :)