Books for reference:
Exploring Requirements: Quality Before Design by Donald Gause and Gerald Weinberg
Extreme Programming Explained: Embrace Change (The XP Series) by Kent Beck
Software Defect Removal by Robert H. Dunn
Peopleware by Tom DeMarco and Tim Lister
The Psychology of Computer Programming by Gerald M. Weinberg
Software Testing in the Real World by Edward Kit
STEP Architecture
Step 1 Plan the Strategy
P1 Establish the master test plan.
P2 Develop the detailed test plans.
Step 2 Acquire the Testware
A1 Inventory the test objectives (requirements-based, design-based, and implementation-based).
A2 Design the tests (architecture and environment, requirements-based, design-based, and implementation-based).
A3 Implement the plans and designs.
Step 3 Measure the Behavior
M1 Execute the tests.
M2 Check the adequacy of the test set.
M3 Evaluate the software and testing process.
Work Products of STEP
IEEE Std. 829-1998 Standard for Software Test Documentation
Template for Test Documents
Contents
1. Test Plan
Used for the master test plan and level-specific test plans.
2. Test Design Specification
Used at each test level to specify the test set architecture and coverage traces.
3. Test Case Specification
Used as needed to describe test cases or automated scripts.
4. Test Procedure Specification
Used to specify the steps for executing a set of test cases.
5. Test Log
Used as needed to record the execution of test procedures.
6. Test Incident Report
Used to describe anomalies that occur during testing or in production.
These anomalies may be in the requirements, design, code, documentation, or the test cases themselves.
Incidents may later be classified as defects or enhancements.
7. Test Summary Report
Used to report completion of testing at a level or a major test objective within a level.
Roles and Responsibilities
Manager: communicate, plan, and coordinate.
Analyst: plan, inventory, design, and evaluate.
Technician: implement, execute, and check.
Reviewer: examine and evaluate.
A latent defect is an existing defect that has not yet caused a failure because the exact set of conditions has never been met.
A masked defect is an existing defect that has not yet caused a failure because another defect has prevented that part of the code from being executed.
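A minimal sketch of a masked defect (the code is hypothetical, not from any source): an off-by-one defect keeps a division-by-zero defect from ever executing, so the second defect stays masked until the first is fixed.

    def ratios(nums, denoms):
        # Defect 1 (the masking defect): "len(nums) - 1" skips the
        # last element -- an off-by-one bug.
        # Defect 2 (the masked defect): denoms may contain 0, which
        # would raise ZeroDivisionError -- but only on the last
        # element, which Defect 1 never lets us reach.
        result = []
        for i in range(len(nums) - 1):
            result.append(nums[i] / denoms[i])
        return result

    # Runs "fine": the division-by-zero defect cannot fire
    # until the off-by-one defect is fixed.
    print(ratios([10, 20, 30], [2, 5, 0]))  # [5.0, 4.0]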
Risk
Risk involves the probability or likelihood of an event occurring and the negative consequences or impact of that event.
Risk Management is the process of controlling risk and monitoring the effectiveness of the control mechanisms.
Risk Analysis is the process of identifying, estimating, and evaluating risk.
Risk Analysis can be separated into two key activities:
* software risk analysis
* analysis of planning risks and contingencies
Software Risk Analysis
Why?
The purpose of a software risk analysis is to determine what to test, the testing priority, and the depth of testing.
Who?
Ideally, the risk analysis should be done by an interdisciplinary team of experts.
When?
A risk analysis should be done as early as possible in the software lifecycle.
How?
Step 1: Form a Brainstorming Team
Include users (or pseudo-users such as business analysts), developers, testers, marketers, customer service representatives, support personnel, and anyone else who has knowledge of the business and/or product and is willing and able to participate.
The purpose of Part One of a brainstorming session is to increase the number of ideas the group generates. As a general rule of thumb:
* Do not allow criticism or debate.
* Let your imagination soar.
* Shoot for quantity.
* Mutate and combine ideas.
The purpose of Part Two of the brainstorming session is to reduce the list of ideas to a workable size. As a general rule of thumb, the methods for doing this include:
* Voting with campaign speeches
* Blending ideas
* Applying criteria
* Using scoring or ranking systems
Step 2: Compile a List of Features
Examples of attributes to consider include:
* accessibility
* availability
* compatibility
* maintainability
* performance
* reliability
* scalability
* security
* usability
Step 3: Determine the Likelihood
Assign an indicator (H = high, M = medium, or L = low) for the relative likelihood of failure of each feature or attribute.
Step 4: Determine the Impact
The users are particularly important in assigning values for impact, since impact is usually driven by business issues rather than by the technical nature of the system.
Especially in larger systems, many users may be experts in only one particular area of functionality, while experienced testers often have a much broader view. It is this broad view that is most useful in determining the relative impact of failure.
Step 5: Assign Numerical Values
In this step of the risk analysis, the brainstorming team should assign numerical values for H, M, and L for both likelihood and impact.
Step 6: Compute the Risk Priority
The values assigned to the likelihood of failure and the impact of failure should be added together.
The overall risk priority is a relative value for the potential impact of failure of a feature or attribute of the software weighted by the likelihood of it failing.
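A minimal sketch of Steps 5, 6, and 8, assuming H/M/L map to 3/2/1 (the scale and the feature data are illustrative, not from any source):

    # Step 5: the team assigns numerical values to H, M, and L.
    SCORE = {"H": 3, "M": 2, "L": 1}

    # (feature, likelihood of failure, impact of failure) -- sample data
    features = [
        ("Checkout", "M", "H"),
        ("Search", "H", "M"),
        ("Profile editor", "L", "L"),
    ]

    # Step 6: risk priority = likelihood value + impact value.
    # Step 8: sort so the riskiest features rise to the top.
    ranked = sorted(
        ((name, SCORE[like] + SCORE[imp]) for name, like, imp in features),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for name, priority in ranked:
        print(name, priority)  # Checkout 5, Search 5, Profile editor 2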
Step 7: Review/Modify the Values
Step 8: Prioritize the Features
Step 9: Determine the "Cut Line"
Step 10: Consider Mitigation
Risk mitigation helps reduce the likelihood of a failure, but does not affect the impact.
Planning Risks and Contingencies
Ultimately, the only possible contingencies are:
* reduce the scope
* delay implementation
* add resources
* reduce quality processes
+++++++++++++
Chapter 2: Master Test Plan
+++++++++++++
=========
Levels (Stages) of Test Planning
=========
The IEEE Std. 829-1998 Standard for Software Test Documentation identifies the following levels of test: Unit, Integration, System, and Acceptance.
Some other levels (or at least other names) that we frequently encounter include beta, alpha, customer acceptance, user acceptance, build, string, and development.
=========
Audience Analysis
=========
=========
Sections of a Test Plan
=========
IEEE Std. 829-1998 Standard for Software Test Documentation
Template for Test Planning
Contents
* Test Plan Identifier
* Table of Contents
* References
* Glossary
* Introduction
* Test Items
* Software Risk Issues
* Features to Be Tested
* Features Not to Be Tested
* Approach
* Item Pass/Fail Criteria
* Suspension Criteria and Resumption Requirements
* Test Deliverables
* Testing Tasks
* Environmental Needs
* Responsibilities
* Staffing and Training Needs
* Schedule
* Planning Risks and Contingencies
* Approvals
Alpha and Beta Testing
======================
The terms "alpha testing" and "beta testing" are ambiguous at best, and they mean very different things to different people. We'll offer the most common definitions of both.
Alpha testing is an acceptance test that occurs at the development site, as opposed to a customer site. Ideally, alpha testing still involves the users and is performed in a realistic environment.
Beta testing is an acceptance test conducted at a customer site. Since beta testing is still a test, it should include test cases, expected results, etc.
Smoke Test
==========
A smoke test is a group of test cases that establish that the system is stable and all major functionality is present and works under "normal" conditions.
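As a sketch (the toy system and its functions are hypothetical), a smoke suite touches only the major paths under normal conditions and leaves boundaries and error paths to deeper test levels:

    # Toy in-memory "system under test", so the example is self-contained.
    _orders = {}

    def create_order(item, quantity):
        order_id = len(_orders) + 1
        _orders[order_id] = {"item": item, "quantity": quantity}
        return order_id

    def get_order(order_id):
        return _orders.get(order_id)

    def smoke_suite():
        # Only the major path, only "normal" inputs -- enough to decide
        # whether the build is stable enough for real testing.
        oid = create_order("widget", 1)
        assert get_order(oid) == {"item": "widget", "quantity": 1}

    smoke_suite()
    print("smoke suite passed: build is testable")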
Black-Box Testing
===============
Equivalence Class Partitioning (see the sketch after this list)
Boundary Value Analysis
Inventories/Trace Matrix
Invalid Combinations and Processes
Decision Table
Domain Analysis
State-Transition Diagrams
Orthogonal Arrays
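A minimal sketch of the first two techniques, assuming a field that accepts integer quantities from 1 to 99 (the requirement is hypothetical): equivalence class partitioning picks one representative per class, and boundary value analysis adds the values at and next to each edge.

    # Hypothetical requirement: quantity must be an integer in [1, 99].
    LOW, HIGH = 1, 99

    def is_valid_quantity(q):
        return LOW <= q <= HIGH

    # Equivalence class partitioning: one representative per class.
    equivalence_classes = {"valid 1-99": 50, "invalid < 1": -5, "invalid > 99": 150}

    # Boundary value analysis: values at and adjacent to each boundary.
    boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

    for label, value in equivalence_classes.items():
        print(label, value, is_valid_quantity(value))
    for value in boundary_values:
        print("boundary", value, is_valid_quantity(value))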
Ad Hoc Testing
===============
Random Testing (monkey testing; see the sketch after this list)
Semi-Random Testing
Exploratory Testing
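A minimal sketch of a random (monkey) test, assuming a hypothetical parse_quantity function whose only checked invariant is "never raises, whatever string it gets":

    import random

    def parse_quantity(text):
        # Hypothetical function under test: returns an int in [1, 99]
        # or None, and is supposed never to raise.
        try:
            value = int(text)
        except ValueError:
            return None
        return value if 1 <= value <= 99 else None

    random.seed(0)  # reproducible "random" run
    alphabet = "0123456789-+. abc"
    for _ in range(10_000):
        text = "".join(random.choices(alphabet, k=random.randint(0, 8)))
        parse_quantity(text)  # any uncaught exception fails the test
    print("10000 random inputs survived")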
Test Execution
==============
Before Beginning Test Execution
- Deciding Who Should Execute the Tests
- Deciding What to Execute First
- Writing Test Cases During Execution
- Recording the Results of Each Test Case
IEEE Template for Test Incident Report
======================================
1. Incident Summary Report Identifier
2. Incident Summary
3. Incident Description
3.1 Inputs
3.2 Expected Results
3.3 Actual Results
3.4 Anomalies
3.5 Date and Time
3.6 Procedure Step
3.7 Environment
3.8 Attempts to Repeat
3.9 Testers
3.10 Observers
4. Impact
5. Investigation
6. Metrics
7. Disposition