Chapter 10 - Test Manager's Perspective
Some organizations have a separate Test Manager role while others have the testers reporting directly to the Development Manager or even the Business Lead or Product Manager. When the Test Manager role exists, the test manager is usually responsible for planning the bulk of the testing activities and managing/coordinating their execution. The test manager may or may not act as the gatekeeper – that is, make the acceptance decision on behalf of other stakeholders. It is important for all parties to understand the role of the Test Manager in this regard.
See the Test Manager persona in the company stereotype called “A Product Company” in Appendix X – Reader Personas for an example.
The Test Manager’s role can be a difficult one to do well because so many project factors are outside your control. Test planning is essential for all but the most trivial projects; what the test plan specifies depends heavily on the role of testing as it relates to the product development team and the product manager.
Test Manager’s Role in Acceptance Decision
The role of testing seems simple enough: use the product in various ways and report any bugs that you find. But this is not the only responsibility of the test organization in many enterprises. Make sure you understand whether you and the testing team are doing readiness assessment, doing acceptance testing, or making the acceptance decision.
Testing as Acceptance Decision Maker
In some circumstances, the test organization is expected to collect data to help the product owner decide whether or not the software is ready to be released to users. Make sure you have a clear understanding of whether you are the gatekeeper (acceptance decision maker) or are supplying information to someone else who will be making the decision (the preferred arrangement). Recognize that the acceptance decision is a business decision. If you are going to make it, you had better understand the business factors well enough to make a good decision that you can explain to everyone. Otherwise, you’ll just be the bad guy holding up the show.
If you are to act as the Acceptance Decision Maker, you must have access to all the business-relevant information to make a good decision. That is, you must understand the business consequences of:
- accepting the software given the nature of the bugs that are known to exist and the level of confidence that all serious bugs have already been found, and
- not accepting the software, thereby delaying the delivery.
Each of these choices can have serious negative business consequences and the acceptance decision cannot be made without a full understanding of them. The Test Manager should not accept responsibility for making the acceptance decision lightly. In most cases
it is better for this business decision to be made by a business person, typically the product manager, business lead or operations manager based on information provided by the test manager.
Testing as Acceptance Testers
When you are providing information to another party to make the acceptance decision, make sure you have a common understanding of what information that party (or parties) will require to make the decision. Ensure the testers understand the users for whom they are acting as a proxy so they can define tests that reflect realistic (not necessarily always typical) usage of the software. User Models (either user roles or personas) can be an effective way to gain understanding of real users.
Testing as Readiness Assessors
If you are doing readiness assessment to help the Dev Manager decide whether the software is ready for acceptance testing by the business users, you’ll want to build a good relationship with the dev team so that you can help them understand the quality of what
they have produced and how they can improve it. Embedding testers with the development team (often used in conjunction with a technique called pair testing) can be a good way to help them learn how to build quality in by testing continuously rather than tossing
untested software over the wall to the test team. (Kohl describes the benefits of pair testing in this experience report [KohlOnPairTesting].)
You will probably be expected to coordinate all testing activities and keep track of the test environments used, bugs logged, and versioning related to the fixes performed on the source code, all the way from readiness assessment through to acceptance testing. This work typically goes by the name Test Planning. You’ll need to communicate this plan to all interested parties, especially those who are expected to execute parts of the plan and those who are interested in knowing what kinds of testing will be done by whom, where, and when. This communication often takes the form of a Test Plan document, but it can also be delivered or complemented orally through targeted presentations, discussions, or workshops. Many of the best test plans come about by inviting the interested parties to participate in the test planning process, either in one-on-one sessions or via workshops.
You will likely end up being the de facto owner of the overall test strategy. The only other contender is the supplier organization; if they want to be involved, most likely on the test automation side, by all means encourage their involvement, because it will make test automation much easier.
The test strategy includes the major decisions, such as whether testing will be done in a final test phase or incrementally, and whether testing will be primarily script-based, primarily exploratory, or a combination of the two. Exploratory testing is a very effective way of finding bugs fast; script-based testing lends itself better to automation, which allows rapid regression test execution.
Another key decision concerns the role and degree of use of automated tools, including automated execution of functional and para-functional tests, the use of model-based test case generators, and the use of automation as power tools (including automated comparators or verifiers) to support manual test activities. Some of these decisions require long lead times, whether to acquire or build tools or to hire the appropriate resources, and therefore must be made relatively early in the project. This doesn’t imply that you won’t refine them over time as more detailed information comes to light.
The test strategy will be heavily influenced by the nature of your relationship with the organization developing the software. The ideal case is when they are prepared to collaborate with you by doing incremental delivery of functionality to support incremental
acceptance testing. Collaboration on test automation and design for testability is also a key factor in reducing test execution time and cost. If this level of collaboration is not available you may need to hire test automation engineers and test toolsmiths
with strong programming skills to support your test automation activities.
Another area that greatly benefits from collaboration with the product development team is test automation. A key success factor is designing the system for testability. Only the supplier organization can do this, so you need to work with them to build it in. A good way to encourage them to collaborate is to provide them with access to the one-button automated execution of the acceptance tests you define. This provides the supplier organization with an instant scorecard that tells them how they are doing at delivering quality software. Everyone hates surprises, and nothing annoys a development team more than working hard to deliver quality software and being told after the fact that it wasn’t good enough.
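The scorecard itself need not be elaborate. The sketch below (a minimal illustration in Python; the test names and checks are invented, not taken from any particular project) shows the idea: run every acceptance test with one command, then report a single pass/fail summary the development team can consult at any time.

```python
# Minimal one-button scorecard: run each acceptance test and report
# a single pass/fail summary. The tests themselves are invented examples.

def test_order_total_includes_tax():
    # amounts in cents to avoid floating-point surprises
    subtotal_cents, tax_cents = 10000, 500
    assert subtotal_cents + tax_cents == 10500

def test_empty_cart_total_is_zero():
    assert sum([]) == 0

ACCEPTANCE_TESTS = [test_order_total_includes_tax, test_empty_cart_total_is_zero]

def run_scorecard(tests):
    """Run every test; return a name -> "PASS"/"FAIL" map and print a summary."""
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "PASS"
        except AssertionError:
            results[test.__name__] = "FAIL"
    passed = sum(1 for outcome in results.values() if outcome == "PASS")
    print(f"Scorecard: {passed}/{len(results)} acceptance tests passing")
    return results

run_scorecard(ACCEPTANCE_TESTS)
```

In practice the same role is usually played by a test runner and a continuous-integration job; the point is that the supplier team can trigger it themselves and see the score immediately.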
While the development team needs to make the system amenable to testing, the test automation tools can be built by either the supplier organization or the test team. The latter will require technical testers, sometimes called test automation engineers. Either
way, the tests should be developed primarily by the testers or business people/analysts to ensure they reflect requirements rather than design decisions.
Test automation is not appropriate in all situations; in some, forms of manual testing are more effective or economical. See Section 17.2 in Chapter 17 (Planning for Acceptance) in Part III for a discussion of where automation is appropriate. It’s the job of the test manager to determine the test automation strategy.
Agreeing on Expectations/Requirements
One purpose of testing is to verify that the system meets the requirements. Unfortunately, the requirements are often vague or ambiguous, and this makes verifying them a difficult proposition. One person’s bug can be another person’s feature. Testers think differently from analysts or business people, and this is both a blessing and a curse. They’ll come up with all manner of test scenarios that the requirement specifiers never imagined. This results in a de facto divergence between the requirements and the specifications that needs to be managed. The product owner (Product Manager, Business Lead, etc.) needs to agree that these additional test scenarios are in fact requirements.
The requirements need to address the needs of all stakeholders, not just the users. Operational requirements need to be tested. If the test team doesn’t have the skills to execute the para-functional tests, make sure that the product development team executes them and provides test plans to review ahead of time and test results as part of the handover to testing. The acceptance decision maker will want to know that the para-functional testing was done and may even want to see the results.
The preferred solution is to treat test scripts as an extension of the requirements by using them to illustrate how specific requirements play out in the product. This implies that the product manager or business lead (or their team members) need to agree that the tests interpret the requirements correctly. Ideally, the test cases would be articulated before the software is built so that the development teams can use the test cases to understand how the system should behave. This is known as Acceptance Test Driven Development (ATDD).
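As a rough sketch of this practice, consider a hypothetical requirement such as “a withdrawal may not overdraw the account” (an invented example, not from any particular product). The requirement is captured as an executable test before the feature exists, and the development team then writes the code to make it pass:

```python
# Hypothetical requirement: "A withdrawal may not overdraw the account."
# The executable test below states that expectation before the feature exists;
# the Account class is then written to make the test pass.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

def test_withdrawal_cannot_overdraw():
    account = Account(balance=50)
    try:
        account.withdraw(80)
        raise AssertionError("overdraft should have been rejected")
    except ValueError:
        pass  # rejection is the required behavior
    assert account.balance == 50  # balance unchanged by the rejected withdrawal

test_withdrawal_cannot_overdraw()
```

Because the test names a business rule rather than a design decision, the product owner can review it directly and confirm that it interprets the requirement correctly.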
Agreeing on Done
Are we done yet? That is a question that needs to be asked and answered continuously. And done doesn’t always mean the same thing. One person’s definition of done may be another person’s definition of “not even close to done!” It is critical to get agreement
on how done software needs to be before it goes through each of the quality gates of the gating model. When is software considered “ready for acceptance testing”? When is it considered acceptable? These need to be agreed upon between the various parties involved.
The supplier and the testing organization need to agree on the minimum quality requirements software must meet before it is ready for testing. These become the exit criteria for the readiness assessment phase that the supplier team must ensure are met. A prominently posted Done-Done Checklist is a good way for the supplier team to keep these quality criteria front and center.
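Such a checklist can even be made mechanically checkable. The sketch below (Python; the criteria, metric names, and thresholds are invented for illustration) evaluates a Done-Done Checklist against a build’s metrics and reports whether the exit criteria are met:

```python
# A Done-Done Checklist expressed as executable exit criteria.
# The criteria, metric names, and thresholds are invented for illustration.

DONE_DONE_CHECKLIST = {
    "unit_tests_passing": lambda m: m["unit_test_failures"] == 0,
    "code_reviewed":      lambda m: m["unreviewed_changes"] == 0,
    "no_open_blockers":   lambda m: m["open_blocker_bugs"] == 0,
}

def check_done_done(metrics):
    """Return (ready, unmet_criteria) for a build's metrics."""
    unmet = [name for name, criterion in DONE_DONE_CHECKLIST.items()
             if not criterion(metrics)]
    return (len(unmet) == 0, unmet)

build = {"unit_test_failures": 0, "unreviewed_changes": 2, "open_blocker_bugs": 0}
ready, unmet = check_done_done(build)
print("ready for acceptance testing" if ready else f"not ready; unmet: {unmet}")
```

Whether the checklist lives in a script or on a wall chart matters less than that every criterion is unambiguous enough that the supplier team can verify it before handing the software over.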
Estimating Test Effort
Estimating the amount of time it will take to get through one complete cycle of testing is a challenge but one that can be overcome with experience. The real challenge, however, is guessing how many test&fix cycles will be required before the software is
of good enough quality to release to acceptance testing or to the market. The need to guess can be avoided by delivering software incrementally and testing each increment as soon as it is available. This helps in several ways:
- It reduces the amount of software that hasn’t gone through the test&fix cycle at least once when the final round of testing occurs. Here untested software can be thought of as inventory, and incremental testing and delivery as inventory reduction, where inventory is considered a form of waste (see [PoppendiecksOnLean]).
- It provides data on how many test&fix cycles are required for software delivered by this supplier organization early enough to allow the test plans to be adjusted to ensure on-time delivery of software.
- It provides feedback to the supplier team on the quality of the software they are delivering early enough to allow the supplier team time to learn how to deliver better quality software that requires fewer test&fix cycles. The Done-Done Checklist may
need to be updated based on what was learned.
Testing will invariably result in a number of concerns being raised. Some of these concerns will be based on failed test cases and others may be based on general impressions about the product. All concerns, however, are based on expectations and it is important
to ensure that the expectations are correct. Test cases that don’t map to actual requirements can result in bugs that will be closed as “By Design”. Some failed test cases may point out inconsistencies between how the system operates and how the tester expects
the system to operate. These may point to legitimate usability issues if the tester’s mental model of the system matches that of real users.
Part of the role of many test organizations is to act as the second opinion, the house of sober second thought, about the software being built. It is important to balance this form of requirements elaboration with the need to get the product out in a timely,
and even more importantly, predictable fashion. Discovery of such requirements issues during a final testing phase makes the product manager’s job harder because the information comes too late to address without compromising the schedule. Therefore, the test
manager should strive to find such issues as early as possible so that the product manager can make decisions without having their back to the wall.
The Concern Resolution Model describes a generic way to think about all manner of concerns including bugs, requirements issues, project issues and other mismatches between expectations and actual or perceived behaviors.