Chapter 20 - Fine-Tuning the Acceptance Process

The previous chapters of Part III introduced the practices involved in planning and executing the acceptance process; they should have given the reader a basic understanding of what is involved in accepting software. This chapter describes how the information we obtain while executing the acceptance process can be used as clues for improving our organization's product development processes, both to reduce defect levels and to minimize the elapsed time and resources consumed in making the acceptance decision.

The acceptance process is a necessary but potentially wasteful exercise. We should strive to keep it as simple and value-adding as possible. The longer it takes, the more it costs in wasted effort and delayed delivery of business value. We can take steps to streamline the acceptance process by reorganizing how we do readiness assessment and acceptance testing. But the acceptance process is just one part of the product development process. The issues we find while executing the acceptance process are consequences of the choices we have made in how we structure our products, markets, and workflows. We can use these symptoms to motivate a better understanding of the underlying problems in how we work.

Debugging the Acceptance Process

Issues encountered during the acceptance process can be valuable clues about problems in how our organization and workflows are structured. These clues provide hints about how we should restructure our organization and its processes to work more efficiently. The following sections describe common symptoms and possible solutions.

Overly Long Duration of Acceptance Process

An acceptance process that takes a long time hints at issues with how the organization is structured. It can be caused by:
  • Too many groups, each needing to do their own specialized work, and too many handoffs between them. Consider using Value Stream Mapping to identify which steps add real value and which could be eliminated entirely (preferable) or done in parallel (second choice). The root cause is likely the way the organization is structured; changing the structure may be a necessary though high-risk, high-effort endeavor, but one with a potentially huge payback.
  • Too many test&fix cycles being needed. This is described below.

Too Many Defects Found During Acceptance Process

If the acceptance process finds many defects, this is likely a sign that the Product Development Team:
  1. Doesn't know how the finished product should behave.
  2. Is not held accountable for delivering a finished product to the Product Owner for acceptance testing.
A lack of understanding of what “done” looks like is a sign that the Product Owner has not done an effective job of communicating the nature of the product to the Product Development Team. This could be because the Product Owner doesn’t know either, or it could be ineffective communication, such as occurs when the primary means of communication is written requirement specifications. The Product Owner can learn more quickly what they really want by doing Incremental Acceptance Testing. This gives the Product Development Team more time to address any changes than if the acceptance testing were done in a final Acceptance Test Phase. In either case, the joint understanding of the acceptance criteria can be improved by including acceptance tests as part of the requirements process (a small sketch of such a test appears at the end of this section). These tests can be provided by the Product Owner or developed jointly through collaboration between the Product Owner and the Product Development Team.
Lack of accountability occurs for several reasons. One is when the product is decomposed into components that are far removed from the end-user functionality. The Product Development Team often doesn’t understand how its component supports the business goals and therefore builds functionality that may actually be at cross purposes with them. This often results in bugs that are found only after the components are integrated. This is one of the downfalls of the Component Team approach to organizing the work.
Another reason for lack of accountability is when management values schedule over quality. Of course we say quality is important, but it is management’s actions that really count. When management pressures developers to meet unrealistic timelines, we are saying “schedule trumps quality”. A clear definition of the Minimum Quality Requirement (MQR) is crucial. We need to encourage the Product Development Team to improve its processes to deliver better quality by mistake-proofing those processes as much as possible. Test-Driven Development and automated tests are both examples of how to do this. Once the team can build software with fewer defects, it will be able to deliver faster as well; the inverse is not true.
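
To make the idea of acceptance tests as part of the requirements concrete, here is a minimal sketch in Python of what one such executable example might look like. The feature (a repeat-customer discount), the calculate_discount() function and the specific values are hypothetical illustrations, not examples taken from this guide; in practice the Product Owner and the Product Development Team would agree on the concrete examples together.

    # A minimal sketch of an executable acceptance test for a hypothetical
    # repeat-customer discount feature. The function and values are invented
    # for illustration; real tests would drive the actual product.
    import unittest


    def calculate_discount(order_total, is_repeat_customer):
        """Stub of the hypothetical system under test, included so the example runs."""
        if is_repeat_customer and order_total >= 100:
            return round(order_total * 0.10, 2)
        return 0.0


    class RepeatCustomerDiscountAcceptanceTest(unittest.TestCase):
        """Each test captures one concrete example agreed with the Product Owner."""

        def test_repeat_customer_with_large_order_gets_ten_percent_discount(self):
            self.assertEqual(calculate_discount(order_total=150.00, is_repeat_customer=True), 15.00)

        def test_new_customer_gets_no_discount(self):
            self.assertEqual(calculate_discount(order_total=150.00, is_repeat_customer=False), 0.0)

        def test_small_order_gets_no_discount_even_for_repeat_customer(self):
            self.assertEqual(calculate_discount(order_total=99.99, is_repeat_customer=True), 0.0)


    if __name__ == "__main__":
        unittest.main()

Because each test encodes one agreed example, a failing test points directly at a gap between what the Product Owner asked for and what was built.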

Too Many Test&Fix Cycles Needed

When the product requirements have been clearly communicated and the Product Development Team has done an effective readiness assessment, the acceptance testing does not find many defects. Those defects that are found should be fairly minor and easily fixed. But what if there are a lot of defects? Then the product will need to go back through the construction, readiness assessment and acceptance testing process another time. When this cycle has to be repeated several times before the product is good enough to consider releasing, it may be due to one of several root causes:
  1. New bugs are being introduced by many of the fixes for existing bugs. This could be because the software has become brittle due to age and excessive internal coupling (or maybe it wasn’t designed very well even when it was new). Or it could be due to rushing the work and the lack of a safety net to catch the defects being introduced. The latter can be addressed by incorporating automated unit testing, and possibly test-driven development, to avoid introducing the defects in the first place.
  2. The Product Development Team is not doing an effective readiness assessment of each release candidate. This may be because they are being rushed, or because they are relying primarily on manual regression testing and cannot hope to retest all the affected software. Consider introducing automated regression testing at various levels (unit, component, system) to reduce the time it takes to run an effective regression test cycle (a small example follows this list).
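
As an illustration of the kind of safety net mentioned in the list above, the sketch below shows a unit-level regression test that pins down a previously fixed defect so that later fixes cannot silently re-introduce it. The parse_quantity() function, the defect and the bug number are hypothetical; the same idea applies at the component and system levels, where the tests exercise larger slices of the product.

    # A minimal sketch of an automated regression test that documents a fixed
    # defect. The function, the defect and the bug number are invented for
    # illustration.
    import unittest


    def parse_quantity(text):
        """Stub of the hypothetical code that was fixed; surrounding blanks once
        caused a crash, so the fix strips them before converting."""
        return int(text.strip())


    class QuantityParsingRegressionTest(unittest.TestCase):

        def test_bug_1234_quantity_with_surrounding_spaces_is_accepted(self):
            # Reproduces the exact input from the original (hypothetical) defect report.
            self.assertEqual(parse_quantity("  42 "), 42)

        def test_plain_quantity_still_parses(self):
            # Guards the original behaviour that the fix must not break.
            self.assertEqual(parse_quantity("7"), 7)


    if __name__ == "__main__":
        unittest.main()

Run automatically as part of every build, a growing suite of such tests shortens the regression test cycle compared with manually retesting all the affected software.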

Lots of Debate About Bugs vs. Change Requests

When is an issue a bug (defect) and when is it a change request? If this discussion occurs a lot during the acceptance process, it may be an indication that the Product Owner and the Product Development Team are not striving to solve the same problem. Some possible root causes are:
  1. Fixed-price contracts encourage the Product Development Team to classify everything as a change request (CR) even if it is legitimately a bug. The solution is to avoid creating the dysfunctional relationships that come out of fixed-price contracts. Consider using Target-Price contracts instead; these motivate both customer and supplier to minimize the changes while maximizing value. Under such a contract there is no distinction between a bug and a CR; the only distinction is between changes that are worth making and those that the customer can live without.
  2. Vague requirements caused by a lack of understanding on the part of the Product Owner as to what they wanted. See the remedies described below.
  3. Vague requirements leading to a lack of clarity about what the Product Owner really asked for. The gap between what the Product Owner thought they asked for and how the Product Development Team interpreted the request caused the bug and the subsequent debate. The solution is to improve the communication between the Product Owner and the Product Development Team. This is best achieved through collaboration rather than simply writing more copious requirements documentation. The communication can be supported by detailed examples, which can be used as sample acceptance tests. Consider including readiness assessors and acceptance testers in the product design discussions, as this will usually unearth interesting scenarios that the Product Owner may need to consider.

A Diagnostic Flowchart

The following flowchart, expressed here as pseudo-code, suggests possible solutions based on these common symptoms:
If the acceptance process is taking too long
    If too many test & fix cycles are needed
        If too many regression bugs are being introduced
            See 19.1.11, "Use Automated Test Execution to Reduce Regression Bugs"
    Else if each cycle takes too long
        Consider streamlining the process using Value Stream Mapping
Else (not taking too long)
    If too many new bugs are being found
        If found by end users
            If users find the system hard to use
                See 19.1.6, "Use Incremental Usability Testing to Discover Design Defects Earlier"
            If bugs are found in a few specific areas
                See 19.1.13, "Increase Breadth of Acceptance Testing"
            Else if bugs are found everywhere
                See 19.1.14, "Increase Depth of Acceptance Testing"
        If found by acceptance testers
            If the PDT is not doing thorough readiness assessment
                If the PDT is organized around components
                    See 19.1.12, "Use Feature Teams to Improve Accountability of PDT for End-User Functionality"
                Else if the PDT has access to the acceptance tests
                    Run them more frequently
                    See 19.1.11, "Use Automated Test Execution to Reduce Regression Bugs"
                Else
                    See 19.1.10, "Use Acceptance Test-Driven Development to Help PDT Understand Requirements Better"
            Else if the PDT is being rushed by management, the Product Owner or deadlines
                See 19.1.15, "Focus on Quality to Get Speed of Delivery"
        If found by readiness assessors
            If found by independent testers
                Use Acceptance Test-Driven Development (ATDD) to improve communication of "done"
                See also "If found by acceptance testers"
    Else if too many bugs have accumulated, thereby requiring "bug triage"
        If due to high bug arrival rates
            See "If too many new bugs are being found"
        Else if bugs aren't being fixed due to lack of time
            Educate the Product Owner on deciding between bugs and new features
        If due to fear of introducing new bugs because of a fragile code base
            Find ways to get the code base under automated test and refactor
    Else if there are disagreements about bugs vs. change requests
        If fixed-price contracts are in use
            See 19.1.5, "Use Target Price Contracts to Align Interests of PO and PDT"
        Else
            See 19.1.9, "Use Acceptance Test-Driven Development to Help PDT Understand Requirements Better"
    Else
        Is there really a problem?
The solutions referenced in this flowchart are elaborated below.

Possible Remedies

Based on the analysis of the issues encountered during the acceptance process, one or more of the following remedies may be useful:

Use Target Price Contracts to Align Interests of Product Owner and Product Development Team

Use Incremental Usability Testing to Discover Design Defects Earlier

  • Incorporate usability testing of early versions or prototypes (including paper prototypes)

Use Incremental Acceptance Testing to Discover Defects Earlier

Use Incremental Acceptance Testing to Help Product Owner Discover Requirements Earlier

Involve Testers During Requirements Definition to Ensure Completeness of Requirements

Use Acceptance Test-Driven Development to Help Product Development Team Understand Requirements Better

  • The Product Owner should provide acceptance tests to the Product Development Team
  • Or collaborate with the Product Development Team to develop them

Use Automated Test Execution to Reduce Regression Bugs

Use Feature Teams to Improve Accountability of Product Development Team for End-User Functionality

Increase Breadth of Acceptance Testing

  • Include more varied functionality within the scope of testing.

Increase Depth of Acceptance Testing

  • Use more or different test design techniques (e.g., scenario-based testing) and test execution techniques (e.g., exploratory testing) to get better test coverage.

Focus on Quality to Get Speed of Delivery

  • Focus on Quality; speed will follow due to less time spent finding and fixing bugs.

Streamlining the Acceptance Process

The acceptance process is a necessary but potentially wasteful exercise and, as noted at the start of this chapter, the longer it takes, the more it costs in wasted effort and delayed delivery of business value. It can become a serious impediment to being responsive to our users (the Product Owner’s customers). Before we can take steps to streamline it, we must first understand it.

Use Value Stream Mapping to Understand the Acceptance Process

We can improve our understanding of an existing or proposed acceptance process through an exercise called Value Stream Mapping. This is a form of business process modeling that focuses our attention on the ratio of value-added time to total elapsed time. Figure 1 is a value stream map of a hypothetical acceptance process.
Figure 1: As-Is Acceptance Process
This process takes an average of 211 days to execute but provides only 60 days of actual value, resulting in a cycle efficiency of only 28% (the arithmetic is sketched after the list below). Some of the factors that make this process take so long to execute are:
  • Sixty days of software development output is sent through the process in a single batch. Therefore, there is a large inventory of untested software which results in many bugs being found during the readiness assessment and acceptance testing activities. The fixing of these bugs is done entirely on the critical path of the project.
  • There are several cycles in the process, each of which is typically executed several times. For example, Readiness Assessment sends the code back on average 4 times. Each round trip takes 7.5 days (2.5 days of fixing and 5 days of readiness assessment) and adds zero days of value because it is pure rework, so this feedback loop adds 30 days of elapsed time.
  • Some tasks take longer than necessary because the resources are not dedicated. For example, the readiness assessors have other responsibilities, so it takes them 5 days to do 2.5 days of testing.
  • Customer acceptance is done in two separate phases and is followed by three other forms of acceptance decision making, two of which can send the software back all the way to bug fixing. The software needs to be retested each time it is sent back for bug fixing.
  • The security team is overworked resulting in an average wait time of 10 days for the security review. The preparation of the security documents takes an average of 10 days as developers discover what is required but only 1 day of that effort adds real value.
  • The Change Management Board (CMB) meets monthly resulting in an average wait of 10 business days.
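
For readers who want to check the figures quoted above, the short sketch below redoes the two calculations in Python. The helper names are ours and a real value stream map would normally be drawn rather than coded, but the arithmetic is the same: cycle efficiency is value-added time divided by total elapsed time, and each traversal of a rework loop adds its fix time plus its retest time to the elapsed time while adding no value.

    # Rough arithmetic behind the as-is value stream map, using only the
    # figures quoted in the text above.

    def cycle_efficiency(value_added_days, elapsed_days):
        """Cycle efficiency = value-added time / total elapsed time."""
        return value_added_days / elapsed_days


    def rework_loop_days(times_sent_back, fix_days, retest_days):
        """Elapsed time added by a rework loop that is traversed several times."""
        return times_sent_back * (fix_days + retest_days)


    # Overall as-is process: 60 days of value in 211 elapsed days -> about 28%.
    print(f"as-is cycle efficiency: {cycle_efficiency(60, 211):.0%}")

    # Readiness assessment loop: sent back 4 times, 2.5 days of fixing and
    # 5 days of retesting per pass -> 30 days of elapsed time, none value-adding.
    print(f"readiness rework adds: {rework_loop_days(4, 2.5, 5):g} days")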

Streamline the Acceptance Process

This acceptance process can be streamlined by changing how the work passes through the various readiness and acceptance activities. Figure 2 illustrates the result of having applied a number of transformations to the process. These transformations are inspired by Lean Thinking, which focuses on eliminating waste. See the sidebar “Forms of Waste in Software Development” for a description of the seven common forms of waste. Which forms of waste we try to eliminate first depends on what is important to us. If time-to-market is the key consideration, we may want to tackle the queuing delays first and then look for ways to reduce the amount of processing. If cost or resource constraints are holding us back, we may want to look for ways to reduce the amount of processing first.
Figure 2: Streamlined Acceptance Process
This streamlined version of the process is the result of the following changes:
  1. We have broken the project into four two-week (10 business day) iterations. Each iteration is sent through readiness assessment, feature acceptance, integration acceptance and operational acceptance. Bugs found in the first three rounds of testing are fixed in the subsequent iteration. After the fourth iteration, the software is retested and the resulting bugs are fixed in a single bug-fixing iteration. Contrast this with the four rounds of bug fixing resulting from readiness assessment and the three rounds due to feature acceptance testing in the as-is process. Each round of testing takes roughly 30% of the time it took in the as-is process because the backlog of untested software is much smaller. This is an example of reducing the waste associated with inventory.
  2. The acceptance tests are provided to the development team along with the requirements. This allows the developers and the readiness assessment testers to verify that the functionality is working correctly before handing it off to the acceptance testers, thereby reducing the number of bugs found in acceptance testing. The readiness assessment shown as occurring after the iteration actually happens continuously (as soon as the developer says the feature is “ready”); therefore any bugs found in readiness assessment are fixed before the developer has shifted their focus to the next feature. This reduces the cost of fixing the few bugs that are found in readiness assessment by a factor of 4. This is an example of reducing the waste associated with defects.
  3. The acceptance review for security has been changed from an acceptance activity to a readiness activity. This allows it to be done in parallel with development. It has also been changed from being a “push” model where the team prepares a document to be reviewed to a “pull” model. This starts with a consultation with the security specialist who helps the development team understand the appropriate security requirements and design and what needs to be in the security document. This provides input into the software construction process resulting in a security compliant design as well as more appropriate content in the security document. This reduces the turn-back rate from 30% to 10% and the total effort from 10 days to 3 days without any reduction in value. The effort to produce the document is reduced and the effort to read the document is also reduced; definitely a win-win solution. A final security review is held prior to the final iteration to review the finished document as well as any late-breaking security-related design considerations. This is an example of eliminating waste by reducing delays and by reducing extra processing.
  4. The feature acceptance testing, integration acceptance testing and operations acceptance testing are done in parallel with an understanding that any showstopper bugs found in any of the parallel testing activities can result in the software being rejected. This reduces the elapsed time to the longest of the three instead of the sum of the three activities. This eliminates waste in the form of delays incurred while one group waits for another group to finish their work.
  5. Because only 25% of the functionality is being tested anew in each iteration, the acceptance testing can be finished in 1 day. This is a short enough period that the testers can focus on the one project and finish it in 1 day of elapsed time. This isn’t a form of waste reduction so much as an example of improving “flow” by making the output of the process less bursty via smaller batch sizes. Achieving flow is another of the key principles of lean development.
  6. The Change Management Board documentation is prepared in parallel with the final bug-fixing iteration and can be submitted to the CMB as soon as the three parallel acceptance decisions are positive. To further reduce the delay, the CMB now meets weekly for 1 hour rather than monthly for half a day. This reduces the average wait from 10 business days to 2.5. This is an example of reducing the waste associated with waiting.
The collective impact of these changes is to reduce the average elapsed time to 94 days, in which 110 days of processing are performed and 58 days of value are delivered, for a cycle efficiency of 53%.

Summary

A significant portion of the elapsed time between the end of software development and the point at which the software can start providing value to the customer is consumed by the acceptance process. This elapsed time can be reduced significantly by building software incrementally and doing incremental readiness assessment and acceptance testing as each increment of software is finished. Different kinds of inspection and testing activities can be done in parallel with development of the later increments, reducing the amount of work that needs to be done on the critical path between completion of development and final acceptance. The types of issues discovered during the acceptance process can be used to diagnose organizational, cultural and process issues in the organizations involved. Changing the processes to avoid the issues is usually more beneficial than improving the efficiency of addressing the issues once they are found.

What’s Next?

Volume 1 has introduced a number of tools you can use for reasoning about how you accept software, and it has described when to use a large number of acceptance-related planning, requirements, inspection, review and testing practices. You may want to research some of these practices in more detail: Volume 2 in this series describes many of them in more depth, while Volume 3 provides examples of the artefacts that might have been produced on a fictional project.
Sidebar: Forms of Waste in Software Development
The seven common wastes of manufacturing can be remembered using the acronym TIM WOOD. In software, there is an equivalent of each form of waste, as exemplified by the list provided by Mary and Tom Poppendieck in their book [Implementing Lean Software Development]. Unfortunately, the software-specific names don’t form a nice acronym, so we’ve included both names here. The software-specific names have a * next to them.
T = Transport (Handoffs*)
Transportation is waste because it doesn’t add any value to the end product but it increases the cost and the elapsed time.
In software, transport corresponds to handoffs between parties usually via documents. The requirements document handed by specifiers to software developers is one example, the design document handed by architects to developers is another. The preparation of these documents takes large amounts of effort – often much more than communicating the same information verbally. Handoffs usually result in loss of information and this is typically worse when the handoff is asynchronous (e.g. documents) rather than face-to-face. See the sidebar Using Whole Teams to Build Products in Chapter 1 for ways to reduce the number of handoffs.
I = Inventory (Partially Done Work*)
Inventory is bad because it costs money to produce the inventory and often costs money to store or manage the inventory. Inventory also masks issues by delaying when defective parts are discovered. Just-in-time manufacturing is all about reducing inventory to the lowest levels possible.
In software, inventory is any artifact that has taken effort to produce but which is not yet providing the customer with the value expected to be provided by having the software in use. Some common forms of inventory include:
  • Untested software – software that has been written but not yet tested
  • Unfixed bugs – software with a long list of bugs, untriaged or triaged but not yet fixed
  • Requirements documents – documents that provide detailed descriptions of functionality that won’t be built right away
M = Movement (Task Switching*)
Movement is bad because while a worker is moving they can’t be producing. This adds no value to the product but reduces productivity of the worker thereby increasing cost.
In software development, the equivalent of movement is task switching. This is caused by asking people to work on several things at the same time. Every time they switch between one task and another task, time is wasted while they re-establish their working context.
W = Waiting (Delays* or Queuing)
Whenever work stops while waiting for something to happen, it is a form of waste. Common causes of waiting in software include waiting for approvals, waiting for clarification of requirements and waiting for slow computers or tools to finish their processing.
O = Overprocessing (Lost Knowledge* or Process Inefficiency)
Overprocessing is waste caused by doing unnecessary steps or doing a step longer than necessary. This adds no additional value but it does add cost.
In software, overprocessing is any step in the production process that is not required to produce high quality software. The most common example is the production of documentation that no one will ever read. Another common form is having to rediscover information that was lost somewhere before it could be used.
O = Overproduction (Extra Features*)
Overproduction is producing too much. It is like inventory, except that it is finished product while inventory is work in progress. In software, overproduction is the development of unnecessary features. It has been reported [REF] that, on average, 80% of features developed are rarely or never used. This is clearly overproduction!
D = Defects
Defects, bugs, problem reports and usability issues all require extra work to analyse, understand and address. This extra work is pure waste, and it is exactly the same in software as in manufacturing.

Resources

[Poppendieck, M.] Lean Software Development: An Agile Toolkit, Addison-Wesley Professional. ISBN-10: 0321150783; ISBN-13: 978-0321150783.
[Poppendieck, M., Poppendieck, T.] Implementing Lean Software Development: From Concept to Cash, Addison-Wesley. (See reference in another chapter.)
