Chapter 1 - The Acceptance Process
This chapter defines the process by which software is deemed acceptable by the Product Owner. It introduces the three major constituencies who must make decisions at key points in the process and how these decisions relate to, and influence, each other.
We start by drilling into the Accept phase of the software development lifecycle to examine the key decisions and how release candidates flow through the process. Then we examine how more complex scenarios such as multiple releases, complex organizations and
complex products influence the process. Sidebars examine how the decision process relates to Stage-Gate processes and Quality Gates. We finish this chapter with techniques for streamlining the acceptance process. Subsequent chapters describe the roles and
responsibilities of various parties involved in making the decisions and how this process ports from sequential to highly iterative and incremental agile projects.
Acceptance as Part of the Product Development Lifecycle
Software-intensive systems go through a number of stages on their journey from concept, a half-formed idea in someone’s mind, to providing value to their users. This process is illustrated in Figure 1A, A Sequential Product Development Lifecycle, with the stages placed into “swim lanes” [OMGOnSwimlanes] based on which party has primary responsibility for each stage.
Figure 1A A Sequential Product Development Lifecycle
A common scenario is for a business person to have an idea for how to solve a business problem with software; this is the concept. The business person (herein called the Product Owner) elaborates on the concept to produce a specification of the product that, once built, will deliver value. The Product Development Team builds software to satisfy the specification, even though focusing on the specification instead of customer satisfaction may often lead to building the wrong product. The Product Owner then assesses how well the software satisfies the needs of the intended users and, if satisfied, accepts it. When the software is deemed acceptable, it is made available to the users, who then use it to realize the intended value that solves the originally identified business problem. The Product Development Team is usually obligated to provide support for a warranty period.
Agile product development processes blur the lines between the timing and responsibilities of the Elaborate, Build, and Accept activities of the product development process, as illustrated in Figure 1B, An Agile Product Development Lifecycle.
Figure 1B An Agile Product Development Lifecycle
Agile product development is characterized by face-to-face collaboration between the Product Owner and the Product Development Team. While each has their own responsibilities, they don’t hand off artifacts but rather collaborate on producing them. The development team is more likely to be directly involved in the elaboration of the requirements and product design, and the product owner is available to assess the product as it is being built rather than waiting for a final acceptance phase of the project. Elaboration, Building, and Acceptance are done incrementally in small chunks rather than in a single pass. As a result, testing the product for both acceptance and quality assurance starts early in the process, thereby spreading the testing activity throughout the development lifecycle rather than pushing it to the very end. Incremental development and incremental, early testing allow much more opportunity for learning about what is truly needed and truly possible, and typically result in products with better fitness for purpose. See the sidebar Using Whole Teams to Build Products for more on the motivation behind this.
There are several different ways to describe the role of testing and acceptance in the development lifecycle of a software-intensive product. Some of the well-known models are:
- The Stage-Gate™ Process.
- The V Model.
The first is a product development process that is not specific to software-intensive systems. It describes how transitions between the different stages of the product development lifecycle can be managed. The latter is a classical model specific to software development. It gives equal emphasis to the front end and back end of the product development lifecycle by explaining the relationships between the kinds of stages and artefacts commonly associated with the front end of the lifecycle and the types of testing and validation activities that traditionally take place at its back end.
Sidebar: Using Whole Teams to Build Products
One of the key forms of waste in software development is handoffs between highly-specialized roles. A key practice in agile and lean product development is the elimination of handoffs whenever possible through
the use of the Whole Team
approach. The Whole Team approach brings every skill needed to develop the product onto a single Product Development Team which works with the Product Owner to design the most suitable product based on the needs and constraints.
The team collectively commits to delivering increments of potentially shippable product functionality. This eliminates dependencies between teams and allows the Product Owner to work directly with the team to design the best possible solution to the business
needs. As an example, the Scrum method advocates the use of the Whole Team approach. Scrum proponents claim typical improvements in productivity of 5x to 10x (500% to 1000%). Including everyone needed to deliver the product on a single team allows the team to continually improve its process by eliminating waste. Organizational boundaries do not get in the way of this process streamlining
because everyone is on the same team striving to reach the same goal: delivering valuable product to the Product Owner. There is an ongoing debate in the agile community whether the Product Owner is part of the team or separate from it. There is no single answer to this question as it depends on the nature of the organization(s) and people involved. The pragmatic answer is that there are often two levels of team at play:
- The Whole Team, including the product owner and anyone helping them define, build, and verify the product.
- The Product Development (sub)Team, which builds the product, and the Product Owner (sub)Team, which accepts the product.
These two subteams must work closely together and often sit in the same team room. The exact breakdown of which skills are on each subteam varies from case to case. Developers usually belong to the Product Development Team and analysts usually belong to the Product Owner Team. Testers, documentation writers, and interaction designers may all belong to either subteam. The important thing is that all the skills are present and accountable for working together to deliver the best possible product with few if any external dependencies.
Processes with Stages and Gates
Many product development projects go through their lifecycle by adopting an essentially sequential process to evolve an idea into a deployed product. If the process’s distinct stages are guarded by decisions that control progression from one stage to another,
the process can be represented as a stream of alternating stages and gates, as proposed in the Stage-Gate Process™ [CooperOnDoingItRight]. The gates are associated with go/no-go or scope change decisions with associated resource commitment implications. In
a strictly sequential process, the stages are mutually exclusive: the project can only be in one stage at a time, although within a stage many kinds of activities might be happening at the same time. Figure 1c, A Sequential Process with Stages and Gates, depicts such a process.
Figure 1c A Sequential Process with Stages and Gates
Stage-gate processes need not be strictly sequential
in the sense of every stage and every gate being distinct. An example of an iterative process with stages and gates is the Incremental Funding Method (IFM) [DenneHuangOnIFM] whose stages and gates involve the same activities and decisions
that are repeated. In contrast to the strictly sequential depiction in Figure 1c, and similar to the IFM, the acceptance process that we describe in this guide has the same activities that are executed across stages and decisions that are made many times over the lifetime of a project. For example, the product development team may build many release candidates for testing but only a few may be accepted. The project could partially be in the Deploy stage (a previous release having passed through the gate after the Accept stage, Gate 5 in Figure 1c) even though the product development team is working on providing another release candidate for acceptance testing. The result of such dynamics is the introduction of many staggered parallel process streams with their own stages and gates, interactions between these parallel streams, and loops within and across the streams (see the Sidebar titled Recasting a Sequential Process through Workflow-Schedule Independence and Parallelization). The divergence from the traditional single sequential flow is in line with more modern and flexible interpretations and adaptations of the model. However, the acceptance process will feature intermediate stages that directly feed into subsequent stages without explicit gates in between. In addition, the gates, rather than explicitly controlling funding decisions for subsequent stages, primarily dictate control flow. These variations may be considered departures from the original Stage-Gate Process™ [CooperOnDoingItRight]. The stage-gate-like model underlying the acceptance process is expounded in another sidebar titled Representing the
Acceptance Process with Stages and Gates in Chapter 2.
Sidebar: Recasting a Sequential Process through Workflow-Schedule Independence and Parallelization
Corey Ladas [LadasOnScrumban] provides two powerful insights on software processes, based on Kanban systems [HiranabeOnKanban] and lean development [PoppendiecksOnLean], that allow us to see them in a new light. The first insight explains how workflow, the order and interdependence of steps inherent in a software process, can be separated from how the work that flows through the process is scheduled. This is called workflow-schedule independence. The second insight allows reorganizing a sequential workflow as parallel streams with merging points where work from the streams can be integrated. When both ideas are combined, software processes, regardless of whether
their essential workflow is sequential, can be made iterative, incremental, and parallel. For any process, the essential workflow is the sequencing of steps, from beginning to end, applied to a working system to implement a new improvement, whether a new piece of functionality or a new non-functional requirement. The key idea behind workflow-schedule independence is independence of requirements. Independence of requirements in turn results from how work is divided into smaller chunks at the beginning of a workflow, often by an elaboration or requirements analysis activity. If the chunks, the resulting low-level requirements that the product development team transforms into working functionality, are small and can individually be completed all the way to integration, acceptance testing, and deployment, they are independent. The extent to which this condition is satisfied dictates whether the chunks can be scheduled to flow through the process individually, in groups of smaller batches, or as a single big batch. The more independent the chunks are, the smaller the batches can get, down to the level of individual requirements. If they form independent groups of interdependent chunks, then the groups can be scheduled independently, but not the chunks themselves. Regardless of the granularity of the batches that flow through the process, the underlying essential workflow stays the same, but the workflow is executed again for each batch. The second insight, which Ladas refers to as
parallelization, leverages any independence of the steps of the essential workflow itself instead of the chunks of work that flow through it. The steps that don’t share resources can be parallelized. The result of such parallelization is a workflow with staggered branches and minimal unused capacity. Parallelization increases the efficiency of the process provided that the outputs of parallel streams can be integrated in such a way that the capacity gain introduced by the parallelization exceeds the extra overhead of integration.
Thus workflow-schedule independence and parallelization together may allow a seemingly strictly sequential process to be executed in a highly iterative, incremental, and parallel manner, making it more efficient, flexible and responsive to change. They apply
to various degrees to many software processes and lifecycle models with sequential depictions. In particular, workflow-schedule independence and parallelization explain how the various development processes on the process continuum discussed in the Introduction can be derived from the essentially sequential workflow underlying the classical waterfall model. Ladas’s insights also apply to two additional related process models that we discuss in this chapter. Both of these models, processes expressed in terms of stages and gates and the V Model, have sequential depictions not too dissimilar to that of the classical waterfall process discussed in the Introduction. And both of these models, like the classical waterfall process, are amenable to recasting in an iterative, incremental, and parallel manner by applying workflow-schedule independence and parallelization. Such recasting may occur at different granularities: at the level of whole releases, iterations within releases, feature sets, or individual features.
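To make workflow-schedule independence concrete, here is a minimal sketch. The step names and data model are our own invention for illustration, not Ladas’s; the point is that the same essential workflow is executed whether the chunks flow through it as one big batch or one at a time.

```python
# Illustrative sketch only: step names are hypothetical.
STEPS = ["elaborate", "build", "integrate", "accept", "deploy"]

def schedule(chunks, batch_size):
    """Run the same essential workflow over the chunks, batch by batch."""
    log = []
    for i in range(0, len(chunks), batch_size):
        batch = chunks[i:i + batch_size]
        for step in STEPS:            # the essential workflow never changes...
            for chunk in batch:       # ...only how much work flows through it
                log.append((step, chunk))
    return log

big_batch = schedule(["f1", "f2", "f3", "f4"], batch_size=4)  # sequential pass
one_piece = schedule(["f1", "f2", "f3", "f4"], batch_size=1)  # iterative flow

# The same work is done either way; only the schedule (the ordering) differs.
assert sorted(big_batch) == sorted(one_piece)
assert big_batch != one_piece
```

With `batch_size` equal to the whole feature list, the log reads like a waterfall (every feature elaborated, then every feature built, and so on); with a batch size of one, each feature traverses the entire workflow before the next one starts.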
The V-Model of Software Development
In the development of software-intensive systems, the V Model is commonly used to describe the relationships between activities focusing on building the system and activities for system verification. A variant of the V Model adapted from [FewsterGrahamOnTestAutomation] is shown in Figure 1d, The V-Model of Software Development.
Figure 1d The V-Model of Software Development
In Figure 1d, activities and artefacts that are associated with building the system are shown on the left side of the V-shape. Testing activities are associated with system verification, and these are shown on the right side of the figure. Tests specific to a layer tie together the building activities and artefacts of that layer, shown on the left side, with the verification-related activities of the same layer, shown on the right. Tests for each layer can be defined early, during the activities that are associated with building the system, but they can be executed only after development has progressed sufficiently to the right side of the V shape into the corresponding verification activity. Note that this interpretation of the V Model can accommodate both sequential style verification and incremental or test-as-you-go style verification. The principles described in the Sidebar titled Recasting a Sequential Process through Workflow-Schedule Independence and Parallelization apply to the V Model. Therefore the underlying workflow can be executed iteratively at different levels of granularity: on a feature-by-feature, iteration-by-iteration, or release-by-release basis.
After coding has begun and the unit tests for the implemented pieces are in place, unit testing may start. As a component’s units are implemented, certain APIs of the component may become functional. If the corresponding component tests for those APIs are in place, then component testing for that component may begin. Continuing this progression, we climb up the right side of the V shape: the units are rolled into components and ultimately components are composed into product features such that the business requirements are
ready to be exercised, thus reaching the acceptance testing activity at the pinnacle of the right side. The acceptance process describes what happens at this final activity of the top layer. We make the decision to accept or reject the software based on how
well it meets the business requirements shown on the top left side of the V Model.
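The climb up the right side of the V can be sketched with a small example. The domain (an order total) and all function names here are invented for illustration; the point is that tests exist at each layer, from unit through component up to an acceptance test tied to a business requirement.

```python
# Hypothetical example: one test per layer of the V Model.

def add_line_item(order, price):          # a unit
    order.append(price)

def order_total(order):                   # a component-level API
    return sum(order)

# Unit test: exercises a single unit in isolation.
def test_add_line_item():
    order = []
    add_line_item(order, 10)
    assert order == [10]

# Component test: exercises units rolled up into a component API.
def test_order_total():
    order = []
    add_line_item(order, 10)
    add_line_item(order, 5)
    assert order_total(order) == 15

# Acceptance test: exercises a business requirement end to end
# ("an order's total reflects every item the customer added").
def test_customer_sees_correct_total():
    order = []
    for price in (10, 5, 2):
        add_line_item(order, price)
    assert order_total(order) == 17
```

The unit and component tests belong to the lower layers of the right side of the V; only the last test speaks the language of the business requirement at the pinnacle.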
When applied in a strictly sequential manner, we associate the Acceptance Testing activity with a distinct Acceptance Test Phase. Contrasted with practices such as Acceptance Test-Driven Development and Incremental Acceptance Testing advocated in agile software development, the Acceptance Test Phase undertaken toward the end of the development lifecycle represents the traditional approach to acceptance testing. The specific practices commonly used in the agile software development context are described later in this chapter in the section The Acceptance Process for Highly Incremental Development.
Parties of the Acceptance Process
The acceptance process describes the interactions between two key parties: the Product Owner and the Product Development Team. Each has very specific responsibilities in the product development process. While the names of these parties may coincide with names
used in specific methods (such as Scrum), we provide our own definitions for the purposes of this book.
Responsibilities of the Product Owner
The Product Owner is the party who has commissioned the construction of the product but doesn’t have the time or skills to build it themselves. The Product Owner is responsible for clearly communicating their expectations of the product to the Product Development
Team and for deciding whether those expectations have been satisfied by the product delivered to them. The detailed responsibilities of the Product Owner include:
- Understand the potential users of the product or idea.
- Determine what capabilities the product needs to have to address the users’ needs.
- Determine the amount of resources they are prepared to invest into building the product.
- Specify the design of the product (not the design of the software inside the product).
- Help the Product Development Team understand the potential users and their needs and how the product will address them.
- Define the acceptance criteria that must be met before the product can be accepted. These include:
- Examples of how the product will be used by users and how the product should react in each example.
- Criteria regarding non-functional properties of the product including response time, availability, accessibility, configurability, etc.
- Define any constraints on the product including technical constraints (what technology can or must be used).
- Assess the cost and delivery date estimates provided by the Product Development Team.
- Make the decision as to whether the product is worth building.
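The first kind of acceptance criterion above, examples of product use paired with expected reactions, can be made executable. The following sketch is purely illustrative: the discount rule, its threshold, and the function name are all invented, and prices are in integer cents to avoid rounding issues.

```python
# Hypothetical acceptance criterion, expressed as an executable example.
def discounted_total(subtotal_cents):
    """Orders of $100.00 (10000 cents) or more get a 10% discount."""
    if subtotal_cents >= 10000:
        return subtotal_cents - subtotal_cents // 10
    return subtotal_cents

# Given a subtotal of $120.00 (qualifies),
# When the total is computed,
# Then a 10% discount is applied.
assert discounted_total(12000) == 10800

# Given a subtotal of $80.00 (does not qualify),
# Then no discount is applied.
assert discounted_total(8000) == 8000
```

Criteria written this way double as communication (the Product Owner can read the Given/When/Then comments) and as verification (the Product Development Team can run them).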
The Product Owner may carry out these responsibilities themselves or they may delegate them to members of their team (known as the Product Owner Team). They may also enlist the help of people outside the Product Owner Team to help carry out their responsibilities. This includes asking the Product Development Team to do some of the work. But neither delegation nor collaboration absolves the Product Owner from ultimate responsibility for them.
A good Product Owner has a clear vision of what “done” means and shares that vision with the Product Development Team as early as possible. A good Product Owner also recognizes that communication is inherently flawed and that you need to regularly verify that
what you intended to communicate has been understood. Therefore, a good Product Owner is prepared to try out and provide feedback on the software under development at frequent intervals; a good Product Owner doesn’t simply “throw a specification over the wall”
and say don’t bother me until you are ready for acceptance testing. They take an active interest in the emerging software and encourage the Product Development Team to provide frequent opportunities for feedback.
Responsibilities of the Product Development Team
The Product Development Team is the person or persons who have accepted the task of building the product commissioned by the Product Owner. The Product Development Team is responsible for delivering a working product that meets the Product Owner’s acceptance
criteria. The detailed responsibilities of the Product Development Team include:
- Determine how the requested functionality should be achieved including:
- Hardware vs. firmware vs. software partitioning.
- Determine what technologies will be used to build the product (unless these were part of the requirements or constraints provided by the Product Owner).
- Build the product and verify that it meets the Product Owner’s expectations to the best of their ability. The “product” includes:
- Working software and hardware
- Whatever user documentation is required (unless this is provided by the Product Owner as part of the requirements).
- Whatever artifacts are required to maintain the software including software and hardware design documentation, unit and component tests, test tools required to test the product, etc.
The Product Development Team may carry out these responsibilities themselves or they may enlist the help of people outside the Product Development Team to help them carry out their responsibilities. But neither delegation nor collaboration absolves the Product
Development Team from ultimate responsibility for them.
Responsibilities of Other Specialities
The nature of the product being built often dictates the kinds of specialists that need to be involved in the project. These specialists may be engaged by either the Product Owner Team or the Product Development Team. Regardless of which party they work with,
they need to understand their own role in delivering a product that satisfies the potential users.
The Basic Acceptance Process
The basic acceptance process at its highest level subsumes the Build-Accept-Use cycle depicted in Figures 1A and 1B. Each of the high-level stages shown in Figure 1A and 1B involves multiple sub-stages or activities and an exit decision.
The Build stage is composed of the product’s construction, its readiness assessment, and a subsequent readiness decision, all performed by the product development team.
The Accept stage is composed of acceptance testing overseen by the product owner (and sometimes performed in collaboration with the product development team), and a subsequent acceptance decision made by the product owner. Normally, the readiness decision and the acceptance decision are made on the same set of criteria. The preceding activities, readiness assessment and acceptance testing, may involve overlapping work, although readiness assessment may be more internally focused than acceptance testing. However, the two decisions differ in their goals: the readiness decision determines whether the product is ready for the product owner to evaluate, and the acceptance decision determines whether the product meets all its requirements and is ready to be used. The last stage, the Use stage, may involve a final, user-facing evaluative activity in either a production or end-user environment to determine whether the product is usable and deployable to its intended audience.
Figure 2, The Acceptance Process, drills down into each of the Build, Accept, and Use stages to express them in terms of their lower level components. The figure shows only the path associated with the positive outcomes of the underlying decisions.
Figure 2 The Acceptance Process
In the ideal world this sequence of activities would be done exactly once and the assessment could be done entirely by the Product Owner; in practice, this could be a recipe for disaster as untested software can contain large numbers of bugs per thousand lines
of code. It usually takes many tries to build a product that the Product Owner finds acceptable; therefore the acceptance process is traversed, at least partially, many times. To ensure that a quality product has been built, and thereby minimize the number of times the Product Owner is asked to accept the same product, most Product Development Teams will include some level of self-assessment of each release candidate before providing the software to the Product Owner for making the acceptance decision. In this book we call the testing and other verification activities that are performed prior to asking the Product Owner to accept the software readiness assessment, and the decision to hand off the software is called the readiness decision. The testing done by the Product Owner or their proxy after receiving the software from the Product Development Team is called acceptance testing, and the decision whether to accept the software in its current state is called the acceptance decision. The decision each user makes whether or not to actually use the product once it is available to them is called the usage decision.
Each decision can result in a positive or negative outcome. Only the positive outcomes are shown in Figure 2; the negative outcomes are shown in Figure 3, Paths through the Acceptance Process.
The transitions from the Build stage to the Accept stage and from the Accept stage to the Use stage typically require the movement of software (or software containing products) from one environment to another. For the purposes of this discussion, the specific
steps involved in making the software available, though important for acceptance, are not considered a part of the acceptance process.
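The gating decisions that connect the Build, Accept, and Use stages can be sketched as a simple control flow. The stage names follow the text; the pass/fail predicates and the function name are our own simplification, since in practice each decision weighs quality data rather than a single boolean.

```python
# Illustrative sketch of the decision flow between the stages.
def acceptance_process(passes_readiness, passes_acceptance):
    """Walk one release candidate through the two gating decisions."""
    stages = ["build", "readiness assessment"]
    if not passes_readiness:
        return stages + ["back to development"]   # readiness decision: no-go
    stages.append("acceptance testing")
    if not passes_acceptance:
        return stages + ["back to development"]   # acceptance decision: no-go
    return stages + ["use"]                       # accepted: released for use

assert acceptance_process(True, True)[-1] == "use"
assert acceptance_process(False, True)[-1] == "back to development"
```

Note that a candidate that fails the readiness decision never reaches acceptance testing at all, which is exactly the filtering role the readiness decision plays.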
Assessing Release Candidates using the Acceptance Process
The version of the software that is put through the acceptance process is often referred to as a release candidate. It goes through the readiness decision and acceptance decision processes step by step and decision by decision until one of the following occurs:
- It passes through all the decisions and is deemed ready for use by users.
- It is deemed insufficient in some way, at which point it is sent back to an earlier phase.
A negative acceptance decision or readiness decision can cause the release candidate software to be “sent back” to an earlier point in the process.
Figure 3, Paths through the Acceptance Process, illustrates the possible paths through the acceptance process when the readiness or acceptance decisions cannot be made and require additional capabilities or information. In the figure, Quality Data refers to the information about the quality of the system obtained from readiness assessment or acceptance testing.
Figure 3 Paths through the Acceptance Process
From the readiness decision, a release candidate can be sent back to readiness assessment if the quality data obtained from the readiness assessment is insufficient to make a well-informed readiness decision. If the data from the readiness assessment is sufficient to determine that critical functionality is missing or the product has severe deficiencies, the release candidate can be sent back to development for rework.
From the acceptance decision, a release candidate can be sent back to acceptance testing if the quality data from acceptance is insufficient. If the acceptance decision also depends on data collected during readiness assessment and it is deemed to be insufficient,
the release candidate can be sent back to readiness assessment. If critical functionality is missing or has sufficiently severe deficiencies, the release candidate can be sent back to development for rework.
Normal practice is to log any bugs found in the bug management system so that the Product Development Team can start the process of remediation but to continue testing until all the tests have been run or testing is blocked by a bug that prevents further testing
from occurring. At this point testing stops until a new release candidate is received. After remediation of one or more bugs, a new release candidate is built and passed through the acceptance process. Each release candidate should be a distinctly named version of the software (and should be tagged as such in the source code management system).
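Distinct naming of release candidates can follow many conventions; the scheme sketched below (release version plus an `-rc.N` suffix) is one common pattern, shown here purely as an illustration. The function name and format are our own.

```python
# Hypothetical naming scheme: each release candidate gets a distinct,
# monotonically numbered name that can also serve as a source-control tag.
def next_candidate(release, previous_rc=None):
    """Produce the next distinct release-candidate name for a release."""
    n = 1 if previous_rc is None else int(previous_rc.rsplit(".", 1)[1]) + 1
    return f"{release}-rc.{n}"

rc1 = next_candidate("1.4.0")            # first candidate for release 1.4.0
rc2 = next_candidate("1.4.0", rc1)       # built after remediating bugs in rc1
assert rc1 == "1.4.0-rc.1"
assert rc2 == "1.4.0-rc.2"
```

With distinct names, bug reports and test results can be traced unambiguously to the exact candidate they were observed against.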
When a large number of bugs is found during the acceptance process, the Product Owner may need to decide which bugs must be fixed, which require further investigation, and which can be deferred to a future release, a process known as bug triage. Bug triaging involves assigning a priority and severity to each bug. While this is a common practice, it is typically a symptom of deeper issues in the organizational culture and structure of the enterprise. Rather than focusing on becoming
better at fixing the bugs, most organizations would be better served by understanding the root causes of their bug backlog and changing how they build their products to reduce the number of bugs found. The section Trouble-shooting the Acceptance Process in
Chapter 20 – Fine-Tuning the Acceptance Process offers some possible root causes of a large bug backlog and possible avoidance strategies. Refer to the Sidebar Incremental Acceptance and Bug Tracking on Agile Projects in Chapter 8.
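The priority-and-severity scheme described above can be sketched as a simple ordering. The field names and the 1-is-highest convention are assumptions for illustration; real bug trackers vary in how they encode and combine these attributes.

```python
# Hypothetical triage sketch: priority (1 = fix first) and severity
# (1 = most severe) determine the remediation order.
bugs = [
    {"id": 101, "priority": 2, "severity": 1},
    {"id": 102, "priority": 1, "severity": 3},
    {"id": 103, "priority": 1, "severity": 1},
]

def triage(bug_list):
    """Order bugs for remediation: priority first, then severity."""
    return sorted(bug_list, key=lambda b: (b["priority"], b["severity"]))

order = [b["id"] for b in triage(bugs)]
assert order == [103, 102, 101]
```

Note that the two highest-priority bugs outrank the severity-1 bug with lower priority, reflecting the usual rule that priority (business urgency) trumps severity (technical impact) when they conflict.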
Although the acceptance process outlined here appears to be sequential, with the software moving from one stage to another, this is not always the case. On well-run projects, following the practice of incremental acceptance testing, each feature may traverse this process individually. Therefore, it is possible for one feature to be in the concept stage, with the development team working on another set of features in the development stage, and another set of features with the Product Owner in the acceptance stage. That is at least three instances of the process running in parallel. Even on classical waterfall projects we may have more than one release candidate going through the process at the same time. For example, the Alpha release may be In Use, the Beta release may be in Acceptance Testing, and developers may be working on additional functionality for the general release while also fixing bugs in the next release candidate for this release. That is four instances of the process running in parallel. Such parallelism, however, is not without implications due to increased integration and coordination overhead. In particular, source code management becomes complex due to branching and subsequent merging of source code branches (for proven branching strategies in TFS, see [MSOnTFSBranchingGuide]). Bug tracking becomes complex due to the need to coordinate the issues from the releases under acceptance testing with the fixes and additions in the releases under development. A more complete discussion of overlapping releases follows later in this chapter under the heading The Acceptance Process for Alpha & Beta Releases. For the general principles governing incrementality and parallelism, refer to the Sidebar Recasting a Sequential Process through Workflow-Schedule Independence and Parallelization.
Why Separate Readiness Assessment from Acceptance Testing?
The process of verifying and validating the software-intensive system against the requirements and expected usage can be viewed as a single monolithic activity or as very fine-grained steps. The primary reason for grouping these fine-grained steps into two
major buckets is to separate the activities that should be carried out by the Product Development Team before turning the system over to the Product Owner from those that the Product Owner does as part of deciding whether to accept the system. Readiness assessment
is primarily about the professionalism of the Product Development Team while acceptance testing is about validating that the system as delivered will suit the purposes of the users and other stakeholders and confirming compliance to contractual agreements.
There may be other reasons to divide the activities into various categories. We mention them only briefly because they are beyond the scope of this guide:
- Deadline Pressure – Some managers might believe that having an earlier, separate milestone before handing over software to the product owner is an effective way to motivate developers.
- Accounting – The initial construction of the software may be treated differently than the fixing of bugs from an accounting perspective (work considered a change request may affect project accounting differently than work considered a bug fix).
- Early Validation – It is useful to have real users try early versions of the product to validate that the product, once finished, will fill the niche for which it is targeted. This clearly isn’t full-on acceptance of the product, so it could be considered readiness assessment. Before doing this type of testing with users, the Product Development Team would want to do their own due diligence to ensure the software was working well enough for the users to try it. Therefore, we would consider this a form of Alpha/Beta release, or possibly incremental acceptance testing or even conditional acceptance.
- End User Training – Exposing the software to users before acceptance can be an effective way to start the training process. If the quality of the system is high enough, it can also be useful as a form of viral marketing. Both of these uses fall into the
category of Alpha/Beta releases rather than readiness assessment.
Readiness Assessment or Acceptance Testing?
Given a particular testing activity, should it be considered part of readiness assessment or acceptance testing? There are several factors that play into this decision:
- Who’s doing the assessing: Acceptance is performed by the Product Owner or their proxy (someone acting directly on the Product Owner’s behalf) while readiness assessment can be done by anyone involved in the project. Where the Product Development
Team and Product Owner are distinct organizational entities, the roles should be fairly clear. Things get murkier when there is no real user or end-customer involved, common in product companies, or when there is a separate organization charged with testing.
These scenarios will be covered in more detail in Chapter 2 - The Decision Making Model.
- Formality of the testing: Acceptance is a more formal activity because the Product Owner is involved. This implies more formal record keeping of any concerns that are identified. Readiness assessment can be much less formal; it need only be as formal as dictated by the project context. Product Development Teams that need to pass formal audits will keep much more formal records. Agile projects tend to be very informal; during readiness assessment they simply fix the bugs immediately rather than defer them for fixing later.
The lines can get somewhat blurred when there are more than two stages of testing; these scenarios are described in Acceptance in Complex Organizations and Accepting Complex Products in Chapter 2 – Elaborating on the Acceptance Process. In the end, it is less important to decide which label to apply to a particular testing activity than it is to ensure that the right testing activities get done and that each party knows which activities it is responsible for. It can be useful to list all the potential testing activities and, for each one, decide whether it is mandatory, optional, or not required, and to assign responsibility for it to a specific person or organization. A sample spreadsheet is provided online at http://testingguidance.codeplex.com.
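The kind of responsibility matrix just described can be sketched as a simple data structure. This is only an illustration; the activity names, statuses, and owners below are hypothetical examples, not recommendations:

```python
# A minimal sketch of a testing-responsibility matrix.
# All activity names, statuses, and owners are hypothetical examples.

activities = [
    # (activity,              status,         responsible party)
    ("Unit testing",          "mandatory",    "Product Development Team"),
    ("Usability testing",     "optional",     "Product Owner Team"),
    ("Performance testing",   "mandatory",    "Product Development Team"),
    ("Acceptance testing",    "mandatory",    "Product Owner"),
    ("Localization testing",  "not required", None),
]

def unassigned_mandatory(matrix):
    """Flag mandatory activities that nobody has taken responsibility for."""
    return [name for name, status, owner in matrix
            if status == "mandatory" and owner is None]

print(unassigned_mandatory(activities))  # → []
```

The point of the check is the same as the spreadsheet’s: every mandatory activity must have exactly one accountable party before the acceptance process begins.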
The Acceptance Process for Multi-Release Products
Thus far, we’ve focused on the acceptance process as it applies to a single release candidate. How does the acceptance process get applied when a product will have multiple releases?
For the most part, long-lived multi-release systems can be thought of as simply a sequence of individual products, where each product is being individually assessed for readiness and acceptance. Each release goes through the entire decision making process.
Figure 4 - The Acceptance Process with Multiple Releases illustrates an example of this process.
Figure 4 The Acceptance Process with Multiple Releases
The set of functionality required for each release, which we call the Minimum Marketable Functionality or MMF, is unique, based on the goals of the release as determined by the Product Owner. The set of quality
criteria, which we call the Minimum Quality Requirement or MQR, is somewhat more consistent from release to release but it may evolve as the product matures and product context evolves. The specific criteria would be selected from the set of criteria in effect
at the time of the project (which may vary from those that were in effect for earlier releases). For example, the Sarbanes-Oxley Act (SOX) was enacted in 2002, so all subsequent releases required compliance with this act as a readiness and/or acceptance criterion. Subsequent releases may also have backward compatibility requirements that did not exist for earlier releases. Ideally, the MMF and MQR used by the Product Development Team in making the Readiness Decision should be the same as the MMF
and MQR used by the Product Owner to make the Acceptance Decision. In practice, this is hard to guarantee without extensive collaboration between the Product Development Team and the Product Owner Team.
The Acceptance Process for Alpha & Beta Releases
Alpha and beta releases are ways to use end users as field testers before a production-quality general release, to gather more data about how the product might be used “in the real world.” The end users may be internal to the organization, or external but friendly (meaning more bug-tolerant) or having a stake in the outcome (wanting to see the product early for their own benefit). In some organizations internal alpha testing is called “dogfooding,” from the idea that whoever produces dog food should be prepared to feed it to their own dog.
The final outcome of alpha and beta testing is as likely to result in changes to the MMF and MQR of the product (in effect, new functionality or quality requirements) as it is to be a collection of bug reports that describe failures to meet the existing MMF and MQR.
Each alpha release and beta release can be considered a separate release with its own release decision and acceptance decision. That is, the development organization needs to deem it ready for an alpha (or beta) release and the Product Owner needs to accept it as being ready for alpha release, as in: “I accept this alpha release as having sufficient functionality and quality to warrant releasing to users to collect feedback …” This is illustrated in Figure 5 – The Acceptance Process with Alpha and Beta Releases.
Figure 5 The Acceptance Process with Alpha and Beta Releases
Note that every release, whether alpha, beta, or final, has a readiness decision and an acceptance decision. The functionality (MMF) and quality (MQR) bars for an alpha release are typically lower than those needed for a beta release, which in turn are lower than those needed for a general release. For example, the MMF for the Alpha release may be a core subset of functionality; not all features need be present. The MQR may be “no severity 1 bugs” and “responds in under 2 seconds 95% of the time (versus the 1 second response time required in production).” Typically, the MQR criteria for the Beta release will be based on the Alpha criteria, with additional criteria based on improvements previously planned as well as feedback from the Alpha testing, as shown in Figure 6 – Alpha or Beta Feedback.
Figure 6 Alpha (or Beta) Feedback
Similarly, the MMF and MQR for the general release would be affected by feedback from users of the Beta release.
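Percentile-based quality criteria like the response-time examples above are easy to check mechanically. The following sketch is illustrative only; the timings, thresholds, and function name are invented for this example:

```python
# Check a percentile-based MQR criterion such as
# "responds in under 2 seconds 95% of the time".
# The sample timings and thresholds are made up for illustration.

def meets_percentile_mqr(response_times_s, threshold_s, percentile=95):
    """True if at least `percentile` percent of responses beat the threshold."""
    if not response_times_s:
        return False
    within = sum(1 for t in response_times_s if t < threshold_s)
    return within / len(response_times_s) * 100 >= percentile

timings = [0.8, 1.2, 0.9, 1.9, 1.7, 1.1, 0.7, 1.4, 1.0, 1.6]
print(meets_percentile_mqr(timings, threshold_s=2.0))  # alpha bar (2 s) → True
print(meets_percentile_mqr(timings, threshold_s=1.0))  # production bar (1 s) → False
```

The same data can thus pass the Alpha MQR while failing the general-release MQR, which is exactly the progression the acceptance process expects.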
Soaking, Field Trials and Pilots
New products are often tested with pilot groups before being rolled out to larger groups of users. A “pilot” is typically the first production-quality release, with functionality picked to satisfy a particular target audience. The MMF bar is therefore lower, but the MQR needs to be sufficient to satisfy the users. Unlike an Alpha or Beta release, this is real production software. A pilot user group will often receive several dot releases based on issues they find and report. The initial pilot may be followed by
a larger pilot group, who may require additional features, or the product may go straight to general availability. Each of the pilot releases would be subject to the acceptance process with both readiness and acceptance testing preceding deployment.
A similar strategy is known as field trial or “soaking” (think of the washing metaphor: just like clothes you let the software “soak” for a while, to make sure it comes out “clean”). The software is deployed to friendly customers/users for an extended period
(more than just a test cycle) to see how it behaves in a real customer/user environment. As with a pilot, readiness assessment and acceptance testing would be done to ensure that the MMF and MQR are satisfied. The outcome in all cases is to gather user feedback that may cause the Product Owner to adjust the MMF and MQR expectations of subsequent releases, not to revisit the acceptance decision of the pilot or field trial release.
Any time software needs to be maintained (such as when small changes are made to the software and those changes are deployed), you are, in effect, creating a minor interim, or dot release of the software that needs to go through the entire decision-making cycle
yet again. It is common to look for ways to reduce the cost of gathering the data to support the acceptance decision. Some ways of doing this increase the risk of possibly missing newly created bugs (also known as "regression bugs") by reducing the
amount of testing (for example, risk-based test planning) while others simply reduce the effort to get similar test coverage (for example, automated regression testing).
Another unique aspect of software maintenance relates to the warranty period on a software release. Any changes that need to be made to the software should be made in the source code management (SCM) system. When building multiple releases, there may be ongoing
development for the next release that should not, under any circumstances, be inserted into the production system along with the warranty bug fixes. This requires managing separate code streams or branches during the warranty period and ensuring that all warranty
fixes are also applied to the new development code stream. SCM is also needed during acceptance testing if the development for the next release is moving forward in parallel. For practical strategies for using source code management systems, see [BerczukAppletonOnSCM] and [MSOnSCM].
Frequently, the acceptance decision maker accepts a product with conditions. Accepting a product with conditions is a short-hand way of saying,
"The product is not acceptable yet, but it is close to meeting our criteria for functionality (MMF) and quality (MQR). If you address the following concerns (and we find nothing new in the subsequent round of acceptance testing), we intend to accept the
product in the next pass through the decision making process."
Conditional acceptance brings the process back to the construction/development phase of the acceptance process, but this time with a much better idea of exactly what must be done to make it through both the readiness decision and the acceptance decision on
the next round.
The Acceptance Process for Highly Incremental Development
The acceptance process as described thus far looks very sequential in nature, but it can also be used in a highly incremental fashion on agile projects by applying workflow-schedule independence (see Sidebar Recasting a Sequential Process through Workflow-Schedule
Independence and Parallelization). Each chunk of functionality (often called a “feature” or a “user story”) can be passed through the acceptance process independently as shown in Figure 7 - The Acceptance Process Applied to Incremental Development.
Figure 7 The Acceptance Process Applied to Incremental Development
The individual MMF for each feature or user story may be:
- independent of the MMF of other features or user stories; or
- cumulative, when it builds on the MMF of prior features.
The MQR may differ between increments, although it is usually more consistent across increments than the MMF. For example, the MQR for an alpha release is typically lower than the MQR for a regular, general release. When the MQR relates to non-functional requirements, it applies across functional features.
At some point, product-level readiness and acceptance decisions are made to ensure that everything works properly together. Refer to Accepting the Output of Feature Teams in Chapter 2 – Elaborating on the Acceptance Process for a more complete discussion.
In incremental development, the developers may pre-execute the acceptance tests. When all the tests pass for that feature or user story, they may turn over the functionality to the Product Owner (or another customer proxy such as a business analyst or acceptance tester) for immediate "incremental acceptance testing." Therefore, acceptance testing at the feature level may start immediately after the product development team decides that all or most of the functionality for that feature is built and working correctly, allowing increments to overlap. This is an instance of increasing efficiency by introducing parallelism into the workflow when separate resources are available (see Sidebar Recasting a Sequential Process through Workflow-Schedule Independence and Parallelization).
There may also be a round of acceptance testing performed at the end of the iteration, as illustrated in Figure 4 of the Introduction by the medium-length vertical bars representing iteration-wide testing. The activities conducted for each feature within the
iteration are illustrated in Figure 8.
Figure 8 Agile Development with Incremental Acceptance Testing
Development is done one feature at a time by either a single developer or a small team of developers, documentation writers, testers and user experience people, depending on the size of the feature. (See the sidebar Whole Team for more information.) In Figure
8 – Agile Development with Incremental Acceptance Testing, Dev 1 and Dev 2 could each be a single developer or a small team. When the development work is complete, the developer, possibly assisted by a tester or business analyst, conducts readiness assessment on the feature by running all the known unit, component, and business acceptance tests against the software. If they find any problems, they fix them before proceeding. These tests are often automated and act as a form of mistake-proofing or bug repellent. They are consistent with Shingo’s adage “Inspection to find defects is waste; inspection to prevent defects is essential” [ShingoDillonOnToyotaProductionSystem].
When they are satisfied that the software is working properly, they ask the Product Owner (or a member of the Product Owner Team) to run their acceptance tests for that one feature. If the Product Owner finds any problems, they show the problems to the team
who then fixes the problems immediately and repeats the readiness assessment on the revised code before giving it to the Product Owner for further acceptance testing. When the Product Owner is satisfied, they accept the feature and the developer (or team)
starts working on their next feature. This practice, called Incremental Acceptance Testing,
has two key benefits. First, any concern found by the Product Owner during acceptance testing can be discussed with the developers while they still remember
the details of how they implemented the functionality. Second, the defects or deficiencies can be addressed immediately before the developer moves on to the next feature instead of being stockpiled for a "bug-fixing phase."
The Role of Bug Fixing in the Acceptance Process
Finding bugs during acceptance testing is normal. How we react to the bugs tells us a lot about our overall approach to product development. We can use bugs to motivate improvements to our development process or we can just fix the bugs without understanding
the root cause. The latter approach often results in a large number of bugs being found and having to be fixed. We can either fix these bugs right away or we can stockpile them for a bug fixing phase. Such a stockpile of bugs is considered a form of waste
in lean methodologies because we are building up an inventory of partially finished software. According to Tom and Mary Poppendieck, thought leaders of the Lean Software Development approach, “it is irresponsible to organize the work in such a way that Triage
is necessary”. They argue that “it is primarily the decision to test late rather than to mistake-proof every step that escalates the cost of repair to the point that one would deliberately choose to leave known defects in the product.” Thus, the need for bug
triage can be considered a process smell. There are, however, circumstances in which development is organized in a way that makes a bug backlog, and therefore triage activities, necessary. This tends to happen with large and/or distributed teams or when the Product Owner is not readily available. In this case, it is imperative to keep in mind the following points:
- Triage should only look at new bugs – re-examining old bugs repeatedly is waste. The purpose of triage should be to decide whether (a) we need to fix the bug for the software to be considered “done”, (b) the issue is not really a bug and therefore requires no action, or (c) the bug relates to future functionality and can be addressed when that functionality is built.
- To avoid waste, we need to keep the depth of the bug backlog reasonably small. “Reasonably small” is a function of our development and bug-fixing capacity. If on a large project our development team can fix 200 bugs per iteration on average, then a backlog of 200 bugs is not a lot. However, if our development capacity is such that we can only fix 10 bugs per iteration, a backlog of 100 should clearly be considered a red flag.
- The key goal is to keep the bug-fix latency (measured as Mean Time To Fix) to a minimum by performing regular triages and scheduling bug fixes so that the bug backlog does not explode. The average time to fix bugs should be no more than a few iterations. Long-lived bugs are a smell indicating “inventory” (a form of waste). This can be avoided by ensuring we get to done-done on each feature before moving on to the next feature.
- While batching related bugs for fixing, or including them in a planned future user story/feature, can be a reasonable strategy (for example, we know that a bug is related to a feature we will be working on in the next iteration, one that will require a serious update or deprecation of some related functionality; in that case it may make sense to wait until that work is completed), we need to make sure we are cognizant of the impact of such batching on our latency.
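The backlog-depth reasoning above is simple arithmetic, and teams sometimes automate it as a dashboard check. A minimal sketch, with made-up numbers and a made-up "few iterations" threshold:

```python
# Red-flag check for a bug backlog: compare backlog depth against the
# team's demonstrated fixing capacity per iteration.
# The threshold of 3 iterations is an illustrative assumption, not a rule.

def backlog_red_flag(backlog_size, fixes_per_iteration, max_iterations_to_drain=3):
    """Flag the backlog if draining it would take more than a few iterations."""
    iterations_to_drain = backlog_size / fixes_per_iteration
    return iterations_to_drain > max_iterations_to_drain

# The two scenarios from the text:
print(backlog_red_flag(200, 200))  # → False: drained in a single iteration
print(backlog_red_flag(100, 10))   # → True: ten iterations of "inventory"
```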
For a detailed discussion of bug backlog analysis, bug triage and the use of bug management systems, refer to Chapter 19 Managing the Acceptance Process.
Using Acceptance Tests to Drive Development
The acceptance process works best when the Product Owner supplies the acceptance tests to the development team early enough to help them understand “what ‘done’ looks like” – ideally before development even starts, and at the very latest before development is finished. The best results are achieved when these acceptance tests are not handed off but developed together, for then the acceptance tests are used not only for finding bugs in the product but also to clarify requirements and improve conversations between the Product Owner and the product development team [MelnikEtAlOnExecutableAcceptanceTests]. The focus is on collaboration, not handoff. This practice is known as
Acceptance Test-Driven Development (ATDD)
or Storytest-Driven Development (STDD).
When used in conjunction with incremental development, the team will typically automate much of the testing to keep the cost of repeatedly doing readiness assessment
low. The sidebar “What it takes to do Incremental Acceptance” describes other success factors for doing highly incremental acceptance testing.
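As an illustration of what such an executable acceptance test might look like, here is a minimal, table-driven sketch. The feature (volume-discount pricing), the function name, and the example values are all hypothetical; in a real project the examples would be agreed with the Product Owner and the test would exercise the actual product code:

```python
# A minimal, table-driven executable acceptance test in the ATDD style.
# The feature under test (volume-discount pricing) and all names and
# values are hypothetical examples, standing in for real product code.

def discounted_price(list_price, quantity):
    """Hypothetical feature: 10% off orders of 10 or more units."""
    total = list_price * quantity
    return total * 0.9 if quantity >= 10 else total

# Examples agreed with the Product Owner before development started.
acceptance_examples = [
    # (list price, quantity, expected total)
    (5.00,  1,  5.00),
    (5.00,  9, 45.00),
    (5.00, 10, 45.00),   # discount kicks in at 10 units
    (2.00, 20, 36.00),
]

for price, qty, expected in acceptance_examples:
    actual = discounted_price(price, qty)
    assert abs(actual - expected) < 1e-9, (price, qty, actual, expected)
print("all acceptance examples pass")
```

Because the examples are plain data, they double as a requirements table the Product Owner can read and amend, which is what makes the tests useful for conversation as well as for verification.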
Summary
This chapter introduces the concept of the acceptance process as a way to think about the activities related to making the decision to accept software. The acceptance decision is the culmination of a process that involves several organizations, each with specific
responsibilities. Each release candidate is passed through the decision points of the acceptance process on its way to making the final acceptance decision. There are three major decision points between software construction by the Product Development Team
working with their Product Owner and software usage by the end user. First, the Product Development Team must decide whether the software-intensive product in its current form, known as the release candidate, is ready for acceptance testing. If it is, the
Product Owner must decide whether the release candidate meets their acceptance criteria before the product can be made available to the end users. Ultimately, each end user must decide for themselves whether they want to use the software. This usage decision
has very little influence on the current acceptance decision although it may influence the acceptance criteria for future releases of the product.
While each user potentially makes their own usage decision, the readiness and acceptance decisions are normally each a single decision. Each decision point or "gate" should have well-defined criteria to guide the decision making. These criteria should
be known in advance by both the Product Development Team and the Product Owner.
The acceptance process described in this chapter applies to both sequential and agile projects, but in slightly different ways. The phased nature of sequential projects means that all testing is done within a separate testing phase, and the acceptance process describes
what goes on within that phase. Agile projects traverse the entire development lifecycle for each feature or user story. Therefore, the acceptance process is executed at several levels of granularity with the finest grain execution being at the individual
feature level and the largest being at the whole product (or feature integration) level.
In Chapter 2 – Elaborating on the Acceptance Process we go into more detail about how the acceptance decision is related to the usage decision and how the process is implemented in complex organizations and when building complex products.
- [CooperOnDoingItRight] Robert G. Cooper, “Doing it Right: Winning with New Products”, Ivey Business Journal, July/August 2000. Also available as Product Development Institute Reference Paper #10.
- [CooperOnStageGateProcess] Robert G. Cooper, “The Stage-Gate Idea-to-Launch Process – Update, What’s New and NexGen Systems”, J. Product Innovation Management, Volume 25, Number 3, May 2008. A modified version also available as Product Development Institute Reference Paper #30, “Perspective: The Stage-Gate Idea-to-Launch Process – Update, What’s New and NexGen Systems”.
- [FewsterGrahamOnTestAutomation] Mark Fewster and Dorothy Graham, Software Test Automation, ACM Press, 1999
- [ShingoDillonOnToyotaProductionSystem] Shingo, Shigeo and Dillon, Andrew, “A Study of the Toyota Production System: From an Industrial Engineering Viewpoint”: Productivity Press, 1989.
- [HighsmithOnAgileProjectManagement] Highsmith, Jim “Agile Project Management: Creating Innovative Products” AWP 2004 ISBN-13: 978-0321219770
- [BerczukAppletonOnSCM] Berczuk, Steve and Brad Appleton, Software Configuration Management Patterns: Effective Teamwork, Practical Integration, Addison Wesley (2003) ISBN: 0-201-74117-1
- [MSOnSCM] Patterns & Practices, “Team Development with Microsoft Visual Studio Team Foundation Server”, Ch. 3, 4, 6: Microsoft Press, 2008.
- [OMGOnSwimlanes] Activity Partition Notations. In UML Superstructure Specification, pp.342-345
http://www.omg.org/spec/UML/2.2/Superstructure/PDF/, Oct 18, 2009
- [PoppendiecksOnLean] Poppendieck, Mary & Tom “Lean Software Development” Addison Wesley (2003) ISBN: 0-32-115078-3
- [DenneHuangOnIFM] Mark Denne and Jane Cleland-Huang, "The Incremental Funding Method: Data-Driven Software Development," IEEE Software, vol. 21, no. 3, pp. 39-47, May/June 2004
- [LadasOnScrumban] Corey Ladas, “Scrumban: Essays on Kanban Systems for Lean Software Development,” Modus Cooperandi Press, 2008
- [HiranabeOnKanban] Kenji Hiranabe, “Kanban Applied to Software Development: from Agile to Lean”, published online at <http://www.infoq.com/articles/hiranabe-lean-agile-kanban>,
January 14, 2008
- [MSOnTFSBranchingGuide] TFS Branching Guide 2.0,
http://tfsbranchingguideii.codeplex.com/, Aug 15, 2009
- [PrimaveraOnScrum] “Primavera Scrum Case study”.
http://controlchaos.com/download/Primavera%20White%20Paper.pdf, Oct 19, 2009
- [KnibergOnScrumFromTrenches] Henrik Kniberg, “Scrum and XP from the Trenches”,
http://www.infoq.com/minibooks/scrum-xp-from-the-trenches, Oct 19, 2009
- [DenneHuangOnSoftwareByNumbers] Mark Denne and Jane Cleland-Huang, Software by Numbers: Low-Risk, High-Return Development, Prentice-Hall, 2003
- [MelnikEtAlOnExecutableAcceptanceTests] Grigori Melnik, Frank Maurer, Michael Chiasson, "Executable Acceptance Tests for Communicating Business Requirements: Customer Perspective", Proc. Agile 2006, IEEE Computer Society Press : 35-46, 2006