This guide is about accepting software. Accepting software involves acceptance testing, but it is much more than that. The concept of acceptance testing means different things to different people. In simple terms, acceptance testing is the set of activities we perform to gather the information we need to make the decision "Is this software ready for me (or my customer) and does it fulfill my (and my customer's) requirements?" This decision is usually composed of several decisions, each with supporting activities. Therefore, to define acceptance testing, it may be useful to understand the process by which the decisions are made. This process may involve several organizational entities, each with one or more decision-makers. The software is typically passed between the organizational entities for them to decide whether the software is ready to go through the next step. This process is introduced in more detail in the section "Acceptance Process Model" and followed up with a more detailed description of the decision-making process in the section called "Decision-Making Model."

Software Acceptance and Acceptance Testing

Acceptance refers to the act of determining whether a piece of software or a system meets the product owners’ expectations. It includes both the final decision to accept the software and any activities, including acceptance testing, required to collect the data on which the acceptance decision is based. Both the acceptance testing and the acceptance decision can be relegated to a separate acceptance phase of the project or they can be done throughout the project, which is known as Incremental Acceptance Testing.

Mental Models for Acceptance Testing

While writing this guide, we struggled to determine a suitable definition of acceptance testing that would make sense to a broad range of readers. It seemed like there were many different vocabularies in use by different communities such as consumer product companies, information technology departments of large businesses, and data processing divisions of telecommunication service providers, to name just a few. To assist us in describing “acceptance”, we came up with several mental models of various aspects of acceptance testing. We tested the models against numerous examples from our collective project experiences at Microsoft patterns & practices, telecommunication product companies, IT departments and beyond. Then we tested the models with the people on the Advisory Board for the project. This was an iterative process. We also tested these through the public review process by releasing early drafts of this guide to the community and soliciting feedback.
It is important to note that our early models failed their acceptance tests! That was a great lesson about the need to get feedback incrementally, a practice we advocate for acceptance testing. Based on feedback from our advisors we refactored the models and came up with additional models to fill the gaps. The key breakthrough came when we devised the Decision-Making Model, which ties together most of the concepts around accepting a system. It builds on the Acceptance Process Model, which describes the key steps and activities as the system-under-test moves from requirements, through development, into testing and finally production; it also describes how the decision to accept the system is made. The Decision-Making Model describes who makes the decisions and who provides the data for those decisions.
The decisions are not made in a vacuum; there are a number of inputs. These include the project context, the nature of the system being built and the process being used to build it. The latter is important because it affects how we define “done”.
Figure 1 illustrates the relationships between the key models.
Figure 1
Figure 1 The Key Mental Models of Acceptance
These models are the focus of Part I – Thinking about Acceptance but here’s a short introduction to each model to get us started:
  • The Acceptance Process Model. This model defines the overall stages of software development and the "gates" that must be passed through on the journey from construction to software-in-use.
  • Decision-Making Model. This model describes how to decide whether software can go through a gate to the next stage and who makes the decision. It also defines the supporting roles that may help the decision maker gather the information needed to make the decision.
  • Project Context Model. This model describes the business and project factors that influence the decision, including timeframes and deadlines, resource availability, budget, and anything contributing to project risks.
  • System Model. This model describes the attributes of the software-intensive system that may factor into the decision. This includes both functional and non-functional attributes. The system model may arise out of requirements gathering activities or a more formal product design process. Techniques for capturing functional requirements include simple prose, use case models, protocol specifications and feature lists. The non-functional requirements are typically captured in documents or checklists.
  • Risk Model. This model introduces the concepts of events, likelihood/probability, and consequence/impact. It helps everyone involved understand what could go wrong and, thereby, prioritize the acceptance criteria and the kinds of information to gather to help make the acceptance decision. It also describes several different risk mitigation strategies, including the following:
    • Do something earlier to buy reaction time.
    • Do additional activities to reduce likelihood of something occurring.
  • Doneness Model. This model describes different dimensions of “doneness” that need to be considered in a release decision and how they are affected by the process model.
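The Risk Model's prioritization idea can be made concrete with a small sketch. The "exposure = likelihood × impact" scoring below is a common convention we assume for illustration; the model itself does not prescribe a formula, and the example risks and scales are invented.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    event: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def exposure(self) -> int:
        # Higher exposure -> gather acceptance information here first.
        return self.likelihood * self.impact


risks = [
    Risk("Payment gateway rejects valid cards", likelihood=2, impact=5),
    Risk("Report layout slightly misaligned", likelihood=4, impact=1),
    Risk("Data migration loses customer records", likelihood=3, impact=5),
]

# Order acceptance-testing effort by exposure, highest first.
for r in sorted(risks, key=lambda r: r.exposure, reverse=True):
    print(f"{r.exposure:>2}  {r.event}")
```

Ranking risks this way helps the decision maker focus acceptance criteria, and the information-gathering behind them, on what could hurt the most.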

The chapters in Part III – Accepting Software introduce other models that build on this core model:
  • Test Lifecycle Model. This describes the stages that an individual test case goes through and how those stages are used to gather information for making readiness and acceptance decisions.
  • Concern Resolution Model. This describes how to handle any concerns that are raised during the acceptance testing process.

Development Processes

The software development process has a significant impact on how acceptance testing is performed. Throughout the rest of this volume and the others to follow we found ourselves saying “On sequential projects…” and “On agile projects …” but many people have their own definitions of what these terms mean. We wanted to make sure all readers understood what we meant by these terms. We feel that these names refer to points on a process continuum with other labelled points possible. This section describes the process continuum with two distinct process stereotypes on the opposite ends of the scale.

Sequential Processes

Sequential processes organize work based on the kinds of activities involved and the interdependencies between those activities. The classic waterfall approach involves a single release, while incremental waterfall projects have multiple releases.

Classic Waterfall

The waterfall approach (so named after the diagrams used in a paper by Winston Royce [Royce]) involves organizing the project into a series of distinct phases. Each phase contains a specific type of work (such as requirements analysis) and has specific entry and exit criteria. In the classic or pure waterfall approach the phases do not overlap. The entry and exit criteria require the outcome of a previous phase to be complete and correct (validated) before the next phase can start. This pushes the delivery of the product’s functionality to the end: the product as a whole is deployed in a big-bang approach. Figure 2 illustrates the major phases of a waterfall project.
Figure 2
Figure 2 A Classical Waterfall Project
This process is usually implemented by breaking down each phase into hierarchically organized units of work appropriate to the type of work involved. For example, within the requirements phase, the work may be divided between analysts by requirement topic, but during the construction phase, work may be divided among the developers by module. The handoffs between phases are usually in the form of documents, except that the handoff from construction to testing also involves the code base. Readiness assessment is done by the supplier organization, which we refer to as the Product Development Team, after all the construction is completed; acceptance testing is performed by the product owner after the software is deemed to be ready.

Incremental Waterfall

It is commonly accepted that the longer a project goes before delivering software, the higher the probability of failure. If the context or requirements change before the project is completed, the pure waterfall cannot succeed. One way to combat this is to break the project into increments of functionality. These increments can be tested and in some cases even deployed. Figure 3a illustrates a waterfall project with two increments of independent functionality, each of which is tested and deployed. This type of project is also called a checkpointed waterfall.
Figure 3a
Figure 3a Multi-release waterfall with independent functionality
In this approach, the planning phase, requirements analysis phase, and design phase are performed once early in the project while the construction phase, test phase, and deployment phase are repeated several times. The work within each phase is decomposed the same way as for single-release projects. If the functionality built in the second release overlaps the functionality in the first release, the testing and deployment must encompass the entire functionality. Figure 3b illustrates multiple releases with overlapping functionality. Note how the test activity must span the functionality of both releases.
Figure 3b
Figure 3b Multi-release waterfall with overlapping functionality
In the multi-release waterfall process, the test phase of one release may sometimes overlap with the construction phase of the subsequent release, provided that the construction and testing teams are separate and the features across the two releases are sufficiently independent.

Agile Processes

Most agile methods use an iterative and incremental approach to development. After an initial planning period, the project duration is broken into development iterations that deliver increments of working software. Figure 4 illustrates a project with two iterations; most projects would have many more iterations than this.
Figure 4
Figure 4 Iterative & Incremental Development
Figure 4 illustrates two iterations, each of which starts with an iteration planning session and ends with some acceptance testing. In each iteration the work is broken down into features or user stories, each of which independently goes through the entire software development life cycle. The product owner, "onsite customer" or customer proxy, who is readily accessible to the Product Development Team, is responsible for describing the details of the requirements to the developers. It is also the product owner's responsibility to define the acceptance tests for each feature or user story. They may prepare the tests themselves, delegate the job to requirements or test specialists within the Product Owner Team, or prepare them in collaboration with the Product Development Team. The tests act as a more detailed version of the requirements description in a process known as "Acceptance Test Driven Development" or "Storytest-Driven Development." This allows the developers to execute the acceptance tests during the development cycle.
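To make this concrete, here is a sketch of what a product-owner-authored acceptance test might look like, written in a plain given/when/then style. The ShoppingCart class and its behavior are hypothetical stand-ins for the system under test (teams commonly use dedicated tools such as FitNesse or Cucumber rather than raw code, but the structure of the test is the same).

```python
class ShoppingCart:
    """Hypothetical system under test; not part of the guide itself."""

    def __init__(self):
        self._items = {}

    def add(self, sku: str, qty: int = 1) -> None:
        # Adding the same item again accumulates quantity on one line.
        self._items[sku] = self._items.get(sku, 0) + qty

    def quantity_of(self, sku: str) -> int:
        return self._items.get(sku, 0)


def test_adding_same_item_twice_accumulates_quantity():
    # Given an empty cart
    cart = ShoppingCart()
    # When the customer adds the same item twice
    cart.add("SKU-42")
    cart.add("SKU-42")
    # Then the cart holds a quantity of two, not two separate lines
    assert cart.quantity_of("SKU-42") == 2


test_adding_same_item_twice_accumulates_quantity()
```

Because the test states the expected behavior in the product owner's terms, developers can run it continuously during construction, which is precisely what makes test-driven acceptance practical.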
When all the tests pass for a particular feature or user story, the developers turn over the functionality to the product owner (or proxy) for immediate "incremental acceptance testing." If the product owner finds any bugs, the developer fixes them as soon as possible. It is at the Product Owner's discretion to insist that a bug be fixed right away or to treat it as part of another feature to be implemented in a later iteration. If they want it fixed and the developer has already started working on another feature, the developer typically puts the other feature on hold while they address the Product Owner's concerns; the feature isn't considered "done" until all the concerns are addressed. The product owner's concerns are not stockpiled in a bug database for fixing during a bug-fixing phase of the project.

Multi-Release Agile Projects

Most agile methods advocate "deliver early, deliver often." In theory, the result of any development iteration could be determined, after the fact, to be sufficient to be put into production. This would lead directly to the deployment activities. In practice, most agile projects plan on more than one release to production and the iterations are then planned to deliver the necessary functionality. Figure 5 illustrates an agile project with two releases.
Figure 5
Figure 5 Multi-Release Agile Project.
Note how there is a testing cycle for the second release which includes regression testing of the functionality delivered in the first release. Most agile methods emphasize test automation so the regression testing cost is minimized.

Kanban-based Agile Process

Some agile methodologies dispense with iterations in favour of allowing a fixed number of features in progress at any time. This is designed to emphasize the concept of a continuous flow of working code for the on-site product owner (or proxy) to accept. From an acceptance testing perspective, these Kanban-based methods still do incremental acceptance testing at the feature level and formal/final acceptance testing before each release, but there is no logical point at which to trigger the interim acceptance testing that would have been done at iteration's end in iteration-based agile methods. Figure 6 – Kanban-based Agile Project illustrates this style of development. Note the lack of iterations.
Figure 6
Figure 6 Kanban-based Agile Process
It is important to note that in this example there are never more than three features (one undergoing design, a second undergoing construction, and a third undergoing testing) in progress at any one time. In other words, there are only three development "slots," and a slot becomes available for another feature only after the feature occupying it has finished its incremental acceptance testing. This is similar to how kanban cards are used to control the inventory in lean factory production lines. In theory, the product owner can decide at any time that there is enough functionality to warrant deploying the product following a short period of regression testing.
Kanban-based software processes [Scrumban] are implementations of a more general philosophy known as Lean software development [LSD].
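The slot-based flow control described above can be sketched in a few lines. This is an illustrative model only, assuming a limit of three slots as in the example; a semaphore plays the role of the kanban card pool, and the feature names are invented.

```python
import threading

# Pool of development "slots": at most 3 features in progress at once.
SLOTS = threading.BoundedSemaphore(3)

in_progress = []


def start_feature(name: str) -> bool:
    # A feature may enter the pipeline only if a slot (kanban card) is free.
    if SLOTS.acquire(blocking=False):
        in_progress.append(name)
        return True
    return False


def finish_feature(name: str) -> None:
    # Finishing incremental acceptance testing returns the slot to the pool.
    in_progress.remove(name)
    SLOTS.release()


assert start_feature("login")
assert start_feature("search")
assert start_feature("checkout")
assert not start_feature("reports")   # no free slot: the feature must wait
finish_feature("search")              # acceptance testing done: slot freed
assert start_feature("reports")       # now the waiting feature can start
```

The design point is that new work is *pulled* into a freed slot rather than *pushed* onto the team, which is what keeps work-in-progress bounded without iteration boundaries.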

Process as a Continuum

"Agile" and "waterfall" are examples of two high-level project streotypes consisting of certain combinations of characteristics. It is easy to imagine the decision on each of these characteristics as being the setting of a process “slider control”. For example, the “Number of releases” slider might have stops at 1, 2, 3, and so on. The “Number of iterations” slider could have values of 1 and so on, which indicate whether there are intermediate checkpoints within a release. The “Maximum number of features in progress” slider similarly may take on values depending on the number of development slots available in a Kanban-based system. Another dimension might be ”Integration frequency”, with settings of Big Bang, Major Milestone, Quarterly, Monthly, Biweekly, Weekly, Daily, and Continuous.
The following table summarizes the positions of these sliders for what is considered to be a stereotypical project of each kind. These positions are not definitive or complete, but they challenge you to create your own sliders and settings for your context.

Type of Process

| Project Attribute | Classic Waterfall | Incremental Waterfall | Agile (Iteration) | Agile (Kanban) |
| Number of releases | 1 | 1 | 2 or more | 2 or more |
| Number of iterations | 1 | 2–6 | 4 or more | 1 |
| Iteration length | Not applicable | Many months | 1–4 weeks | Not applicable |
| Maximum number of features in progress | No maximum | No maximum | 1 iteration's worth | Less than the number of team members |
| Integration frequency | Big Bang | Quarterly | Daily or hourly | Daily or hourly |
| Requirement-to-test duration | Months or years | Months | Days | Days |
| Test timing | Separate phase | Separate phase | Mostly incremental | Mostly incremental |
| Release criteria | Scope-based | Scope-based | Time-boxed | Time-boxed |
| Average requirement task effort | Person-months | Person-months | Person-days | Person-days |
| Average development task effort | Person-days or weeks | Person-days or weeks | Person-hours | Person-hours |
| Culture | Hierarchical | Hierarchical | Collaborative | Collaborative |
| Skills | Highly specialized | Highly specialized | Generalists | Generalists or specialists |
| Determining progress | Work completed relative to plan | Work completed relative to plan | Delivery of working code | Delivery of working code |
| Work remaining | Estimated duration of remaining tasks | Estimated duration of remaining tasks | Estimated time for remaining features | Estimated time for remaining features |

  • [Royce] Winston W. Royce, "Managing the Development of Large Software Systems," Proceedings of IEEE WESCON, August 1970, pp. 1–9.
  • [LSD] Mary and Tom Poppendieck, "Lean Software Development," Addison-Wesley, 2003. ISBN 0-32-115078-3.
  • [Scrumban] Corey Ladas, "Scrumban: Essays on Kanban Systems for Lean Development," Modus Cooperandi Press, 2008.

Last edited Nov 5, 2009 at 1:15 AM by rburte, version 5