Chapter 3 - Decision-Making Model
Chapter 1 introduced the acceptance process and the three key decisions that need to be made as software is assessed for acceptability by whoever makes the acceptance decision: the Readiness Decision, the Acceptance Decision, and the Usage Decision. Figure 1 illustrates this simplified version of the decision-making model.
Figure 1 Simplified decision-making model
This section elaborates on how the first two decisions are made and who makes them in a variety of business models. The decisions are not made in a vacuum; they require information that must be made available through activities. Figure 2 illustrates this process for a single decision:
Figure 2 Decision-making model sample activities
The diamond on the right side of Figure 2 represents the decision to be made based on the test results (the decision can be either the readiness decision or the acceptance decision). The test results are based on the testing and assessment activities, which assess the system-under-test against the expectations. The expectations of the system-under-test were defined based on the users’ requirements. All of these activities are executed within the context of a test plan.
Many of the practices in Volume II describe how to perform the assessment activity, and other practices in Volume II describe ways to define the expectations based on the needs. That is one of the reasons this guide includes a number of requirements-related practices: it is not about testing but about acceptance, and acceptance is based on expectations and requirements.
The Six Abstract Roles
The job titles of the decision makers vary greatly from business model to business model and across business domains and organizations, so this guide uses abstract role names to describe the roles within the decision-making model. This guide also provides a list of common aliases. However, be aware that many of the names are highly overloaded and that your “customer” (to pick just one example) may be an entirely different role than the one mentioned as an alias here. To see how the abstract role names map to job titles within organizations in specific business models, see the sidebar “Decision-Making Model Stereotypes.”
Readiness Decision Maker
The readiness decision maker makes the final readiness decision based on input from others. When a single person performs this role, the job title might be something like Chief Engineer, Project Manager, Development Manager, or VP of Engineering. This role could also be played by a committee, as is common in enterprise deployments with many stakeholders, or by several people or committees in parallel as described in Complex Organizations in the previous chapter.
Product Development Team
The product development team builds the software. Generally, this team may include user experience researchers and designers, graphic artists, requirements/business analysts, software architects, software developers, middleware and integration specialists, system administrators, DBAs, release/deployment specialists, testers and technical writers. In other words, this team includes anyone who is involved in any way in the actual construction, customization, or integration of the software.
Readiness Assessors
The readiness assessors, as their name suggests, assess the readiness of the software for acceptance testing. They provide information that is used to make the readiness decision. The job titles involved depend very much on the nature of the project and the organization, but they typically include roles such as developers, testers, and documentation writers. In effect, a readiness assessor can be anyone who might be asked to provide an opinion on whether the software is ready. In some cases, this opinion is based on formal testing activities, but it might also be based on technical reviews or even qualitative inputs.
Acceptance Decision Maker
The acceptance decision maker is the person or committee who decides whether to accept the software. In a product company, a job title for this role might be Product Manager, but in an information technology (IT) environment, this role is typically filled by a customer, product owner, business lead, or business sponsor.
Acceptance Testers
Acceptance testers provide data on the acceptability of the product. They perform activities to assess to what degree the product meets the expectations of the customer or end user. Acceptance testers may include two teams, one focusing on functional acceptance of the system and the other on operational acceptance. They provide information to the acceptance decision maker. They may be dedicated testing staff, end users asked to do testing, or anyone in between in terms of skill set.
Users
Users make individual usage decisions. Each user decides whether to use the product as it is when it is shipped or deployed. Their feedback might be used to adjust the requirements for the next release or to do usability testing of the beta versions of the current release. People who are users may also be involved as acceptance testers or beta testers. Similar vital feedback might come from operations staff, support staff, trainers, and field engineers, all of whom are exposed to issues of consumability and usability.
Making the Three Decisions
This section describes how the preceding six abstract roles are involved in making the three decisions.
Making the Readiness Decision
The readiness decision is made by the readiness decision maker(s). The readiness decision is an exit gate with a decision about whether to let the product be seen beyond the boundaries of the supplier organization. The decision is based on the readiness assessment (which considers the features included and the quality of those features) done by the readiness assessors. The decision can be made by a single person (such as a Chief Engineer) or by a committee (such as engineers, architects, or other project stakeholders). When it is not a single decision, as described in Chapter 1 – The Acceptance Process, each decision may be made by a single person or a committee. From the point of view of each decision maker, the software system is either ready or it is not ready. If it is not ready, there may be a list of concerns that need to be addressed before it will be considered ready. For more information, see the How Will We Manage Concerns section in Chapter 16 – Planning for Acceptance.
There may have been a number of earlier decision-making checkpoints as part of the development process (such as "requirements complete," "design complete," or "code complete"). These are beyond the scope of this guide because they are neither directly part of the readiness decision nor are they easily tested.
Making the Acceptance Decision
The acceptance decision is made by the person (or persons) playing the Acceptance Decision Maker role. The decision is summarized by the question “Should we accept the software and put it into use delivering value to our organization?”
There may be additional contractual consequences for making the acceptance decision, such as a commitment to pay the supplier, the start of a predefined warranty period, and so on. While in theory these should not be the primary considerations when making the decision, in practice they often are. The decision should be whether the software is “complete” or “done” enough to be deployed or shipped. For more information about the definition of "done," see Chapter 7 – Doneness Model. For more information about the complete definition of the system attributes that may be considered when making the acceptance decision, see Chapter 5 – System Model.
The definition of "done" is influenced by several factors, including the following:
- Minimum Marketable Functionality (MMF) for the product. What features or functions must the product support to be worth releasing? This is based on whatever criteria the Product Owner decides are important, derived from product plans, market surveys, competitive analysis, or economic analysis. While the Product Owner is accountable for the decision, this does not remove the need for the Product Development Team to understand the problem domain in general and the needs of potential users. Wherever possible, they should assist the Product Owner in defining the product. For a definition of the responsibilities of the Product Owner, see Chapter 1.
- Minimum Quality Requirement (MQR) for the product. What level of quality must be achieved before the product can be released? Quality has several dimensions. The presence or absence of bugs/defects in functionality is just one dimension of quality. Non-functional requirements, also known as quality attributes, are another dimension. The MQR encompasses the latter, while the MMF encompasses the former.
- Hard deadlines. By what date must a particular version of the product be released to have any value? These can include trade show dates, regulatory deadlines, or contractual obligations. Each deadline could be satisfied by a different version (or “build”) of the product. For more information, see “Project Context Model.”
The acceptance decision is made based on data acquired from a number of sources and activities. Acceptance testing generates much of the data needed to make the acceptance decision. This data includes the following:
- Pass/fail results of all tests that were performed as part of the acceptance testing. These could verify both functional requirements and parafunctional requirements.
- Feature completeness, as implied by the pass/fail results of the functional tests.
The acceptance decision may also use data gathered during readiness assessment. The most common example of this is data related to system performance (capacity, availability, etc.) which many customer organizations would not be capable of gathering themselves but which they are capable of understanding once it has been gathered.
The acceptance decision is all about maximizing the value derived from the product by the customer (represented by the Product Owner) and minimizing risk. Time has a direct value in that time spent collecting more data through testing has a direct cost (the cost of resources consumed in gathering the data) and an indirect cost (the deferral of benefit that can only be realized once the system is accepted). Risk has a cost that could be calculated as the sum of the costs of all possible negative events, each multiplied by the probability of its occurrence. Though this kind of literal calculation is rarely done, our perceptions of risk are inherently based on an intuitive interpretation of the circumstances along these lines. The intent is for the acceptance decision maker to understand the trade-offs and decide whether more data is needed or whether enough has been done to make the decision. In doing this, it is important to be aware of the two extreme negative possible outcomes, one in time, the other in risk. Examples of costs of unmitigated risk might include the following:
- cost of patching software in the field;
- cost of manual workarounds for bugs;
- cost of maintaining specialized resources for software maintenance;
- losing customers that need specific features that are missing;
- opportunity cost of unrealized or delayed business benefits if the product fails usage assessment or ends up not being used after deployment.
The cost of time can be non-linear in that if a deadline or need is missed, all benefits may be lost and punitive action taken. The risk calculation is different for “black swan” [TalebOnImpactOfImprobable] events of extremely low probability but extremely high impact, such that again all value may be lost. These cannot be ignored in the acceptance of new software. In concept, when the cost of risk exceeds the cost of delay, more testing should be performed. When the cost of more testing exceeds the risk-related cost that would be reduced (by reducing the probability of one or more events occurring or by reducing the expected cost given that an event does occur), you can decide to accept the product without further testing.
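The trade-off just described can be sketched as a back-of-the-envelope calculation: compute the expected cost of unmitigated risk as the sum of each negative event's cost multiplied by its probability, then compare it with the combined direct and deferral costs of another round of testing. This is a minimal sketch; the event names, costs, and probabilities are purely illustrative assumptions, not figures from this guide.

```python
# Illustrative sketch only: all event names, costs, and probabilities
# below are hypothetical assumptions, not data from this guide.

def expected_risk_cost(events):
    """Sum of (cost of each negative event x probability it occurs)."""
    return sum(cost * probability for cost, probability in events)

# Hypothetical negative events: (cost if the event occurs, probability)
events = [
    (500_000, 0.02),  # e.g., patching the software in the field
    (120_000, 0.10),  # e.g., manual workarounds for bugs
    (900_000, 0.01),  # e.g., losing customers over a missing feature
]

risk_cost = expected_risk_cost(events)  # ~31,000 (10,000 + 12,000 + 9,000)
testing_cost = 15_000   # direct cost of one more round of acceptance testing
delay_cost = 10_000     # benefit deferred while that testing happens

# In concept: keep testing while the expected risk cost exceeds the
# direct-plus-deferral cost of gathering more data; otherwise accept.
if risk_cost > testing_cost + delay_cost:
    decision = "gather more data (keep testing)"
else:
    decision = "accept without further testing"
```

With these made-up numbers the expected risk cost exceeds the cost of another test cycle, so the sketch favors more testing. Note that a simple expected-value sum does not capture the non-linear deadline effects or the black swan events discussed above; those must still be weighed separately.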
Making the Usage Decision
Each potential user of the system has to make a personal decision about whether to use the software. This decision is different from the acceptance decision in that it is made many times by different people or organizations. In fact, there may be several tiers of these decisions as companies decide whether to adopt a product (or a new version thereof) and departments or individuals decide whether to comply with the organizational decision. The important consideration from the perspective of this guide is that these decisions happen after the acceptance decision and do not directly influence the acceptance decision. They may indirectly influence it in one of the following two ways:
- Proactively. Usage decisions may indirectly influence the acceptance decision for an upcoming release when future users communicate their individual acceptance criteria to the product owner in response to market research or surveys. These criteria may also reach the product owner through unsolicited inputs, such as feature requests or bug reports. Also, user feedback (including lack of usage) from alpha and beta releases may influence the acceptance criteria for the general release. See section 1.2.3 The Acceptance Process for Alpha & Beta Releases in Chapter 1 – Introducing the Acceptance Process for a more detailed discussion.
- Retroactively. Usage decisions may indirectly influence the acceptance decision by providing feedback on the released product indicating a lack of satisfaction with either functionality or quality. This may influence the acceptance decision criteria of a future release, but it rarely causes an acceptance decision already made to be revisited. The notable exception would be the discovery of “severity 1” bugs in critical functionality that might result in a recall of the released software.
The usage decision often involves deciding whether or not the business or user is ready to start using the software. This is sometimes called “business readiness,” and it falls in the area of change management. On internal information technology projects, the Product Owner is typically responsible for business readiness. This might include training users, transforming legacy business data into formats compatible with the new system, or preparing new data to be loaded when the new system becomes available. While it is an important topic, it is, strictly speaking, not part of “accepting software” and is beyond the scope of this guide.
Sidebar: The Perils of Ignoring the Usage Decision
A company one of the authors worked for implemented a popular tool for managing leads. The Product Owner Team consulted with the key stakeholders who would be consuming the reports on potential future business in the sales pipeline and configured the system with the fields needed to generate the reports. Most of the fields were marked as mandatory.
When the system was ready for acceptance testing, the Product Owner Team tested the system and found it to be working to their satisfaction. They conducted extensive user training of the sales force to ensure that everyone knew how to use their wonderful new tool. They accepted and deployed the system.
When the system went live, user uptake was poor. Several years after deploying the system, most users were still choosing not to use the tool because it provided them with very little value while imposing considerable overhead on their day-to-day work. Most of them developed their quotes in spreadsheets or in a custom-built pricing application that directly addressed their needs. The lead management tool didn’t provide the equivalent functionality; therefore it added extra steps of re-entering the same data in different formats. To make matters worse, it forced the users to pick through a long list of possible customers each time a user needed to create a quote. As a result, the system languished and much of the potential value was a lost opportunity cost.
The Product Owner of a subsequent project to replace the custom pricing tool took the lessons to heart and included functionality to address the shortcomings by feeding the price quotes into the lead management system and synchronizing the customer records automatically. This satisfied the needs of the report-generation stakeholders by providing them with all the data needed to calculate the present value of leads in the sales pipeline while not burdening the users with duplicate data entry. As a result, usage of the lead management tool soared. A happy, though belated, ending.
Roles vs. Organizations
The roles described in this decision-making model may be played by people in several different organizations. The primary value of discussing organizations here is to make it easier to map terminology from various organizational models and to better understand who plays which decision-making role. If the organizational model does not help in this endeavour, it can be ignored.
When the software is being built by a different organization than the one that commissioned its construction, the organization that commissioned the software is often referred to as the customer, and the organization that is building the software is the supplier. This is true whether the organizations in question are separate, unrelated companies or simply departments within a single company. For example, the IT department is typically a supplier of systems to the core business departments (such as Transportation or Manufacturing) and supporting departments (such as Human Resources or Finance).
When acceptance testing is outsourced to a third-party test organization, it is often referred to as the (third-party) test lab. The test lab is a supplier of services as distinguished from the supplier of the software.
An organization that buys and deploys shrink-wrapped software can also be referred to as a customer, and the organization it buys from may be referred to as the vendor or supplier. The fact that the vendor may contract the work to an outsourcer (another vendor of which it is the customer) illustrates the problem with using the term “customer” to describe the acceptance decision maker (the product owner) as advocated in Extreme Programming: which customer is it referring to?
Figure X illustrates this problem. The Purchaser (in the role of customer) buys shrink-wrapped software from the S/W Vendor (in the role of supplier). The S/W Vendor (in the role of customer) outsources development to the S/W Developer (the supplier). The S/W Developer (as customer) outsources readiness assessment to the Test Lab (as supplier). So there are three separate customer-supplier relationships in this scenario, making it hard to tell which party is meant when we refer to the “customer.”
Figure X Multiple Customers and Suppliers
Who Plays Which Roles?
Thus far the discussion has centered on the abstract roles involved in the decision-making process. But who actually plays these roles?
Who Plays the Readiness Decision Making Role?
The Readiness Decision Maker role is typically performed by someone in the product development organization. This is the person who ultimately decides whether the software is ready to be shown to the acceptance testers. Typical job titles include Development Lead, Project Manager, Director of Development, and Chief Engineer. On agile projects where this decision is made separately for each feature, the feature-level Readiness Decision Maker is often the developer who signs up for the responsibility to implement the story/feature. In the terminology used in this guide, the Readiness Decision Maker is a member of the Product Development Team.
Who Plays the Acceptance Decision Making Role?
The Acceptance Decision Maker role is typically performed by someone on the business side of the organization when development is done internally, or by someone in the customer organization when development is outsourced. This is the person who ultimately decides whether to accept the software and put it into use. Typical job titles for internally developed software include Business Lead, Product Manager, or some other business title. On agile projects where this decision is made separately for each feature, the feature-level Acceptance Decision Maker is often the specific business subject matter expert who provided the detailed description of the functionality to be built. In the terminology used in this guide, the Product Owner is the Acceptance Decision Maker.
Who Does What Testing?
Readiness assessment is the responsibility of the Product Development Team. But does this mean that developers are the only ones doing readiness assessment? By the definition used in this guide, the Product Development Team often includes people who would describe themselves as testers. Developers would, at a minimum, do unit testing. Testers and/or developers would do whole-system testing (functional and non-functional). Other parties who are also considered part of the Product Development Team may do other forms of readiness assessment. For example, data architects might do data schema design reviews, and security architects might do security code reviews.
Ideally, acceptance testing should be done by real users of the software since they are the ones who are best suited to deciding whether it is “acceptable.” But what if the real users are anonymous? Who can and should do acceptance testing on their behalf? This is a case where testers often act as the proxy for the end user. In this situation at least some of the testers would be part of the Product Owner Team rather than the Product Development Team.
In Figure X2 – Who Does What Testing by Type of Organization, each cell describes who does which testing in a common project context. The top left triangle in each cell indicates who is doing readiness assessment, and the bottom right triangle indicates who is involved in acceptance testing.
Figure X2 Who Does What Testing by Type of Organization
When software is being built by an organization for its own use, the users should be available to participate in the acceptance testing. If there is no testing function, the business typically expects development to test the software thoroughly before handing it over. If there is a testing function, it would typically participate in a second phase of readiness assessment, often called “system testing” and “integration testing,” before handing the software over to the users for “user acceptance testing” (UAT) or “business acceptance testing” (BAT).
When software is being built for sale to anonymous customers, real users typically don’t take part in acceptance testing (except perhaps as users of alpha or beta releases). Instead, the testing organization does acceptance testing on their behalf, acting as a proxy (or surrogate) user. If there is no specialized testing function, the acceptance testing could be done by the product specifiers (members of the Product Owner Team) or outsourced to a third-party test lab.
In the terminology used in this guide, testers doing Readiness Assessment are members of the Product Development Team while testers doing acceptance testing are members of the Product Owner Team.
The Readiness Decision is made by someone, or some committee, from the supplier playing the role of Readiness Decision Maker, based on information gathered by Readiness Assessors. Readiness Assessors typically include developers, architects, and sometimes testers. The goal of the Readiness Decision Maker is to ensure that the product has enough functionality and is of good enough quality to expose to the Product Owner. The Readiness Decision Maker is typically a senior person in the product development team’s organization.
The Acceptance Decision is made by the Product Owner (which may be a single person or a committee of several key stakeholders) playing the role of Acceptance Decision Maker, based on information gathered by people playing the role of Acceptance Tester as well as any readiness assessment information shared by the supplier. The Acceptance Decision Maker is usually a key person in the customer organization. The acceptance testers may be end users, representatives of the end users such as the specifiers of the requirements, or testers when end users are not available to do acceptance testing. When the customer organization has commissioned the production of a product and has no internal test capability, it may choose to outsource some of the acceptance testing to a third-party test lab.
[TalebOnImpactOfImprobable] Taleb, Nassim Nicholas. The Black Swan: The Impact of the Highly Improbable. Random House, 2007.