
Chapter 11 - Development Manager's Perspective

The development manager may go by many official titles, but for our purposes this is the person in the supplier organization who is accountable for delivering software to the product owner (PO), whether the PO is the internal business sponsor of an IT project or the product manager in a product company. Where a separate test organization (often called Quality Assurance, Independent Verification, or some variation on these) exists for the purpose of doing acceptance testing, the development manager is responsible for making the readiness decision, that is, the decision to deliver the software to whoever is responsible for making the acceptance decision. For larger products the development manager may be assisted by a solution architect; for smaller products the development manager should understand the solution architect’s responsibilities and either carry them out personally or delegate them to someone on the development team.
See the Development Manager persona in the two company stereotypes in Appendix X – Reader Personas for an example.

As the development manager, your key responsibility is to manage the development of the software in such a way that the product owner (product manager, business lead, etc.) will accept it. You should do this in a way that maximizes the value (or utility) the software provides to the product owner (through the value it provides to their users or customers) while minimizing the cost. The best way to minimize cost is to build the right software, build it the right way the first time, and deliver it to the acceptance testing organization and/or the product owner as soon as possible. This avoids expensive, time-consuming test-and-fix cycles in which someone else tests the software and finds bugs that they then ask you to fix; this delay is a form of waste that contributes significantly to churn and cost. Whether the person doing that testing is the product owner or someone looking out for the product owner’s interests, the fewer bugs they find, the less rework you need to do, the lower the overall cost, and the more predictable the schedule becomes.

The Role of Readiness Assessment

Most development teams want to do good work. They don’t want to deliver substandard software to their customer. Most customers want to receive good quality software.
Delivering good quality software is what the customer expects from the supplier. This is a reasonable expectation, one that a professional software development organization should be prepared to meet. Delivering good quality working software isn’t easy or trivial; if it were, the product owner probably wouldn’t need a development manager!
So, what does it take to deliver good quality software, first time, every time? What does it take to be a professional software developer? The complete details of a sound software engineering process are beyond the scope of this book, but the relationship between the software development organization and the parties involved in accepting the software is not. Part of the development process needs to be an honest self-assessment of the software the team has produced. If the team says it is ready, give it to the test team. If the team says it isn’t ready, ask them what still needs to be done to make it ready. A good development team will always be able to say whether something is ready and should be able to clearly articulate to the test team what is ready and worth testing.
Robert C. Martin argues that it is irresponsible for a developer to ship a single line of code not covered by a test, and that tests must be kept to the same level of quality as the production code [MartinOnProfessionalismAndTDD]. Although this argument applies to different degrees depending on what is being built, the development manager is responsible for making sure that the code is adequately covered by a solid suite of tests before the product owner evaluates it.

Effective Readiness Assessment

Whose Job is Quality?

How would you feel about being the first person to try a brand new product that no one, not even the builder, had tried before? How many people do you know who would be comfortable in that situation? If we apply the same logic to the software we build, how many people do you know who would want to be the first to try a piece of software that no one else has tried? Yet this is exactly what many development teams do when they throw untested or poorly tested software “over the wall” to the test organization or the customer. Many people would argue that this is just plain unprofessional.
All software should be tested by the development organization before being handed to anyone else for testing.

Building Quality In – Start with the End in Mind

Building software should not feel like a guessing game in which developers guess what’s required and testers or users shout out “wrong!” The development organization needs a clear understanding of what success looks like before it starts building the software. Part of the problem is that development teams often don’t have a good sense of how the software will actually be used. Developers do not think like users, because most developers thrive on complexity while most users abhor it. This makes it challenging for developers to do a good job of readiness assessment without outside help. Acceptance tests provided by the product owner, or testing done by testers, can bridge this gap. The former is proactive, positive input that helps the development team understand “what done looks like” before it builds the software. The latter is negative, reactive feedback that tells the development team “you haven’t done a good job.” Which kind of guidance would you prefer?
A project charter is one way to start developing this common understanding. Requirements documents, user models, use cases, and user stories are all ways that we try to develop a common understanding between the supplier and the customer. But these static documents, written in natural language, typically contain many ambiguous statements and in most cases do not provide enough detail to describe what success looks like. The designs and work estimates provided by the development team are therefore likely to contain assumptions, many of which are either wrong or will result in a product that is more complex and expensive to build than it needs to be. It is critical to clarify these potential misunderstandings as quickly as possible.
As the development manager you should work with the product owner and the acceptance test team to ensure that everyone on the team has access to the definition of success, in the form of acceptance tests, before the software is built. This should include anything that is used as an acceptance criterion, including both functional tests that verify the behavior of the system feature by feature and para-functional quality criteria such as security, availability, usability, and operational requirements. This process is known as Acceptance Test Driven Development (also called Example-Driven Development or Storytest-Driven Development). It may require the product owner or testers to prepare acceptance tests earlier in the project than they traditionally would have. It may also require them to be involved for the duration of the project rather than just at the beginning and the end, so that they can answer questions that come up as the requirements are interpreted and also perform incremental acceptance.
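To make this concrete, here is a minimal sketch of what one such acceptance test might look like once it has been turned into executable form. The example is written in Python with the standard unittest module; the discount rule, module, and function names are invented for illustration and are not taken from any particular project.

    import unittest

    # Hypothetical system under test; in a real project this would be the
    # production pricing code whose behavior the product owner specified.
    def quote_total(list_price, quantity):
        """Volume discount rule: orders of 100 units or more get 10% off."""
        total = list_price * quantity
        if quantity >= 100:
            total *= 0.90
        return total

    class VolumeDiscountAcceptanceTest(unittest.TestCase):
        """Executable form of an acceptance criterion supplied by the product
        owner: 'Orders of 100 units or more receive a 10% discount.'"""

        def test_order_below_threshold_pays_list_price(self):
            self.assertAlmostEqual(quote_total(2.50, 99), 247.50, places=2)

        def test_order_at_threshold_gets_ten_percent_discount(self):
            self.assertAlmostEqual(quote_total(2.50, 100), 225.00, places=2)

    if __name__ == "__main__":
        unittest.main()

Because the test is executable, the development team can run it throughout construction and the acceptance testers can run it again later, so both sides share the same definition of success.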

Defect Prevention before Defect Detection

Traditional approaches to testing focus on defect detection. That is, the emphasis is on finding bugs rather than preventing their occurrence in the first place. How can the emphasis be changed to prevention? It isn’t enough to define the tests ahead of time; we must also run them frequently. By doing so, we always know the score. The test results tell us how far from “done” we are. They provide a very clear indication of the progress towards done, one that doesn’t require a lot of extra work to calculate and one that is hard to fudge. The test results, available throughout the project, provide the stakeholders with visibility into the project in a much more transparent fashion than traditional metrics measuring progress against a phased-activity project plan.
To prevent defects we define the tests before building the software, automate them, and run them frequently while building the software. We institute simple team norms to avoid regressing: tests that have passed before must continue to pass as new functionality is added. No exceptions! Either the test has to be changed (changes to functional acceptance tests may require the customer’s agreement) or the code has to be changed to return the test to passing status.
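One lightweight way to enforce this norm is to make the integration build refuse any change that leaves the automated suite failing. The sketch below assumes a Python project whose tests are discoverable by unittest under a tests directory; the script name and paths are illustrative only.

    # ci_gate.py -- minimal sketch of a "no regressions" build gate.
    # Runs the whole automated suite and fails the build (non-zero exit code)
    # if any test that previously passed is now failing.
    import subprocess
    import sys

    def main():
        result = subprocess.run(
            [sys.executable, "-m", "unittest", "discover", "-s", "tests", "-v"]
        )
        if result.returncode != 0:
            print("Build rejected: the suite no longer passes. Fix the code, or "
                  "renegotiate the test (with the customer, if it is an "
                  "acceptance test) before integrating.")
        return result.returncode

    if __name__ == "__main__":
        sys.exit(main())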
What role do the various kinds of tests play in this process? Functional tests defined by the acceptance testers define what done looks like; the software cannot be delivered to them while any of those tests are failing. Automated unit and component tests define the design intent of the software; writing these first, and writing just enough code to make them pass, ensures that we don’t build unnecessary software and that the software we do write satisfies the design intent. It also ensures that the software is designed for testability, a critical success factor for test automation. Another benefit is that these tests act as a large change detector that informs the team of any unexpected changes in the behavior of the software, helping the team catch regression bugs before they can sneak through to the users.
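The test-first rhythm described here might look like the following sketch: a unit test stating the design intent is written first, and then just enough production code is written to make it pass. The shopping-cart class and its methods are invented purely for illustration.

    import unittest

    # Step 1: the unit test states the design intent before the code exists.
    class ShoppingCartTest(unittest.TestCase):
        def test_new_cart_is_empty(self):
            self.assertEqual(ShoppingCart().item_count(), 0)

        def test_adding_an_item_increments_the_count(self):
            cart = ShoppingCart()
            cart.add_item("SKU-123")
            self.assertEqual(cart.item_count(), 1)

    # Step 2: just enough production code to make the tests pass -- and no more.
    class ShoppingCart:
        def __init__(self):
            self._items = []

        def add_item(self, sku):
            self._items.append(sku)

        def item_count(self):
            return len(self._items)

    if __name__ == "__main__":
        unittest.main()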

Reduce Untested Software

In lean thinking, as exemplified by the lean manufacturing paradigm used by Toyota [ShingoOnToyotaProductionSystem], unfinished work (inventory) is considered a form of waste. In software development, software that has been written but has not been accepted is unfinished work. We should strive to finish this work as soon as possible by doing acceptance testing as soon as possible after the software is written and readiness assessment is completed. This incremental acceptance testing requires collaboration between development, testing, and the product owner, as all parties must be prepared to work feature by feature rather than waiting for the whole system to be available before any acceptance testing starts.
The minimum quality requirement (MQR) should be agreed upon ahead of time with the testing organization or the product owner. Ideally, any and all tests that will be run as part of the acceptance testing should be run by the supplier organization before making the software available for acceptance testing. Known defects should be identified ahead of time to avoid wasting people’s time testing broken functionality.
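One way to record this agreement and the known defects is to mark the corresponding tests explicitly, so that everyone can see what is expected to pass and what is knowingly broken. The sketch below uses unittest’s expectedFailure decorator; the feature names and defect identifier are made up for illustration.

    import unittest

    # Hypothetical stand-ins for the system under test; a real suite would
    # exercise the production checkout and refund services.
    def place_order(items, payment):
        return True

    def refund_order(order_id, currency):
        return False  # still broken; see the known defect noted below

    class CheckoutReadinessTests(unittest.TestCase):
        def test_standard_checkout_completes(self):
            # Part of the agreed minimum quality requirement: must pass before
            # the build is offered for acceptance testing.
            self.assertTrue(place_order(items=["SKU-1"], payment="VISA"))

        @unittest.expectedFailure
        def test_refund_in_foreign_currency(self):
            # Known defect (hypothetical tracker id DEF-214), communicated ahead
            # of time so the acceptance testers don't spend effort retesting it;
            # remove the marker once the fix is delivered.
            self.assertTrue(refund_order(order_id=42, currency="EUR"))

    if __name__ == "__main__":
        unittest.main()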

What Kinds of Tests Are Required?

Functional tests should be run as soon as the corresponding functionality is built. These should include business workflow tests that verify end-to-end business processes, use case tests that verify the various scenarios of a single use case or business transaction, and business rule tests that ensure that business rules are implemented correctly.
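The three granularities might look something like the sketch below, which uses an invented ordering domain: the business rule test checks a single calculation, the use case test checks one scenario, and the workflow test strings steps together end to end.

    import unittest

    # Hypothetical stand-ins for pieces of an ordering system.
    def shipping_charge(weight_kg):            # a business rule
        return 0 if weight_kg <= 1 else 5 * weight_kg

    def submit_order(sku, quantity):           # one step of a use case
        return {"sku": sku, "quantity": quantity, "status": "SUBMITTED"}

    def fulfil(order):                         # the next step in the workflow
        return dict(order, status="SHIPPED")

    class FunctionalTestGranularities(unittest.TestCase):
        def test_business_rule_free_shipping_under_one_kilogram(self):
            self.assertEqual(shipping_charge(0.8), 0)

        def test_use_case_customer_submits_an_order(self):
            self.assertEqual(submit_order("SKU-9", 3)["status"], "SUBMITTED")

        def test_workflow_order_travels_from_submission_to_shipment(self):
            self.assertEqual(fulfil(submit_order("SKU-9", 3))["status"], "SHIPPED")

    if __name__ == "__main__":
        unittest.main()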
Operational requirements also need to be verified by ensuring that acceptance tests provided by the operations stakeholders are run and results are analyzed regularly.
Para-functional testing should be done on a regular basis as soon as enough of the software has been built to allow the tests to be run. The earlier they are run, the more time the supplier has to correct any deficiencies that are discovered. This is especially important with para-functional tests because changing the para-functional attributes of the system may require a change in architecture, a proposition that gets more expensive the later the change is made.
Many of these tests can be done much earlier in the project if the early focus of the project is to build a walking skeleton of the application. The walking skeleton implements the full architecture in a very minimalist way. For example, all the logic might be hard-coded, thereby supporting only a single, highly simplified business transaction or workflow. But all the major architectural components would be present to ensure that the runtime characteristics are truly representative of the finished product even though the functional behavior is not yet implemented.
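Even against a walking skeleton, a crude timing check can provide early para-functional feedback. The sketch below assumes the skeleton exposes an HTTP endpoint at a made-up address and that a two-second response budget has been agreed; both values are illustrative, not prescriptive.

    import time
    import urllib.request

    # Illustrative values: the skeleton's endpoint and the response-time budget
    # would come from the agreed para-functional acceptance criteria.
    SKELETON_URL = "http://localhost:8080/orders/ping"
    RESPONSE_BUDGET_SECONDS = 2.0

    def check_response_time(url=SKELETON_URL, budget=RESPONSE_BUDGET_SECONDS):
        """Exercise the skeleton's single hard-coded transaction and time it."""
        started = time.monotonic()
        with urllib.request.urlopen(url, timeout=budget * 5) as response:
            response.read()
        elapsed = time.monotonic() - started
        if elapsed > budget:
            raise AssertionError(
                f"Skeleton responded in {elapsed:.2f}s, over the {budget:.1f}s budget")
        return elapsed

    if __name__ == "__main__":
        print(f"Round trip took {check_response_time():.3f}s")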

Sharpening the Saw

Any bugs that are found during acceptance testing should be a surprise and should prompt the question “How did that slip through our readiness assessment? Clearly, the software wasn’t truly ready.” If the answer is “because they used it in a different way from what we expected,” then the supplier organization has to do a better job of understanding the users of the software. Therefore, every bug found becomes a learning opportunity for the team, prompting them to look for ways to improve how they build software. For example, at Microsoft’s patterns & practices group, during the Web Service Software Factory: Modeling Edition project, the team did just that on several occasions, modifying their continuous integration (CI) system to prevent issues from recurring.
As manager of the development organization you need to create the conditions in which the development team can produce the right software built the right way. This may require overcoming organizational and cultural hurdles. You need to work with your counterparts in the test organization to break down the traditional adversarial relationship (if one exists) and work together to create a more collaborative relationship. As Ade Miller, patterns & practices development lead, eloquently put it: “I think one of the key things a Dev Manager can do here is try to send the clear message that testing and the test organization are important. In far too many cases testers are seen as some sort of lesser function because they ‘lack’ the technical skills of developers.” The test organization has the skills to help your team build better quality software, not just to tell you when you haven’t.
