Chapter 9 - Product Manager's Perspective

A product manager in a product company plays a role very similar to that of the Business Lead on an IT project, with a few exceptions. While the business lead is typically engaged only for the duration of the project that delivers the new functionality, a product manager is usually responsible for the profitability of the product over its entire lifetime and is therefore more inclined to think in terms of multiple releases of the product. Their strategy may also focus on a product line rather than individual products, in which case integration and interoperability among those products become primary concerns. Companies that build products are also more likely to have dedicated testing resources in a separate test department. Many of the factors discussed in The Business Lead’s Perspective also apply here because the Product Manager plays the role of Product Owner, but in a product company. This section focuses on the differences.
See the Product Manager persona in Appendix X – Organization Stereotypes and Reader Personas.

As a Product Manager, you should feel total ownership of the success of the product in the marketplace. You survey the market for what it needs and define the product to be built. You estimate how much revenue the product is likely to generate and decide how much it is worth investing to build it. It is your job to ensure that the supplier team understands what you want built and the relative priorities of the pieces of functionality.

Defining Done

A key part of the relationship between the product manager and the product development team is having a commonly understood definition of what constitutes “done”. This comes in two dimensions: functionality and quality.

Defining MMF

MMF stands for Minimum Marketable Functionality (also known as Minimum Marketable Product (MMP) or Minimum Marketable Features); it is the smallest amount of functionality you could deliver to your customers without losing credibility. Anything over and above the MMF adds value but should not delay the release if it cannot be completed by the delivery deadline. It takes self-discipline and a good understanding of the market to define the MMF; it is much easier to throw in everything any customer has ever asked for and call it a release, but this is almost sure to result in spending too much and taking too long to deliver the product.
You may need help to translate the high-level requirements into a detailed product definition. Business analysis expertise may be needed to understand the business processes used by the potential users. Product design expertise (e.g. interaction designers and graphic designers) may be needed to help design the functionality. Product development expertise may be needed to understand what is technically possible and consistent with how the product already works. You may want to include people with these skills as part of the product management team, or you can enlist the help of other organizations. Whichever way you choose, you’ll need to ensure that everyone understands your vision for the product and how it satisfies specific market needs.

Defining MQR

You don’t like receiving unsatisfactory software, and the supplier team hates being told that their carefully crafted software is not very good. Establishing the minimum quality requirement (MQR) is an important step in building a trusting, productive relationship. A jointly crafted Done-Done Checklist posted in the supplier organization’s office is a great way for the supplier team to keep quality front and center. A release-level checklist like the one shown in Figure 1 is a good way to set overall expectations of what the supplier should provide before acceptance testing can start.

Sample Done-Done Checklist for Release 1.0

  • All MMF features are included in the RC build.
  • A security review has been conducted and a sign off obtained.
  • The test team is confident that none of the included features has a significant risk of causing problems in the production environment (i.e. the MQR is met), including:
    • configuration testing (including side-by-side)
    • performance
    • localization
    • globalization
    • user experience.
  • Business compliance achieved.
  • There are clear, concise deployment and rollback instructions for the operations team.
  • There are clear trouble-shooting scripts and knowledge base articles for use by the help desk representatives.
  • All included features have been demoed to and accepted by the customer.
Figure 1: Sample Done-Done Checklist for a Release
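A checklist like this can also be tracked in machine-readable form so that everyone can see at a glance whether acceptance testing can start. The sketch below is a minimal illustration in Python; the item names and the ready_for_acceptance_testing helper are invented for this example and are not part of any particular tool.

```python
# Minimal sketch: a release-level done-done checklist kept as data so the
# product manager can see at a glance whether acceptance testing can start.
# Item names and helper functions are illustrative, not from any real tool.

RELEASE_CHECKLIST = {
    "all_mmf_features_in_rc_build": True,
    "security_review_signed_off": True,
    "mqr_met_per_test_team": False,
    "deployment_and_rollback_instructions_ready": True,
    "help_desk_scripts_and_kb_articles_ready": False,
    "all_features_demoed_and_accepted": True,
}

def ready_for_acceptance_testing(checklist: dict) -> bool:
    """Acceptance testing should only start once every item is satisfied."""
    return all(checklist.values())

def outstanding_items(checklist: dict) -> list:
    """List the items still blocking the start of acceptance testing."""
    return [item for item, done in checklist.items() if not done]

if __name__ == "__main__":
    print("Ready for acceptance testing:", ready_for_acceptance_testing(RELEASE_CHECKLIST))
    print("Outstanding items:", outstanding_items(RELEASE_CHECKLIST))
```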

The quality criteria described in a release-level done-done checklist such as the one shown in Figure 1 may require several readiness assessment cycles before they are all satisfied. To reduce the number of readiness assessment test-and-fix cycles, we need to influence the level of quality delivered by individual developers to the readiness assessors. This can be encouraged with a feature-level done-done checklist such as the one shown in Figure 2:
  • The acceptance criteria are specified and agreed upon
  • The team has a test or set of tests (preferably automated) that prove the acceptance criteria are met
  • The code to make the acceptance tests pass is written
  • The unit tests and code are checked in
  • The CI server can successfully build the code base
  • The acceptance tests pass on the bits the CI server creates
  • No other acceptance tests or unit tests are broken
  • User documentation is updated
  • User documentation is reviewed
  • The feature is demoed to the customer proxy
  • The customer proxy signs off on the story
Figure 2: Sample Done-Done Checklist for Individual Features
Done-done checklists such as these are commonly used on agile projects.
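The second item in the feature-level checklist calls for automated tests that prove the acceptance criteria are met. The sketch below shows what such a test might look like in pytest style; the feature (a volume-discount rule) and the OrderDiscount class are hypothetical, invented purely for illustration.

```python
# Minimal sketch of an automated acceptance test for a single feature,
# written in pytest style. The feature (a volume discount) and the
# production class it exercises are hypothetical.

class OrderDiscount:
    """Stand-in for production code: 10% off orders of 100 units or more."""
    def discounted_total(self, unit_price: float, quantity: int) -> float:
        total = unit_price * quantity
        return total * 0.9 if quantity >= 100 else total

# Acceptance criterion: "Orders of 100 units or more receive a 10% discount."
def test_discount_applied_at_threshold():
    assert OrderDiscount().discounted_total(unit_price=2.0, quantity=100) == 180.0

# Acceptance criterion: "Orders below 100 units pay the full price."
def test_no_discount_below_threshold():
    assert OrderDiscount().discounted_total(unit_price=2.0, quantity=99) == 198.0
```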

Managing Your Own Expectations

The most effective way to ensure that poor-quality software gets delivered is to ask the supplier to over-commit. Software requirements are notoriously hard to pin down and just as hard to estimate. Forcing a supplier to deliver to an unrealistic schedule is sure to backfire: you will likely get a late delivery of poor-quality software and then need to invest extra time bringing the quality up to a barely sufficient level. This is not a recipe for success!
It is far better to define the level of quality required and manage the scope of functionality to ensure it can all be finished in the time and money allotted. Time-boxed incremental development is a proven technique for achieving on-time delivery of quality software. It is to your advantage to work closely with the supplier organization to select the functionality to be implemented during each iteration. You’ll have much better visibility of progress towards the release and much earlier warning if the full slate of functionality cannot be completed by whatever delivery date you have chosen. This gives you time to prune the functionality to fit into the time available rather than trying to cram in too much functionality and thereby sacrificing quality.
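As a rough illustration of this early-warning idea, the sketch below compares the remaining capacity of a time-boxed plan against the estimates of the planned features; the velocity figure and feature estimates are made-up numbers rather than data from any real project.

```python
# Minimal sketch: compare planned scope against remaining capacity so that
# scope can be pruned early instead of sacrificing quality at the end.
# Velocity and estimates are invented numbers for illustration only.

velocity_per_iteration = 20      # story points the team typically completes
iterations_remaining = 3

planned_features = {             # feature -> estimate in story points
    "export_to_pdf": 13,
    "bulk_user_import": 21,
    "audit_log_viewer": 8,
    "saved_searches": 20,
}

capacity = velocity_per_iteration * iterations_remaining
planned = sum(planned_features.values())

print(f"Capacity: {capacity} points, planned: {planned} points")
if planned > capacity:
    print(f"Over capacity by {planned - capacity} points; prune scope now.")
```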

Who Makes the Acceptance Decision?

As product owner, you need to be responsible for making the business decision about whether to ship the product as-is, delay delivery to allow more functionality to be added, or improve the quality by fixing known bugs. You will likely rely on data from other parties to make this decision, but you delegate it to one of those parties at your peril because they are unlikely to understand the business tradeoffs nearly as well as you do.
It is reasonable for you to expect that the parties providing information about the quality of the release candidate do so in terms that you understand. The impact of every missing feature or known bug should be described in business terms, not technical jargon. Ideally, this should be easily translated into monetary impact (lost or delayed revenue) and probability (likelihood of occurrence).
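As a rough sketch of what that translation could look like, the example below expresses each known bug as an expected cost, i.e. its monetary impact multiplied by its probability of occurrence; the bugs and the figures are invented for illustration only.

```python
# Minimal sketch: expressing known bugs in business terms as
# expected cost = monetary impact if it occurs * probability of occurrence.
# The bug descriptions and figures are invented for illustration.

known_bugs = [
    # (description, monetary impact if it occurs, probability of occurrence)
    ("Invoice total rounds incorrectly for multi-currency orders", 50_000, 0.30),
    ("Report export times out for very large accounts", 10_000, 0.10),
    ("Cosmetic typo on the settings page", 0, 1.00),
]

for description, impact, probability in known_bugs:
    print(f"{description}: expected cost ${impact * probability:,.0f}")

total = sum(impact * probability for _, impact, probability in known_bugs)
print(f"Total expected cost of shipping as-is: ${total:,.0f}")
```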

Operational Requirements

Most software products have operational requirements. These are the requirements of whoever will provision the runtime environment, install the software, monitor its performance, install patches and upgrades, start up and shut down the software during server maintenance windows, and so on. Even the simplest shrink-wrap product has operational requirements (such as automatic updating), and complex server products often have very extensive ones. Engage the appropriate people during product definition and product acceptance so that these requirements are not missed, and involve the right people in the decision-making process to ensure they are satisfied. Operational requirements may require specialized testing and test tools.
Some examples of operational requirements for Software-as-a-Service are similar to those of an IT shop:
  • The system needs to integrate with the specific systems monitoring software in use at your company.
  • The system needs upgrade, startup, and shutdown scripts that can be used during automated server maintenance windows (a minimal sketch of such a script follows these lists).
  • The system needs to be built using an approved technology stack. The operations department may not have the skills to support some technologies. E.g. a .NET shop may not have the skills to support an application built in Java or PHP. Some technologies may be incompatible with technologies used at your company.
  • The operations group may have specific windows during which they cannot install or test new versions of software. This may affect the release schedule you have devised and it may make on-time delivery of software by the supplier even more critical as small schedule slips could result in long in-service delays.
The operational requirements of a software product include:
  • The software will need to integrate with a variety of systems monitoring software frameworks.
  • The software may need to work in a variety of hardware and software configurations for both server and client (possibly browser) components. This may require extensive compatibility testing. It may also constrain the features to be delivered due to cross-platform issues that restrict the functionality to a lowest common denominator.
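As a minimal illustration of the scripted start/stop/status interface mentioned in the first list above, the sketch below wraps a hypothetical service in a small Python command-line helper. The service name and the use of systemctl are assumptions; a real environment would use whatever service manager the operations group has standardized on.

```python
# Minimal sketch of a start/stop/status wrapper that could be invoked from
# automated server maintenance scripts. The service name is hypothetical and
# systemctl is assumed to be the local service manager.
import subprocess
import sys

SERVICE = "exampleproduct"  # hypothetical service name

def start():
    subprocess.run(["systemctl", "start", SERVICE], check=True)

def stop():
    subprocess.run(["systemctl", "stop", SERVICE], check=True)

def status():
    subprocess.run(["systemctl", "status", SERVICE], check=False)

if __name__ == "__main__":
    action = sys.argv[1] if len(sys.argv) > 1 else "status"
    {"start": start, "stop": stop, "status": status}.get(action, status)()
```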

Dealing with Large Products

When a product is too large or complex, or includes too many different technologies, it can be difficult to commission a single team to develop it for you. Another challenge is the integration of several existing products, each of which has its own supplier organization. If you face either issue, you have several ways to deal with it. The more obvious solution is to break down the product requirements into separate requirements for each subcomponent and commission component teams to do the work. The main issue with this from a product management perspective is that it makes product architecture and the subsequent subcomponent integration your problem. See the sidebar Component Teams for a more detailed discussion.
The other alternative is to commission feature teams that work across components and divide the product requirements into subsets, one per feature team. This has the advantage of dealing with any integration issues within the feature team. The supplier organization may need to provide a mechanism to ensure consistency of approach across the feature teams, such as a virtual architecture team or standards team. See the sidebar Feature Teams at Microsoft for a more detailed discussion. Regardless of which approach you choose, you’ll want some mechanism for coordinating the actions of the teams. Two common approaches are project reviews and the Scrum-of-Scrums approach.

Sidebar: Component Teams
A large client had organized their teams around the components of their architecture, which consisted of many components arranged in several layers. The bottom layer was called “The Platform”. Built atop The Platform were many reusable components and services we can simply refer to as A, B and C. These components were used by the customer-facing components X, Y and Z. When marketing received a new feature request, the product manager decomposed the requested functionality into requirements for each of the customer-facing components X, Y and Z. The architects of the corresponding teams were each asked to do a feasibility study of the functionality and to provide an estimate of the effort. This might require requesting new capabilities from one or more of components A, B or C, which might in turn require new capabilities from The Platform. Once the feasibility study was completed and an estimate was available from each of the affected teams, marketing would decide whether to build the feature and, if so, which release to include it in. In the appropriate release, each of the teams involved in the design would do the work required in their individual components. When all the components were finished, “big bang” integration would start. This would often uncover serious mismatches between expectations and what was built.
Other issues associated with the component team approach include balancing workload among the teams and the difficulty of implementing multiple features simultaneously. Larman and Vodde stress that these issues lead to delays and increased overhead due to frequent handoffs. An alternative to component teams is organizing development around features, discussed in the sidebar Feature Teams at Microsoft. See [VoddeLarmanOnScalingAgile] and [LeffingwellOnFeatureVsComponentTeams] for a comparison between component and feature teams.

Sidebar: Feature Teams at Microsoft
A feature is an independently testable unit of functionality that is either directly visible to a customer (a customer-facing feature) or a piece of infrastructure that will be consumed by another feature. Feature teams are small interdisciplinary teams (5-12 people) that focus on delivering specific product features. In MS Word, for example, one feature team might be in charge of the Ribbon, another in charge of the Spellchecker, and a third in charge of Address Labels.
Microsoft implements the Feature Teams strategy using a methodology called Feature Crews. It extends the general strategy with specific guidelines on how to manage the code base in such a way as to prevent partially finished features from affecting the stability of the main branch. The idea is for all disciplines (development, PM, UxD, testing) to work closely together on private builds (in their own isolated feature branches) and only add the feature to the product (the main branch) when it is “Feature Complete” (similar to the “Done-Done” state described earlier). A feature in the “Feature Complete” state is expected to be fully implemented, sufficiently tested and stable enough for either a) dog-fooding (using one’s own product in-house) and sharing with customers in a CTP (Community Technology Preview), for customer-facing features, or b) being coded against by another feature crew, for features that provide underlying infrastructure.
The typical duration for feature delivery is between three and six weeks; the idea is to ensure a steady flow of features and regular, frequent feedback from customers. Importantly, each feature team is free to decide what development approach/process to use, as long as its output meets all quality gates (common to all feature crews) and doesn’t destabilize the product. (For a case study on the use of the Feature Crew methodology at the Microsoft Visual Studio Tools for Office product unit, with details on the integration of features into parent and main branches, see [MillerCarterOnLargeScaleAgile].)

Bug Tracking and Fixing

Users and testers will report bugs no matter how good a job you do building the software. Some of these “bugs” will be legitimate defects introduced into the software accidentally. Others will be requests for new functionality you consciously chose to leave out of the release. Except from a contractual (who pays) perspective, bugs need not be treated any differently from feature requests. Either way, you need to decide in what timeframe you want the functionality of the system changed and therefore in which stream of software to include the change. The Concern Resolution Model provides a common way of thinking about bugs, feature requests and other project issues.

Maintaining Multiple Versions of the Software

There will be times when you may choose to maintain several versions of the software at the same time. A common situation is after a release, during the warranty period, while you are also developing a subsequent release. Another is when you choose to support multiple versions of the software in production and need to apply any bug fixes to all supported versions. Be forewarned that there is an extra cost to maintaining versions in parallel: any bug fixes done in the support/warranty stream will need to be propagated into the development stream at some point and retested there.
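As a rough sketch of how that propagation might be tracked, the example below records which maintained streams each fix has reached; the stream names and fix identifiers are purely illustrative.

```python
# Minimal sketch: track which maintained streams each bug fix has been
# applied to, so fixes made in the support/warranty stream are not forgotten
# before being propagated (and retested) in the development stream.
# Stream names and fix IDs are invented for illustration.

maintained_streams = ["release-1.0-support", "release-2.0-development"]

fixes = {
    # fix id -> streams the fix has been applied to so far
    "BUG-1042": {"release-1.0-support"},
    "BUG-1057": {"release-1.0-support", "release-2.0-development"},
}

for fix_id, applied_to in fixes.items():
    missing = [s for s in maintained_streams if s not in applied_to]
    if missing:
        print(f"{fix_id} still needs to be propagated to: {', '.join(missing)}")
```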

Summary

The Product Manager is typically the Product Owner of a software-intensive product and is responsible for making the decision about how much money or resources to invest in building or extending the product functionality.
A summary of the responsibilities of the Product Owner can be found in Chapter 1 The Basic Acceptance Process.
