In spite of what your client may tell you, there's always a problem.
-- Gerald Weinberg
Software is a scarce resource, in that the demand for software greatly
outstrips the supply. We hear about huge shortages of IT staff
required to meet this demand. Costs are rising, too. Some people
believe the way we can increase output is to outsource
development to places where qualified labor is cheap and plentiful.
However, the problem with software development lies elsewhere, and
increasing the number of programmers and separating them from the
customer only makes the problem worse.
A programmer's job is getting the details exactly right, exactly once.
This isn't at all like physical manufacturing where the brunt of the
cost is in the process of making the exact copies of a product.
Outsourced manufacturing works well, because the details have already
been decided in the design phase. The manufacturing process
merely replicates this fixed design. With software, the cost of making
copies is almost free, and it's the efficiency of the design phase that
governs its cost. Cheap and abundant labor improves manufacturing
efficiency, but this economy of scale does not make software
development more efficient.
The cost of programming is directly related to network effects. As
you add programmers to a project, the communication costs increase
proportionally with the square of the total number of programmers.
There are that many more links over which the details must be
communicated. And, as the customer and programmers drift farther
apart, the cost of the most important link increases. Reducing the
cost of communication between the programmers and the customer is
crucial to getting the details right efficiently. A time lag along
this link multiplies the cost. To improve efficiency, the customer
needs instantaneous communication with the programmers, and
programmers need immediate feedback from the customer.
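As a rough illustration (a sketch added here, not from the original text), a team of n programmers has n(n-1)/2 distinct pairwise communication links, which grows roughly with the square of the team size:

```python
def links(n: int) -> int:
    """Pairwise communication links among n programmers: n choose 2."""
    return n * (n - 1) // 2

# Doubling the team roughly quadruples the number of links
# over which the details must be kept consistent.
for n in (2, 4, 8, 16):
    print(f"{n:2d} programmers -> {links(n):3d} links")
```

Going from 4 programmers to 16 multiplies the number of links by twenty, which is why adding people rarely speeds up communication-bound work.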
This chapter differentiates software development from physical
manufacturing. We explain why traditional, plan-driven development
methodologies increase project risk, and how fostering failure
reduces risk. The chapter ends with a parable that shows the way to
reduce both requirements and implementation risk is to bring the customer
closer to development.
According to the Business Software Alliance, the software industry is
growing rapidly to meet a seemingly unlimited demand. From 1990 to
1998, the U.S. software industry's revenue grew by 13% per year.
Despite, or perhaps as a result of, this rapid growth, the software
industry remains highly inefficient. While sales grew by 18% in
1998, an astounding 26% of U.S. software projects failed and another
46% were labeled as challenged by the Standish Group.
They also estimate 37% of our resources are wasted on failed and
challenged projects.
We need to understand why software development is so inefficient and
why projects fail.
Risk Averse Methodologies
It's not that failure is all bad. People learn from mistakes.
However, we don't want to be driving on the mistake bridge builders learn
from. We don't want to be flying in an engineering error, to live near a
nuclear plant failure, or even to stand near a pancake griddle failure.
To reduce risk, engineering methodologies are
plan-driven. The plans help us ensure we catch
mistakes as early as possible. The planning process involves many
redundant steps. Emerging plans must pass reviews and consistency
checks during the numerous phases of the project. The public is
protected by layers of methodology and in some cases government
regulations and laws.
Although public safety is certainly a concern, business probably
evolved these risk mitigation methodologies for another
reason: to reduce the risk of production failures. When you are
manufacturing physical widgets, you don't want to find an error after
you have produced one million widgets, or even a thousand. The cost
of the raw materials plus the time to fix the error, to retool, and to
rerun the job is usually high in comparison to the cost of the extra
procedures to catch errors during the planning and design phases.
Software development is quite different from manufacturing.
The cost of
producing the physical software package is nominal, especially
considering most software is developed for only one customer.
Today, automated updates via the Web further reduce the cost of
software delivery. The cost of software production is borne almost
entirely by research, development, and maintenance.
While software lacks the characteristics of physical products, we
still develop most software with the same implementation
risk averse methodologies. We are told "If [a
requirements] error is not corrected until the maintenance phase, the
correction involves a much larger inventory of specifications, code,
user and maintenance manuals, and training material."
Mistakes are expensive, because we have "inventory" to
update. Plan-driven software development is firmly
grounded in avoiding production failures, which slows development in the
name of implementation risk mitigation.
Implementation risk mitigation is expensive. The most obvious cost is
the bookkeeping material (documents defining requirements,
specifications, architecture, and detailed design) in addition to the
code we need to maintain. Less risk averse methodologies lower the
cost of software production. Reducing redundancy in the planning
process means there is less to change when a requirements error is
inevitably discovered. By not creating inventory in the first place
we further reduce our overhead and inefficiencies.
When we improve efficiency in one part of the process, we gain
flexibility in other areas. We have more resources and time to correct
errors in all phases of the project. The fewer errors, the better the
chance the project will succeed.
Implementation risk aversion is costly in other ways. We avoid change
later in the project even if that change is justified. The cost of
change is proportional to the amount of inventory. In plan-driven
methodologies, change is increasingly costly as the project
progresses. Not only do we have to update all the bookkeeping
material, but it must pass the same manual reviews and consistency
checks that were used to validate the existing plan and design.
And possibly the most important cost is risk aversion
itself. Failure is a natural part of creation. We don't like to
fail, but when we do, we usually learn from the experience. According
to management gurus Jim Collins and Jerry Porras, "What looks
in retrospect like brilliant foresight and
preplanning was often the result of 'Let's try a lot of stuff
and keep what works.'"
An interesting side-effect of reducing the cost of correcting errors is
that we reduce the risk associated with trying new and innovative solutions.
Get Me a Rock
Reducing the cost of correcting errors is one part of the problem.
One reason projects fail is that they do not satisfy the end-users'
needs. To help ensure a project's success, we need to mitigate
requirements risk. The following
story about a manager and his subordinate
demonstrates the difficulty of specifying and satisfying requirements:
Boss: Get me a rock.
Peon: Yes, sir.
...a little while later...
Peon: Here's your rock, sir.
Boss: This rock is all wrong. We need a big rock.
...another delay...
Peon: Here ya go, boss.
Boss: We can't use this rock. It's not smooth.
...yet another delay...
Peon: [panting] Smooth, big rock, sir.
Boss: The other rocks you brought were black,
but this one's brown. Get a black one.
And the story goes on and on. We've all been there. Both roles are
difficult. It is hard to specify exactly what you want when you're
not sure yourself, or even when you are sure, you may have difficulty
explaining to another person what you want. On the flip side, the
subordinate probably doesn't speak the language of
rocks, so he can't elicit what the manager wants in
terms the manager understands.
The plan-driven lesson to be learned is: Customers must give precise
instructions (specifications). Programmers should not be expected to
be mind readers.
Most software projects are as ill-defined as the requirements in this story.
The plan-driven approach is to spend a lot of time up front defining
the requirements in order to reduce the cost of the implementation.
The theory is that planning is cheap, and programming is expensive.
Once we get through the specification phase, we can ship the spec off to a
source of cheap labor whose job it is to translate the spec
into working code. That would work fine if the specification were
exactly right, but it most likely is missing a lot of important
detail, and the details it identifies probably aren't exactly
right either. The Rock example doesn't do justice to the amount of
detail involved in software. Large programs contain
hundreds of thousands and sometimes millions of details
that must be exactly right, or the software contains faults.
The cumulative effect of software faults is what causes projects to
fail. It's easy to fix a few faults but not thousands. When users
throw up their hands and scream in exasperation, they're saying the
program misses the mark by a mile. It's insufficient to tell them the
specification was right or that the programmers simply misunderstood
it. It's the code users are frustrated with, and it's the code
that is just plain wrong.
Planning and specification do not guarantee end-user satisfaction.
Plan-driven methodologies ignore requirements risk, that is, the risk
that details may be incorrect, missing, or somehow not quite what the
customer wants. When we gather requirements, write the specification,
ship it off, and only check the program against user expectations at
the end, we are setting ourselves up for failure. Requirements change
in this scenario is very expensive. This is what we see in the Rock
example. The requirements risk is proportional to the time lag between
specifying what we want and seeing what we get. Given the predominance
of plan-driven software
development, it's likely that a large number of project failures are
directly attributable to too little requirements risk mitigation.
Let's Rock And Roll
Fortunately, there is an alternative version of the Get Me a Rock
story, which solves the ill-defined requirements problem
with greater efficiency:
Boss: Get me a rock.
Peon: Sure, boss. Let's go for a ride to the quarry.
...a little while later...
Boss: Thanks for pointing out this rock.
I would have missed it if I went by myself.
Peon: You're welcome, boss.
The moral of this story is: to increase efficiency and quality, bring
the customer as close as possible to a project's implementation.
Business Software Alliance, Forecasting a Robust Future: An
Economic Study of the U.S. Software Industry, Business
Software Alliance. June 16, 1999.
The Standish Group conducted a study of 23,000 software projects
between 1994 and 1998. Failed means "The project was canceled
before completion." Challenged means "The project is
completed and operational, but over-budget, over the time estimate and
with fewer features and functions than initially specified."
See CHAOS: A Recipe for Success, The Standish Group
International, Inc., 1999.
Our breakfast suddenly turned into splattered, molten metal one Sunday.
Fortunately, no one was hurt.
The Business Software Alliance report estimates 64% of software sales
are in customized software and integrated system design services.
This does not include internal IT budgets.
Software Engineering Economics, Barry Boehm. Prentice-Hall, Inc. 1981,
pp. 39-40. This classical reference is old but unfortunately not
outdated, viz., Get Ready for Agile Methods with
Care, Barry Boehm. IEEE Software. Jan, 2002, pp. 64-69.
Built to Last, Jim Collins and Jerry Porras,
HarperBusiness. 1997, p. 9.
On page 310 of Software Engineering Economics,
Barry Boehm states, "When we first begin to evaluate alternative
concepts for a new software application, the relative range of our
software cost estimates is roughly a factor of four on either the high
or low side. This range stems from the wide range of uncertainty we
have at this time about the actual nature of the product."