To increase efficiency, follow a software process… NOT

Software processes promise to increase efficiency, so that software can be developed more cheaply, more quickly, or with better quality.

For a classical example of how this can be achieved, let’s examine the waterfall process. One of its ideas appeals immediately to any engineer’s heart: gather requirements as completely and in as much detail as possible upfront, and freeze them after the customer’s sign-off.

But why do we like this idea?

Requirements are the basis for the software architecture, and the software architecture is the basis for the source code. Small changes in the requirements can lead to substantial changes in the architecture, and changes in the architecture can sometimes require catastrophic changes to the source code (sometimes at the level of “we have to throw it away and re-create the software from scratch”).

And if a small change request is implemented in a dirty way, without an update of the architecture, the quality of the source code diminishes (i.e., the number of bugs increases and the code becomes harder to change in the future).

So, by freezing the requirements, we eliminate the corresponding technical risks, develop the software more cheaply (because frozen requirements guarantee we will never have to remove an already developed and tested feature, so we can reduce the risk buffer in our cost estimates), and also increase code quality.

Right? Yes, it sounds logical, but there is a small “but” here. Business customers don’t give a shit how well the software corresponds to its requirements (i.e., the quality of the source code). The only thing they care about is how well the software helps them achieve their business goals (i.e., the quality of the software product). These two qualities do in fact correlate, but only during the time span when the requirements accurately reflect the business goals.

Now, it is in the very nature of businesses (as in: the free market) to change goals as quickly as possible, according to the daily market situation. Therefore, this correlation time span tends to be very short, and it often ends even before the software hits the market.

And if the requirements analyst did a poor job, the requirements might never have matched the business goals well enough in the first place.

Now, this idea doesn’t smell so good any more. But is it wrong? Does it mean we shouldn’t gather and freeze requirements before implementation?


On the one hand, there are plenty of situations where software is not developed for businesses (NASA projects, but also various other military and scientific projects, or eGovernment projects), or is developed for highly regulated markets such as parts of the automotive, aviation, financial, and health industries. In these situations, the requirements either don’t depend on external forces at all, or are reasonably stable.

On the other hand, even in a business software project there can be situations where it is advantageous to freeze the requirements for one particular small part or layer of the software. For example, in a price negotiation we can say: “OK, if you deliver all the graphic designs to us and freeze them, we can lower our design costs.” Or we can freeze partial requirements covering only the most important and invariant parts (such as the use of specific platforms, technologies, frameworks, and services).

Here, efficiency can be increased and risks mitigated. But this isn’t accomplished by following the waterfall process; instead, the underlying ideas and principles of this classical process are understood and applied to the concrete situation.

Another now-classical process, Extreme Programming, was born with the intention of being particularly well suited for business projects, with their highly volatile requirements and extreme focus on short-term outcomes (working software over documentation, etc.). Therefore, XP embraces change. To compensate, it introduced a specific method of architecture development: the architecture is evolved using refactoring, pair programming, and test-driven development. The promise is to achieve acceptable costs and software quality in spite of the changes.
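To make the test-driven part of that method concrete, here is a minimal, hypothetical sketch of the test-first cycle (the `discounted_price` function and its behavior are invented for illustration, not taken from any real project): the test is written first and pins down the desired behavior, and the simplest passing implementation follows. When a requirement changes, the test is extended first, and the code is then refactored to pass again.

```python
def test_discounted_price():
    # Written before the implementation exists: this test *is* the
    # specification of the behavior we currently need.
    assert discounted_price(100.0, 0.2) == 80.0   # 20% off 100 -> 80
    assert discounted_price(50.0, 0.0) == 50.0    # no discount -> unchanged

def discounted_price(price: float, discount: float) -> float:
    # The simplest implementation that makes the test pass.
    # Later requirement changes are absorbed by extending the test
    # first, then refactoring this code until the tests are green again.
    return price * (1.0 - discount)

test_discounted_price()
print("all tests passed")
```

The point is not the trivial arithmetic but the order of work: the evolving test suite acts as a safety net that makes continuous architectural change affordable.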

Again, it all sounds very logical, but there is one “but”. Too many customers want a fixed price (and scope, and time) contract. Fixed price contracts are obviously detrimental to software product quality, because they inevitably create an area of conflict towards the end of the development phase, and software quality is often the only variable that can be sacrificed in the ensuing negotiations. Besides, the ever-changing nature of business goals is in direct logical contradiction to fixed price contracts. Nevertheless, fixed price contracts are preferred, for various reasons that are out of the scope of this blog post.

Embracing change on a fixed price contract, you can end up in a situation where the whole budget is used up, but the software product is still lacking a few absolutely critical features. Going this way, I once ended up with an enterprise ordering system that was able to distribute many different kinds of goods, except the most important one. In my defense, I must say that the customer prioritized the stories in the wrong way, and I didn’t know that the budget was absolutely fixed.

Does it mean we should resist any change when working on a fixed price contract?


Sometimes it is plainly infeasible to create an architecture upfront. This can be due to surprises in the technology used (like MediaElement in Silverlight not behaving as expected), or due to organizational problems, when the customer isn’t able to write requirements from scratch and needs a starting point in the form of a working prototype. Knowing the software release deadline and being able to estimate the relative importance of a feature for the customer’s business, it is possible to guess how many times we may have to change the architecture, and to estimate the development costs accordingly. Offering the customer an “agile wild card” for a part of the software, within some well-defined limits, and evolving the architecture of that part using XP practices, will increase efficiency, and sometimes even make the project possible at all.

Again, understanding how a process works and using its ideas whenever they make sense will lead to efficiency and software quality. Following a process blindly, when it is not suitable for the particular situation, will lead to problems.

But maybe a custom process, tailored to a particular ISV and its business model, would perform well enough? I don’t think so. There are too many variables in a project, and every new project comes with a new combination of them. Even a software process that is averagely suitable for, say, one third of all projects is, in my opinion, the wrong goal. An averagely suitable process would lead, at best, to below-average profit, due to missed opportunities, unnecessary steps, and only averagely mitigated risks.

Besides that, the ISV’s business model needs to be constantly revised and adapted to the market situation. As the Netflix slideshow points out, the mere existence of an official in-house software process works as a selection factor for employees: the number of perfect process performers (or, if you really hate them, you can call them “blind process followers”) increases, while the innovators and independent thinkers leave. And when the business model has to be changed, suddenly there is nobody who can support the change technically.

Software processes define many interesting ideas and principles that really work, and that is exactly how they should be seen: as repositories of ideas. Instead of focusing on the selection or definition of a corporate software process, one should focus on “individuals and interactions”, as the Agile Manifesto puts it.

People working on software projects must:

  • know as many different ideas and principles as possible (and continuously learn new ones);
  • understand why and how these ideas and principles work, and which variables will be improved and which sacrificed in each particular case;
  • clearly understand their concrete, daily changing situation, ideally from both business and technical perspectives;
  • be able to map their situation to the available ideas and choose the most efficient one;
  • be willing to pursue the approach most efficient for their company and their customer, at all times.

The following measures would increase the efficiency of an ISV’s business:

  • Create a corporate vocabulary for artifacts and activities, to facilitate and streamline discussions within a workgroup about how they are going to approach their particular situation.
  • Eliminate workflows, or reduce their number. Even a workflow defined merely as a sample is detrimental, because it shifts the focus from really understanding the activities and choosing them according to the situation to an “apply a ready workflow and wait for a miracle” mode.
  • On the other hand, the sharing of war stories should be encouraged in the corporate culture. A war story describes an unusual situation and its force fields, along with the solution path the workgroup took.
  • Hire only people who know popular or at least classical processes and, more importantly, who understand why and when they work.
  • Promote the acquisition of software process ideas and principles among employees (workshops, reading books and other sources of information during work time, etc.).

Part 1 of series on Software Process

Part 3 of series on Software Process
