Thursday, November 20, 2003
Those who have read some of the book will know it is about transparency of process and transparency of tracking the flow of value through the software engineering lifecycle. I really liked this example [E+ membership required] of “extreme” transparency from Japan’s recent general election as described in The Economist print edition November 15th page 25. Here is the relevant quote…
Mr Kan told voters that the flamboyant governor of Nagano - who works in a glass office so that everyone can see whom he meets - would join the DPJ cabinet if the party won the election.
Posted by david on 11/20 at 01:39 PM
Tuesday, November 18, 2003
Centralized Process Selection Decisions
It is now widely recognized that with software development processes - one size does not fit all! This goes as much for eXtreme Programming as it does for SDLC or RUP. In The Right Tool for the Job, Scott Ambler examines the issues in the latest issue of Software Development magazine. Scott references using risk assessment as a tool to help select the right process and points the reader at the recent Boehm and Turner book, Balancing Agility and Discipline.
Last week, I talked about the problems of having a centralized group which selects tools for use in software development. The same applies to software process. It makes no sense to have a centralized proclamation that the entire enterprise - and I’ve worked in companies with 20,000 software developers - should use RUP (as an example). Software process choices should be aligned with value chains. The market into which the software is being deployed should be understood and an assessment made as to the stability of the requirements and the likelihood that the application can be deployed iteratively or incrementally rather than holistically. This is what Boehm and Turner call a risk assessment. There are other treatments of the problem.
In Chapter 34 of “Agile Management…” I divide the problem into a 2x2 matrix providing for 4 categories: immature holistic domains; mature holistic domains; immature incremental domains; and mature incremental domains. I then suggest process choices for each of these quadrants.
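Purely to illustrate the shape of that idea - the specific process suggestions below are placeholders of mine, not the recommendations from Chapter 34 - the quadrant scheme boils down to a simple lookup on two questions about the domain:

# Illustrative sketch only - the process choices are placeholders,
# not the actual recommendations from Chapter 34.
def suggest_process(mature_domain: bool, incremental: bool) -> str:
    quadrants = {
        (False, False): "placeholder: highly adaptive, feedback-heavy process",   # immature holistic
        (True,  False): "placeholder: plan-driven process, e.g. a tailored SDLC", # mature holistic
        (False, True):  "placeholder: lightweight agile process, e.g. XP",        # immature incremental
        (True,  True):  "placeholder: incremental process with more up-front design",  # mature incremental
    }
    return quadrants[(mature_domain, incremental)]

print(suggest_process(mature_domain=False, incremental=True))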
In Agile Software Development Ecosystems, Jim Highsmith maps the problem space using Geoffrey Moore’s technology adoption lifecycle model of Early Market, Chasm, Bowling Alley, Tornado and Main Street, as described in his books, Inside the Tornado and earlier in Crossing the Chasm.
So there is lots of advice out there. The bottom line is that this advice should be followed on an as-needed basis - by project or business unit. There should be no grand centralized choice made in the ivory tower. The minor cost-efficiency advantage of having all the staff trained in the one method is far outweighed by the problems created, and the cost in real ROI terms, of using the wrong tool for the job.
Posted by david on 11/18 at 12:56 PM
Monday, November 17, 2003
Supply Chain Software Development
I’ve mentioned the notion of creating a software development supply chain to assemble software from components before at this site and on my Yahoo! group. Now Clemens Szyperski and David Messerschmitt develop the idea and examine what makes it different in The Flexible Factory [registration required] in the December issue of Software Development magazine. They speculate that software assembly would amount to a new “industrial revolution”.
They also observe that a market must exist for components. This is not a new observation. The component world dream really started with products like the OS/2 Workplace Shell (OS/2 2.0). That dream was so far ahead of its time that technologies such as CORBA had to be invented for it. In an interview I conducted with Dave Roberts, one of the designers of the Workplace Shell, he recognized the deficiency of the model - there was no marketplace. However, marketplaces such as ComponentSource and Flashline do not really solve the problem. Any VC will tell you - “there is no money in (software) components”. Why not? There is huge money in PC components, e.g. processors (Intel) and disk drive controllers (IBM). The reason is that the value of a software component cannot be measured, and hence it falls to the lowest common denominator: value is primarily determined by cost, not by the risk carried in the value chain. All that these companies are providing is an online version of wholesaling for component libraries - something we used to buy from firms such as Greymatter.
Dave Roberts believed that web services were the answer to the problem: runtime metering of method calls. I believed this too and was responsible for an infrastructure called “Wireless Application Manager” which planned to offer runtime metering and billing for wireless web services on the Sprint PCS Vision network. That system was never implemented, but ideas like it are beginning to emerge. For example, it is now possible, using techniques similar to aspect-oriented programming, to weave code into applications that meters the use of method calls and reports it to a central billing system - such as one owned by an ISP or a telco.
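As a rough sketch of the idea - a simple wrapper here stands in for true aspect weaving, and the service names and billing client are hypothetical, not part of Wireless Application Manager - metering a method call might look something like this:

import functools
import time

def metered(service_name, report):
    """Wrap a function so each call is counted, timed and reported for billing."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = func(*args, **kwargs)
            # report one billable event to the (hypothetical) central billing system
            report(service=service_name, method=func.__name__,
                   duration=time.time() - start)
            return result
        return wrapper
    return decorator

# stand-in for an ISP or telco billing system
def billing_client(**record):
    print("billable event:", record)

@metered("address-lookup", billing_client)
def lookup_address(name):
    return {"name": name, "city": "Seattle"}

lookup_address("D. Anderson")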
This solves the final problem identified by Szyperski and Messerschmitt - that of trust and risk. In the design for Wireless Application Manager, for example, all web services were non-repudiated end-to-end, through X.509 certificates on the supplier end and through handset identification on the other. The access carrier is best placed to play the role of trust mediator. It can also provide quality of service - something which wasn’t identified in the SD magazine article. By allowing price differentiation across quality-of-service lines, the supply of services or components will naturally align with value chains, and risk will be spread across the suppliers in the chain.
The bottom line is that there is a whole lot of infrastructure to be built out before supply chain assembly of software applications will be possible. It requires network access operators to facilitate the marketplace and provide the trust, quality of service, metering, billing, mediation and settlement. It also requires wider use of a meta-data language such as RDF, but one capable of semantically describing an application against an agreed ontology, along with its quality-of-service ranking and its terms and conditions of service, including price. For example, does a downstream partner get a discount for volume - and if so, how much, and how is it administered? Does the end user get to specify QoS and trust levels for an application, such that all the components or services it taps into fit that designated level, and will the price the end user pays vary accordingly? Is there a concept of first class, business class and economy for use of a word processor?
Maybe! Just maybe… Check back in 15 years.
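If it ever does happen, a service description will need to capture something like the following - a purely hypothetical sketch, with field names of my own invention rather than anything drawn from RDF or an agreed ontology:

# Hypothetical service descriptor - illustrative field names only.
service_description = {
    "service": "spell-check",
    "ontology_term": "text-processing/spelling",   # what the service does
    "quality_of_service": "business",              # e.g. economy / business / first class
    "availability": "99.9%",
    "price_per_call": 0.002,                       # USD
    "volume_discounts": [
        {"calls_per_month": 100_000, "discount": 0.10},
        {"calls_per_month": 1_000_000, "discount": 0.25},
    ],
    "trust": "supplier holds an X.509 certificate",
}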
Posted by david on 11/17 at 12:24 PM
Friday, November 14, 2003
Chapter 5 - Software Production Metrics
Prentice Hall have allowed me to make Chapter 5 - Software Production Metrics - available as a sample in PDF. In this chapter I introduce the only 3 metrics which really matter. They are designed to be simple, supportive of the business goals, self-generating (with a little help from tools) and predictive rather than lagging. These 4 criteria are taken from Donald Reinertsen’s Managing the Design Factory. The 3 most important metrics are Inventory (the number of ideas for client-valued functions in progress), Production Rate (the rate of completion of client-valued functions) and Lead Time (the time to create a working client-valued function), which has a direct relationship to the average (mean) cost of each function. To find out why these are the only metrics which matter, you will need to read the whole chapter.
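As a back-of-the-envelope illustration of how the three hang together - the numbers are invented, and the Little’s-Law-style lead time calculation and the assumption that operating expense drives cost are my shorthand here, not a summary of the chapter:

# Illustrative numbers only - showing how the three metrics relate.
inventory = 120               # client-valued functions in progress
production_rate = 30          # functions completed per month
operating_expense = 150_000   # assumed monthly cost of the development system

lead_time = inventory / production_rate                  # 4.0 months to flow through
avg_cost_per_function = operating_expense / production_rate   # $5,000 per function

print(f"Lead time: {lead_time:.1f} months")
print(f"Average cost per function: ${avg_cost_per_function:,.0f}")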
Posted by david on 11/14 at 01:49 PM
Thursday, November 13, 2003
Chapter 4 - Dealing with Uncertainty
Chapter 4 introduces a critical theme in the book - uncertainty. It shows how to analyze uncertainty in the 3 key constraints of software development - scope, schedule and resources. There is much greater uncertainty in scope than in resources, and very little in the desired delivery date. Often so much is dependent on a delivery date - a whole marketing and distribution program - that the date cannot slip. Hence, it is certain that the date must be hit. The resources for a project are generally fairly fixed and difficult to vary at short notice. As Fred Brooks observed, “adding manpower to a late software project makes it later”. The scope, however, does generally have a lot of uncertainty attached to it. Requirements do change and scope does creep.
Rather than classify uncertainty into Deming’s traditional 2 types of variation - common cause and special cause - a newer approach is taken where uncertainty is classified into 4 types - common cause variation, foreseen uncertainty, unforeseen uncertainty and chaos.
Buffering for uncertainty is examined and the “local safety” problem is introduced. The chapter ends with an explanation of how to reduce uncertainty through aggregation of tasks, with the resultant buffer calculated as the square root of the sum of the squares of the buffers for each task.
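A quick worked example of that aggregation - the task buffer sizes here are invented purely for illustration:

import math

# "local safety" buffers (in days) that four tasks would each carry individually
task_buffers = [4, 3, 2, 2]

summed_locally = sum(task_buffers)                         # 11 days of padding
aggregated = math.sqrt(sum(b ** 2 for b in task_buffers))  # sqrt(33), roughly 5.7 days

print(f"Individual buffers summed: {summed_locally} days")
print(f"Single aggregated buffer:  {aggregated:.1f} days")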
The Agile Manager must learn to accept that uncertainty is real and, through that acceptance, master it with judicious use of buffers.
Posted by david on 11/13 at 02:41 PM