: February 2007
Tuesday, February 27, 2007
Exploring Real Option-based Software Engineering
I’d like to follow up my post on why we aren’t ready for real option analysis-based decision making in software engineering with an example. As a basis I’d like to use Stephen Palmer’s excellent article on modeling a library management application, originally posted on The Coad Letter. If we look at the model (based on Peter Coad’s Domain Neutral Component pattern) we can see that it decouples and encapsulates many concepts.
Fig 1. Model of a Library Management Application (courtesy Stephen R. Palmer)
In his article, Stephen Palmer explores the requirements space and gradually narrows the possibilities, reducing the options in scope. As he does this, he removes classes from the domain model. This demonstrates the “modeling by taking away” approach of The Coad Method in its final 1999 version, using colour archetypes and the DNC pattern. There are many options. For example, the AccountInApplication class decouples MembershipAccount from its role in a Registration. Do we need this class? What does it buy us? If we create this class, what “option” is it buying us? We could say the same for the other (yellow) role classes, Library and Member. If we create role classes for Parties, Places or Things, what “option” does that buy us?
Role classes decouple behavior associated with the transactional (pink) Moments or Intervals of time from the (green) Party, Place or Thing classes. With roles the model is more loosely coupled and more highly cohesive. The (green) PPT classes are more reusable and their responsibilities are more cohesive. If a (green) PPT class might be involved in another transactional (pink) Moment-Interval, or even a whole other application, then it is not polluted with behavior associated with several applications or transactions. This makes the classes cleaner, simpler, of higher internal quality and probably easier to test, resulting in better external quality. But there is an even more useful purpose to separating out (yellow) role classes from (green) PPT classes: it allows postponement of component boundaries and the separation of code into discrete coarse-grained components, packages or applications. I explained this in a later Coad Letter article.
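A minimal sketch of the decoupling (the class and attribute names are my own illustration, not taken from Palmer’s model): the library-specific behavior attaches to the yellow role class, leaving the green PPT class untouched and reusable elsewhere.

```python
class Person:
    """Green Party/Place/Thing class: reusable and application-neutral."""
    def __init__(self, name):
        self.name = name


class LibraryMember:
    """Yellow role class: library-specific behavior lives here, so
    Person carries nothing that belongs to this one application."""
    def __init__(self, person):
        self.person = person
        self.loans = []  # pink Moment-Intervals attach to the role

    def borrow(self, title):
        self.loans.append(title)


alice = Person("Alice")
member = LibraryMember(alice)
member.borrow("Refactoring")
```

If a payroll application later needs Person, it wraps the same object in its own role class; Person itself never accumulates library behavior.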
So the real option theory question becomes a trade-off between the cost of creating the (yellow) role class - the price of the option - and the likelihood that the (green) PPT class will be involved in another (pink) Moment-Interval and that there will be a desire to separate out the different transactional application pieces from the reusable (green) PPT and (blue) Description classes. The likelihood will depend on the domain, the specific requirements and the direction of the business in the future. As I described last week, assessing the cost of building the extra (yellow) role class is almost impossible, and assessing the probability of the need to decouple transactional behavior and/or partition into discrete components is equally hard. Hence, the input data for the real option equation would be highly suspect at best.
Let us analyze another example using the same model from Figure 1. The (blue) Account Description class allows us to separate out meta-data describing types of accounts and associated behavior - for example, the maximum number of books that can be borrowed by any member registered with this type of account. Description classes are often implemented as database tables or as XML files that are loaded dynamically. Both approaches allow the meta-data to be updated at runtime; in Lean terms, the description definitions are postponed until runtime. Building a (blue) description class using a table-driven or XML file-driven approach is buying an option to postpone description definition until runtime. To assess this option, we must again be able to account for the cost of building the table access or the dynamic XML file loading infrastructure. We must also be able to assess the likelihood that the description definitions will change, how often that is likely, and with how much notice. Is it likely to happen so often and with sufficiently short notice that deploying a new version of the software would be undesirable? Would we prefer to have a system administrator or a “super” user update the definitions rather than involve the programming and testing team? If we can assess both the cost of buying the option and the likelihood of the option being exercised, then we can use real option analysis to make a decision about the value of building the (blue) description class, and whether it would be better to build a table-driven implementation or an XML file-driven implementation.
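The XML file-driven approach can be sketched in a few lines. This is a hedged illustration only: the file format, element names and attribute names below are my own invention, not part of Palmer’s model.

```python
# Load (blue) description class meta-data at runtime from XML,
# postponing the description definitions until deployment or later.
import xml.etree.ElementTree as ET

ACCOUNT_TYPES_XML = """
<accountTypes>
  <accountType name="standard" maxBooks="5"/>
  <accountType name="premium" maxBooks="20"/>
</accountTypes>
"""

def load_account_types(xml_text):
    """Return a mapping of account type name -> maximum books on loan."""
    root = ET.fromstring(xml_text)
    return {e.get("name"): int(e.get("maxBooks"))
            for e in root.findall("accountType")}

account_types = load_account_types(ACCOUNT_TYPES_XML)
```

A “super” user could edit the XML file and the new limits would take effect on the next reload, with no code change, rebuild or redeployment - the option this infrastructure buys.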
If your organization is capable of making this kind of well-informed, real option-based architectural and design decision, please write to me or leave a comment. Equally, if you think we are years from having that kind of maturity, I’d like to hear from you too. Technorati tag: Agile, David+Anderson, Real+Option+Analysis, Software+Engineering, Stephen+Palmer
Posted by david on 02/27 at 01:56 PM
Making Progress with Imperfect Information
I’ve been reevaluating my view on refactoring!
It’s interesting how your viewpoint can color your view of something. My viewpoint into the agile community and its practices has always been rooted in my experience with FDD/The Coad Method. When I spoke at USC back in 2004, I suggested that though FDD may never be seen as the most agile of Agile methods, it was probably the most Lean. I stand by this comment three years later. FDD is very Lean. All the waste is trimmed out. The Coad Method techniques deliver a very precise definition of a domain that is loosely coupled and highly cohesive, while the Feature definition technique delivers fine-grained, customer-valued units of work that exhibit very low degrees of variation. The batching technique of Chief Programmer Work Packages is very efficient and minimizes transaction costs associated with work-in-process. I could go on, but you get the point that FDD is Lean. The modeling, planning and batching for design and build result in almost no need to refactor code on FDD projects. As a result, my book classified refactoring as rework and labeled it as waste.
From my FDD viewpoint that might have been a fair assessment. However, now that I’m running a software engineering organization where I inherited a waterfall process that runs through a series of narrow, specialized departments such as Business Analysis, Systems Analysis, Development and Test, I’ve changed my opinion. From this new viewpoint, refactoring is clearly a very valuable process.
I believe that Alistair Cockburn’s paper from the ICAM 2005 International Conference on Agility will come to be seen as a seminal paper in Lean Software Engineering (ironic, as it was inspired by the Theory of Constraints). In this paper, Cockburn explains that asking a non-bottleneck resource to do the extra work of reworking something does not cost anything extra, and can create a desirable effect because it allows progress to be made and demonstrated earlier. Ergo, rework is not waste when performed by a non-bottleneck resource.
There have been many versions of the idea that perfect is the enemy of good enough. The original is attributed to Voltaire. And it is this concept that refactoring (and Cockburn’s paper) embrace. It is argued that it is better to make progress with imperfect information and refactor later when better information is available than to wait for better information before progressing.
Specifically, I am thinking that it is better for developers to start coding with imperfect analysis than to wait for a systems analyst to produce a “perfect” specification. The developer can then refactor the developed code when the analyst makes a final version of the specification available. My reasoning is simple. The developer would otherwise be idle. [Not truly idle; there is plenty of busy work available - grooming environments, training on new languages and APIs, and so forth - but idle in the sense that they are not adding value to the deliverable.] By definition, a resource (or station) with idle time is not a bottleneck resource. It is, therefore, OK to ask the developer to perform the refactoring. The refactoring cannot be classified as waste in this case.
We can think about this decision using real option analysis. The option we are buying is to deliver the working code earlier. The cost of the option is the cost of having the developer start work before a final specification is ready. The risk (or uncertainty) attached to the option is the risk that the early imperfect specification will be significantly different from the final specification, and that any rework will take longer than waiting to start coding on delivery of the final specification. Note that the rework may absorb all of the slack in the non-bottleneck resource, turning it into a bottleneck and delaying the whole project. This gives us a framework to decide whether starting early and refactoring is the correct decision, or whether waiting and coding for “right first time” is the correct decision.
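The framework can be sketched as a back-of-envelope expected-value calculation. All the numbers below are invented for illustration; as argued elsewhere on this page, obtaining reliable inputs is the hard part.

```python
def start_early_value(cost_of_early_start, p_spec_changes,
                      rework_cost, value_of_earlier_delivery):
    """Expected net value of starting before the final spec is ready.

    Positive => buy the option (start coding early, refactor later).
    Negative => wait for the final specification.
    """
    expected_rework = p_spec_changes * rework_cost
    return value_of_earlier_delivery - cost_of_early_start - expected_rework


# Illustrative numbers only: a 30% chance the final spec differs enough
# to force 5,000 units of rework, against 6,000 units of value from
# delivering earlier and 2,000 units spent starting early.
decision = start_early_value(cost_of_early_start=2_000,
                             p_spec_changes=0.3,
                             rework_cost=5_000,
                             value_of_earlier_delivery=6_000)
# decision is 2500.0, so starting early looks worthwhile in this scenario
```

Note this simple form ignores the case flagged above, where rework consumes all the slack and the non-bottleneck becomes a bottleneck; a fuller model would penalize that outcome heavily.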
[It would be nice if someone reading this, and perhaps actively studying for a masters or doctorate in the field, were to develop this concept and publish it complete with equations and data from sample projects.] Technorati tag: Agile, David+Anderson, Real+Option+Analysis, FDD, Coad+Method, Theory+Constraints, Lean, Software+Engineering, Refactoring
Posted by david on 02/27 at 12:12 PM
Friday, February 23, 2007
Insulting the Motown Tribe
I enjoyed reading this post on Tom Peters’ blog about the unfortunate First Gentleman of Michigan, Dan Mulhern, and how he has upset the auto workers of Detroit with his admiration for Toyota. The story is a classic example of how, as Ray Immelman pointed out in his clever and original book, Great Boss Dead Boss, all communication is decoded tribally first and for logical content afterwards. Unfortunately, Dan Mulhern walked right into it. He insulted the tribal values and threatened the individual security of members of the Detroit auto workers tribe. Had he been able to communicate the same message in a way that didn’t raise the tribal hackles, he’d have made his point, his audience might have taken it to heart and everyone would have been happy. Instead, the result is unhappiness all round.
This came in the same week that my friend Jim Benson reviewed Great Boss Dead Boss on his own blog. Everyone I know who has taken the time to read the book right through has been profoundly changed by it. If you haven’t read it yet, go get a copy! [Click to buy it from Amazon]
Posted by david on 02/23 at 03:35 AM
Tuesday, February 20, 2007
Why We’re not Ready for Real Options
More thoughts from my trip to Central Europe…
I attended a really instructive session at OOP 2007 presented by Hakan Erdogmus of the National Research Council Canada on Principles of Software Process and Project Decisions. In this paper, Erdogmus proposes that we adopt the use of Real Option Analysis to inform and frame decisions in software engineering and agile project management. This isn’t the first time real options have come up for discussion. Chris Matts has been a proponent, and real options were an active topic of hallway discussions at Agile 2006. It’s even come up from time to time in the Agile Management Yahoo! group.
Real option analysis is a mechanism for making decisions that would help us get beyond blunt management principles such as YAGNI (You aren’t going to need it). YAGNI pre-supposes that options are never worth buying. YAGNI is a reaction to the assumption in traditional software engineering advice that an option is always worth buying: for example, options to build in quality early in the lifecycle, which involve extra cost on analysis, design and review, justified by an assumed cost of change curve showing exponential growth in the cost of change late in the lifecycle; or options to build reusability into a design, which involve extra design, coding and testing effort. Kent Beck proposed a different cost of change curve suggesting that buying an option based on quality early in the lifecycle is not a good bargain, because the cost of fixing the problem later is actually much lower than the traditional curve suggests. In reality, we cannot generalize about the cost of change - both curves are wrong. As I pointed out in my book, the cost of change depends on the position of the constraint (or bottleneck) in the software engineering value chain, and on the variability inherent in the domain and in the methods and skill level of the practitioners in that value chain. Real options provide a solution to this problem. Real options promise to offer a framework that will work for each specific situation, rather than encouraging the use of a blunt instrument such as YAGNI that is blind to the true cost of change in a specific project.
So real option based decision making is desirable. However, I believe that we aren’t ready for it as an industry or profession. Here are my reasons why…
Real options require us to calculate two different numbers with some degree of confidence. The first is cost. We need to be able to calculate the cost of “buying” an option. So we need to be able to accurately apportion the effort involved in, say, investing in higher quality early in the lifecycle, or in designing a class to be reusable as opposed to foregoing reusability and minimizing the design of the class. This effort estimate then needs to be turned into a cost estimate. This cost amount becomes the cost of “buying” the option. Next, we need to be able to predict the probability that we would take up an option, or that circumstances would develop such that we would wish to “exercise” the option - e.g. avoid time fixing bugs late in the lifecycle, or reuse a potentially reusable class or framework on a future project or iteration. Once we have both of these pieces of information, we can make an informed option theory decision to either buy or pass on an option, based on whether the cost of the option is less than the risk-adjusted potential loss from not buying the option and suffering the consequences later.
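The decision rule in the paragraph above reduces to a one-line comparison. The sketch below is only an illustration of the arithmetic; the two inputs are exactly the numbers the post argues we cannot yet measure with confidence.

```python
def should_buy_option(option_cost, p_exercise, loss_if_not_bought):
    """Buy the option when its cost is less than the risk-adjusted
    potential loss of going without it and suffering the consequences.

    option_cost        -- cost of "buying" the option (e.g. extra design work)
    p_exercise         -- estimated probability the option is exercised
    loss_if_not_bought -- loss suffered later if needed but not bought
    """
    risk_adjusted_loss = p_exercise * loss_if_not_bought
    return option_cost < risk_adjusted_loss


# Invented example: a reusable-design option costing 3 units of effort,
# a 40% chance of reuse, and 10 units of later effort saved if exercised.
should_buy_option(3, 0.4, 10)   # 0.4 * 10 = 4 > 3, so buy
```

YAGNI, in this framing, is simply the special case of assuming p_exercise is always zero.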
The reality is that, as an industry and profession, we are years away from having the maturity to correctly measure and assess these data, and hence I can only conclude that the day-to-day use of real option-based decision making is still a long way off in software engineering. Where I feel we can salvage something from this is the paradigm of option-based decision making. We need to encourage software engineers to think through decisions with questions like “are we going to need it or not, and if so, what would we spend now to balance that risk?” Getting software engineers to think about early lifecycle decisions as “options” will be a step toward delivering better project decisions that are tuned to specific situations and organizations, rather than decisions based on generalized assumptions about the cost of change or the likelihood of reuse, even if those decisions are not based on reliably informed objective data. Technorati tag: Agile, David+Anderson, Real+Option+Analysis, YAGNI, Kent+Beck, Chris+Matts, Hakan+Erdogmus, OOP+2007
Posted by david on 02/20 at 05:21 AM
Thursday, February 08, 2007
Some more thoughts from my trip round central Europe… I was watching Bill Amelio on CNN. Bill has the wonderful job of merging the former IBM PC Company (one of my former employers) with Lenovo. When questioned about how to get the two sides to play together, he mentioned “respect” as a key behavior that people needed to bring to meetings: “you have got to be willing to compromise and if you are able to do that on a regular basis and respect who each person is and respect their intentions.” I hear this “respect” word a lot in the workplace and in the agile community. “If only people would respect each other we’d all get along better.” “I think your people don’t respect mine enough.” And so on. For example, the agile development team doesn’t show respect to the unreformed PMs, or vice-versa ... and so it goes.
Spending some time in the Tirol reminded me of the problem with all of this. Respect isn’t offered or given, it is earned! In business and management literature we too often confuse courtesy with respect. Courtesy is something I find offered to the tourists of the Tirol by the locals, ungrudgingly and always with a smile. [No wonder - more than 90% of their economy depends on revenues from tourists.] However, the respect of the locals must be earned: by, for example, taking the cable car to the top of the mountain and skiing the whole hill to the bottom in time to catch the same car again less than 15 minutes later, or by biking up a series of switchback turns to a peak normally only reached by tourists via a gondola or chair lift, or by hiking up a valley tourists rarely visit and sleeping out a few nights in alpine huts, not coming down below the snow line for a week. Once you’ve earned this respect, you see courtesy for what it is.
So, if you feel you’ve got colleagues who aren’t showing enough respect, ask yourself this… Are my colleagues being courteous? Do they listen and give of their time reasonably? If so, and you still don’t feel they respect you, then you need to look in the mirror. What would it take to earn their respect? Technorati tag: Agile, David+Anderson
Posted by david on 02/08 at 12:04 AM