: June 2004
Monday, June 21, 2004
FDD Six Sigma #1:DMAIC
I want to put some thoughts down on how we might go about explaining or relating the FDD process to Six Sigma. I want to stress that this is work in progress and just thoughts at this time. Not all blog entries (in fact very few) represent anything definitive.
The DMAIC process in Six Sigma is used to reduce variation, usually in repeating processes. DMAIC is an acronym for Define-Measure-Analyze-Improve-Control. People have a tendency to jump on this and state that it is only for manufacturing and only for reproducing the same thing again and again. The immediate reaction is to suggest that DMAIC cannot be used with software engineering and that DMADV (see tomorrow’s entry) is the right Six Sigma process for software.
I have a problem with this assumption. Firstly, DMADV is certainly about conformant quality, but it isn’t really about improvement. DMAIC is the process for controlling and measuring improvement in the system. In the agile community, we are definitely interested in delivering high-quality software regularly, but we are also interested in creating a culture of continuous improvement. DMAIC is a process which helps us move from right to left on the Wheeler matrix, into the conformant quality column. Personally, I don’t see why you can’t use DMAIC with knowledge work. You simply have to treat it as something with a wider degree of variation than you would find in manufacturing. By accepting and understanding that wider degree of variation, and defining the notion of conformant quality accordingly, you can make progress.
DMAIC with FDD and Agile Management
I see the use of DMAIC with FDD and Agile Management as primarily for measuring variance in estimation, productivity and quality. Using my 5 point feature complexity point scale, we can use DMAIC to both refine the estimation technique and monitor both productivity and quality.
Firstly, let’s expand the definition of a Feature beyond Coad’s usual template, <action> a|the <result> of|from|to|by|with|for a(n) <object>, to also include business rules [Oct 16th 2003] (using the Ross and Von Halle template) and task flows (using my Statechart-driven approach or Larry Constantine’s Task Cases approach). Now we can define what we want to track for quality purposes as Features, including Business Rules and Task Flow Definitions. Further, we can track Feature Complexity Points (FCPs) - the inventory to be tracked - and our estimating technique, which converts FCPs to man-hours.
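The conversion from FCPs to man-hours might be sketched as a simple lookup, something like the following. Note that the scale values and the hour figures here are invented for illustration - they are not the actual codification or level-of-effort table from my work.

```python
# Sketch of a feature-complexity-to-effort conversion. The 5-point scale
# is from the post; the hours per point are hypothetical placeholders.
EFFORT_HOURS = {1: 4, 2: 8, 3: 16, 4: 32, 5: 56}  # FCP rating -> man-hours

def estimate_hours(feature_points):
    """Convert a list of Feature Complexity Point ratings to total hours."""
    return sum(EFFORT_HOURS[p] for p in feature_points)

backlog = [1, 3, 3, 5, 2]        # FCP ratings for five Features
print(estimate_hours(backlog))   # → 100
```

Because the table itself is part of the system, refining it (the Improve step of DMAIC) is just a matter of adjusting the lookup as measured actuals accumulate.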
We will measure Features completed per developer-week, defects per Feature, Critical Chain buffer usage - the variance of actual versus plan - and, for all three measures, variance outside the control limits*
We will analyze the Cumulative Flow Diagram, the Control Charts* derived from it, Critical Chain buffer usage against a temperature chart rating, and the Issue Log - both the growth/decline in issues and the status of issues
To improve, we may at any time choose to make changes to the system - these could be development method changes or, more simply, changes to the codification of Feature Complexity Points, adjustments to control limits, or changes to the level-of-effort conversion table.
For control, we will use daily standup meetings drawing on analysis data from the CFD, the Control Charts* and the Critical Chain plan versus actual, plus the Monthly Operations Review and/or Project Retrospectives to analyze longer-term trends
[*It occurs to me that I haven’t published the work I’m doing with Control Charts at this site yet. It gets its first airing at the Motorola S3S next month.]
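To make the control-limit idea concrete, here is a minimal sketch of an individuals-and-moving-range (XmR) chart computation for a weekly Features-completed series. The data is invented; the 2.66 scaling constant is the standard XmR factor for converting the average moving range into three-sigma-equivalent natural process limits.

```python
# Minimal XmR control-chart sketch for "Features completed per developer-week".
# The weekly figures below are invented for illustration.
def xmr_limits(xs):
    """Return (mean, lower limit, upper limit) for an individuals chart."""
    moving_ranges = [abs(b - a) for a, b in zip(xs, xs[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    x_bar = sum(xs) / len(xs)
    spread = 2.66 * mr_bar        # standard XmR constant (3 / d2, d2 = 1.128)
    return x_bar, x_bar - spread, x_bar + spread

weekly_features = [7, 9, 8, 6, 10, 8, 7, 9]
mean, lo, hi = xmr_limits(weekly_features)

# Points outside the limits suggest assignable (special) cause variation;
# points inside are common (chance) cause - the system's natural noise.
outside = [x for x in weekly_features if not lo <= x <= hi]
print(round(mean, 2), round(lo, 2), round(hi, 2), outside)
```

A point outside the limits would be something to investigate and eliminate; variation within the limits is what the Improve step works on, by changing the system itself.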
Posted by david on 06/21 at 03:13 AM
Saturday, June 19, 2004
Six Sigma as the Agile Future?
My recent posts discussing the importance of understanding variation can help us explain and relate agile development in terms of Six Sigma - a process of continuous improvement used mostly in very large companies such as General Electric and Motorola.
Defining Six Sigma
Six Sigma is a method of management for continuous improvement which understands variation. Most people associate Six Sigma with quality because its name is rooted in the notion of no more than 3.4 defects per million opportunities. However, the practice of Six Sigma requires a deep understanding of variation, the steady elimination of special cause variation, and the reduction of common cause variation in a process or system. Quality improves as variation is eliminated and reduced. The Wheeler Matrix helps us to understand that. Conformant Quality is defined in the left-hand column, and moving from the right-hand to the left-hand column requires the reduction of common cause (or systemic) variation.
Six Sigma is rooted in the work on variation done by Shewhart and in the work on quality by his successors such as Deming and Juran. There is another management method rooted in the work of Shewhart, Deming and Juran which also strives for continuous improvement - Lean, or the Toyota Production System. There is now on-going work to consolidate these two branches of management science into Lean Six Sigma, including this comparison of Lean Six Sigma with CMM.
We are seeing more members of the agile community being influenced by the work of Deming and talking about very low defect levels. Kent Beck has started talking about goals for TDD such as one defect per quarter. Martin Fowler has also talked about a Very Low Defect Project and observed that this is a trend amongst good agile teams.
If, on the one hand, we have the agile development crowd moving towards Deming’s quality assurance methods and very low defect counts, and on the other, the agile project management crowd moving towards probabilistic methods such as Critical Chain which embrace and understand uncertainty, then is the agile movement ultimately moving towards a definitive Six Sigma solution for software engineering? Is anyone shocked or surprised by this trend? Comments please…
Posted by david on 06/19 at 01:31 AM
Friday, June 18, 2004
Microsoft and Six Sigma
There was some chat in my Yahoo! group recently about Six Sigma applied to software engineering, and one specific question about Microsoft and what, if anything, it may be doing with Six Sigma. Microsoft isn’t so much adopting Six Sigma for software development - that would truly have surprised me - but rather offering a product to help its customers implement Six Sigma. Here are the details. It seems that Microsoft is adopting Six Sigma on its operations and fulfillment side, i.e. the things it needs to do to ship products, but nothing software development related. [Updated: May 5th 2005]
Now, if only I could get them interested in some of my recent work on the underlying theory of variation and how it relates to agile development, then that might really be interesting. Hmmm…
Posted by david on 06/18 at 09:16 AM
Thursday, June 17, 2004
Drive Out Fear!
In his Theory of Profound Knowledge and his 14 Points for Management, Deming emphasizes the importance of driving out fear from an organization. Driving out fear is essential to the functional (as opposed to dysfunctional) effectiveness of an organization. Deming underpinned his Theory of Profound Knowledge with the statistical methods of process control. He observed that “some of the greatest contributions from control charts lie in areas that are only partially explored so far, such as applications to supervision, management, and systems of measurement…” [Shewhart 1986] In other words, Deming liked the idea that someone would come along at a later date and apply his theories to areas like software engineering.
Wheeler’s 4 States of Control (see chart from yesterday), and in particular the Threshold State, help us to understand how it is possible to reduce fear in an organization. The Threshold State describes a system (of software engineering) that delivers non-conformant quality - the project is late, or over budget, or dropped scope, or has a higher than acceptable defect count, or perhaps all of the above - but shows no assignable cause variation. We all know that non-conformance is the norm in the software engineering world. In fact, it is dominant in about 4 out of 5 documented cases. So there is reason to be fearful. How can you drive out fear in a world where non-conformant quality is the norm?
Understanding variation - the second element in Deming’s Theory of Profound Knowledge - is the vital ingredient in driving out fear. Management must understand variation and know how to separate common (chance, or systemic) cause variation from special (assignable) cause variation. Management must also take responsibility for educating staff on variation and helping them to identify and report it. Staff should fear only assignable cause variation, and then only assignable cause variation to which they made an inadequate response. As I stated back in September, in Special Cause Truck Grounding, there is no point in assigning blame for special cause variation which was beyond someone’s control. And there is never cause for assigning blame for excessive common cause variation, as seen in the Threshold State.
Management, on the other hand, must carry the burden for common cause variation beyond the limits of control in the Threshold State. It is all too easy for management to deflect blame from themselves and falsely claim an assignable cause for variation which exceeded the bounds of the prediction in the project plan. How many staff live in fear that their manager will blame them for something over which they had no control? Most current software development methods, which root their definition of client value in use cases or stories or loosely worded requirements documents, suffer from wide, high-tolerance variation. This means that buffers in plans have to be large, or the plan is at risk. Even if these projects are profoundly successful at eliminating special cause variation through techniques like those described in the Scrum method, at best they exist in the Threshold State.
Management can drive out fear by accepting responsibility for the system of software engineering and responsibility for non-conformant quality. They can reduce their own personal risk by gathering data and reporting it transparently - don’t give someone else the opportunity to claim false assignable cause for non-conformant quality. By learning to recognize and report when the system of software engineering is operating in the Threshold State, the Brink of Chaos State or in Chaos, a manager can eliminate fear from the staff and increase the likelihood that they, as a team, can bring the process to the Ideal State over time. Only then can they start to use Quality as a Competitive Weapon.
Posted by david on 06/17 at 05:42 AM
From Change to Variation Part 2
Here is the final text extracted from my forthcoming article in the Cutter IT Journal. This section deals with why understanding variation ultimately allows us to embrace change. Comments welcome…
Common Versus Special Cause Variation
Walter Shewhart first classified two types of variation in his work at Bell Labs in the 1920s. He called them “controlled variation” from chance causes and “uncontrolled variation” from assignable causes [Wheeler 1992]. W. Edwards Deming later modified this terminology to “common cause variation” and “special cause variation,” and it is these terms that are most commonly used today [Wheeler 1992]. The teachings of Shewhart, Deming, and others in the field of statistical process control are the foundation of the management theory called Six Sigma, which seeks to create a system of continuous improvement through the reduction of variation. Another disciple of Shewhart, Donald Wheeler, classified what he called the “four states of control,” as shown here.
The four states are divided into a 2x2 matrix, with the rows representing common (or chance) cause variation and special (or assignable) cause variation. The columns represent conformant quality and non-conformant quality. For project management, we might define conformant quality as all functionality delivered on time with fewer than two Severity 3 (or lower) bugs per 100 function points of scope.
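The 2x2 classification above can be sketched in a few lines of code. The state names follow Wheeler’s four states as used in these posts; the function names, signatures, and the sample conformance rule are my own illustrative assumptions.

```python
# Sketch of the 2x2 Wheeler matrix: rows = special cause present/absent,
# columns = conformant/non-conformant quality. State names are Wheeler's;
# the function interfaces here are hypothetical.
def conformant(on_time: bool, sev3_defects: int, function_points: int) -> bool:
    """Sample conformance rule: on time, with fewer than 2 Severity-3
    (or lower) bugs per 100 function points of scope."""
    return on_time and (sev3_defects / function_points) * 100 < 2

def wheeler_state(is_conformant: bool, no_special_cause: bool) -> str:
    """Map a (column, row) pair onto one of the four states of control."""
    if no_special_cause:
        return "Ideal State" if is_conformant else "Threshold State"
    return "Brink of Chaos" if is_conformant else "State of Chaos"

# A project delivered on time but with 3 Sev-3 bugs per 100 FP, and with
# no assignable cause variation, sits in the Threshold State.
print(wheeler_state(conformant(True, 3, 100), True))  # → Threshold State
```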
Embrace Change - Embrace Uncertainty - Understand Variation
It may not be immediately obvious why understanding variation is important to being agile. Kent Beck asked us to “embrace change” in the subtitle of Extreme Programming Explained [Beck 2000]. The Agile Manifesto asks us to value “responding to change over following a plan.” This seems to place an emphasis on reacting (to change) rather than controlling against a plan. Traditional critical path plans have a deterministic basis, but project task durations cannot be calculated deterministically - they exhibit probabilistic behavior. In other words, project task durations are uncertain and, over a sample set, will exhibit variation. Shewhart and his followers - Chambers, Deming and Wheeler - have helped us to understand variation. By understanding it, we can embrace uncertainty and, consequently, embrace change through anticipation.
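The probabilistic point can be illustrated with a small Monte Carlo sketch: summing uncertain task durations produces a distribution of project outcomes, not a single deterministic date. The task figures and the normal-distribution assumption below are mine, purely for illustration.

```python
# Monte Carlo sketch: uncertain task durations vs. a deterministic sum.
# Task means/spreads are invented; durations drawn from a truncated normal.
import random

random.seed(42)

tasks = [(4, 1.0), (8, 2.0), (6, 1.5), (10, 3.0)]  # (mean hours, std dev)

def simulate_total(tasks):
    """One simulated project: draw each task's duration independently."""
    return sum(max(0.0, random.gauss(mu, sd)) for mu, sd in tasks)

totals = sorted(simulate_total(tasks) for _ in range(10_000))

deterministic = sum(mu for mu, _ in tasks)      # the critical-path-style sum
p90 = totals[int(0.9 * len(totals))]            # 90th-percentile outcome
print(deterministic, round(p90, 1))
```

A plan that commits to the deterministic sum is at risk roughly half the time; planning to a higher percentile is, in effect, the buffering logic behind Critical Chain.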
It is worth considering very carefully the applicability and meaning of Shewhart’s original terms, chance and assignable cause, to software engineering project management. Assignable cause variation is, by definition, identifiable. Assignable cause variation is the stuff of issue logs and risk management plans. If you can point at it, give it a name, or describe it, then it is probably assignable (special) cause variation in your project. Chance cause, on the other hand, cannot be identified. Chance cause is endemic to the process or system of software engineering. Chance cause is the idea that it took 1 hour 20 minutes to design Feature 167 whilst it took 2 hours 10 minutes to design Feature 168, which was estimated as being of similar complexity. Chance cause relates to how the work is done - the mechanism, the system dynamics.
Recalling the definition of the responsibilities of the engineering manager (text omitted in this extract), it is clear that chance cause variation is rightly the problem of the engineering manager. Chance cause variation is caused by the system dynamics and the engineering manager is responsible for the system - the team of engineers and their methods. As shown in the figure above, chance cause variation is reduced by changing the system, resulting in a movement of the system from right to left on the diagram. Assignable cause variation must be eliminated (not merely reduced) in order for the system to move vertically from bottom to top on the diagram.
[Beck 2000] Beck, Kent, Extreme Programming Explained: Embrace Change, Addison-Wesley, 2000
[Wheeler 1992] Wheeler, Donald J., and David S. Chambers, Understanding Statistical Process Control, SPC Press, Knoxville, Tennessee, 1992
Posted by david on 06/17 at 05:17 AM