I just got back from a roadshow talking to people around Australia about some results from a couple of research projects that we’re wrapping up at NICTA. It was the first in a series of breakfast seminars that researchers in our group will be doing over the coming year.
It was all good. The organization went smoothly, the audience numbers were good (especially considering this was the first time we’ve run a breakfast seminar), and most importantly the audiences seemed interested in what we were saying. We got lots of questions, opinions, suggestions, debate, and positive feedback from software development managers, CEOs, SPI practitioners, and academics. We’ve built a little more awareness of NICTA and ESE in the Australian ICT industry, and hopefully made some contacts that will lead to collaborative projects with Australian companies. There’s also a chance that we might have said something directly useful for someone!
I was speaking about Why Organizations (Don’t) Use CMMI. In the first half of the talk I gave an overview of CMMI-based SPI and ratings, and their benefits and limitations. In the second half, I presented evidence we found in a systematic review about why organizations originally decided to adopt CMM-based SPI. I also presented some evidence about why other organizations have chosen not to adopt CMMI.
It’s ironic that CMM was originally developed in response to customer-related problems (the US Air Force wanted assurance that its suppliers could deliver software on time), but these days companies mostly say they adopt CMM(I) for reasons related to product quality, project performance (e.g. development efficiency), and perhaps for process-related reasons (e.g. process visibility, best practice, process measurement). Companies usually don’t say they adopt CMM-based SPI to address customer-related issues (e.g. specific customer demands, or market advantage), and almost never say they adopt it to improve the capability of people within their organization.
Of course most companies don’t use CMMI, and I also spoke about why they don’t, based on an analysis of a couple of months of sales data from a consultancy selling CMMI appraisal and improvement services. Companies thought they were too small, or that CMMI would be too costly or time-consuming to adopt. Sometimes they were already doing some other sort of SPI. We found a significant relationship between the size of companies and the reasons they gave, which provides some evidence for the common belief that small companies think they can’t adopt CMMI – they never even get as far as an ROI analysis, because they think CMMI is either not applicable or infeasibly costly or time-consuming.
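For readers curious about the shape of that analysis: a relationship between a categorical size band and a categorical stated reason is the kind of thing you would typically test with a chi-square test of independence on a contingency table of counts. Here is a minimal sketch in Python; the counts, category labels, and even the choice of test are my illustration, not the actual sales data or our published analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows are stated reasons; columns are company-size bands (small, medium, large).
# All counts here are invented purely to illustrate the shape of the analysis.
table = np.array([
    [30,  8,  5],   # "too small / CMMI not applicable"
    [10, 12,  9],   # "too costly or time-consuming"
    [ 4,  9, 13],   # "already doing some other SPI"
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4f}")  # small p: size and reason are related
```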
The next steps in this line of research are to understand organizational decision-making for the adoption of SPI, and to understand the needs and practical constraints in Australian software-developing companies, so that we can develop improvement and rating approaches that suit Australian companies.
Speaking with me on the tour was Barbara Kitchenham, on Software Productivity Measurements – The Dark Secrets. She spoke about lessons about Statistical Process Control (SPC) charts, drawn from data gathered by an Australian company. SPC charts are important in manufacturing industries, but when people use them for “controlling” productivity in software development, things can go strange in a few different ways.
First, ratio measurements (e.g. productivity, defect rates) are unstable when the denominator values (usually a software size measure) are small. So for small units-of-work, you can sometimes get huge but basically meaningless spikes in the SPC charts. Second, Barbara showed some charts from industry where the variances were so large that “one standard deviation below the mean” was negative – that’s a problem if you want to notice low productivity! Third, combining data from a range of different types of projects wasn’t helpful. There was a significant difference in productivity between applications, and so it was much more meaningful for companies to do application-specific productivity monitoring, even though this provided fewer data values for statistical analysis. Finally, SPC charts of productivity weren’t great for visual communication – they tended to under-emphasize low values, and over-emphasize high values.
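To make the first two problems concrete, here is a minimal Python sketch on made-up numbers (the sizes, defect counts, and distributions are invented, not the industrial data Barbara showed):

```python
import numpy as np

rng = np.random.default_rng(7)

# Problem 1: ratio measures with a size denominator spike for tiny units of work.
size = np.array([2, 3, 150, 200, 180, 90, 5, 120, 160, 140])  # e.g. function points
defects = rng.poisson(0.02 * size) + (size < 10)  # one stray defect in each tiny unit
defect_rate = defects / size
print(defect_rate.round(3))  # the size-2 unit can show a rate ~25x the large units

# Problem 2: with heavily skewed data, "one standard deviation below the mean"
# can be negative, so a lower limit for spotting low productivity is useless.
productivity = rng.lognormal(mean=0.0, sigma=1.2, size=40)  # e.g. FP per hour
m, s = productivity.mean(), productivity.std(ddof=1)
print(f"mean={m:.2f}, sd={s:.2f}, mean - sd={m - s:.2f}")  # often below zero
```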
Barbara recommended a simple alternative to SPC charts – plot each unit-of-work as a point on a scatter plot of effort vs. size instead. These scatter plots were dramatically easier to interpret and also provided useful productivity guidance for effort estimation.
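Here is a rough sketch of what that could look like; the data are simulated, and the fitted line is just one simple way to turn the plot into estimation guidance:

```python
import numpy as np
import matplotlib.pyplot as plt

# One point per unit of work, effort against size.
# The data are simulated; a real chart would use the company's own records.
rng = np.random.default_rng(1)
size = rng.uniform(10, 300, 50)                  # e.g. function points
effort = 4 * size * rng.lognormal(0.0, 0.3, 50)  # e.g. person-hours

plt.scatter(size, effort, label="units of work")

# A simple fitted line doubles as a crude effort-estimation guide.
slope, intercept = np.polyfit(size, effort, 1)
xs = np.linspace(size.min(), size.max(), 2)
plt.plot(xs, slope * xs + intercept, label=f"fit: {slope:.1f} h/FP")
plt.xlabel("Size (function points)")
plt.ylabel("Effort (person-hours)")
plt.legend()
plt.show()
```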