Why Not is not Can Not

Our paper “An Exploratory Study of Why Organisations Do Not Adopt CMMI” has started to be cited in the literature. It’s been in the journal’s “hottest 25 downloads” since it became available three quarters ago, so I’m hopeful the number of citations will grow.
On one hand, it’s great to be cited! On the other hand, I worry that the results could be misinterpreted. Specifically, people might be tempted to misread the paper as saying “small companies can’t do CMMI”. We don’t say that. So, I thought I’d try to clarify a few points here…
The main question we do provide some initial evidence for is:

  • When companies decide not to do CMMI, why do they say “no”?

Here are some questions we don’t provide evidence for in the paper:

  • Do most companies do CMMI?
  • Can most companies do CMMI?
  • Do most companies think they can do CMMI?

(These are alternative, “complementary” research questions – by saying what my main research question is NOT, I hope to give you a better understanding of its boundaries, and of how to interpret the results.)
In looking at the main question, we broke down the companies by size to see if we could get hints about why companies gave the reasons they did. Small companies tended to give different kinds of reasons than medium and large companies. Roughly, small companies said they couldn’t do CMMI, whereas medium and large companies said they shouldn’t.
But obviously some small companies can do CMMI! There are fairly credible reports of it in the literature. So, what’s going on?
What’s going on is that my target population is not “all software companies”. My target population is software companies that have decided not to do CMMI. To start my research, I first exclude companies that are doing CMMI, or that want to do it.
It’s interesting to progressively partition the population of companies, and think about where most software process improvement research happens. (This looks better as an animation, so think about these pictures as a sort of flick book in your mind’s eye.)
First, there’s the population of every organisation that might reasonably use CMMI.

Some of them will not have heard of CMMI, but some will have. You could study the differences between these groups as a piece of advertising or diffusion research, but I’m not aware of anyone who’s published research on this for CMMI or any other SPI approach.

Of the organisations that have heard of CMMI, some will have decided not to adopt it, and some will have decided to adopt it. This is the point in the adoption lifecycle that we examine in our paper. This kind of research is extremely rare in software engineering.

For interest’s sake, let’s drill down some more…
Of the organisations that have tried to adopt CMMI, some will have succeeded in rolling it out, and some will have failed.

Of those who succeeded in adopting CMMI, some organisations will have gained some benefit, while others will have gained none.

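The progressive partition above can be sketched as a chain of filters over a population. This is only an illustration of the structure of the argument, not anything from the paper itself; the organisations and their boolean attributes below are entirely hypothetical.

```python
# A sketch of the progressive partition of organisations, using
# hypothetical attributes (not data from the paper).
from dataclasses import dataclass

@dataclass
class Org:
    heard_of_cmmi: bool = False
    decided_to_adopt: bool = False
    adoption_succeeded: bool = False
    got_benefit: bool = False

# One made-up organisation at each stage of the adoption lifecycle.
population = [
    Org(),                                       # never heard of CMMI
    Org(heard_of_cmmi=True),                     # heard, decided not to adopt
    Org(heard_of_cmmi=True, decided_to_adopt=True),                          # tried, failed
    Org(heard_of_cmmi=True, decided_to_adopt=True, adoption_succeeded=True), # succeeded, no benefit
    Org(heard_of_cmmi=True, decided_to_adopt=True, adoption_succeeded=True,
        got_benefit=True),                       # succeeded and benefited
]

heard = [o for o in population if o.heard_of_cmmi]
decided_not = [o for o in heard if not o.decided_to_adopt]  # our paper's target population
adopted = [o for o in heard if o.decided_to_adopt]
succeeded = [o for o in adopted if o.adoption_succeeded]
benefited = [o for o in succeeded if o.got_benefit]
```

The point of the sketch is that each study sits at one link in this chain: our paper looks at `decided_not`, while most published SPI research looks at `benefited`.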
Where do you think most CMMI/SPI research sits?
Most research papers report on how organisations benefitted from using CMMI. In the literature, you almost never hear about organisations who tried and failed to adopt CMMI, nor about organisations who adopted but then failed to benefit from CMMI. (This effect is called selection bias, or publication bias, depending on how it arises.) It’s sort of understandable – companies don’t want to let themselves be seen in a bad light, and researchers may think that success is more glamorous than failure.
Looking at successful cases is necessary, but to really broaden the impact of software engineering research, we need to learn from failures as well!