"What you measure, improves."

That's one of the big promises of Agile. What you measure, improves. If you want high-quality software at a predictable pace, measure quality and predictability of pace, and you'll get more quality at a more predictable pace.

There's only one problem. It's not a promise. It's a threat.

What you measure, improves. But a measurement is never the thing-in-itself, any more than a map is the territory. Measure quality, and what you get is improvements in whatever metric you're using to define "quality". And that may, or may not, have anything to do with what you actually care about.

Agile doesn't actually have that many intrinsic measurements. But the most frequently used of the measurements it does have share one thing in common: I've seen them all misused as often as I've seen them used correctly.

The common Agile measurements:

1) Velocity.

2) Number of tests.

3) Percent code coverage.

Why are those the big Agile measurements? Because they're the easy Agile measurements. And they're easy in two different ways:

1) They are either already being collected (Velocity, in Scrum) or easy to automate (percent code coverage, number of tests).
2) They're easy to fake.

I just heard a collective gasp of horror. But it's true--these metrics get used so much in part because management teams that use them hear what they want to hear.
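Here's what the faking looks like in practice--a minimal sketch in Python, with pytest-style tests and entirely invented function names. It adds one to the test count and pushes line coverage toward 100% while verifying nothing at all:

def apply_discount(order_total, customer):
    # Hypothetical production code: tiered discount logic.
    if customer.get("vip"):
        return order_total * 0.8
    if order_total > 100:
        return order_total * 0.9
    return order_total


def test_apply_discount_runs():
    # Exercises all three branches, so line and branch coverage read 100%...
    apply_discount(200, {"vip": True})
    apply_discount(200, {})
    apply_discount(50, {})
    # ...but there's no assert, so this "test" passes even if every branch
    # returns the wrong number.

A coverage report can't tell this apart from a real test. A human reading the code can, in about five seconds.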

There are other metrics that are just as easy to collect, but much harder to fake. For example, "number of lines of code per method" and "number of lines of code per class". As long as you look at the whole distribution, and not at the "average", these metrics are wonderfully well-correlated with code quality, maintenance costs, bug counts, and everything else you'd actually like to know about your code base.

And they're extremely resistant to being faked. If you hand someone a 600-line method in a 6000-line class, they're going to have to do some serious analysis and testing work to turn that into sixty 10-line methods without introducing defects. And there aren't any shortcuts.
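Collecting that distribution is cheap, too. Here's a rough sketch for a Python codebase (any language works the same way if you have a parser for it); the point is that it reports the whole histogram, not a single average:

import ast
import pathlib
from collections import Counter

def function_lengths(root="."):
    # Length in lines of every function and method under `root`.
    lengths = []
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files we can't parse
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                lengths.append(node.end_lineno - node.lineno + 1)  # Python 3.8+
    return lengths

if __name__ == "__main__":
    buckets = Counter(min(length // 10 * 10, 100) for length in function_lengths())
    for floor in sorted(buckets):
        label = f"{floor}-{floor + 9} lines" if floor < 100 else "100+ lines"
        print(f"{label:>12}: {buckets[floor]} functions")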

Velocity is even easier to game. If a team wants to double their velocity for the next release, all they need to do is size each story a notch or two higher in Story Points. Stories that would have been 3-point stories become 5-point or 8-point stories, stories that would have been 13-point stories become 20-point stories, and 20-point stories become "epics" that get split along unnatural, inorganic lines that obscure business value and hinder implementation.

Steven Levitt and Stephen Dubner, in the introduction to Superfreakonomics, wrote that the theme of their two books was "People respond to incentives." Management responds to incentives, developers respond to incentives, and even Agile coaches and consultants respond to incentives. If you reward teams that have a high velocity, and fire teams that have a low velocity, you will absolutely get the "desired" result of higher velocities. You'll also get some unintended consequences, such as:

1) Developers who no longer believe that Agile works.
2) Scared, nervous developers who do lower quality work more slowly because of their fear.
3) Inability to predict release dates correctly, because Story Point inflation makes your release-planning math meaningless (a toy example of that math follows this list).
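That release-planning failure, in miniature, with toy numbers rather than anything from a real project:

backlog_points = 200          # backlog estimated before the inflation started
true_velocity = 20            # what the team actually finishes per sprint

inflation = 1.6               # new stories sized a notch or two higher
reported_velocity = true_velocity * inflation          # looks like 32

forecast_sprints = backlog_points / reported_velocity  # 6.25 sprints
actual_sprints = backlog_points / true_velocity        # 10 sprints

print(f"forecast {forecast_sprints:.1f} sprints, reality {actual_sprints:.1f}")

The team didn't finish anything faster; the denominator changed units while the numerator didn't, and every date computed from it is now fiction.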

Velocity has a purpose, and I'll write about it in a future post if there's interest. Code coverage has a purpose, and I'd be glad to go into more depth about that as well. But their purpose is not to measure team performance, and if you use them that way, you harm yourself and your teams in equal measure.

If you're going to use metrics, make sure you select metrics that are resistant to manipulation, and that closely measure whatever thing-in-itself you actually care about. And, remembering that we're Agile here, make sure you evaluate your metrics empirically. When a metric improves, does the project work better? Are your users happier when the metric is higher? Do you, as a manager, find correlations between the metric and how you feel about the project as a whole? Do you, as a developer, observe that the metric is high when the code is "good", and low when the code is "bad"?
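Here's one sketch of what "evaluate empirically" can mean, with made-up per-release history: line the candidate metric up against something you actually care about--escaped defects, support tickets, cycle time--and check whether they move together at all.

releases = [  # hypothetical history: candidate metric vs. escaped defects
    {"metric": 62, "escaped_defects": 14},
    {"metric": 71, "escaped_defects": 11},
    {"metric": 78, "escaped_defects": 12},
    {"metric": 80, "escaped_defects": 9},
    {"metric": 90, "escaped_defects": 4},
]

def pearson(xs, ys):
    # Plain Pearson correlation; no libraries needed.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [r["metric"] for r in releases]
ys = [r["escaped_defects"] for r in releases]
print(f"correlation with escaped defects: {pearson(xs, ys):.2f}")
# Strongly negative here, which is what you'd hope for. If real history
# gives you something near zero, the metric isn't tracking the
# thing-in-itself you care about, no matter how good it looks on a dashboard.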

But the best metric you'll ever find for your project is to take a couple of your team members out to lunch, ask them to chat with each other about the project, and then sit back and listen. Don't just listen to what they say, listen to what they don't say (remember the strange thing that the dog did in the nighttime). Ask open-ended questions if things start to taper off. Encourage them to vent their frustrations. And use your human skills to figure out where the actual pains and problems are, and then what you can do to fix them.

Good developers write good code if you make it possible for them to do so. Bad developers write bad code no matter what you do. "That government is best which governs least" (Thoreau) is nowhere more true than in Agile project management.

So remember: what you measure, improves. And improving a metric does not necessarily mean that anything at all has changed other than the metric. Choose your metrics, like your developers, rarely and with great care.

My name came up in a Google+ discussion about Agile, Scrum, and XP topics. Since I'm a professional Agile educator and consultant, I was both flattered and intrigued. Mostly, though, I post about Agile when something particularly interesting or noteworthy happens at work--and I'm on vacation now.

So if anyone has thoughts, questions, or inquiries about Agile that might get me started, please feel free to comment with them here, and we'll see if we can get some posts started churning around in my slightly-less-fevered-than-usual brain.

In other news, the Oregon Country Fair is awesome, and surprisingly Agile in its organization.

Take care,

Mickey.