Those of you who know me personally know that nothing gets me on my soapbox quicker than a discussion about measuring productivity. Just over the last week I have been asked three times how to measure it in Agile. Here are my thoughts.
Let’s start with the most obvious
Productivity measures output, not outcome. The business cares about outcomes first and output second; after all, there is no point producing Betamax cassettes more productively than a competitor if everyone is buying VHS. Understandably, it is difficult to measure the outcome of software delivery, so we end up talking about productivity. Having swallowed this pill and being unable to give more than anecdotal guidance on how to measure outcomes, let's look at productivity measurements.
How NOT to do it!
The worst possible way that I can think of is to do it literally based on output. Think of widgets or Java classes or lines of code. If you measure this output, you are at best not measuring something meaningful and at worst encouraging bad behavior. Teams that focus on creating an elegant and easy-to-maintain solution with reusable components will look less productive than the ones just copying things or creating new components all the time. This is bad. And think of the introduction of technology patterns like stylesheets: all of a sudden, for a redesign, you only have to update a stylesheet and not all 100 web pages. On paper this would look like a huge productivity loss, updating one stylesheet versus updating 100 pages. Innovative productivity improvements will not be accurately reflected by this kind of measure, and teams will not look for innovative ways given they are measured on something else. Arguably function points are similar, but I have never dealt with them, so I will reserve judgement until I have had firsthand experience.
How to make it even worse!
Yes, widget- or line-of-code-based measurements are bad, but it can get even worse. If we base our measurements on this, we do not incentivize teams to look for reuse or componentization of code, and we are also in danger of destroying their sense of teamwork by measuring what each team member contributes. "How many lines of code have you written today?" I have worked with many teams where the best coder writes very little code, because he is helping everyone else around him. The team is more productive with him doing this than with him writing lots of code himself. He multiplies the team's strength rather than linearly growing its productivity by doing more himself.
Okay, you might say that this is all well and good, but what should we do? We clearly need some kind of measurement. I completely agree. Here is what I have used and I think this is a decent starting point:
Delivered functionality – You can do this by measuring either how many user stories or how many story points you deliver. If you are not working in Agile, you can use requirements or use cases or scenarios; anything that actually relates to what the user gets from the system. This is closest to measuring outcome and hence the most appropriate measure. Of course these items come in all different sizes and you would be hard-pressed to strictly compare two data points, but the trending should be helpful. If you do some normalization of story points (another great topic for a soapbox), that will give you some comparability.
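To make the trending point concrete, here is a minimal sketch of tracking delivered story points per sprint with a rolling average to smooth out sprint-to-sprint noise. The sprint names and numbers are hypothetical illustration, not real data.

```python
# Hypothetical story points delivered per sprint.
delivered = {
    "Sprint 1": 21,
    "Sprint 2": 18,
    "Sprint 3": 26,
    "Sprint 4": 24,
}

def rolling_average(points, window=3):
    """Smooth sprint-to-sprint noise so the underlying trend is visible."""
    values = list(points)
    result = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        result.append(sum(chunk) / len(chunk))
    return result

trend = rolling_average(delivered.values())
```

The individual data points are not directly comparable, but a rising or falling rolling average over several sprints is a more defensible signal than any single number.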
Waste – While it is hard to measure productivity and outcomes, it is quite easy to measure the opposite: waste! Of course you should contextually decide which elements of waste you measure, and I would be careful with composites unless you can translate them to money (for example, "all the waste adds up to US$3 million" rather than "we have a waste index of 3.6"). Composites of elements as diverse as defects, manual steps, process delays and handovers are difficult to understand. If you cannot translate these to dollars, just choose 2 or 3 main waste factors and measure them. Once those are under control, find the next one to measure and track.
Cycle time – This is the metric that I would consider meaningful above all others. How long does it take to get a good idea implemented in production? You should use the broadest definition that you can measure, then break it down into sub-components to understand where your bottlenecks are and optimize those. Many of these will be impacted by the level of automation you have implemented and the level of lean process optimization you have done.
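Breaking cycle time into sub-components can be as simple as recording a timestamp at each stage and computing the durations between them. The milestone names and dates below are hypothetical; in practice they would come from your backlog and deployment tooling.

```python
from datetime import datetime

# Hypothetical milestones for one feature's journey from idea to production.
milestones = {
    "idea_logged":  datetime(2014, 1, 6),
    "dev_started":  datetime(2014, 1, 20),
    "dev_finished": datetime(2014, 1, 28),
    "deployed":     datetime(2014, 2, 3),
}

def stage_durations(ms):
    """Split total cycle time into per-stage durations (in days) to expose bottlenecks."""
    names = list(ms)
    return {
        f"{names[i]} -> {names[i + 1]}": (ms[names[i + 1]] - ms[names[i]]).days
        for i in range(len(names) - 1)
    }

stages = stage_durations(milestones)
total_days = (milestones["deployed"] - milestones["idea_logged"]).days
```

In this illustration, the two weeks between "idea_logged" and "dev_started" dwarf the deployment delay, so that queue, not the build pipeline, would be the first bottleneck to attack.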
This is by no means perfect. You can game these metrics just like many others and sometimes external factors influence the measurement, but I strongly believe that if you improve on these three measures you will be more productive.
There is one more thing to mention as a caveat. You need to measure exhaustively and in an automated fashion. The more you rely on just a subset of work, and the more you manually track activities, the less accurate these measures will be. This also means that you need to measure things that don't lead to functionality being delivered, like paying down technical debt, analyzing feature requests that never get implemented, or defect triage. There is plenty of opportunity to optimize in this space: paying technical debt down quicker, validating feature requests quicker, reducing feedback cycles to reduce defect triage times, and so on.

For other posts of the Agile reporting series look here: Agile reporting at the enterprise level – where to look? (Part 1 – Status reporting)