Measurements have value. Measure something over time and we have a baseline. Monitor that measurement and we can see problems before they become obvious. Experiment with minor changes, and we can compare to the baseline to tell if we have improved or hurt our performance. Collect enough measurements and trends will emerge, exposing a layer of information we didn’t have access to before.
“If you cannot measure it, you cannot improve it” – Lord Kelvin
But this isn’t just relevant to our servers, our software, and our support systems; it’s also relevant to ourselves and our work.
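As a minimal sketch of the baseline-and-monitor idea (Python, with made-up numbers; the two-standard-deviation threshold is an arbitrary assumption), track a daily metric and flag the days that stray too far from history:

```python
from statistics import mean, stdev

# Hypothetical daily values of any metric we track
# (tickets closed, failed jobs, build minutes, ...).
history = [12, 14, 11, 13, 15, 12, 14, 13, 12, 30]

# Everything before today is the baseline.
baseline, today = history[:-1], history[-1]
avg, sd = mean(baseline), stdev(baseline)

# Flag today if it strays more than two standard deviations
# from the baseline average (the threshold is arbitrary).
if abs(today - avg) > 2 * sd:
    print(f"Investigate: {today} vs baseline {avg:.1f} ± {sd:.1f}")
```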
Ongoing Measurements
Often the first measurement we consider when looking at our own work is productivity: how many tickets we closed in a day, how many lines of code we produced, how many hours we spent on a vital task. But productivity is just one option available to us.
Productivity
The overwhelming positive of measuring productivity is that everyone will understand what we’re measuring. They may argue about the unit, the targets we chose, or the brand of graph paper we drew on, but they understand that we are attempting to show a number that represents how much work we are getting done.
Examples:
- Tech: How many help tickets we completed
- Developer: How many lines of code we wrote
- BA/Product Owner: How many features/user stories/contracts we defined
- SQL Admin: How many restores were tested
- Sys Admin: How many vendor calls were dodged (hehe)
- Manager: How many roadblocks were cleared for staff
Productivity is the easiest measure.
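It’s also the easiest to compute. A minimal sketch, assuming a hypothetical log of closed tickets:

```python
from collections import Counter

# Hypothetical ticket log: (ticket_id, date_closed).
closed = [
    ("T-101", "2024-03-04"), ("T-102", "2024-03-04"),
    ("T-103", "2024-03-05"), ("T-104", "2024-03-05"),
    ("T-105", "2024-03-05"),
]

# Productivity as a simple count of tickets closed per day.
per_day = Counter(day for _, day in closed)
for day, count in sorted(per_day.items()):
    print(day, count)
```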
Effectiveness
What’s a better measure than how much work we get done? How about the amount of work that was completed right the first time? Or the amount of work that went beyond the surface problem? Or the lasting effect of prior work? Measuring effectiveness can be tricky, but it is essentially the opposite of busy work, and in some cases it may even combine several ongoing measurements.
Examples:
- Tech: Average number of repeat/follow-up tickets on the same problem
- Developer: End user features completed and signed off
- BA: End user features completed and signed off
- Sys Admin: Rate of critical failures/server over time
- Manager: Average tenure of staff
When we measure effectiveness, we look at the result instead of the work. We create a definition of positive results (fewer outages/server, fewer tickets/problem, less rework and higher customer acceptance, etc.) and then measure our impact against those goals. It’s a shift from measuring what we’re doing to measuring what we’re achieving.
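As one sketch of turning such a definition into a number, here is the repeat-ticket measure from the list above, assuming (hypothetically) that each ticket is tagged with the underlying problem it belongs to:

```python
from collections import Counter

# Hypothetical tickets tagged with their underlying problem;
# repeats on the same problem signal rework, not progress.
tickets = ["printer-a", "vpn", "printer-a", "vpn", "vpn", "wifi"]

counts = Counter(tickets)
repeats = sum(c - 1 for c in counts.values())
fixed_first_time = sum(1 for c in counts.values() if c == 1)

print(f"Problems: {len(counts)}, repeat tickets: {repeats}")
print(f"Fixed the first time: {fixed_first_time / len(counts):.0%}")
```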
Estimate Variability
What’s the impact of taking twice as long as originally estimated to get a task done? Yeah, screamy people. When we provide estimates, people tend to make their own plans based on them. The further we drift from the estimate, the more it costs them to change their plans, disrupting the estimates they in turn provided to others. On-time delivery is not just a FedEx measure; it’s also a measure of how well we work to our own estimates, and it reinforces the trust the receiver extended to us.
Examples:
- Tech: Completion of PC builds on time
- Developer: Completion of features to the estimated time
- Project Manager: Completion of deliverables or phases on time/budget
- SQL Admin: Completion of new environments to estimated time
- Sys Admin: Completion of upgrade outages to estimated time
- Manager: Ongoing execution against financial budget
Variability is about trust and the snowball effect. When we finish too early, people are not ready for us, and they will look late on their own tasks even when they are perfectly on time. Finish too late and we drive all the later tasks to be late as well. In both cases we have affected how people will see our future estimates and how much they will trust our ability to deliver on time. The more consistently we deliver against our estimates, the less risky it is for stakeholders to build their own plans around that delivery, and the less time is required to ensure and reassure them that we are still on schedule (meetings, meetings, graphs, and meetings), ultimately saving us even more time.
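One simple way to put a number on that consistency is the actual-to-estimate ratio per task. A sketch with made-up hours; the 10% tolerance is an assumption, not a standard:

```python
# Hypothetical (estimated_hours, actual_hours) pairs per task.
tasks = [(8, 9), (16, 15), (4, 8), (24, 26), (8, 8)]

# A ratio of 1.0 means dead on the estimate; above 1.0 is late,
# below is early, and both directions erode trust.
ratios = [actual / estimate for estimate, actual in tasks]
avg_ratio = sum(ratios) / len(ratios)

# "On time" here means within 10% of the estimate; pick a
# tolerance your stakeholders actually agree to.
on_time = sum(1 for r in ratios if abs(r - 1) <= 0.10)

print(f"Average actual/estimate ratio: {avg_ratio:.2f}")
print(f"Within tolerance: {on_time} of {len(tasks)} tasks")
```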
Lead Time
Lead time is the average time elapsed from receiving a request to delivering the completed work. How long do people have to wait to get what they need? This is another measurement that could be recorded in screamy people/task.
Examples:
- Tech: Average lifetime of request tickets
- Developer: Average time to correct application bugs
- Project Manager: Average time to answer questions about the project or project status
- SQL Admin: Average time to respond to new requests
- Sys Admin: Average time to deliver a new service
- Manager: Average time to deliver budgeted estimates for work requests
Lead time measures how quickly we respond, and it’s an excellent number to target with improvement efforts. When lead times get long, you start hearing about how unresponsive the IT department is and how it never gets anything done. Keep lead times short and the IT department is responsive, friendly, busy. Quick lead times also reduce the occurrence of shadow IT and the prevalence of do-it-yourselfers.
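A minimal lead-time sketch, assuming hypothetical request records with opened and delivered timestamps:

```python
from datetime import datetime, timedelta

# Hypothetical request records: (opened, delivered).
requests = [
    ("2024-03-01 09:00", "2024-03-01 16:30"),
    ("2024-03-02 10:15", "2024-03-04 11:00"),
    ("2024-03-03 08:30", "2024-03-03 09:45"),
]

fmt = "%Y-%m-%d %H:%M"
lead_times = [
    datetime.strptime(done, fmt) - datetime.strptime(opened, fmt)
    for opened, done in requests
]

# Average lifetime of a request, start to finish.
average = sum(lead_times, timedelta()) / len(lead_times)
print(f"Average lead time: {average}")
```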
Measure twice…
Often when we decide we need measurements, we stop at “how much work is getting done”, but productivity is not the only measure. Selecting the right measurements depends on our environment, our business, and what we intend to achieve.
“What gets measured, gets managed” – Peter Drucker
Measuring our progress or current state can be difficult; often those numbers are less than spectacular. Poor numbers can make us feel worse about the current state, but the state hasn’t changed; we’ve simply gotten a clearer picture of it. All of the area above that measurement is opportunity.