Measuring success: why is it so hard to do well?
It’s something I see all the time: a problem statement defined, a solution designed, a plan devised, teams assembled, and products and skills delivered to solve the challenge. But are we building in a framework to understand whether the solution really worked? Did it move the needle, and by how much?
These questions are often ranked second to more obvious benchmarks for success and tend to be overlooked early in the design process. To gain ground on objectives, it’s crucial to establish them meaningfully in the first place, as well as to establish robust methods of measuring progress against them.
A common question asked by agencies to clients is: what does success look like to you? That can be a harder question to answer than it seems. How would you answer? Can you articulate your response in terms of your customers and your business? If you've got that part nailed down, then great, but here comes the really important question: are you tracking the metrics that actually contribute to your goals?
Monitoring the right metrics can help to:
- Identify current performance levels in key areas
- Gauge progress on achieving business objectives and fulfilling the corporate mission
- Focus business units on driving growth in key areas
- Drive accountability across the organisation
So if the benefits of the right metrics are so clear, why does the question of measurement keep coming up, and why is it so hard to answer? I’ve spent a lot of time researching this subject and I have some ideas to help you identify where things can go wrong and, more importantly, what you can do to get things right.
We’ve already established it’s essential to measure the right things, not just the things right in front of us. To start meaningfully benchmarking success, clients should start thinking about outcomes, not just outputs, from day one. I often speak to team leaders, sponsors and senior stakeholders who can tell me clearly about business strategy objectives, but when asked how those translate into the outcomes the business wants, it gets a bit murky… A lot of this is down to the fact that outcomes are hard for businesses to quantify and measure, so they reach for monetary performance indicators such as:
- Total revenue
- Revenue by product or product line
- Market penetration
- Percentage of revenue from new business
- Percentage of revenue from existing customers (cross-selling, upselling, repeat orders, expanded contracts, etc.)
- Year-over-year growth
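Part of the appeal of these indicators is how trivially they can be computed, which is worth seeing concretely: a few lines of arithmetic produce tidy percentages, yet none of them says anything about cause and effect. A minimal sketch, using entirely invented figures:

```python
# Hypothetical revenue figures (all values invented for illustration).
revenue = {"2023": 1_200_000, "2024": 1_380_000}
new_business = 310_000  # assumed revenue from first-time customers in 2024
existing_customers = revenue["2024"] - new_business

# Year-over-year growth: relative change from the prior year.
yoy_growth = (revenue["2024"] - revenue["2023"]) / revenue["2023"]

# Revenue mix: share of this year's revenue from new vs existing customers.
pct_new = new_business / revenue["2024"]
pct_existing = existing_customers / revenue["2024"]

print(f"Year-over-year growth: {yoy_growth:.1%}")
print(f"Revenue from new business: {pct_new:.1%}")
print(f"Revenue from existing customers: {pct_existing:.1%}")
```

The numbers come out clean (15.0% growth, a 22.5% / 77.5% revenue split), but nothing in the calculation tells you *why* revenue moved, which is precisely the limitation discussed next.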
As the Harvard Business Review has noted, the types of data mentioned above are neither persistent nor predictive: they don't reveal cause and effect, and therefore have little bearing on strategy or even on the broader goal of earning a sufficient return on investment. It's common to see companies caught in a cycle where the goal becomes about hitting one particular KPI. Goodhart’s law states: “when a measure becomes a target, it ceases to be a good measure”. In other words, when we set a specific goal, people will tend to optimise for it, regardless of the consequences. This leads to problems when other, equally important, focus areas are neglected.
Another outcome conundrum that often contributes to the noise surrounding measuring success is that in the case of most modern digital products, the user is part of the outcome. This requires businesses to really think about what the user themselves is trying to achieve, rather than just what the business wants them to do. By doing this, you will derive a different kind of value from your product, as opposed to the common fallback of relying on something linked to the bottom line.
“Most companies have made little attempt to identify areas of non-financial performance that might advance their chosen strategy. Nor have they demonstrated a cause-and-effect link between improvements in those non-financial areas and in cash flow, profit, or stock price.”
Ittner and Larcker – Harvard Business Review
Agencies like us exist to help you see the wood for the trees, identify meaningful areas of growth, and define what real success will look like, beyond profit.
We work with Richmond Housing Project, whose goal was to reduce contact volume to their customer service centre. So what did we do? We mapped their core services end to end, overlaid the user needs of their customers, and fed in qualitative research that highlighted where the current processes were letting internal and external users down. By getting under the skin of their customers' needs and pinpointing where interaction and completion broke down, we identified core improvements that linked meaningfully to their objectives.
The process highlighted multiple factors contributing to satisfaction, and we are now starting to plot out ways to assess value versus cost across their ecosystem. We helped to uncover value that might have gone unnoticed had we gone in looking only for the obvious indicators of success.
So, to boil it down, my advice:
- Don't forget, your journey and your goals are only as good as you make them.
- Before you can solve a problem, you must work out how you’ll know whether your solution has worked.
The same trap exists in software delivery. There are any number of tools to analyse code for you, even write it for you. But if reducing cognitive complexity, avoiding potential null reference exceptions, and hitting 80% unit test coverage become the only priorities, richer practices such as peer review stop being the focus: Goodhart's law in action.