Evaluation isn't boring - it has a PR problem
How places can turn evaluation from a box-ticking exercise into a tool for lasting power

Evaluation has a PR problem and it is costing places the very thing they are working so hard to build: a compelling case for why they should be trusted with more power, more funding, and more room to do things differently. Somewhere along the way, evaluation became the thing bolted on at the end of a project. The box ticked when applying for a grant. The report that is skimmed over. Something that is produced to satisfy a funder and then never looked at again. It has become synonymous with bureaucracy, data collection for its own sake, and looking backwards rather than forwards.
Fundamentally, this is a misunderstanding of what evaluation actually is and what it can do. In an era where devolution is progressing, places are taking risks to do things differently, and initiatives such as Test, Learn and Grow have been launched, getting evaluation right is not a nice-to-have, it is an essential piece of the puzzle.
Evaluation as case-making
At its core, evaluation is the process of understanding whether something worked, why it worked (or didn't), and what that means for the future. However, it should never be a passive exercise. A better way to think about evaluation is as case making. It is a tool that people can use to demonstrate impact and build the argument for why they should be given the powers and funding to keep going, to scale up, and to do more.
Different questions need different approaches, and there is no single type of evaluation: it can adapt to the context of the intervention, the place, and the resources available. Nor does it need to be boring; creative approaches can still generate rich insights.
Evaluation as storytelling
In one evaluation I designed for a major workforce transformation project in an NHS trust, alongside collecting quantitative data, I facilitated workshops during the staff training sessions in which participants used the Pixar storytelling framework to write stories about the changes they were implementing and their real-world impact. These narratives not only gave participants a structured way to reflect but also helped them connect more deeply with the purpose of their training. I used these stories to shape how I presented the project's findings, so that the results reflected staff voices.
The quantitative data, collected through surveys and organisational data, provided the core evidence of change, but the stories brought the numbers to life for the senior team when we reported back, because they grounded the impact in the everyday realities of the hospital. These stories gave the senior team a richer way to communicate the lived experience of impact and real-world value of the training when making their case for the success and continuation of the project.
Evaluation as part of the work
One of the biggest barriers to good evaluation is the assumption that doing it properly requires significant resource. Sometimes it does: impact evaluations of major interventions need to be robust. But when evaluation is built into how work is designed from the start, it becomes part of the work itself rather than an expensive bolt-on. This means being clear from the beginning about what the work is trying to achieve and what data will need to be captured along the way. Answering these questions early not only lays the groundwork for a good evaluation but also builds a more robust project overall. An evaluation designed in from the beginning is also more valuable to the project, because it can inform decisions while there is still time to make them. This is something we have previously advocated for at the Growth and Reform Network, and it is vital to our mission of building the evidence base around inclusive growth and public service reform to support better policy making and help places make a stronger case for change.
Why it matters more now than ever
Currently, devolution has real momentum: devolution deals are being struck and trailblazers have been named as testbeds for new approaches to employment. Combined authorities are piloting new models of delivery with ambitions to prove that local leadership produces better outcomes than Whitehall centralisation.
All of this is genuinely exciting work, yet without evaluation there is no way to prove whether any of it succeeds. When a combined authority goes back to government to make the case for more powers or more money, the argument must be evidenced. What moves the dial is being able to say what has been done, what changed, how the approach caused that change, and what the next step would be to take it further. In this sense, evaluation is a critical political and strategic asset: it is the difference between a pilot that quietly ends when funding runs out and one that becomes standard, embedded practice.
Reframing evaluation as an act of ambition
Being serious about long-term change means treating evaluation not as a burden but as a tool of advocacy. Evaluation is not just about measuring impact; it is about making a compelling argument for why the work matters and why it should be given lasting power. That means designing projects with the end argument in mind.
The most effective interventions build in the means to evidence their impact and align data collection with the story they ultimately need to tell. At its best, an evaluation can give the work longevity, credibility, and leverage to support more ambitious interventions in the future.
The core takeaway is simple but can make a big difference: we need to reframe evaluation from an obligation into an exciting opportunity to answer the question 'what story do we want to tell, and how will we prove it?'
GRN blogs and insights
Browse other GRN blogs and insights in inclusive growth and public service reform across the UK: