REDI Updates 3: Issues Around Levelling Up Evaluation for Place-Based Interventions


Welcome to REDI-Updates. REDI-Updates aims to get behind the data and translate it into understandable terms. In this edition, WMREDI staff look at the government's flagship policy, Levelling Up, and the challenges of implementing, understanding and measuring it.

In this article, George Bramley outlines some of the key factors to consider when evaluating levelling up programmes. 

 View REDI-Updates.

When I joined the Department for Trade and Industry in 1997, I was tasked with designing and commissioning evaluations of the CWP2 (Second Competitiveness White Paper) Innovative Projects, which in hindsight supported what we would now call place-based interventions to support economic growth.

The programme involved local partnerships coming together, usually led by the local Training and Enterprise Council (TEC) – in some ways similar to Local Enterprise Partnerships – to identify specific local barriers to competitiveness and design services for local businesses to address them. CWP2 funded around 200 projects on a competitive basis.

My first challenge was getting Ministerial sign-off, because the programme had been developed by the previous administration. This involved making the case by video conference to the Minister’s Special Advisor that the evaluation was needed to capture learning from what we might now call place-based local innovations in business support.

I also worked up evaluation plans for CWP2’s successor programme, the Local Competitiveness Challenge (LCC), but we did not proceed with the evaluation because LCC failed the evaluability assessment. I then led a suite of evaluations of the Phoenix Development Fund, which supported innovative approaches to helping enterprises in disadvantaged areas and underrepresented groups to tackle social exclusion and economic inactivity.

Drawing on these evaluations from my formative years as an evaluator, and on the evaluations and business-case development I have undertaken more recently at WMREDI – including Pivot and Prosper, Connecting Communities and the Cultural and Creative Social Enterprise Pilot – I offer the following observations and reflections that might be relevant to the evaluation of programmes fitting under the levelling up umbrella:

  • Scale and size of place-based interventions: until recently, projects and pilots funded by UK government programmes and the European Regional Development Fund have tended to be relatively modest in the level and duration of funding. This has two implications. The first is scientific: the interventions are too small and too short-lived to detect an effect on the impacts sought. The second concerns the resourcing of evaluations to assess an intervention’s effectiveness. Applying the rule of thumb that the evaluation budget should be a percentage of programme spend – as currently applied in the Community Renewal Fund – means delivery organisations in receipt of grants often have insufficient funds to commission a meaningful and useful evaluation that supports their own learning or robustly assesses their own impacts. At a national level, the only realistic approach is a thematic one, in which specific lines of enquiry are explored based on the scheme objectives and the range of projects funded, analysis of management information, and case studies or deep dives.

  • Level of prescription: by which I mean the extent to which the funder prescribes the level of intervention required with individual beneficiaries for it to count as significant assistance, the types of activities to be delivered and the processes to be followed. From an evaluator’s perspective, higher levels of prescription make it easier to aggregate activities and outputs, but they can also work against the internal logic of the projects being funded, with the unintended consequences of suboptimal allocation of resources and barriers to innovation.

  • Failure to nurture place-based interventions as a pipeline of policy and service innovation: while some national and local interventions can be traced back through different funding initiatives, where delivery organisations have managed to navigate changing funding regimes, on the whole many promising innovations have fallen by the wayside. This is mainly due to the short-term, project-based nature of funding in response to immediate policy needs, which may have moved on by the time the programme providing the funding ends. It is rarely the purpose of national-level evaluations to recommend the continuation of locally designed innovative services or to propose that they are scaled up to the national level. That said, there are some notable schemes that have gone national from local beginnings, such as Aim Higher, which addressed barriers to participation in higher education by less advantaged young people, and the Small Business Leadership Programme.

  • Ability to aggregate impacts using a bottom-up approach: to be effective, place-based interventions must reflect local circumstances, resulting in subtle but sometimes significant variations in project rationales, objectives and the choice and definition of metrics used to measure progress. For this reason, attempts at common metrics can fail. LCC failed its evaluability assessment precisely because of its metrics. At the proposal stage, bidders were invited to choose from a menu of performance indicators against which they would like to be evaluated. A review of the metrics chosen revealed that they often did not relate to the aims and activities of the project and appeared to have been chosen on the basis of what the bid writers believed would tick the right boxes.

  • Reliance on case study methods: too often the most practical approach is to adopt a case study approach based on a representative sample of projects by size, geography and type. Depending on the resources available and the level of sophistication, the robustness of case studies can vary, ranging from a single in-depth interview to something approaching a freestanding evaluation drawing on multiple methods and evidence sources, including interviews with different stakeholders, analysis of management information and relevant documents, and, in some cases, evaluations commissioned by the project lead provider.

  • Who commissions, who is the evaluation most useful for, and how to create shared utility: who commissions an evaluation and who will use its findings may not always be the same. Where the evaluation is commissioned by the funder, its concerns may be narrowly defined around the programme objectives and whether these have been delivered for accountability purposes, rather than around capturing learning to inform the design of similar future interventions. This can mean there is little information on how previous similar programmes have worked. Yet such information is needed to produce better business cases to support investment decisions and, importantly, to improve the delivery of place-based interventions. There are traditions in evaluation, such as participatory and democratic evaluation, which more fully engage the range of stakeholders who need to be involved in place-based interventions concerned with introducing the changes necessary for levelling up.

  • Reverse engineering of successful projects to fit the funding available: too often projects that have demonstrated proof of concept need to change their design and modify their aims to secure further funding. Sometimes this validates the concept by showing that projects are genuinely adaptable to changing circumstances and strategic imperatives, but in other cases the changes to what is being delivered, and to whom, are so substantial that the successor project may not be recognisably linked to the original.

  • Pros and cons of a single pot: the adage used to be to follow the money when doing an evaluation, which is only truly possible when there is a discrete budget attached to an initiative. In practice, however, this results in lots of pots of money, which increases administration, reduces flexibility in allocating resources and makes for a less agile approach to managing the programme of activities required for genuine place-based interventions. This has resulted in repeated calls for single-pot funding that allows discretion over how funding is spent locally and supports devolution of responsibilities from central government. This works well where there are no budgetary pressures and money is devolved along with additional responsibilities. Otherwise, the result is unintended consequences, such as the defunding and scaling back of services (e.g. public health) that are needed to address health inequalities as part of levelling up.

  • Light touch monitoring leading to more evaluation: since around 2002, policymakers have reduced the monitoring data required of service providers in order to cut the administrative burden. While this was genuinely necessary, the focus has often been on minimum reporting requirements at the expense of optimum reporting requirements: gathering data that supports continuous improvement, enables timely corrective action and provides useful insight into how a programme is working and might be optimised. As a result, it is often necessary to commission evaluators to collect similar information under the guise of real-time evaluation, which can be very resource intensive to do well and difficult to deliver.

  • Benefits of supporting self-evaluation and learning by local delivery organisations: at City-REDI we have successfully used tools that support self-evaluation in place-based projects on a quarterly basis, providing a supporting narrative for management information returns. These tools are designed to allow project managers to reflect on how their project is progressing, identify future opportunities and actions, and capture information to support formative and summative evaluations.

You can also view the work from the Evaluation Lab here.

View REDI-Updates.


This blog was written by George Bramley, City-REDI / WMREDI, University of Birmingham.

Disclaimer:
The views expressed in this analysis post are those of the authors and not necessarily those of City-REDI, WMREDI or the University of Birmingham.

Sign up for our mailing list.
