Working with the Evidence Agenda


May the fires of hell burn the randomista!

The international development sector is increasingly obsessed with evidence, results, and value for money. Most aid workers have horror stories about this trend: being told that ‘empowerment’ is not a SMART indicator, asked to quantify the value of training, or struggling to fit a complex project into a rigid proposal format.

This ‘evidence agenda’ has driven a fierce debate. Proponents claim that a focus on results increases the quality of aid programmes, and ensures that scarce resources are well used. Critics argue that donors over-emphasise quantitative, experimental evidence – typified by the increasing role of randomised control trials. They believe that this encourages programmes to focus on easily countable outputs, such as distributing buckets or injecting babies. It prevents staff from working on long-term, more transformative development, which might not be easily counted or photographed.

These fascinating questions have been discussed in depth on FP2P’s Famous Wonkwar and the Big Push Forwards conference. However, these debates have a tendency to become stuck in a loop, preaching to the converted and damning the unbelievers.

With this in mind, I present five recommendations for the vast majority of aid workers stuck in the middle. They accept the right of donors to demand accountability – but are worried that this might undermine accountability to governments and communities in the countries where they work. They understand the importance of effective monitoring and evaluation – but struggle under the weight of unrealistic or downright absurd requirements.

First, assume good faith. Both sides of the debate have a tendency to demonise their opponents. But proponents of evidence-based approaches aren’t trying to return Africa to a colonial state, as is sometimes sinisterly implied. Similarly, critics of quantitative research aren’t trying to hide a terrible aid programme. Assume that the person you’re talking with is genuinely interested in doing the best job they can, and everything will go much better.

Second, find a common language. The development sector has enough jargon, without inventing more for the purposes of this argument. Many critics of the evidence agenda root their argument in concepts of different epistemologies and forms of knowledge, which don’t help anyone outside a narrow academic background. The same goes for economists who wield quasi-experimental jargon like a battering ram. Keep language accessible, and tell off others when they don’t do the same.

Third, pick your battles carefully. You may think that ‘counting beneficiaries’ is an insulting, top-down simplification of the complexity of your programme. It doesn’t matter – in all likelihood, you’ll still have to do it. Save your effort for more important arguments, such as when you’re asked to restructure your programme to fit donor priorities, or your advocacy project is rejected based on a cost-benefit analysis.

Fourth, find common ground. Stereotypes about narrow-eyed, viciously-focused calculator-wielding economists are seldom fair. The maestros of randomised control trials – Abhijit Banerjee and Esther Duflo – make plentiful use of qualitative data, continually trying to understand why interventions do or don’t work. Almost all development workers are well aware of the limitations under which they work. Be honest about the problems with the tools you use, and you’ll often find that you’re not alone.

Finally, distinguish between inappropriate tools and appropriate tools used badly. In general, even the worst methodologies have some value. Seek it out, and use it to your advantage. For example, logical frameworks have a terrible reputation – but for simple programmes, they’re excellent ways to focus attention on a single goal and achievable outcomes. Even for complex programmes, rigorous thinking can be helpful in clarifying exactly what you want to achieve. The problems come when they cease to be tools, and become an unchangeable statement of what the project aims to achieve. Try to recognise what is and isn’t valuable, and use it accordingly.

NOTE: We normally put rough notes on all our posts – the discussion between the Aidleap team prior to posting. Unfortunately this time they were too busy to develop rough notes – so please add your own comments below and we’ll reply.

10 thoughts on “Working with the Evidence Agenda”

  1. Great piece. We hit on a few of these barriers among the speakers at our “Knowledge and Power in Development Policy” conference at the LSE earlier this month — lack of good faith, common language, etc — but also some potentially interesting “ways forward” per your suggestions above.

    On your last point about tools, you might like this parallel quote from DFID’s Chris Whitty, who spoke in the morning session: “There’s a lot of development research that asks the wrong questions, or asks good questions with bad evidence to back it.”

    Another of our speakers, Dr. Hakan Seckinelgin, presented a possible model to move past one of the ongoing logjams between academics and policymakers — that academics tend towards specific, limited findings while policymakers are desperate for evidence that can be applied to policy decisions more generally. Here’s the short form: before applying good research in a new context, ask these three questions:

    1) Don’t just say an initiative “worked”. Ask why.
    2) Are the reasons why a development policy worked in country X available in country Y (where you want to apply it now)?
    3) Is the development problem the policy is trying to solve in country Y the same as it was in the country it first “worked” in?

    The full Storify of the session is here; Chris Whitty spoke first, Dr. Seckinelgin last: http://storify.com/JSRP/knowledge-matters-thinking-about-change

  2. I think this is a great post with some important points well expressed. In all sorts of areas in relation to development there is a need for people to forge new alliances (across government, donors, private sector, civil society) and these are good guiding principles. Plus they can also apply to improving collaborations that should work a lot better than they actually do.

  3. Very useful recommendations. I would probably be characterized as a friendly critic of the evidence agenda. (See http://www.how-matters.org/2013/05/28/preserving-the-i-dont-know-within-big-data/ and http://www.how-matters.org/2010/11/17/161-indicators/) Given that my background is M&E and organizational learning, I don’t argue that more rigorous techniques have no place, especially for larger, publicly-funded projects. But the reason I advocate for less reductive and more holistic methods (especially the smaller the funding) is that the quantitative advocates already have a hold on the M&E “market.” I myself am a geek who loves a data set, albeit one who worries about the trickle-down effect of these approaches on local leaders. Conversations about the evidence agenda are important because in my experience, listening, ownership, intuition, and common sense often end up playing the largest role in the success of development programs.

    Check out this discussion on logframes. It highlights very well why these recommendations are important: http://www.how-matters.org/2012/02/26/logframes-errrgh/

  4. This is an excellent and balanced piece. I particularly like point 1 – I think the debates in this area tend to get very polarised and vicious (I have had hate mail from people after posting that RCTs can be a useful form of evidence!) and that doesn’t help anyone. If we could do as you suggest in point 4 and find common ground first and then calmly discuss differences, I suspect we could have a more useful interchange of ideas.

  5. Pingback: Chapter 2…in which kirstyevidence meets a randomista! | kirstyevidence

  6. Pingback: Working with the Evidence Agenda | Strategy, Innovation and Evaluation

  7. Pingback: More good stuff on overhead ratios and “worst charities” – 9-9-13 | Nonprofit update

  8. Pingback: Reflection on 2013: Merci beaucoup | AID LEAP

  9. Pingback: Why Expats? | AidSpeak

  10. Pingback: DFID’s Private Sector Development Programme Receives Poor Rating from Aid Watchdog | AID LEAP
