ROUGH NOTES – Making Evaluations Work

Here are our rough notes on Making Evaluations Work. Please add your own comments in the main post!

P: There are lots of evaluations which are already publicly available (e.g. the World Bank alone publishes hundreds of evaluations yearly). One of the main problems with regard to evaluations is that very little learning is done from all the available evaluations: making more evaluations available won’t solve this problem. Part of the reason for this is that carrying out a systematic literature review in one area is a very time-consuming job (often about two years in academia, so presumably similar here?)

R: True. There are a lot of evaluations available – but bear in mind that these cover a huge range of countries and types of programme. Although a lot is published, the amount available on any individual subject (as almost all systematic reviews start by noting) is actually quite minimal. So having more available would still be valuable, I think. Especially as the World Bank tends to evaluate a relatively limited set of projects, in a relatively limited set of ways – not much humanitarian work, for example.

Systematic reviews tend to focus on quite a narrow type of evaluation (primarily those with control groups), so they tend to read a tonne of documents but include only a very small number. So I’m not sure how effective they are for many international development topics – but I’d be interested to hear more about that.

P: The system you suggest should lead to an increase in evaluation quality. But it would come at the expense of greatly increased evaluation review time. While this would be fine for actual impact evaluations, these seem to be a small minority… In some humanitarian contexts, the payoff might not be worth it. As much as it pains me, timely mediocre information might be preferable to good information which comes too late.

R: That’s an interesting and important point. There’s a definite trade-off there. I think in most cases the increase in quality would be worth the effort – but, as you say, not always.

P: I’m unsure one organisation would be able to handle processing the sheer number of evaluations which are being carried out…

P: I feel we’re carrying out way too many evaluations, often without much point. Fewer evaluations with a bit more learning might be a better way to go.

R: Agree with that as well. Or perhaps make the existing evaluations far more specific. Evaluations tend to be very general – evaluating the overall success of a project. It would be much better to have an evaluation which looked at one key, perhaps innovative, aspect of the project (how did the new accountability mechanism they used help the project? for example) rather than trying to give an overall judgement.

P: There are quite a few problems with academic peer reviews (such as biases, which you refer to in your blog) – just to note that these same problems would be carried over. Though I still find a system with some shortcomings better than no system at all.

R: For some of those problems, see

P: Funding might be an issue with some of your suggestions, given the small proportion of programme spend dedicated to evaluations in multilaterals.

R: This relates to your previous point about resources. Funding is of course a concern – but you certainly wouldn’t use this system on more than a proportion of existing evaluations. Given that lots of money is already spent on evaluations (even if it’s a small proportion of project spend), it would be worthwhile to do fewer, higher-quality evaluations and end up spending the same.

M: I would agree with both of you – that evaluations currently constitute poor knowledge management, and that a better system is needed to improve this. However, creating a system like you suggest runs the danger of adding too much work (a further layer of bureaucracy?), potentially diminishing the value of the subsequent output. Though as a peer-reviewed reference bank of evaluations it could perhaps work. It’s certainly an important issue and a good idea. I would raise the same concerns about the lack of proper contextual analysis actually built into programming, but that is a separate blog post…!
