Archaeological Evaluations

Indiana Jones and the Missing Baseline

I’ve conducted several reviews of international development organisations, looking across their portfolio of programmes to assess and compare impact. Each review has run into a key problem – nobody could tell us about any finished programmes. We were given enthusiastic reports about current and upcoming projects, but all the completed ones had been forgotten. It makes me feel a bit like Bill Murray in Groundhog Day, stuck in a permanent and repeating present.

This has really highlighted for me the problems of institutional memory in our sector. All too often, programmes are forgotten about as soon as they’re finished. If there is an evaluation, it’s conducted, filed, and then lost. This has two obvious negative consequences. Firstly, we never really learn from the past, as ‘lessons learnt’ are swiftly forgotten. Secondly, sustainability is rarely considered (and there are few incentives to consider it). Evaluations are almost all short-term, and nobody revisits the programme later – because nobody knows that there is a programme to revisit.

This isn’t an original critique. (In fact, Aidleap has made it in a previous post, though our proposed solution there was pretty unpopular.) But it is curiously unaddressed, both in the development world and (as far as I know) the academic world. So I propose a reasonably simple solution. We undertake some evaluation archaeology; we find and evaluate an old and forgotten programme.

First, we need to find a programme that finished ten or so years ago. Obviously records will be hard to come by, especially as this was in the early days of the internet, but hopefully there will be something etched on stone tablets in the depths of DFID’s archive. And then, like Bill Murray breaking out of his time loop, let’s go and evaluate it. We could see whether anyone still remembers the programme, whether any farmers use whatever technology it introduced, or whether it has had any impact on the workings of government and other systems today.

Obviously there are technical challenges in assessing impact. There is unlikely to be any baseline data, and certainly no control group. Recall (asking people what happened in the past) is dodgy at the best of times, and pretty meaningless after a ten-year gap.

However, with a good understanding of what the project was trying to achieve, plus secondary data and monitoring reports, it should be possible to find out something worthwhile about the sustainability of major programmes. For example, evaluation archaeology on a programme that set up self-help groups could establish whether any groups are still going after ten years, and what factors underlay success and failure. This is likely to tell you far more about what really works than an evaluation conducted straight after the end of the project.

Does anyone know of an evaluation which did this? Is anyone interested in collaborating on such a project?