I spent this New Year's Eve in a pub with an old friend. We’d covered the normal topics of conversation – the love-lives of mutual friends, why our nostril hair is sprouting, how everything was better when we were young. Conversation shifted to our work, and I started rambling about my life as an M&E consultant – and about how rarely the charities I work with actually use the monitoring data they collect.
“Why is that?” my friend asked. “That seems completely stupid.”
And I was stumped for an answer. Development charities are full of bright, enthusiastic, often very nice people, doing amazing things with limited resources. Why are they so slow to use monitoring data that could improve the effectiveness of their work? I spluttered a bit, then changed the topic, and we spent the rest of the evening trying to balance a pint glass on a spoon. But it is an important question, and I’ve spent some time since then thinking about why we are so slow to learn from monitoring data. Here is a short, non-exhaustive list of reasons I’ve encountered during my time in the sector.
1) Monitoring frameworks are useless. One reason why many programmes don’t use monitoring data effectively is that the data they collect is rubbish. Too many monitoring frameworks are geared towards measuring a small number of quantitative indicators, sometimes of little relevance to the programme. If the monitoring framework isn’t encouraging programmes to gather and think about a wide range of relevant data, it’s no surprise that they don’t learn much from it.
2) Pressure to spend. Incentives are generally set from the top, whether by donors, the board of trustees, or an organisation’s senior management. If the pressure is primarily to spend money or to conduct activities on time, it is no surprise that there is little interest in learning about whether those activities were successful or not. I worked in one humanitarian organisation which was spending money too slowly – a terrible sin in the aid sector. I remember the team leader strutting up and down in a meeting, waving a folder of paper above his head. “You need to spend, people!” he yelled, like a bearded Gordon Gekko. “Get out there and move some money!” Not exactly calculated to inspire thoughtful, reflective practice.
3) Short projects. Even without the clear management dysfunction described above, short-term projects often leave little time for staff to really learn from monitoring information. Imagine that you’re implementing a three-year project. You probably spend at least a year setting up, finding teams, and running through an initial cycle of activities. If you find that this initial cycle of activities wasn’t particularly effective – perhaps you used the wrong partner, worked in the wrong place, or were targeting the wrong problem – this doesn’t leave you much time to fix it. Revising the programme could take another six months, which would mean that you’re halfway through the project without having achieved anything. In this situation, most programmes prefer to ignore any evidence that things are going badly, and plough on regardless.
4) Complex change processes. It sounds obvious, but learning from monitoring data requires some kind of process that allows organisations to feed this learning back into performance. This learning loop is often dysfunctional: revising plans and strategies takes so much work that it’s easier not to bother. A prime example is DFID’s use of the logframe, the document that sets much of the strategic direction for their programmes. Although DFID guidance allows for – indeed, theoretically encourages – revision of the logframe, in practice it’s a massive pain to revise. By the time any changes have gone up through the organisation, been reviewed, argued over, and reviewed again by more senior people who give completely different advice, it just isn’t worth the bother. So although staff on the ground may be learning from monitoring data, there is no real process for this to feed back into the overall programme strategy. (Although this may be changing.)
5) Not having a clue how to make things better. Finally, one of the key reasons development programmes don’t improve based on monitoring data is that they genuinely don’t have a clue how. Development is a tricky business, and programmes typically aim to do ridiculously ambitious things. Developing health systems, promoting economic development, and providing a decent education are all issues that developed countries have wrestled with for centuries – and they still don’t do a great job of them. So if your monitoring data shows that the health system isn’t being strengthened at all, it could well be that the programme simply has no idea what to do differently, and so continues doing the same thing that it knows isn’t working.
Ultimately, good use of monitoring data comes down to strong leadership. Senior management needs to understand the importance of monitoring, and put resources and time into it accordingly. They need to resist organisational incentives to simply spend money or rush projects through, and actually care about what they’re doing. And they need to have a clear idea of what they can do, or inspire others to develop one, and not be afraid to close down projects when necessary.
See a great response from Elina Sarkisova: “Could Paying for Results (finally) Help us Learn?” (and if you’re a real aidleap fan, then scroll down to the bottom to see our response to her response…)
And also see our previous blog in this series: What have indicators ever done for us?