There is no shortage of indicators in the development sector. Monitoring and evaluation manuals typically stress their importance; they’re referred to as “the backbone of M&E systems”, which “enable reliable monitoring and evaluation.” As a result, they are demanded by donors, supplied by programme staff and consultants, and misused by more or less everyone.
An indicator is a simple idea: a clearly defined, measurable metric. Progress against this metric will ‘indicate’ the performance of the programme. Typically, indicators are defined in advance, and targets are set in order to see whether the programme is on track. Used correctly, indicators can indeed be a valuable tool. However, they need to be seen as one component of a good monitoring and evaluation system, rather than its ‘backbone’. There are three reasons for this: indicators capture only a small portion of what you should measure, they are not good at capturing qualitative data, and they ignore unexpected information.
Firstly, pre-defined indicators capture only a small portion of what you should monitor. Most M&E manuals suggest that you pick a few key indicators, but this risks blinding you to other important and available information. Even a short survey will gather information against tens or hundreds of potential indicators. For example, imagine that you run a survey to assess satisfaction with a health centre. A typical indicator might be ‘average satisfaction rate with the health centre’. However, this captures only a small part of the story. You should be interested in how satisfaction rates vary with gender, age, time, or location. You might be interested in the distribution as well as the average: for example, are there some very dissatisfied clients, and why? Of course, indicators can be disaggregated, and more indicators developed to capture these additional factors. However, sets of indicators swiftly become too big to be useful, without ever capturing everything that you might want to examine. The key to good monitoring is flexible, imaginative use of data, not simply reporting against pre-defined indicators.
Secondly, indicators are not good at capturing qualitative data. This is not for want of trying: the OECD evaluation glossary, for example, suggests that indicators should be both qualitative and quantitative. Qualitative indicators may well be very useful in theory, but in five years working in monitoring and evaluation I’ve never seen one that made sense. Qualitative indicators tend to fall into one of two categories. The first presents quantified opinions, such as ‘90% of participants believe that the training course was great’. This can be very useful, but it is not actually qualitative data. Alternatively, qualitative indicators can become so vague as to be useless, such as ‘participants were satisfied with the training course’. Good qualitative data is, however, essential for understanding the progress, sustainability, and success of a project. Since an indicator presents a single variable along which views can vary, it does not provide the open framework needed to collect good qualitative information.
Finally, indicators – by their nature – can only capture what you’ve predicted in advance. They can consequently miss unexpected changes or challenges. To continue the healthcare example, a good monitoring system should pick up whether local beliefs conflict with the health services provided. It should help to build understanding of how global events – such as the recent outbreak of Ebola – impact on your work. All of these could be essential to monitor, but very difficult to predict and set indicators for in advance.
Of course, indicators are important. They can be a great communication tool, allowing a clear message to be communicated externally. For example, DFID sets consistent indicators across the organisation; this allows it to boast that, last year, it provided access to financial services for 54,450 people. (And, more mysteriously, that 85,806,000 people have choice and control over their own development.) Indicators can set strategic direction, just as the MDGs have successfully concentrated international attention on service delivery in developing countries. Indicators are crucial for giving an overall understanding of a complex portfolio, helping managers to compare progress across different contexts using commonly defined measures. Finally, defining indicators forces staff to think about the data they need to collect, and about what success would look like.
Consequently, indicators should be seen as one component of a strong M&E system, rather than the core of it. Unfortunately, this isn’t always how they are treated. I have conducted a number of consultancies to help organisations develop their M&E systems, only to discover after much confusion that what the client really wanted was a set of indicators. The international development community as a whole places great stress on indicators, captured in logframes, results management systems, and endless guidance notes. But if you really want to create a strong M&E system which allows you to understand the success of the programme, the reasons for that success, and emergent trends, indicators aren’t the place to start.
See the next blog in the series: Why Don’t We Ever Learn?