Why are we so quick to use an index? I’ve been told before that without an index a new publication, piece of research or institution won’t get any coverage. The media certainly love them, but I’m also surprised by how intrigued researchers and academics sometimes are.
Over a pint recently, a colleague of mine was telling me about a field trip that was going to provide the evidence for country X’s ranking in the 2013 edition of a well-known index. She had been shocked, firstly by the length of the questionnaire that participants were faced with, and secondly by the speed with which it was completed. The results, she felt, were a list of tick-box answers to rather complex, perception-based questions.
Foreign Policy’s July/August edition was dubbed the ‘annual failed states issue’. 32 of the magazine’s 112 pages were dedicated to it, of which 11 were photographs. The 2013 Failed States Index placed Somalia at the top (though its score was lower than last year’s), closely followed by DRC, Sudan, South Sudan and Chad. The DRC was pulled out as a case study (alongside Greece and Egypt) and renamed ‘The Invisible State’. Rather apt, really. Is the Congo a state? More importantly, as the authors put it: ‘It’s as if the world wishes to believe in the idea of Congo rather than engage with the actual place that exists’. The DRC must be vying to win the ‘top five in the greatest number of depressing indices’ award. It has also received plenty of high-profile (read: celebrity) visits, and yet we still seem to treat it much as we treat other conflict-ridden nations. Yes, we’ve just seen a new ‘offensive’ mandate for the UN. But can we help a failed state by continuing to work through the ‘state’?
I digress – the question at hand is: what impact do indices have?
If the Failed States Index aims to raise awareness of troubled nations or to incentivise new action, then I’m not convinced that objective has been reached. Those reading Foreign Policy are already interested in foreign affairs and are likely to know that the Sudans, Somalia and DRC are all troubled, even basket-case, nations. The fact that DRC is ranked the 2nd most failed state in the world may reach the briefing packs of senior officials in foreign ministries, donor organisations or multilaterals, and in so doing increase the impact of the point. It is always good, as a policy official, to include a statistic or a single stark number: those pushed for time often skim-read until they reach the numbers.
The power of an index should be derived from the rigor of its methodology.
One of the problems is the subjectivity of the weightings used in composite indices. Unless the weightings are 100% equal, the composer’s bias is inadvertently (or consciously) woven into the final results. Most indices are composed of numerous components, and each component can be given a different weight depending on its perceived significance. Unless the index is entirely independent, meaning that it’s not funded or undertaken by one or more organisations with specific goals in mind, the weightings on each component can be played with until they represent the existing beliefs of the author rather than the unbiased reality on the ground. This needn’t be malicious: your opinion of what is important may differ from mine, but if I’m creating the index then I get to choose, and the results will reflect that.
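To make that concrete, here is a minimal sketch, with made-up component scores and hypothetical weights, of how the same underlying data can produce different rankings depending on the weighting scheme the composer chooses:

```python
# A minimal sketch (made-up scores, hypothetical weights) of how the
# composer's choice of weights can reorder a composite index.

# Three imaginary countries, each scored 0-100 on three components.
scores = {
    "Country A": {"security": 80, "economy": 40, "governance": 60},
    "Country B": {"security": 50, "economy": 90, "governance": 55},
    "Country C": {"security": 65, "economy": 60, "governance": 75},
}

def composite(weights):
    """Weighted average of component scores; weights sum to 1."""
    return {
        country: sum(weights[c] * v for c, v in components.items())
        for country, components in scores.items()
    }

def ranking(index):
    """Countries ordered from highest to lowest composite score."""
    return sorted(index, key=index.get, reverse=True)

# Equal weights: the 'politically safest' choice.
equal = {"security": 1 / 3, "economy": 1 / 3, "governance": 1 / 3}
# A composer who believes security matters most.
security_first = {"security": 0.6, "economy": 0.2, "governance": 0.2}

print(ranking(composite(equal)))           # ['Country C', 'Country B', 'Country A']
print(ranking(composite(security_first)))  # ['Country A', 'Country C', 'Country B']
```

Same data, two ‘truths’: Country A sits at the bottom or the top of the table depending on whose opinion of importance is baked into the weights.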
The politically safest methodology, therefore, would be to weight each component equally. Though some would argue that this requires each component to be of equal significance from the outset, otherwise you are creating equals out of unequal values. In 2010, the UNDP introduced the Inequality-adjusted HDI (IHDI) alongside its long-running Human Development Index. The IHDI intends to show how much inequality reduces human development, by comparing its values with the more familiar HDI values.
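The arithmetic behind that adjustment is simple enough to sketch. Roughly, as I understand the UNDP methodology, the post-2010 HDI is the geometric mean of three dimension indices, and the IHDI discounts each dimension by an Atkinson inequality measure before taking the same mean. The numbers below are invented for illustration:

```python
from math import prod

# Rough sketch of the post-2010 HDI/IHDI arithmetic; the dimension
# indices and Atkinson measures below are invented for illustration.

# HDI: geometric mean of three dimension indices, each in [0, 1].
dims = {"health": 0.85, "education": 0.70, "income": 0.65}

# IHDI: each dimension is discounted by its Atkinson inequality
# measure A (0 = perfect equality) before taking the same mean.
atkinson = {"health": 0.10, "education": 0.25, "income": 0.30}

hdi = prod(dims.values()) ** (1 / 3)
ihdi = prod(v * (1 - atkinson[k]) for k, v in dims.items()) ** (1 / 3)

print(f"HDI  = {hdi:.3f}")                               # ~0.729
print(f"IHDI = {ihdi:.3f}")                              # ~0.567
print(f"Loss due to inequality = {1 - ihdi / hdi:.1%}")  # ~22%
```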
In comparison, ECHO’s Vulnerability and Crises Indices do in fact directly affect policy, as ECHO (one of the world’s largest humanitarian donors) determines its priority countries according to the results of the indices. As the Commission states: ‘They are intended to be a common alternative reference framework to ensure consistency in the allocation of resources among the various geographical zones according to their respective needs’.
Indices are always going to be subjective. They are interesting for seeing which country is better at x, or which sector is better at y, but they are less useful in driving or forming policy. They are good for rankings, not absolute values. Their greatest asset, therefore, is the comparability of one year’s rankings with the next, and the next, and the next.
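If year-on-year comparability of rankings is the real asset, the natural tool is a rank statistic rather than the raw scores. A small sketch, with invented countries and ranks, using Spearman’s rank correlation:

```python
# Why rankings, not raw scores, are the comparable part of an index:
# Spearman's rank correlation between two (invented) years of rankings.

ranks_2012 = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
ranks_2013 = {"A": 2, "B": 1, "C": 3, "D": 5, "E": 4}

n = len(ranks_2012)
# Spearman's rho for untied ranks: 1 - 6 * sum(d^2) / (n * (n^2 - 1))
d_squared = sum((ranks_2012[c] - ranks_2013[c]) ** 2 for c in ranks_2012)
rho = 1 - 6 * d_squared / (n * (n ** 2 - 1))

print(f"Spearman's rho = {rho:.2f}")  # 0.80; 1.0 would mean identical rankings
```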
For more on weighting and development-related indices, see Chowdhury and Squire (2006). If you want to look further into this issue, here are a few (randomly selected) others:
- Center for Global Development’s Commitment to Development Index
- World Bank’s Country Policy and Institutional Assessment
- DARA’s Humanitarian Response Index
- World Bank’s Doing Business Index
- Save the Children’s Child Development Index
- World Bank’s Logistics Performance Index
Do add others...






