in-di-ce-s

Why are we so quick to use an index? I’ve been told before that without an index, a new publication, piece of research or institution won’t get any coverage. The media certainly love them, but I’m also surprised by the fascination that researchers and academics sometimes show.

Over a pint recently, a colleague of mine was telling me about a field trip that was going to provide the evidence for country X’s ranking in the 2013 edition of a well-known index. She had been shocked, firstly by the length of the questionnaire that participants were faced with, and secondly by the speed at which said questionnaire was completed. The results, she felt, were a list of tick-box answers to rather complex, perception-based questions.

Foreign Policy’s July/August edition was dubbed the ‘annual failed states issue’. Thirty-two of the magazine’s 112 pages were dedicated to the theme, of which 11 were photographs. The 2013 Failed States Index placed Somalia at the top (though its score was lower than last year’s), closely followed by DRC, Sudan, South Sudan and Chad. The DRC was pulled out as a case study – alongside Greece and Egypt – and renamed ‘The Invisible State’. Rather apt really. Is the Congo a state? More importantly, as the authors put it, ‘It’s as if the world wishes to believe in the idea of Congo rather than engage with the actual place that exists’. DRC must be vying for the ‘top five in the greatest number of depressing indices’ award. It has also received a lot of high-profile (read: celebrity) visits, and yet we still seem to be treating it in much the same way as other conflict-ridden nations. Yes, we’ve just seen a new ‘offensive’ mandate for the UN. But can we help a failed state by continuing to work through the ‘state’?

I digress – the question at hand is: what impact do indices have?

If the Failed States Index aims to raise awareness of troubled nations or to incentivise new action, then I’m not convinced that objective has been reached. Those reading Foreign Policy are already interested in foreign affairs and are likely to know that the Sudans, Somalia and DRC are all troubled – basket case – nations. The fact that DRC is ranked the 2nd most failed state in the world may reach the briefing packs of senior officials in foreign ministries, donor organisations or multilaterals, and in so doing increase the impact of the point. It is always good, as a policy official, to include a statistic or a headline number. Those pushed for time often skim-read until they reach the numbers.

The power of an index should be derived from the rigour of its methodology.

One of the problems is the subjectivity of the weightings used in composite indices. Unless the weightings are all equal, the composer’s bias is inadvertently (or consciously) woven into the final results. Most indices are composed of numerous components, and each component can be given a different weight depending on its perceived significance. Unless the index is entirely independent – meaning that it is not funded or undertaken by an organisation, or group of organisations, with specific goals in mind – the weightings on each component can be played with until they reflect the existing beliefs of the author rather than the unbiased reality on the ground. This needn’t be malicious: your opinion of what is important may differ from mine, but if I’m creating the index then I get to choose, and the results will reflect that.
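
A toy sketch in Python makes the point concrete (the component names and scores below are invented for illustration, not taken from any real index): the same underlying data produces opposite rankings depending entirely on which weights the composer picks.

```python
# Toy composite index with invented scores, to show how the choice
# of weights alone decides which country "wins".

scores = {
    "Country A": {"governance": 0.9, "security": 0.4},
    "Country B": {"governance": 0.5, "security": 0.8},
}

def composite(weights):
    """Weighted sum of each country's components."""
    return {
        country: round(sum(weights[c] * v for c, v in comps.items()), 2)
        for country, comps in scores.items()
    }

# A composer who believes governance matters most ranks A on top...
print(composite({"governance": 0.8, "security": 0.2}))
# {'Country A': 0.8, 'Country B': 0.56}

# ...while one who privileges security puts B on top, from the same data.
print(composite({"governance": 0.2, "security": 0.8}))
# {'Country A': 0.5, 'Country B': 0.74}
```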

The politically safest methodology, therefore, would be to weight each component equally. Though some would argue that this requires each component to be of equal significance from the outset; otherwise you are creating equals out of unequal values. In 2010, the UN introduced the IHDI, or Inequality-Adjusted HDI, alongside its long-standing Human Development Index. The IHDI intends to show how much inequality affects human development by comparing its values with the more familiar HDI values.

In comparison, ECHO’s Vulnerability and Crises Indices do in fact have a direct impact on policy, as ECHO (one of the world’s largest humanitarian donors) determines its priority countries according to the results of the indices. As the Commission states: ‘They are intended to be a common alternative reference framework to ensure consistency in the allocation of resources among the various geographical zones according to their respective needs’.

Indices are always going to be subjective. They are interesting for seeing which country is better at x or which sector is better at y, but they are less useful in driving or forming policy. They are good for rankings, not for absolute values. Their greatest asset, therefore, is the comparability of one year’s rankings with the next, and the next, and the next.

For more on weighting and development-related indices, see Chowdhury and Squire (2006). If you want to look further into this issue, here are a few (randomly selected) others:

Do add others . . .

5 thoughts on “in-di-ce-s”

  1. I’d add the OECD’s Better Life Index. If you go to http://www.oecdbetterlifeindex.org/ you can play around with the weightings, which illustrates the point you make regarding subjectivity. I’d recommend playing with the dials for fun…

    I suppose no comment on subjective indices could be complete without a nod to the Gross National Happiness (GNH) index, so here is mine.

  2. Thanks. I agree with the thrust of this – especially on the value of looking at progress over time. Indices are great for promoting conversation, but they raise more questions than they answer.

    I would quibble with two things in your post. First, equal weights on components are just as much a choice as any other weighting scheme; there is no completely neutral way to add things up. Second, you also need to take care to normalise components before you add them up: otherwise, an index with equal weights will be most influenced by the components with the biggest standard deviation.

    Owen

    • Thanks Owen. You’ve highlighted one of the points that I deleted before posting the blog because I was worried it was getting long and complicated, but I’m delighted you’ve raised it. There is no completely neutral methodology. But when I followed that thought through, I ended up in a reductionist place that left me wondering whether something (partial indices) might be better than nothing at all . . . Your normalisation point is sketched in code below.
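
    To illustrate that normalisation point, here is a minimal sketch in Python (the components and figures are invented for illustration): summing raw components lets the high-variance one swamp the result, while standardising each component first (z-scores) restores the ‘equal’ in equal weights.

    ```python
    # Sketch of the normalisation point: with raw components, an
    # equal-weights sum is dominated by whichever component has the
    # biggest spread. Figures are invented for illustration.
    import statistics

    gdp_per_capita = [400, 1200, 45000]   # large standard deviation
    school_years   = [4.0, 8.0, 12.0]     # small standard deviation

    def zscores(xs):
        """Standardise a component to mean 0, standard deviation 1."""
        mu, sd = statistics.mean(xs), statistics.stdev(xs)
        return [(x - mu) / sd for x in xs]

    # Naive equal-weights sum: effectively just ranks GDP.
    naive = [g + s for g, s in zip(gdp_per_capita, school_years)]

    # Standardised sum: both components contribute on the same scale.
    fair = [g + s for g, s in zip(zscores(gdp_per_capita),
                                  zscores(school_years))]

    print(naive)  # [404.0, 1208.0, 45012.0] - GDP swamps schooling
    print(fair)   # roughly [-1.6, -0.6, 2.2] - both components matter
    ```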

  3. I’m not totally convinced by your argument about rigour. While I think indices should clearly explain where the data come from, how they are calculated and what assumptions they contain, and should hopefully be free of misrepresentation or maths errors, I think the most important aspect of an index is whether it succeeds in summarising a complex phenomenon in a convincing and appealing way.

    We might instinctively know that poverty (or inequality) is about more than just how much money you have, but if you want to create an index to illustrate this, it needs to be both explanatory (i.e. variations in the index should tell you something useful) and simple enough for people to understand and “believe” it.

    The reason I take this view is that indices are in most cases about simplifying complex phenomena for non-experts and for advocacy purposes, while experts in a field would mostly rather use a nuanced, multivariate view of a situation in order to understand it adequately.

    • Good point Ian. But if that’s the case, why are many of the ones we listed used by, or published in, trade papers and technical organisations? For public campaigns and public education, though, they definitely make things easier to understand. I’d love to do a study of the general public in each country of the world and ask them which is the worst failed state . . . perceptions play a large role in policy making, and these can definitely be influenced by simple lists of worst to best.
