Contribution vs Attribution – A Pointless Debate

There are three certainties in life. The first two are death and taxes. The third – known only to monitoring and evaluation (M&E) practitioners – is that during any workshop on M&E, someone will smugly point out that we should be examining contribution, not attribution.

Those of you without experience in M&E jargon (lucky you!) will need a bit of help at this point. “Attribution” is the idea that a change is solely due to your intervention. If I run a humanitarian programme dedicated to handing out buckets, then the gift of a bucket is ‘attributable’ to my programme. I caused it, and nobody can say otherwise. “Contribution” is the idea that your influence is just one of many factors which contribute to a change. Imagine that I was lobbying for a new anti-corruption law. If that law passes, it would be absurd to say that I caused it directly. Lots of factors caused the change, including other lobbyists, the incentives of politicians, public opinion, etc. The change is not ‘attributable’ to me, but I ‘contributed’ to it.

There are two reasons why it is a mistake to emphasise contribution. Firstly, it’s far too often used as a get-out clause. The phrase “we need to assess contribution, not attribution” is typically used to mean “Something good has happened. We want to imply that it was thanks to us, without trying to work out exactly how.” Even if you’re assessing contribution, you still need to understand the extent to which you contributed, and the process through which this happened. Of course, the contribution gurus understand this. All too often, however, their disciples just use contribution as an excuse to avoid doing any actual thinking.

This reflects a fundamental misconception that, if you look at contribution, you no longer need to examine the counterfactual. (The counterfactual, for the by-now-bewildered non-M&E folk, is the question of what would have happened if your programme had not existed.) Of course, it is not always possible to quantitatively assess the counterfactual, in the way that randomised controlled trials (RCTs) do. But it is always a valid thought-experiment, and an essential part of good M&E. If you’re not thinking about the counterfactual, then you’re simply not asking whether your programme really needed to be there.
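
To make the counterfactual thought-experiment concrete, here is a minimal sketch in Python, using entirely invented numbers: in an RCT, the control group stands in for the counterfactual, so the estimated impact is simply the difference in mean outcomes between those who got the programme and those who didn’t.

```python
# Illustrative only: invented data standing in for a hypothetical RCT.
# The control group approximates the counterfactual - what outcomes
# would have looked like had the programme not existed.
from statistics import mean

treated = [54, 61, 58, 63, 57]  # outcome (e.g. crop yield) with the programme
control = [50, 52, 49, 55, 51]  # outcome without it (the counterfactual)

# Estimated impact = difference in mean outcomes between the two groups.
impact = mean(treated) - mean(control)
print(f"Estimated attributable impact: {impact:.1f} units per participant")
```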

Secondly, the term ‘contribution’ itself is used to mean multiple, often inconsistent things. The common-sense meaning of contribution, as defined above, is that there were many factors that caused the observed change. This is completely unhelpful. Any change (except for the most simple) is caused by many things. An RCT – often seen as the gold-standard way to assess attribution – recognises that many factors caused the measured outcomes. That’s why you use an RCT in the first place. Consequently, this meaning of ‘contribution’ does not justify the different methodology that a contribution approach supposedly implies.

A more useful definition of ‘contribution’ is that the change is not possible to measure quantitatively, or is a binary change (like a law being passed). In this case, it is not possible to ‘attribute’ a certain percentage of the change to your intervention. To illustrate the point, consider three different outcomes: (1) increased yield, (2) the passing of a law, and (3) a change in societal values. The first one is quantitative and divisible – it makes sense to talk of a percentage of a change in yield. Consequently, we can speak of an ‘attributable’ change in yield. The second and third, by contrast, don’t lend themselves to quantitative breakdown. It does not make sense to pass a percentage of a law – it either passes or it doesn’t. Consequently, trying to work out what percentage of a new law is ‘attributable’ to your organisation is simply nonsensical. Similarly, you could never assign a percentage to the extent to which societal values change. In the latter two cases, consequently, it makes sense to talk of contribution rather than attribution.
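
A toy illustration of the distinction, again with invented numbers: a divisible outcome like yield lets you calculate what share of the change is attributable to your intervention, whereas a binary outcome like a law offers nothing to take a percentage of.

```python
# Illustrative only: invented numbers.

# Divisible outcome: suppose yield rose from 100 to 130 units, and an
# impact estimate suggests the programme accounted for 20 of those 30.
total_change = 130 - 100
programme_effect = 20
attributable_share = programme_effect / total_change
print(f"Share of yield change attributable: {attributable_share:.0%}")  # 67%

# Binary outcome: a law either passes or it doesn't.
law_passed = True
# There is no meaningful 'percentage of the law' to attribute - the most
# you can do is describe how you contributed to its passing.
```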

In every case, however – whether ‘attribution’ or ‘contribution’ – the basic question is the same: what difference did the intervention make? How did it make this difference, and what other factors were relevant? Whether you can quantify the difference or not is a methodological detail; it doesn’t affect the basic question that you are asking. Looking at contribution, consequently, is a red herring.

Ebola: Lessons not learned

Guest Author: Marc DuBois, Consultant for Overseas Development Institute

Today marks 42 days since the last new case of Ebola in Sierra Leone, meaning the country will join Liberia in being declared Ebola-free. That brings the world one step closer to a victory over Ebola the killer.

But Ebola has another identity – messenger. We listened. It told us that many aspects of the international aid system are not fit for purpose. Many – too many – of the problems the outbreak revealed are depressingly familiar to us.

Pre-Ebola health systems in Sierra Leone, Guinea and Liberia were quickly overwhelmed and lacked even basic capacity to cope with the outbreak. The World Health Organisation (WHO) failed to recognise the epidemic and lead the response, and international action was late. Early messaging around the disease was ineffective and counterproductive. There was a profound lack of community engagement, particularly early on. Trained personnel were scarce, humanitarian logistics capacity was insufficient and UN coordination and leadership were poor.

The lessons learned should also come as no surprise: rebuild health systems and invest in a ‘Marshall Plan’ for development; make the WHO a truly robust transnational health agency and improve early warning systems; release funds earlier and make contracts more flexible; highlight what communities can do, and engage with them earlier. Except these lessons learned haven’t really been learned at all: they are lessons identified repeatedly over the past decades, but not learned. 

Why is the system almost perfectly impervious to certain lessons despite everyone’s good intentions? The short answer: these lessons are too simplistic. They pretend that the problem is an oversight, a mistake to be corrected, when in fact the system is working as it is ‘designed’ to work.  The long answer: what is it about the politics, architecture and culture driving the aid system that stops these lessons from becoming reality?

Take a simple idea, like reconstituting the WHO as a truly transnational agency with a robust mandate to safeguard global public health, and the power to stop an outbreak like Ebola. Sounds great, but not new. So it also sounds like wishful thinking. It does not address the inherent tension between sovereignty and transnational institutions.

Think of it this way: the more robust an institution, the more of a threat it poses to the individual states that are its members, and hence the greater incentive for those states to set limits to its power. WHO was ‘designed’ not to ruffle feathers.

A robust WHO? Can you imagine the WHO ordering the US or UK governments to end counterproductive measures such as quarantining returned Ebola health workers or banning airline flights to stricken countries? It will never happen.

Here is the true lesson to be learned: at a time of public fear and insecurity, it would be political suicide for any government to allow such external interference. The problem isn’t the institution – it only looks like it is; the problem is the governments that comprise it. That is not to say that WHO cannot and should not be improved. It is to say that the solution proposed cannot address the fundamental problem.

Or take a complex idea, such as community engagement. Our Ebola research found that the ‘early stages of the surge did not prioritise such engagement or capitalise on affected communities as a resource’, a serious omission that ultimately contributed to the spread of the disease, and hence a key lesson learned (see e.g., this Oxfam article).

Disturbingly, this is a lesson with a long history. Here, for example, is what the Inter-Agency Standing Committee (IASC) found in evaluating the international response to the 2010 earthquake in Haiti. The relevance, virtually word for word, to the situation in West Africa speaks for itself:

The international humanitarian community – with the exception of the organisations already established in Haiti for some time – did not adequately engage with national organizations, civil society, and local authorities. These critically-important partners were therefore not included in strategizing on the response operation, and international actors could not benefit from their extensive capacities, local knowledge, and cultural understanding … This is not a new observation. Exclusion of parts of the population in one way or another from relief activities is mentioned in numerous reports and evaluations.

Why is this lesson so often repeated and so often not learned? Does the answer lie in an aid culture where ‘taking the time to stop and think – to comprehend via dialogue, engagement and sociological research – runs counter to the humanitarian impulse to act’? Our report discusses a greater concern: the degree to which people in West Africa were treated ‘as a problem – a security risk, culture-bound, unscientific – to be overcome’.

The ‘oversight’ is hardly an oversight: people in stricken communities ‘were stereotyped as irrational, fearful, violent and primitive; too ignorant to change; victims of their own culture, in need of saving by outsiders’. Perhaps that clash of cultures highlights why we should not expect community engagement to spontaneously break out simply because the problem has been recognised.

Powerful forces work against aid actors engaging with the community during an emergency, leaving us with a lesson that has not been learned even after years of anguished ‘never again’ promises to do better.

Lessons learned are where our analysis of the power dynamics and culture of the international aid system should begin, not where it ends.

For the full report, read ‘The Ebola response in West Africa: Exposing the politics and culture of international aid’.

Why aid shouldn’t be outsourced

Over the past few decades, an increasing amount of public money has been outsourced. Medical, security, and social services have been put out to tender, and contracts worth over £100 billion a year in the UK are won by businesses ranging from major accountancy firms to tiny consultancies. Sometimes the competitive pressures have cut costs, increased quality, and enabled innovation. At other times, it’s led to huge profits for contractors and shoddy service for everyone else.

By outsourcing, I mean any kind of arrangement whereby an external party, whether NGO or private sector, is contracted to deliver a pre-designed aid programme. The services to be delivered are defined in advance, and then the donor (or contracting body) invites bids to deliver these services. This is increasingly prevalent in the aid sector. A recent review of DFID’s work revealed that the percentage of aid channelled through for-profit partners in fragile states increased from 3.7% (2009-10) to 19.4% (2012-13). Despite the occasional grumbling from politicians and the British tabloids, I’ve seen remarkably little serious critical scrutiny of this trend.

Like anything else, outsourcing is sometimes good, sometimes bad. Outsourcing cleaning services seems to work pretty well. Outsourcing healthcare, by contrast, raises much more significant concerns. This blog discusses three factors needed for outsourcing to succeed, and then explores the extent to which the aid sector meets these criteria.

The first factor is a competitive market. Outsourcing is often justified on the grounds that competition between suppliers drives up quality and cuts costs. If there are not enough suppliers to form a competitive market, then this logic breaks down. Outsourcing in an uncompetitive market just leads to a state-sponsored monopoly, with no pressure to cut costs or perform well. Even in a market with multiple possible contractors, it needs to be easy to switch from one supplier to another. If that’s not the case – for example, because the donor is locked into a five- or ten-year contract – then this reduces the pressure on the supplier to perform.

The second factor is predictable and measurable results. Outsourcing aims to harness the innovation and cost-cutting ability of the private and third sectors, by aligning private and organisational incentives (to make money) with public incentives (to deliver some kind of service). In order to align incentives, contracts are drawn up specifying what results the business has to achieve to receive the money. If these results can’t be specified upfront, then it’s unclear how such a contract could be created. How could the donor agree to pay money if it doesn’t know what it’s paying for? If results can’t be measured throughout the programme, then the donor will never know whether the contractor has performed well or not.
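
To see why this contracting model depends on pre-specified, measurable results, consider a minimal sketch of a hypothetical payment-by-results clause (the payment rule and figures are invented for illustration): the payout is a pure function of measured results against a pre-agreed target, so if results cannot be specified or measured in advance, there is nothing for the contract to compute.

```python
# Illustrative only: a hypothetical payment-by-results clause with
# invented targets and rates. Real contracts are far more elaborate.

def payout(nets_distributed: int, target: int = 10_000,
           base_fee: float = 50_000.0, bonus_per_net: float = 2.0) -> float:
    """Base fee, plus a bonus for every net delivered beyond the target."""
    bonus = max(0, nets_distributed - target) * bonus_per_net
    return base_fee + bonus

print(payout(12_500))  # 55000.0 - target exceeded, bonus paid
print(payout(8_000))   # 50000.0 - target missed, base fee only
```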

The final factor is intrinsic motivation. Some theoretical research has suggested that mechanisms rewarding high performance with high pay can backfire if they undermine intrinsic motivation. In an international development context, organisations also often need to work together to achieve results. Sharing knowledge and developing partnerships is key to success, and competitive tendering gives organisations that are bidding against each other little incentive to do either.

Is the market of aid suppliers competitive? To be honest, I’m not sure. I’ve heard plenty of accusations that aid is delivered by a small cartel of large organisations. Large aid programmes, especially in fragile states, are complex to manage and deliver. ICAI found that “As a result, the security and justice portfolio is increasingly reliant on a small pool of large contractors.” Their review of fragile states highlighted a potential “over-concentration in a few big global players”. On the other hand, there are plenty of aid contractors out there, each with a different speciality and focus. I’d be interested to hear more evidence on the competitiveness of the aid market.

My problem with outsourcing aid projects comes from the second and third factors. Results in most aid programmes are unpredictable. Attempts to specify them in advance can damage programmes rather than aid them, setting perverse incentives to meet inappropriate targets. If results can’t be specified in advance, however, it is not possible to hold contractors accountable for performance. Any kind of result can be presented as a success, because there is no prior target against which achievements can be judged. Contracting organisations to deliver unspecified services seems to open the door to poor performance backed up by excuses.

This is compounded by the fact that the results that really matter aren’t easily measurable. Some are, of course: the number of mosquito nets distributed can be counted, the number of children in school can be assessed, and both can even be broken down by gender. Higher-level outcomes, however, are often impossible to measure and attribute to one project. Incidence of malaria depends on the strength of the health system, the weather, and public sanitation. Children’s education depends on the wealth and interest of their parents, the way in which teachers are trained and treated, and so forth. Private organisations would (quite rightly) feel uncomfortable being held accountable for such goals.

Outsourcing aid delivery starts by trying to align incentives through complex contracts. This is the wrong place to start. Instead, donors should concentrate on finding organisations that share their values and interests, whether governments, NGOs, or local community organisations. They should build the capacity of these organisations through long-term support, and hold them accountable for long-term success, rather than for hitting targets or running projects.