Why are expats paid so much?

A popular pastime in some expat circles is justifying their huge tax-free salaries and expensive benefits.

Different people take different approaches. Many expats will explain (at length) that they need to be paid that much to be persuaded to leave their comfortable, happy lives in the UK. They will discuss the cost of sending three children to private schools, explain that they still need to pay rent back in the UK, and conclude, wistfully, by pointing out that they could have earned so much more in the private sector. Other expats take a different approach, speculating smugly about how bad the project would be if it weren’t for their wisdom and strategic insight, and pointing out that their high pay just reflects the value that they add to the project.


I’m not trying to say that expats aren’t an important part of development programmes. They bring essential skills, build the capacity of local staff, manufacture confidence that the programme is going well, and generally help the sector we know and love operate better.

But the stated reasons why expat salaries are so high are rubbish. Certainly a few expats would leave the sector if salaries were lower. But most of the weird people who have floated round the world for decades aren’t motivated by money, but by the job and the lifestyle. There’s a common misconception that the private sector is desperate for balding ex-hippies who can put together a logframe and speak an obscure West African language, but there’s no reason to think that’s true. With mediocre middle-managers able to pull down a six-figure, tax-free salary, expats earn far more than they would in other sectors, and more than they need to keep them in the business.

Instead, salaries are driven by structural factors in the aid industry that ensure a high demand for expats, but a very limited supply. Expats are typically required for senior management positions. This might be because they are more able than local staff, or simply because donors are biased towards nationals from their own country. Both factors probably play a role. It doesn’t matter for the sake of this argument; the result is that there is a high demand for experienced, capable expats.

But it’s increasingly difficult to find positions for junior, inexperienced expats. Why would any self-respecting development organisation hire a 22-year-old Harvard graduate when they could get a much more experienced local staff member for the same role, at the same price?

So expats are in high demand – but only when they’re able to take on positions in senior management. The supply of expats, however, is hugely constrained. The consequence is that very few people actually get the experience which would allow them to take up a senior management position, leaving the lucky few facing relatively little competition, able to charge more or less what they want. Competition between aid agencies drives up salaries, and the big, unaccountable giants of the aid world (UN, World Bank, etc.) face relatively little scrutiny over pay, further increasing salaries.

It annoys me immensely when aid agencies spend 100,000 pounds a year on an incompetent advisor, and all the more so when that same advisor spends their time after work drinking expensive cocktails and complaining about how they’re not paid enough. But I’m not naïve enough to think that this is going to change by itself. Turkeys don’t vote for Christmas, and World Bank staff don’t vote for salary reduction.

The only way in which this might be solved is through an increase in the pool of workers. As education levels improve, as aid agencies gain more experience in building management capacity, and as the expanding aid sector trains up more strong local staff, perhaps there will be a bigger pool of potential senior managers to draw upon, leading to a reduction in average salaries.

Neither NGOs nor consulting firms have much incentive to ensure that this happens. Capacity building of staff sounds great, but typically leads to good staff leaving for better pay elsewhere. As a public good, this is something which donors should be investing in. In particular, they need to help local staff get that all-important first international experience. This is often a real catch-22; you can’t get international jobs without international experience, and can’t get international experience without international jobs.

So there is a need for better training, mentoring, and a better-structured career path leading talented local managers to progressively more senior roles. Trainee schemes, secondments, and overseas placements would make a big difference here. I have seen isolated examples of good initiatives, often driven by a single manager passionate about the issue. But I have seen little evidence of what works best, or of systematic attempts to increase the pool of qualified senior management staff. If you have any good examples to share, please put them in the comments!

Did anyone hear about a Refugee Summit?

This year newspapers around the world have been full of stories about refugees and migrants – many fleeing war in Yemen arrived on the Horn of Africa, the USA continued to receive arrivals from Central America, the Syrian crisis reached Europe, and countries around Myanmar are supporting hundreds of thousands of Rohingya. And that’s just a handful of stories.

Statistics were released showing that if the world’s displaced people were considered together they would equate to the 21st largest country in the world with one of the youngest populations.  In 2015, UNHCR reported that there were 65.3 million forcibly displaced people in the world, 21.3 million refugees and 10 million stateless people. The UK even saw a TV series filmed by refugees crossing the Mediterranean on camera phones: Exodus. Suddenly those living in the global north are closer to the perils of refugees and migrants.

So it would be a fitting time for world leaders to come together and agree actions to improve the situation for those fleeing their homes . . .

On Monday the UN Summit for Refugees and Migrants was hosted by Ireland and Jordan. Despite arrangements being somewhat last minute, it attracted a number of high-profile attendees, with 50 states and organisations represented. Tuesday saw US President Barack Obama convene the Leaders Summit on Refugees in New York. The UN High Commissioner for Refugees, Filippo Grandi, welcomed the new declaration signed at the Summit, saying: “UNHCR is hugely encouraged to see the strong political commitments in the New York Declaration made immediately tangible through the new, concrete actions announced by governments today”. Others in the sector weren’t so impressed:

  • Professor Alex Betts from Oxford University kicked us off with a two-part series in advance, on abstract discussions in the face of a deadly crisis and the real opportunity at the UN Summit. He’s a great writer and knows how to tell it how it is, so worth a read. (For more refugee-related rants, follow him here and @alexander_betts.)
  • IRIN provided an accurate summary of the event with the conclusion that there were ‘no new ideas’. They provided specific analysis from the perspective of Central American refugees and from a Somali journalist considering the potential closure of Dadaab refugee camp in Kenya.
  • Peter Thomson, President of UNGA, outlined in the Huffington Post why he believes the declaration is an historic document. The declaration states that the adoption of the 2030 agenda or SDGs includes recognising ‘the positive contribution made by migrants for inclusive growth and sustainable development’.
  • The UN Secretary General declared  the ‘bold’ plan to enhance protections for migrants and refugees a ‘breakthrough’.
  • Marc du Bois gives four insights on why summits fail.
  • UN High Commissioner for Human Rights, Zeid Ra’ad Al Hussein, reminded all attendees that signing a piece of paper and patting each other on the back wasn’t adequate – ‘We must stop bigotry’
  • Human Rights Watch’s Bill Frelick called it a ‘failure of vision’
  • There was the commitment to educate 1 million refugee children, yet we know at least 3.5 million are out of school
  • And we were reminded that focusing on the front page catching Syrian refugees in Europe might overlook and even potentially worsen the situation for other refugees.

All the new pledges can be found here.

So will much change for refugees and migrants following this week? We still hope so, but we doubt much of that change will be directly attributable to the New York Declaration alone.


A refugee is an individual who is outside his or her country of nationality or habitual residence who is unable or unwilling to return due to a well-founded fear of persecution based on his or her race, religion, nationality, political opinion, or membership in a particular social group. Article 1(A)(2) of the 1951 Convention.

A migrant is any person who is moving or has moved across an international border or within a State away from his/her habitual place of residence. IOM

A stateless person is a person who is not considered as a national by any State under the operation of its law. Office of the High Commissioner for Human Rights.

An internally displaced person is someone forced to flee their home (for example by conflict, generalised violence, or natural disasters) who has not crossed an international border. They are not given the same protection as refugees, as they are covered by neither the 1951 Convention nor the 1967 Protocol.



What’s wrong with DFID’s monitoring – and how to fix it

This is the third in our series on DFID’s monitoring systems. Click here to read our previous blog, which discussed our analysis of over 600 Annual Reviews from DFID.

I’ve previously mocked DFID’s Annual Reviews on this blog. In the spirit of constructive criticism that (on a good day) pervades Aid Leap, it’s now time to say something more detailed about why they don’t work, and how they might work better.


Annual Reviews are DFID’s primary way of monitoring a programme. They generate huge amounts of paperwork – with an estimated twenty million words available online – alongside a score ranging from ‘C’ to ‘A++’, with a median of ‘A’. If a programme receives two Bs in a row or a single C, it can be put under special measures. If no improvement is found, it can be shut down.

This score is based on the programme’s progress against the logical framework, which defines outputs for the programme to deliver. Each of these outputs is assessed through pre-defined indicators and targets. If the programme exceeds its targets, it is given an A+ or an A++. If it meets them, it gets an A, and if it falls short, it gets a B or C.

It’s a nice idea. The problem is that output level targets are typically set by the implementer during the course of the programme. This means that target-setting quickly becomes a game. Unwary implementers who set ambitious targets will soon find themselves punished at the Annual Review. The canny implementer will try to set targets at the lowest possible level that DFID will accept. Over-cynical, perhaps; but this single score can make or break a career (and in some cases, trigger payment to an implementer), so there is every incentive to be careful about it.

A low Annual Review score, consequently, is ambiguous. It could mean that the implementer was bad at setting targets, or insufficiently aware of the game they are playing. Maybe a consultant during the inception phase set unrealistic targets, confident in the knowledge that they would not be staying on to meet them. Maybe external circumstances changed and rendered the initially plausible targets unrealistic. Or maybe the programme design changed, and so the initial targets were irrelevant. Of course, the programme might also have been badly implemented.

Moreover, the score reflects only outputs – not outcomes. A typical review has just a single page dedicated to outcomes, and fifteen to twenty pages describing progress against outputs. It makes no sense to incentivise the implementer to focus on outputs at the expense of outcomes by including only the former in the scope of the annual review. The best logframes that I’ve seen implicitly recognise this problem by putting outcomes at the output level – but this then means that the implementers have even more incentive to set these targets at the lowest possible level.

I don’t want to throw any babies out with the bathwater. I think the basic idea of a (reasonably) independent annual review is great, and scoring is a necessary evil to ensure that reviews get taken seriously by implementers. As I’ve previously argued, DFID deserve recognition for the transparency of the process. I suggest the following improvements to make them a more useful tool:

  • All targets should be set by an independent entity, and revised on an annual basis. It simply doesn’t make sense to have implementers set targets that they are then held accountable for. They should be set by a specific department within DFID, and revised as appropriate in collaboration with the implementer.
  • Scoring should incorporate outcome level targets, where appropriate. It’s not always appropriate. But in many programmes, you can look at outcome level changes on an ongoing basis. For example, water and sanitation programmes shouldn’t just be scored on whether enough information has been delivered; but on whether anyone is using this information and changing their behaviour.
  • For complex programmes, look at process rather than outputs. There’s a lot of talk about ‘complex programmes’, where it’s challenging to predict in advance what the outputs should be. This problem is partially addressed by allowing these targets to be revised on an annual basis. In some cases, moreover, there is an argument for more process targets. These look not just at what the organisation is achieving, but how it is doing it. A governance programme, for example, might be rated on the quality of their research, or the strength of their relationships with key government partners.
  • Group programmes together when setting targets and assessing progress. Setting targets and assessing progress for a single programme is really difficult. It’s always possible to come up with a bundle of excuses for any given failure to meet targets – and tough for an external reviewer to know how seriously to take these excuses. The only solution here is to group programmes together, and assess similar types of programmes on similar targets. Of course, there are always contextual differences. But if you are looking at two similar health programmes, even if they are in different countries, at least you have some basis for comparison.
  • Clearly show the change in targets over time. At the moment, logframes are re-uploaded on an annual basis, making it difficult to see how targets have changed. If there was a clear record of changes in logframes and targets, it would be much easier to judge the progress of programmes. I’m not sure whether this should be public – it might not pass the Daily Mail test – but DFID should certainly be keeping a clear internal log.

I analysed 600 of DFID’s Annual Reviews. Here’s what I found.


  • DFID annually reviews the performance of every programme they fund, and publishes this information online.
  • We read 600 randomly chosen annual reviews, in order to look for patterns in the scores awarded by the reviewers.
  • We found relatively little variation in the data; 64% of programmes got an average score (A, “meeting expectations”), with less than 4% receiving the lowest or highest scores (2% received C, 2% received A++).
  • Programmes are scored both during and after implementation. During implementation, programmes are more likely to receive an average (A) grade, and less likely to receive a high or low grade. Only 2% of annual reviews award a ‘C’, but 8% of post completion reviews do. I suspect annual reviewers favour average grades in order to avoid the potential negative consequences of extreme grades. This represents a missed opportunity to highlight underperformance during implementation, when it is still possible to improve.
  • There is substantial grade inflation over time. This might be because programmes are getting better at setting expectations that they can meet. This casts doubt on the validity of the annual review process; if the current trend continues, by 2018 95% of programmes will receive an A or higher.
  • This blog is the second in a series examining DFID’s annual reviews. For the first blog, examining the weird grading system that DFID uses, click here. Future blogs will suggest ways in which grading can be improved.

Full blog:

DFID annually reviews the performance of every programme they fund, in order to challenge underperformance, suggest improvements and learn from both successful and unsuccessful programmes. The results are published online, demonstrating an admirable commitment to transparency and scrutiny, and supplying fantastic data on the effectiveness of DFID’s programmes. To my knowledge, however, annual reviews have not been externally researched. This is for a simple reason: annual reviews are lengthy Word documents, with no easy way to download and analyse them. I estimate that there are at least twenty million words of text available online, growing by around five million a year.

Fortunately, I have quite a lot of spare time and some extremely tolerant friends, so we decided to read annual reviews for a randomly selected 600 out of the 4,000 projects available online, and note down the scores given for each annual review and post completion review. With the help of the amazing IATI data, we compiled a spreadsheet listing the vital details of all projects such as spend, sector and country, alongside the grades awarded by DFID for the achievement of outputs and outcomes. This blog presents some of the findings.

What are annual reviews and post completion reviews?

To understand this exercise – and the limitations – you need to understand how DFID’s reviews work. Each DFID programme has a results framework, which sets annual performance milestones against each indicator.[1] In an annual review, the programme is scored according to whether it met the annual output milestones or not. The programme is awarded A++ if it ‘substantially’ exceeded expectations, A+ if it ‘moderately’ exceeded expectations, A if it met expectations, and B or C for not meeting expectations. Receiving a B or a C is a big deal; a programme that gets two Bs in a row or a single C can be put on a performance improvement plan, and might be shut down. Annual reviews happen (as the name suggests) every year. Post completion reviews are conducted at the end of the programme, and award a score, using the same weirdly inflated grading system, for achievement of both outputs and outcomes.
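As an aside, the ‘two Bs in a row or a single C’ trigger described above is simple enough to express as a short sketch. This is hypothetical illustration code, not anything DFID actually uses:

```python
# Hypothetical sketch of DFID's escalation rule as described above:
# a programme can be put on a performance improvement plan after
# two consecutive 'B' scores, or a single 'C'.
def needs_improvement_plan(scores):
    """scores: chronological list of review grades, e.g. ['A', 'B', 'B']."""
    if "C" in scores:
        return True
    # Check every pair of consecutive scores for back-to-back Bs.
    return any(a == "B" and b == "B" for a, b in zip(scores, scores[1:]))

print(needs_improvement_plan(["A", "B", "B"]))  # True: two Bs in a row
print(needs_improvement_plan(["B", "A", "B"]))  # False: the Bs aren't consecutive
```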

The remainder of the blog presents some of the findings, alongside my attempts to explain them. Please note that I analysed just the grades – leaving the other twenty million words of the reviews untouched. More in-depth research might be able to validate some of my theories and suggestions, and would be a promising future research project.

There is little variation in the scores awarded for annual reviews

Fully 64% of the annual reviews in my dataset received an A grade, indicating that the project is meeting expectations. Around 2% received an A++, and around 2% received a C. (See the table below.)

AR Score   # Projects   % Projects[2]
A++                10            2%
A+                104           17%
A                 397           64%
B                  92           15%
C                  15            2%
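For what it’s worth, the percentages above follow directly from the counts. A quick sketch of the tallying (the counts are the figures reported in the table; nothing else is assumed):

```python
# Grade counts from the table above; percentages are rounded to the
# nearest whole number, so they may not sum to exactly 100%.
counts = {"A++": 10, "A+": 104, "A": 397, "B": 92, "C": 15}
total = sum(counts.values())  # 618 reviews with a grade

for grade, n in counts.items():
    print(f"{grade:>3}: {n:4d} projects ({round(100 * n / total)}%)")
```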

This makes it harder to conduct analysis into the factors affecting the scores given, and so was a bit of a disappointment for me. It is not, however, an issue for DFID. There is no objective, correct percentage of programmes that should be exceeding or failing to meet expectations. DFID could reasonably argue that, since a ‘C’ is effectively a mark of failure, you wouldn’t want more than 2% of programmes in the portfolio to receive it. Programmes which get a ‘C’ may anyway be shut down, so there’s a natural bias against having many Cs in the portfolio, given the effort that goes into launching a programme in the first place.

Post completion reviews show more variation in grades than annual reviews, and a lot more negative scores

Post completion reviews and annual reviews both rate the programme on the outputs achieved. It turns out that post completion reviews have a lot more variation. They are less likely than annual reviews to give an A, but more likely to give every single other grade. (See figure 1 below).

[Figure 1]

In particular, post completion reviews are much more likely than annual reviews to award a ‘C’ grade. Overall, 15 annual reviews (2% of the total) award a C grade; of which 13 are in the first year of project implementation. By contrast, 8% of post-completion reviews award a ‘C’ for the outputs achieved.[3] (See figure 2 below).

[Figure 2]

There are a number of possible reasons for this. One potential reason is that programmes really do worse in their final year of implementation, perhaps because problems become harder to hide, or staff leave for new jobs and it becomes difficult to recruit new ones. This seems unlikely, however, as the annual review data suggests that programmes actually get better at hitting output targets as programme implementation continues. (See next section).

Consequently, this seems to reflect a flaw in the review process; DFID’s ongoing monitoring is more positive than the review at the end of the programme. It may be that more end-of-programme reviews are done by external reviewers, who perhaps have a more negative view of programmes achievement. I don’t have data on who these reviews were conducted by, unfortunately.

I suspect the lack of variation in the scoring is also due to risk aversion on the part of DFID’s annual reviewers. In particular, a ‘C’ rating has serious consequences, and can lead to a programme being shut down. This can make DFID staff look bad, creates extra work for everyone involved, and leads to difficulty in spending allocated budgets. A post completion review does not have these consequences, as the programme has already finished and can’t be shut down. By contrast, an ‘A’ is an extremely safe option to give; it expresses reservations without any serious impact on the project. This could lead to reviewers giving more ‘A’ grades than they should.

There is grade inflation over time

Since the first set of annual reviews in 2012, the percentage of ‘A’ grades has steadily increased (from 56% in 2012 to 68% in 2015) and the percentage of ‘B’s has decreased (from 21% in 2012 to 10% in 2015). This is shown in figure 3 below. The same trend is apparent in post completion reviews, where the percentage of ‘B’s awarded has plummeted from 30% in 2012 to 7% in 2015.

[Figure 3]

The same trend is apparent if you look at the scores awarded to a single programme, separated out by the first, second and third annual review. Between the first and the third annual reviews the percentage of A+s and A++s increases, while the percentage of Bs and Cs falls. (Shown in figure 4, below.) The percentage of As also increases, from 57% in the first year to 64% in the third, but I haven’t included it in this graph as it distorts the scale.

[Figure 4]


An optimist would conclude that programmes are getting better – but I don’t know of any other evidence which suggests that there has been a dramatic change, either positive or negative, in programme quality between 2012 and 2016. It could also be that the worst performing programmes are getting shut down, which would lead to an overall positive trend in the data.

Having experienced several annual reviews first-hand, I suspect that programmes are getting better at setting targets below the minimum achievable level. Of course, there is no incentive to set an ambitious target and not meet it; while there are plenty of incentives to set a modest target and exceed it.

While this is partially a good thing – there’s no point in programmes failing because they set unrealistic targets – it threatens to make the whole process meaningless. For example, if the current trend continues, by 2018 95% of programmes will receive an A or higher in the annual review. DFID needs to strengthen incentives to set ambitious targets which really allow programmes to be held to account.
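The ‘95% by 2018’ figure is a straight-line extrapolation. A minimal sketch of that kind of projection, with the caveat that the yearly shares of A-or-higher grades below are illustrative assumptions rather than the actual dataset, looks like this:

```python
# Illustrative only: ASSUMED yearly shares of programmes scoring 'A' or
# higher, fitted with ordinary least squares and extrapolated to 2018.
def linear_fit(xs, ys):
    """Return (slope, intercept) of an ordinary least-squares line."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

years = [2012, 2013, 2014, 2015]
share_a_or_higher = [0.83, 0.85, 0.87, 0.89]  # assumed, for illustration

slope, intercept = linear_fit(years, share_a_or_higher)
projected_2018 = slope * 2018 + intercept
print(f"Projected share of A or higher in 2018: {projected_2018:.0%}")  # 95%
```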


The arguments presented above do not suggest that DFID’s annual reviews are uniformly useless. The score is just one facet of an annual review, and in many ways the hardest one to get right. DFID deserves credit for doing and publishing annual reviews at all, and those who have experienced them will know that they often include hard questions and rigorous scrutiny.

Overall, however, this analysis suggests problems with the review process. Firstly, programmes are more likely to receive an average ‘A’ rating during implementation than on closure, and much more likely to receive a ‘C’ rating once implementation has finished. I suspect the likely cause is risk-aversion on the part of the reviewer, which reflects a missed opportunity to highlight underperformance when improvements are still possible. Secondly, grades are improving over time. While this probably represents an improvement in the ability of programmes to set realistic targets, it also risks devaluing the annual review process, if expectations are set so low that everyone meets them.

This analysis would have benefited from a larger sample; I only sampled 15% of the total number of programmes. It also hasn’t criticised the underlying logic of annual reviews, although it could be argued that programmes should be annually assessed on the likelihood to reach outcomes, not just outputs achieved. Additional insights would have been gained from a qualitative analysis of the annual reviews, as well as the quantitative analysis. Any keen students or researchers want to take on the task?[4]

[1] This is almost always in the form of a logical framework.

[2] Rounded to the nearest percentage. In some of these tables, not all of the scores add up to 100%. This is normally because of missing data in the ARs; not all have awarded grades.

[3] A bit of care needs to be taken in interpretation. The strength of this evidence is limited by the size of the sample; very few programmes get Cs, and so comparisons are naturally tricky. The ‘year 1’, ‘year 2’ and ‘year 3’ annual reviews are defined in relation to how many annual reviews the programme has had; so they actually might happen in different years. More programmes have had a year 1 annual review than a year 2 or 3 review, for example. Likewise, the post completion reviews aggregate things that have happened in different years. Finally, not all programmes which have had an Annual Review have had a post completion review, and vice versa.

[4] If so, please email us at aidleap@gmail.com. We’re happy to share the data we’ve received so far.

The Limits of Transparency: DFID’s Annual Reviews

This is the first blog in a series which will examine DFID’s Annual Reviews, exploring what they say, what they mean, and how they could be improved. 

The aid world is full of contradictions. Think about the last time you worked overnight to produce a report that nobody will ever read. Think about facipulation. Think about the Sachs vs Easterly soap opera. But for sheer, brazen ridiculousness, few things beat DFID’s Annual Review scoring.

DFID should be applauded for scoring its programmes and publishing all Annual Reviews online. It’s transparent, honest, and allows others to hold both DFID and implementing agencies to account. Quite refreshingly unexpected for an aid bureaucracy otherwise devoted to self-preservation. But at some point in DFID’s internal decision-making, the aid bureaucracy pushed back. You can imagine the conversation within DFID:

Person A: We want to objectively review all our programmes, score them, and publish the scores online!

Person B: But…then people will find out that our programmes aren’t working!

Person A: Good point, I didn’t think of that. *Long pause* I know. How about we only award positive scores?

And that’s what DFID did. Programmes are ranked on a five point scale from ‘A++’, through ‘A+’, ‘A’, ‘B’ and to ‘C’. Programmes which are meeting expectations – just about doing enough to get by – will score an ‘A’ grade. Call me slow, but I thought an ‘A’ was a mark of success, not a recognition of mediocrity.

Programmes that underperform will be scored as ‘B’, and must be put on an improvement plan if they score two ‘B’s in a row. Again, possibly I under-performed at school, but I was always quite happy to get consecutive B’s for my homework. A programme which is truly diabolical, and in severe danger of being shut down, would receive a ‘C’. Programmes cannot receive a ‘D’, ‘E’, ‘F’, ‘G’, or ‘U’, unlike in the normal English exam system.

[Image: DFID’s scoring system] Just to prove I’m not making things up.

DFID thus suffers a kind of technocratic schizophrenia. It possesses the most transparent and open assessment mechanism in the world – and a scoring system designed to prevent any appearance of failure.

World Humanitarian Summit Report: What is it? What does it say? What happens next?

Following 2.5 years of consultation and discussion, the UN Secretary General’s report on how the humanitarian sector needs to improve was published today. ‘One Humanity: Shared Responsibility’ outlines 5 areas of core responsibility that Ban Ki-moon (UN Secretary General) believes should be the focus of the World Humanitarian Summit, due to take place in Istanbul on 23-24 May.

The 5 core responsibilities are:

1. Political leadership to prevent and end conflicts

Humanitarianism can’t resolve many manmade problems without political input. The report calls for coordinated, compassionate and courageous decisions by political leaders to analyse and monitor risk; act on early warning signals; work together to find solutions in a timely manner; accept that with sovereignty comes responsibility to protect citizens. There is a clear emphasis on human rights violations. Political unity is required for prevention, not just management of crises. The UN Security Council needs to put its divisions aside and actively engage in conflict prevention. More evidence and visibility is needed of successful conflict prevention to help mobilise resources (funds and people) for it in the future. There needs to be more sustained investment in promoting peaceful and inclusive societies.

2. Uphold the norms that safeguard humanity

Re-affirm the humanitarian principles. Despite all the legal frameworks and agreements in place, the world is still ridden with ‘the brazen and brutal erosion of respect for international human rights and humanitarian law’. The type of wars that we now see have left civilians and aid workers in severe danger of kidnapping, injury or death. We need to reassert the demand for respect for agreed shared norms, enforce laws and support monitoring mechanisms to tackle the erosion of rule of law. The Secretary General asks all members states to recommit to the rules and calls for a global campaign to affirm the norms that safeguard humanity. Start by ensuring full access and protection for humanitarian missions. Those not already signed up to core statutes and conventions of international humanitarian and human rights laws, are invited to accede at the Summit. Those already signed up are asked to actively promote and monitor compliance.

3. Leave no one behind

The humanitarian imperative includes the idea that aid shall be given based on need, and that everyone’s needs should be met regardless of race, religion, nationality etc. The 2030 Agenda has reiterated the need to focus on those at the very bottom and those in the worst situations, not allowing issues of access to be an excuse for not helping those in need. The stateless, displaced and excluded are highlighted, particularly children, though all those deprived or disadvantaged are noted. A new target for the reduction of new and protracted internal displacement by 2030 is called for. Specific changes are listed for the national, international and regional levels (see page 24). Finally it calls for a shared responsibility in addressing large movements of refugees; a commitment to ‘end statelessness in the next decade’; and the empowerment of women and girls.

4. Change people’s lives – from delivering aid to ending need

We need to invest in local systems and stop being obsessed with the humanitarian-development divide. Despite the SDGs and the new era of cooperation for development that they represent, ‘conflict and fragility remain the biggest threats to human development’. The focus needs to be on reducing vulnerability rather than just providing short-term relief. To do this we need to set aside the ‘humanitarian-development division’ and focus on the assets and capabilities available at all levels and across all sectors. This section calls for collaboration based on complementarity, drawing on our collective advantage. The World Vision statement from the Global Consultation is used to summarise the new paradigm: ‘as local as possible, as international as necessary’. People’s dignity and desire to be resilient should be harnessed to reduce dependency on foreign assistance.

5. Invest in humanity

The real humanitarians are those who live in countries vulnerable to disasters, so we should be helping them to be better prepared for emergencies.

Local capacity needs to be strengthened so that funds can be channelled directly to national authorities and local NGOs. Funding currently under-represents national and local NGOs (0.2% in 2014) and disaster preparedness and prevention (0.4% of ODA in 2014), and the areas of greatest risk do not receive the funds they need. The current financing structure is inadequate, inflexible and ineffective. As local actors are best placed to understand need and develop relevant solutions, their capacities should be increased so that they can receive more resources and accept more responsibility for both preparedness and response. New platforms and mechanisms, as listed in the High Level Panel’s report on Humanitarian Financing, will be introduced by the UN, and others are encouraged to set similar targets.

Three key points:

Much of the report includes issues traditionally considered non-humanitarian such as access to justice or economic empowerment programmes for women. From a humanitarian system perspective there were three interesting acknowledgements:

  • The report stops short of admitting to the ‘inequity in the aid system’ or the ‘out-dated’ and ‘fragmented’ international aid architecture, but it does note that many have expressed their ‘outrage and frustration’.
  • The ‘pride’ of local actors is acknowledged, highlighting where hope does exist – for instance, when women and the young are ‘empowered’ to act. Isn’t it striking how creative people can be with their solutions when given the space!
  • The good news is that the determination to keep going and fight for change is growing at the local level. Increasingly the formal international aid system is being left behind. Individuals and groups from the ‘global south’ are increasingly organising themselves and their communities – or perhaps they’ve been doing it for decades and we are only now recognising it because of processes like the WHS?

And now?

The report calls for us to stop taking the easiest route and acknowledges that humanitarian assistance and/or peacekeepers alone are not sufficient. Global leaders are called upon to attend the Summit ready to ‘assume their responsibilities for a new era’, and Section V outlines what the Secretary General expects from each of the key actors. However, the ‘unified vision’ he calls for is still a long way off: rifts have emerged among civil society, and governments are cleverly keeping their heads in the sand till the final moment. The Annex to the report, an Agenda for Humanity, clearly lists the suggested commitments – could this provide the concrete ideas for groups to galvanise around?

The Secretary General recognises the UN’s role in the failure to date. Can he and his team deliver real change through the Summit? When we’ve seen so much achieved at the global level recently – the Global Goals/SDGs and COP – is it realistic to expect the magnitude of change needed to be delivered in just three months’ time? To date, the political will to see humanitarianism as more than a front-page winner for presidents and prime ministers has been lacking. And the current refugee crisis in the EU has demonstrated just how out of touch and short-sighted many of our leaders are. But then again, the UN got Beyoncé to sing on World Humanitarian Day!

Dear Summit organisers, please prove us sceptics wrong and pull off something of significance . . . something that at least leaves us with a roadmap or consensus to make a real global compact for change at a subsequent Summit in less than three years’ time. But this needs to be more than a good sing-song, more than a global campaign to do better – this needs to be a CONCRETE AGREEMENT FOR CHANGE.

An ambivalence towards Female Genital Mutilation/Cutting

For a number of years now I’ve worked, researched, and advocated against Female Genital Mutilation/Cutting (FGM/C). And, yet, I still find myself with a sense of ambivalence towards the practice.

FGM/C, also known as Female Genital Mutilation or Female Circumcision, refers to the partial or total removal of the external female genitalia, or other injury to the female genital organs for non-medical reasons. Over 125 million girls and women have undergone FGM/C worldwide.

The health consequences of FGM/C are devastating. According to the World Health Organization, FGM/C causes immediate and long-term health consequences including pain, shock, bacterial infections, infertility, risk of childbirth complications, and sometimes even death. Consequently, some have argued that female “circumcision” is a form of “female genital mutilation” and should be eradicated (see the late Efua Dorkenoo’s “Cutting the Rose”).

Furthermore, FGM/C can be seen as a form of male control over and subjugation of women. Frequent justifications for the practice include ensuring virginity and purity before marriage and preventing infidelity during marriage. And let’s not forget that in most cases FGM/C is practised on minors, not of ‘consenting age’ and lacking informed choice.

So far, so unambiguous. FGM/C is a harmful practice which causes so much suffering. So why do I find myself ambivalent towards the work I’ve been doing over the last few years to advocate against it?

It is partly because I’ve come to realise that anti-FGM/C campaigns can harm, as well as help, women. FGM/C is a deeply culturally embedded practice. Consequently, it is not generally perceived as a form of “mutilation” by those who undergo it (see Ahmadu 2000). Women who do not undergo FGM/C can become ostracised from their own community or might be unable to marry. And is it really my place to determine what others can and can’t do to their own bodies?

Moreover, some people have claimed to detect racist connotations underlying the notion of “female genital mutilation”. Why is it that genital surgeries in the West – more snappily known as “designer vaginas” – are condoned, yet, as Fuambai Sia Ahmadu argues, the same procedure on “African or non-white girls and women” is considered “Female Genital Mutilation”, even when it is conducted by health professionals? I personally think there is something very wrong with society if girls and women think they need to surgically alter their genitals… However, Ahmadu’s points certainly raise some important questions.

In essence, my ambivalence is between a universalist, zero tolerance approach and an open, culturally relative one. Is there a way to combine the two positions? Marie-Bénédicte Dembour suggests embracing the ambivalence and adopting a mid-way position – to “err uncomfortably between the two poles represented by universalism and relativism” (Dembour 2001:59). Using the metaphor of a pendulum, Dembour argues that for FGM/C neither view can exist without the other because as soon as one stance is taken, you have to adjust to the other. In one context, FGM/C may be accepted; it may be practised out of love to ensure a daughter can be married. But in another context, e.g. in the UK, it may be seen as a form of child abuse. Dembour illustrates this point in reference to changing trends on legal decisions on FGM/C in France; moving to severe sentences in the 1980s and early 1990s and back to acquittals in the mid-1990s. She explains that having moved too far in one direction, the judiciary felt uncomfortable with this position and moved back to a more lenient one.

Arguably, Dembour’s approach is a cop-out: it doesn’t provide any clear direction forward. And yet, that is likely the point. FGM/C is a highly complex practice – if there were a definitive way forward, wouldn’t we have figured it out already?

Perhaps my way forward is to sit between the two poles and move towards either one depending on the context. For example, as a white woman from a non-practising community living in the UK, I am clearly an outsider, and I find it difficult to condemn those living elsewhere for practising FGM/C. Our world views, knowledge and contexts are so very different that I have no legitimacy dictating what they can and cannot do to their own bodies. In this situation, I find myself moving towards cultural relativism. However, for those women and girls at risk of or affected by FGM/C in the UK, I lean towards universalism and will advocate against the practice.

I’d like to open this discussion for others to contribute to. FGM/C is a highly sensitive and controversial issue, with multiple viewpoints on it. Do you agree with my reasoning here? And do you have experiences of similar situations that I, and others facing this dilemma, might be able to learn from?