For me, the most exciting message coming out of the debates around complexity is that development agencies should focus less on planning and delivering pre-determined outputs, and more on understanding the system that they work in and the part that they could play in changing it. They should throw away detailed implementation plans, embrace iterative management, and merrily accept that they are just one of many factors contributing to change. This blog suggests that this way of working is inconsistent with the use of contractual agreements to deliver aid.
Most large donors rely heavily on contractors to implement their projects. These contractors typically commit to conducting activities and producing an agreed number of outputs, in exchange for money. This agreement is formalised in logical frameworks, contract agreements, or just scrawled on the back of an envelope. The donor then monitors and manages the programme against these activities and outputs. For example, DFID conducts annual reviews of each project and scores each output from A+ to C. (With B, inexplicably, being a fail.) This blog uses 'contractor' to refer to any type of organisation managed through this relationship – whether NGO, government, or private sector.
This management style requires outputs to be clearly defined and fully attributable to the programme. It encourages implementers to focus on meeting these quantitative targets rather than broader development outcomes. It is difficult to change the targets and outputs, since the donor will inevitably suspect the contractor of trying to get out of what they promised.
This can work well when the project is to deliver pre-specified services. However, it is generally incompatible with any programme that seeks to influence complex systems. Quantitative results may not be easily specifiable – and any quantitative targets could set perverse incentives. For example, a requirement to reduce the incidence of malaria by 20% can incentivise a health provider to distribute bednets rather than strengthen health systems. Activities and outputs are likely to change as the programme learns more about the system it works in and experiments with new approaches, making it very difficult to set targets up-front. And information asymmetries between the implementer and the donor make it very difficult for the donor to manage this relationship effectively. How do you know if the implementer is revising targets based on a greater understanding of their potential to leverage systemic agents of change, or if they're just making excuses for bad performance?
Could we deal with this by improving our monitoring and evaluation systems? This is what I spend my life doing, so I'd love to say that the answer was yes. For example, we might hold contractors accountable to outcomes rather than outputs. In some cases (such as payment by results pilots) this seems to work – but I think it will remain the exception rather than the rule. Outcomes are typically only partially attributable to programme activities, if at all. The donor will never really know if missed targets reflect poor implementation or other factors, and so will be unable to hold contractors accountable for them. Perhaps more significantly, a good monitoring system is fundamentally dependent on organisational culture. Staff must want the data they collect to be a fair reflection of their performance, and must have incentives to report on failures as well as successes. They must want to know when things are not going well, and how they could be improved. In a contractor-client relationship, where present and future jobs are at stake, that's simply not going to happen.
So what can be done? There are some simple steps that could be taken. Donor staff could (and should) spend more time managing projects, and should develop a deeper understanding of the context, which would allow them to make better judgement calls about the implementer's responsibility for success or failure. Perhaps more significantly, donors could treat evaluation findings as reflecting on the implementer – not (as is currently the case) the project design.
However, I don't think that's enough. Ultimately, working in complex environments isn't susceptible to a simple contractual agreement. Transforming social and economic systems isn't the same as contracting out dustbin collection, or even direct delivery of healthcare services. Instead of managing contractors through activity- or output-based contracts, donors need to select partners who share their values, and offer longer-term support to help them achieve their goals. These partners could be NGOs, governments, or civil society – or donors could deliver directly themselves. They could even come from the private sector – though they are unlikely to include profit-driven aid contractors. By paying more attention to the organisational culture and motives of implementing agencies, donors might move a step closer to some of their loftier ambitions.
P.S. Regular AidLeap readers (hi Mum!) might notice that this blog is a mish-mash of two previous blogs – the Risks of Complexity and Why the Private Sector Shouldn't Deliver Aid. Both sparked fascinating discussions which informed this post, so please continue that in the comments.
2 thoughts on “Can contractual agreements deliver change in complex systems?”
Great blog – really interesting, and I completely agree that dealing effectively with complexity needs a different type of relationship with partners (whether contracted or through grants) – one where there is mutual accountability, space for flexibility and rapid adaptation based on feedback loops, and safety to be open about failure and learning.
PS. One minor point – I am not sure I agree that a score of 'B' in a DFID annual review is a fail. The scoring system (which is not perfect) classifies 'B' as "moderately not meeting expectation" – i.e. not where we had hoped to be, and therefore warranting attention to get it on track (or revise expectations). Certainly not (in my view) a fail, but I'd be interested to hear if that is the message our partners get?
Thanks for the comment. I’m not deeply involved in a DFID project so I’m sure your partners get a fuller picture! But I was under the impression that any project scoring all Bs was quite a big deal, and would merit special measures to improve their performance? It certainly doesn’t fit with my intuitive understanding, whereby an ‘A’ means that you’ve done exceptionally well…