Saturday, 30 January 2010

Snow White and the 7 Dwarfs


I've tried to highlight practical ideas which haven't already had much coverage (although you might still have come across some already, or even thought of them yourself).

All of the proposals stem from a desire to fix the 'broken feedback loop' between aid beneficiaries on one side, and decision-makers, taxpayers and charity-givers on the other.

After the general election, there's a big opportunity for the new International Development Minister to tackle this problem. They're going to need all the help they can get.

The paper isn't meant to be a comprehensive solution to the aid sector's problems. Incremental improvements usually achieve more than grand strategies.

If you have any thoughts - supportive or critical - then please do share them.

Tuesday, 26 January 2010


Chairman: We know that if you put enough money into any scheme you will achieve something, but we are a value for money committee, we are looking at efficiency...

Dr Shafik: ...I think the NAO Report does also say that the DFID programme in Malawi has contributed clearly to poverty reduction...

Chairman: I am not arguing about that. That was precisely why I said what I said. If you spend enough money you are going to achieve something, but what I want to know is how can we as a value for money committee be assured that you are achieving value for money when clearly you are lacking in data about how effective your programmes have been in terms of value for money?

But obtaining better data on aid effectiveness is a two-fold challenge:

The easy part is to demand more evaluations, and that they be of higher quality.

The difficult part is designing interventions which are evaluable in the first place. That is, funding projects which can be subjected to a clear, fair and unambiguous test of whether they have succeeded or failed.
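
To make that concrete, here is a minimal sketch of what a pre-agreed pass/fail test might look like. The outcome measure, threshold and significance level are all hypothetical; the point is only that the criterion is written down before the results arrive.

```python
# A sketch of a pre-specified success test (all numbers hypothetical).
# The bar is fixed before the intervention runs, so 'success' cannot
# be redefined after the results come in.
from math import sqrt
from statistics import mean, stdev

MIN_EFFECT = 5.0  # e.g. at least +5 points on the outcome measure, agreed up front

def project_succeeded(treatment_outcomes, control_outcomes):
    """True only if the 95% confidence lower bound clears the pre-agreed bar."""
    effect = mean(treatment_outcomes) - mean(control_outcomes)
    se = sqrt(stdev(treatment_outcomes) ** 2 / len(treatment_outcomes)
              + stdev(control_outcomes) ** 2 / len(control_outcomes))
    return effect - 1.96 * se >= MIN_EFFECT
```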

How much will aid have to change to become evaluable?

Sunday, 24 January 2010

Here is a startling account of how organisations resist criticism and change.

The following is an excerpt from Burt Perrin's review of DFID evaluations (p.17), which was commissioned by the Independent Advisory Committee on Development Impact.

A key issue regarding the role of committees and DFID managers concerns their authority with respect to commenting upon and approving deliverables, in particular drafts of final reports. Frequently, management and/or steering groups will provide very detailed comments, sometimes using Track Changes, to ‘suggest’ changes or even rewriting parts of drafts, deleting some critical comments and replacing these with other more positive statements. These ‘comments’ are sometimes strident and very directive.

In my view, such ‘feedback’ provided to almost all the evaluations reviewed represents a clear threat to their independence. In most cases, it appears that evaluation teams were able to deal with requests for inappropriate modifications in a responsible fashion. But this can be difficult, where even the expectation of negative reaction to critical comments can lead to self censorship. In my view, there are two studies, the Private Sector Infrastructure evaluation and the Pakistan country programme evaluation, which clearly crossed the line such that their independence was compromised...

In the former case, very strong demands for change to drafts were made, including indicating what the evaluation ‘needs’ to say. Important considerations were omitted from consideration or changed from evaluation questions to assumptions, including questions raised in the terms of reference and the preliminary literature review (whether or not infrastructure support necessarily contributes to poverty reduction, and the value of a facility approach itself).

In the case of Pakistan, some documentation was withheld from the evaluation team on the grounds of confidentiality. The evaluators also were very clearly and strongly told that they just could not say certain things that the Government of Pakistan at the time might find objectionable, out of concern for ‘sensitivity’, even though this included reference to a published article in a UK newspaper (e.g. ‘We cannot allow the sort of judgement in this sentence in what will, effectively, become a public document … The regime and media here will not make the distinction between us and our consultants.’).

For these and other reasons, it was not possible for the evaluation report to speak of the actual reasons for some of the reported findings. There were also separate internal reports that call into question the transparency and integrity of the formal published evaluation report and the management response.

Rotten Controlling Technocrats

William Easterly has co-published a new book of essays on Randomised Controlled Trials (RCTs). RCTs evaluate interventions rigorously, much as medical trials do: participants are randomly assigned to a treatment group and a control group, and their outcomes are compared.
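
As a rough sketch of the core mechanic (the village names and seed below are hypothetical), random assignment is what lets a trial attribute any difference in outcomes to the intervention rather than to how participants were selected:

```python
import random

def assign_arms(units, seed=1):
    """Randomly split units into treatment and control arms.

    Because assignment is random, the two arms differ only by chance,
    so a later difference in outcomes can be credited to the intervention.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

treatment, control = assign_arms(f"village_{i}" for i in range(20))
```

The analysis is then simply a comparison of average outcomes between the two arms, exactly as in a drug trial.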

In a recent talk he gave a forceful explanation of why economists should be 'thinking small' like this when assessing what works:

What we can say now is that this attempt to find the determinants of growth has failed so decisively, so comprehensively, that anyone today who makes any policy recommendation based on a growth regression has zero credibility.

Instead, he argues, RCTs should be used to measure the effectiveness of small-scale interventions. Those shown to work can then be rolled out more widely.

Some critics have argued that a positive RCT result in one context doesn't necessarily mean the intervention will be successful elsewhere.

I think this line of argument will lose its power over time, as more and more trials are conducted. Once hundreds or thousands of trials have been done, people will have the information to adapt interventions to new contexts.
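
For illustration only, here is one simple way a stock of published results could be pooled: an inverse-variance average, in which more precise trials carry more weight. The effect estimates below are invented, and a real synthesis would need to model context and heterogeneity far more carefully; the sketch just shows why a large body of trials is more informative than any single one.

```python
# Fixed-effect (inverse-variance) pooling of trial results.
# All estimates and standard errors below are invented.
trials = [
    (0.12, 0.05),  # (estimated effect, standard error)
    (0.08, 0.03),
    (0.20, 0.10),
]

weights = [1 / se ** 2 for _, se in trials]  # precise trials count for more
pooled = sum(w * effect for (effect, _), w in zip(trials, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect ~ {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
```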

I'm in no doubt that RCTs offer an exciting new way to approach aid effectiveness. But there's one remaining concern, which few commentators have written about.

RCTs try to make development evaluation a scientific discipline. I firmly believe it is wrong to present 'science' as the only route to knowledge, especially when people's lives are at stake.

Let's not forget that 'scientific thinking' was at the heart of many fascist and socialist movements. Science has a self-legitimising power which can be very difficult to keep in check.

How will we cope with RCT results that suggest unpalatable or unethical interventions?

What will we do if a community rejects a project that we know will be good for them?