Despite limitations, the use of randomised control trials has led to a paradigm shift in development policy evaluation.

If Rip Van Winkle were an academic economist and woke up from a two-decade-long sleep this week, he would be baffled by the news of this year's Nobel Prize in Economics, awarded to Abhijit Banerjee, Esther Duflo, and Michael Kremer for pioneering the use of randomised control trials (RCTs) in development economics. Back in the late 1990s, this was not a well-known concept, let alone a widely practised research method. Moreover, research in economics was still largely theoretical, although the shift in a more empirical direction had already started.

It is true that randomised experiments are a well-known concept in medical trials, and the idea itself goes back to the statistician Ronald Fisher in the 1930s. RCTs rest on the following insight: select two similar groups, randomly assign one to receive the treatment (a drug, or a policy) being tested, and then compare the outcome of this group (called the treatment group) with that of the other (called the control group). If the difference is statistically significant, it is attributed to the treatment. Using this method in economics has altered our views about which policies work and which do not.

Michael Kremer, who was visiting Kenya where he had spent a year in the late 1980s teaching school students as part of a small educational NGO, came up with the idea of applying RCTs in the development context somewhat accidentally, or shall we say, randomly. When deciding whether a rural school should prioritise building more classrooms or giving out new textbooks and uniforms, he suggested that the non-governmental organisation (NGO) phase in these interventions randomly to study their impact. Over the next two decades, together with Mr. Banerjee, Ms.
Duflo and many of their colleagues in the academic and policy world, this method has become one of the main tools of empirical work in development economics and related fields. It has also led to a paradigm shift in development policy evaluation: the World Bank, many governments, and large NGOs now insist on randomised control methods wherever feasible.

Real-life worth

The key innovation here is not coming up with the idea of randomisation but applying it in real life, with programmes and interventions that directly affect the lives of the poor. Going from testing drugs to rolling out government and NGO programmes on a randomised basis across villages, households, and organisations takes quite a leap of imagination. The reason the method caught on in academic research in economics is that, with greater computing power and large data-sets available, empirical work was in the ascendant; and yet, given the nature of the data collected in usual surveys, it is hard to establish the effect of any programme on any outcome in a rigorous way. The key worry is that the programme or intervention may have been tried out in areas where something else was going on, so we could be picking up a spurious correlation. RCTs solve this problem by placing the programme in a randomised way.

RCTs, however, can mostly be applied to study problems at the micro-level, where an individual programme can be implemented in a randomised way that allows for a statistically satisfactory evaluation of its impact. To the extent RCTs are not feasible, which is often the case with larger-scale macro-level questions, one has to rely on other, more roundabout methods to overcome the problem of causal inference.
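The logic of randomised evaluation described above can be sketched in a short simulation. This is purely illustrative: the population size, the baseline outcome distribution, and the "true" treatment effect of 5 points are hypothetical numbers chosen for the example, not drawn from any real study.

```python
import random
import statistics

def run_rct(population, effect, seed=0):
    """Simulate a simple RCT: randomly assign units to treatment or
    control, apply a hypothetical treatment effect, and compare means."""
    rng = random.Random(seed)
    # Baseline outcomes (say, test scores) drawn from the same distribution,
    # so the two groups are statistically similar before the intervention.
    baseline = [rng.gauss(50, 10) for _ in range(population)]
    # Random assignment: each unit has an equal chance of being treated,
    # independent of anything else going on in its area.
    assignment = [rng.random() < 0.5 for _ in range(population)]
    outcomes = [y + effect if treated else y
                for y, treated in zip(baseline, assignment)]
    treat = [y for y, t in zip(outcomes, assignment) if t]
    control = [y for y, t in zip(outcomes, assignment) if not t]
    # Because assignment was random, the difference in group means is an
    # unbiased estimate of the treatment effect, free of the spurious
    # correlations that plague ordinary survey data.
    return statistics.mean(treat) - statistics.mean(control)

estimate = run_rct(population=10_000, effect=5.0)
print(round(estimate, 1))  # close to the assumed true effect of 5.0
```

With a large enough sample, the estimated difference sits close to the assumed effect; with the effect set to zero, it hovers near zero, which is exactly the comparison against which statistical significance is judged.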
This immediately points to both the strengths and weaknesses of RCTs: when feasible, they are a great tool, but for many questions of great interest in development economics, such as broad macro-level issues or the longer-run aspects of development and institutional change, they are not feasible. As with any new method that attracts young researchers and research funding, there are grounds to worry that it will push out important research that uses other methods, including theory and empirical work that does not use RCTs.

Looking beyond evaluation

However, the tension is not always so stark. A new generation of RCTs is going beyond programme evaluation and asking how individuals react to changes in prices, contracts, and new information in the context of specific markets such as land or credit. Here the experiments are often informed by theory. For example, a recent RCT offered different terms of sharecropping contracts in a randomised way to find out the effect of higher crop-shares on agricultural productivity in the context of tenancy. The evidence suggested significant productivity gains, confirming the importance of incentives. Interestingly, the study confirms the findings of earlier studies, like the one on ‘Operation Barga’, the tenancy reform programme carried out in West Bengal by the Left Front government in the late 1970s and early 1980s, which shifted the crop-share up. In that study, which happens to be by Mr. Banerjee, me and Paul Gertler in the pre-RCT days, there was no way of fully controlling for all the other factors that had changed contemporaneously, such as the empowering of panchayats. This is an example of how RCTs can potentially be applied to a broader set of issues, going beyond programme evaluation.

Moreover, one key limitation of RCTs is that they can establish what works, but cannot say much about what could have worked better, or whether it would work in a very different environment.
This is a general problem of empirical work, not unique to RCTs, and once again, the solution is not to abandon RCTs but to see how they can be combined with theoretical models to simulate the effect of alternative policies, or what could happen in a very different environment. As with any new method, while there is always some displacement of existing methods, there are also potential synergies that harness the strengths of both.

If we step outside the academic world, there is a whole set of issues regarding the use of RCTs and how they can form the basis of evidence-based policy. There is the concern that funding by large donor or private philanthropic organisations may influence the policy agenda in certain directions. Also, imposing a test of purity whereby the only form of evidence that counts is that generated by RCTs may lead us to ignore many other forms of useful evidence, which is potentially dangerous. Finally, there is the critique that, given the political environment within which policymaking and programme implementation happen, it is unrealistic to expect anything more than marginal gains from improving the design of anti-poverty programmes.

Where it scores

These concerns cannot be dismissed. However, where the RCT revolution deserves credit, even in the context of these criticisms, is in creating a consensus that evidence is important in policy, which pushes us to be aware of both what we know and what we do not know, and to quantify and compare the costs and benefits of alternative programmes.