The announcement of this year’s Nobel Prize in economics has highlighted divisions within the development economics community, particularly around the efficacy of Randomized Controlled Trials (RCTs) as a tool for making social interventions. In this write-up, drawing on my background in economics, political science, social science, medical sociology and research ethics, I discuss the pros and cons of experimental approaches in economics and suggest that, rather than seeing routes to delivering social change as a binary choice between macro and micro approaches, social scientists should recognize the inherent complexity of social change and adopt realist approaches in assessing how best to make social interventions.
Research is a systematic investigation designed to produce generalizable knowledge; its results are usually applied to other populations, published and disseminated. Research participants are living individuals about whom a researcher obtains (1) data through intervention or interaction, and (2) identifiable private information. The evolution of research ethics has produced codes, guidelines and regulations that set out the rules of the road for research involving human participants.
What is research ethics? Research ethics involves the application of fundamental ethical principles to a variety of topics involving scientific research. These include the design and implementation of research involving human experimentation, animal experimentation, various aspects of academic scandals including scientific misconduct (such as fraud, fabrication of data and plagiarism), the regulation of research, and so on. Research ethics is most developed as a concept in medical research, where the key agreement is the 1964 Declaration of Helsinki. The earlier Nuremberg Code still contains many important principles. Research in the social sciences presents a different set of issues from those in medical research.
It is essential that fundamental ethical principles be included in the design and implementation of research involving human participants. Ethical research principles are considered universal, transcending geographic, cultural, economic, legal and political boundaries.
Randomization has its benefits. It is strong when interventions represent “easy fixes”, such as the cost-effective measures to achieve social progress identified by the Copenhagen Consensus. Kremer’s classic study of the effects of deworming on school attendance, while more complex and not free of criticism, is a prime example of just this type of intervention. In these cases, the desired outcome (less malnutrition or higher immunization) equals, or is close to, the outputs produced (the amount of micronutrients processed or the number of vaccinations performed).
However, experimental methods are limited, if not unsuitable, when the desired outcome is connected to the activities performed in complex and indirect ways. For example, following Amartya Sen’s understanding of poverty as multiple deprivations, it is not enough to assess interventions such as microfinance against changes in household income, or to test against several simple factors collapsed into an index.
Much as research impact in general relies not simply on publication but on a wider range of communication activities, understanding these kinds of multifactorial changes requires detailed accounts of whether interventions enable new social relations, empowerment or self-worth. This requires contextual knowledge, drawn from multiple data sources, including qualitative information.
In the absence of this information, we won’t know whether things have really changed for the better. In other words, we may have statistically significant results on variables that are easy to measure, such as more income or a shift in spending from “temptation goods” to more useful expenses, but little clue as to whether this has improved anybody’s lives. These shortcomings become even more pronounced when RCTs are applied to assess transformational effects, such as behavior change.
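To make this concern concrete, here is a minimal simulation sketch. All numbers are hypothetical and drawn from no actual evaluation: with a large enough sample, a trivially small income effect becomes statistically significant while saying nothing about whether lives improved.

```python
# A minimal sketch with hypothetical numbers: a large trial detects a
# tiny average income effect as "significant", yet the result is silent
# on empowerment, social relations, or self-worth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n = 50_000                                # households per arm (illustrative)
control = rng.normal(100.0, 30.0, n)      # baseline monthly income, arbitrary units
treated = rng.normal(101.0, 30.0, n)      # a +1% average "effect" of the intervention

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"p-value: {p_value:.2e}")          # comfortably below 0.05
print(f"difference in means: {treated.mean() - control.mean():.2f} units (about 1%)")
```

The test is easy to pass precisely because the variable is easy to measure; whether a one per cent income shift changed anyone’s life is a question the p-value cannot answer.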
To apply randomization to a problem, we implicitly assume, as in evidence-based medicine, that the intervention (drug) has the same effect on each individual (patient). For social interventions, however, this is simply not always the case. For instance, in microfinance, the default expectation is that loan recipients will set up a successful business. This impression is fuelled by some of the industry’s figureheads’ naïve projections that we, and especially the poor, are all entrepreneurs – a forceful effort of positive thinking. A more realistic assessment of microfinance would ask whether it makes entrepreneurs of people who would otherwise have been deprived of that opportunity, subject to minimizing the social risk for those who fail and might slip into debt cycles as a result. Taken together, these issues point to the conclusion that for more “demanding” interventions, more “contextualized” causal chains of impact and kinds of evidence need to be taken into account.
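The same point can be put numerically. In the hedged sketch below, the shares and effect sizes are illustrative assumptions, not empirical findings: a positive average treatment effect can coexist with a majority of recipients being made worse off.

```python
# Heterogeneous treatment effects, with hypothetical numbers: suppose 30%
# of borrowers thrive (+50 income units) while 70% slip into mild debt
# (-10 units). The average looks good; the distribution does not.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000
thrives = rng.random(n) < 0.30                # illustrative 30% success share
effect = np.where(thrives, 50.0, -10.0)       # illustrative individual effects

print(f"average treatment effect: {effect.mean():+.1f} units")  # roughly +8
print(f"share made worse off:     {(effect < 0).mean():.0%}")   # roughly 70%
```

A trial reporting only the average would call this intervention a success; a realist reading would ask who gained, who lost, and why.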
Thinking big and small: In line with this call for embracing complexity, social impact analysts and social policy scholars are increasingly moving away from impact as a “rational, ordered and linear process”, and from the input-output-impact model. Notably, evidence-based medicine, the field that inspired much RCT-based work in economics, has begun to take a more nuanced approach to assessing complex health interventions, often using realist reviews that embrace contextual complexity, rather than traditional systematic reviews or meta-analyses of (randomized) data.
The focus on small improvements and neat designs may indeed have pushed us too far (back) into the simplistic neoliberal world that has been criticized as simply “bad economics”. But we should not fall prey to the illusion that power, politics and irrationality can be built into our models without limit and without sacrificing analytic value.
The heterodox community’s point that we need to think big, not small, when trying to fix broken systems is well taken. It echoes the voices demanding structural reforms of tax regimes, social security provision, or wealth distribution that maintain structures of inequality and cultural dominance. However, the historic limitations of grand social designs may themselves have led to the current part-replacement of reforms by more experimental approaches, such as mission-oriented innovation. The need for reforms also does not make individual small-scale contributions redundant.
Want change – get organized: Past experience has taught us that organizations, be it in renewable energy or in social care, are key actors of change. However, the evidence organizations act on is limited: it is either not gathered at all or, if gathered, not acted on. News of a charity that stopped its program for reassessment after a negative evaluation (by an RCT) is still “a big deal”. We need many more such instances, where impact analysis is used for organizational learning.
There are ways of combining organizational and structural data, and complex thinking with analytical precision. Configurational approaches, for example, have proved effective in showing how race, gender, family background and educational achievement, if analyzed in combination rather than in isolation, matter for social inequality. Unfortunately, we rarely find them in program evaluation. And while “randomistas” have conducted thousands of studies, as we have seen, it would require millions of studies to improve the practice of organizations that aim to contribute to the “common good”.
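For readers unfamiliar with configurational analysis, a minimal sketch follows. The data and column names are purely illustrative assumptions, not taken from any real study: the idea is to examine outcomes by combinations of conditions rather than by the marginal effect of each variable alone.

```python
# A QCA-style configurational reading of hypothetical data: which
# combinations of conditions consistently produce the outcome?
import pandas as pd

# Eight illustrative cases with binary indicators (all values invented).
df = pd.DataFrame({
    "minority":      [1, 1, 0, 0, 1, 0, 1, 0],
    "low_income":    [1, 0, 1, 0, 1, 0, 0, 1],
    "degree":        [0, 1, 0, 1, 1, 1, 0, 0],
    "disadvantaged": [1, 0, 1, 0, 0, 0, 1, 1],  # outcome of interest
})

# Outcome rate and case count within each configuration of conditions,
# rather than a regression coefficient per isolated variable.
configs = (
    df.groupby(["minority", "low_income", "degree"])["disadvantaged"]
      .agg(["mean", "size"])
)
print(configs)
```

The analytic interest lies in configurations, e.g. minority and low-income without a degree, that consistently produce disadvantage, which is exactly the kind of combined effect that variable-by-variable analysis obscures.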
The repertoire we have as social scientists is broad and powerful. The current heated debates are a welcome occasion to make that repertoire more relevant to solving today’s grand challenges. This should be guided by the questions we want to answer, not by predefined toolkits or epistemological tradition.
On a strategic level, we must connect the small scale with the broad picture: (1) Equip organizations with the means to analyze the effects they create, to promote continuous improvement; (2) Conduct larger-scale model studies following the example of smaller-scale interventions (whether organizational practices or policies) that signal potential for high impact; and (3) Build or identify a portfolio of interventions to assess for broader and combined effects on systems.
Methodologically, we need more ‘realists’, who improve our understanding of when we need which kind of evidence. They may be randomistas at times, and embrace causal complexity at others. The solution lies not in saying the world needs more of the one or the other, but in being able to choose or bring both together in meaningful ways. Because if we are serious about changing the world, we need to get our evidence right.
The writer is Former Head, Department of Medical Sociology,
Institute of Epidemiology, Disease Control & Research (IEDCR)
Dhaka, Bangladesh
E-mail: [email protected]