Randomised Controlled Trials – The Current Gold Standard for Impact Evaluation
By Zoe Lawson
Picture credit: 955169 from Pixabay
If you’ve heard of Randomised Controlled Trials (RCTs) at all, you probably associate them with the clinical assessment of new medicines. But what exactly are they and how are they being applied to measure the effectiveness of new services and interventions in developing countries?
RCTs are a form of scientific study designed to reduce experimental bias and so produce the most accurate and reliable results. Since 2000, the use of RCTs to answer questions in international development has grown rapidly. Because their statistical rigour has made them widely regarded as the gold standard for clinical trials, academics in the social sciences were inspired to apply the RCT method to their own research questions.
Random: The Key Criterion
The defining feature of an RCT is its ‘randomness’, which is vital for eliminating bias in an experiment. Bias is the intentional or unintentional influence that researchers may have on a study, and it prejudices research findings. Without the elimination of bias, we can’t be sure whether the outcome of an experiment is a result of the intervention or of something else. Even when we try to be as objective as possible in designing our study, our subconscious biases shape our choices. Random selection is therefore the only true way to remove these influences.
In an international development context, an RCT is an experimental form of impact evaluation in which the population receiving the programme is chosen at random from the eligible population, and a control group is also chosen at random from the same eligible population. The control group mimics the counterfactual – defined as what would have happened to the same people at the same time, had the programme not been implemented. By definition, the counterfactual is impossible to observe; it requires a parallel universe! Thus, in the absence of alternate realities, RCTs are arguably the best way we have to determine whether a cause-effect relationship exists between an intervention and an outcome.
In general terms, these evaluations ask whether a particular programme (the ‘intervention’) really made a difference. So, did the training help small businesses increase their monthly sales, or would sales have risen anyway?
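The core mechanics – randomly splitting an eligible population into treatment and control groups, then comparing average outcomes – can be sketched in a few lines of Python. Everything here is invented for illustration: the 200 businesses, the baseline sales figures, and the 30-unit programme effect are made-up numbers, not data from any real evaluation.

```python
import random
import statistics

random.seed(42)  # fix the seed so the illustration is reproducible

# Hypothetical eligible population: 200 small businesses identified for a
# sales-training programme (all names and numbers invented for illustration).
eligible = [f"business_{i}" for i in range(200)]

# Random assignment: shuffle the eligible population, then split it in half.
random.shuffle(eligible)
treatment = eligible[:100]   # receive the training
control = eligible[100:]     # serve as the comparison (control) group

# After the programme, compare average monthly sales between the two groups.
# Outcomes are simulated: a common baseline plus noise, with a small boost
# for the treated group standing in for the programme's true effect.
sales = {b: random.gauss(500, 50) for b in eligible}
for b in treatment:
    sales[b] += 30  # simulated programme effect

effect_estimate = (statistics.mean(sales[b] for b in treatment)
                   - statistics.mean(sales[b] for b in control))
print(f"Estimated effect on monthly sales: {effect_estimate:.1f}")
```

Because assignment is random, the difference in group averages recovers something close to the simulated 30-unit effect, with the gap from 30 reflecting chance variation rather than bias.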
How is an RCT conducted?
An RCT follows a stepwise process, as outlined here:
Step 1 involves specifying what is being evaluated and why, together with the outcomes and impacts of interest.
At Step 2, the eligible population is identified, together with the unit of assignment for randomisation purposes. The unit of assignment refers to whether random selection for the intervention or control groups applies to individuals themselves or groups of individuals, e.g. villages or schools.
Step 3 covers the randomisation process itself.
There are different ways of doing this according to the characteristics of the program being studied. For example, a phase-in RCT design is used when budget constraints prevent the full-scale roll-out of the program, so who receives the service first is simply selected by lottery. An encouragement RCT design randomly selects individuals to receive an advertisement to prompt them to use the service. The control group also has access to the service but is not prompted to use it. This type of study design can be useful where it would be unethical to deny control group users access to an intervention, for example, clean drinking water.
Steps 4 to 6 cover collection of the data to determine the effectiveness and reliability of the intervention.
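Steps 2 and 3 above can also be sketched in code for the case where the unit of assignment is a group rather than an individual. In this hypothetical example (village and household names are invented), whole villages are randomised together, so every household in a village shares its village's assignment:

```python
import random

random.seed(7)  # fixed seed so the assignment is reproducible

# Hypothetical unit of assignment: villages, not individual households.
# All names are invented for illustration.
villages = {
    "Village_A": ["hh_1", "hh_2", "hh_3"],
    "Village_B": ["hh_4", "hh_5"],
    "Village_C": ["hh_6", "hh_7", "hh_8", "hh_9"],
    "Village_D": ["hh_10", "hh_11"],
}

# Randomise at the village level: shuffle the village names and assign the
# first half to treatment. Every household inherits its village's status,
# which avoids treated and untreated neighbours living side by side.
names = list(villages)
random.shuffle(names)
treatment_villages = set(names[: len(names) // 2])

assignment = {
    household: ("treatment" if village in treatment_villages else "control")
    for village, households in villages.items()
    for household in households
}
print(assignment)
```

Randomising at the group level like this is common when an intervention inevitably spills over to neighbours, though it usually requires more participants overall to achieve the same statistical power.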
Some issues and limitations of RCTs
RCTs are not suitable for analysing the effectiveness of all types of interventions, and there are several practical and ethical limitations to their use. Even the best-designed RCT can encounter issues that negatively affect its results. A significant constraint is sample size – RCTs need a large sample of a population to determine the effects of a programme with sufficient precision or ‘power’. If the sample size is not large enough, then, statistically, we can’t reliably say whether the results of the study were due to the intervention or simply due to chance.
Take-up rates of a programme can be lower than anticipated, which reduces the statistical power of the study. Non-compliance by programme participants can also be a major issue. Think about a microfinance programme that opens branches in randomly selected treatment areas, but not in control areas. People living in a control area may simply travel to a branch in a treatment area to use it. In this case, the control group can no longer serve as a true counterfactual, and the integrity of the randomisation is compromised. Attrition is also a risk – this is where part of the sample is no longer available for follow-up, perhaps because they have moved away. While these issues can never be totally eliminated, they can be minimised and their effects controlled for.
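The dilution caused by non-compliance can be seen in a toy simulation (all numbers invented): if some control-area residents cross over and use a treatment-area branch anyway, the measured gap between the groups shrinks below the true effect of using the service.

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is reproducible

TRUE_EFFECT = 30  # invented effect of actually using the service

def simulate_gap(crossover_rate):
    """Average treatment-control outcome gap when a share of the control
    group crosses over and uses the service anyway (toy model)."""
    treatment = [random.gauss(500, 50) + TRUE_EFFECT for _ in range(1000)]
    control = [
        random.gauss(500, 50)
        + (TRUE_EFFECT if random.random() < crossover_rate else 0)
        for _ in range(1000)
    ]
    return statistics.mean(treatment) - statistics.mean(control)

print(f"No crossover:  gap = {simulate_gap(0.0):.1f}")
print(f"30% crossover: gap = {simulate_gap(0.3):.1f}")
```

With no crossover the gap sits near the true 30-unit effect; with 30% of the control group using the service anyway, it shrinks towards 21. The comparison of groups as randomised remains unbiased for the effect of *offering* the programme, but it understates the effect of actually using it.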
For now, RCTs remain arguably the best way of conducting impact assessments to determine whether an intervention has really worked. If you are interested in finding out how Grow Movement has used this method to evaluate the effectiveness of business mentoring for entrepreneurs in Africa, stay tuned for our upcoming news and press releases!
21 May 2019
Zoe Lawson is a social entrepreneur with an interest in translating innovations for international development. She has a PhD in biological chemistry and an MBA, and has been involved with Grow Movement since 2015 as a volunteer consultant and supporter.