Dr Noémi RADVÁNYI in conversation with Dr Jens BEHNKE
Dr Jens Behnke answers questions from Dr Noémi Radványi, from Hungary, for www.homeopatika.blog.hu/ about the evidence for homeopathy, research methodology and the interpretation of data from clinical trials.
Dr N Radványi: There are many trials that have measured the effectiveness of homeopathy. Some of them observe no difference between homeopathy and placebo, while many of them find homeopathy to be an effective therapy. Which research is valid? How is it possible to come to such different conclusions?
Different Studies For Different Questions
Dr Behnke: First of all, you have to distinguish between ‘effectiveness’ and ‘efficacy’. The effectiveness of an intervention is measured in outcome studies under real-life circumstances. For homeopathy, this means that you try to evaluate the effects of potentised remedies plus the anamnesis, the expectations of the patient, probable therapeutic suggestions and other context factors. Efficacy, by contrast, means the pure specific effect of potentised remedies without all these additional variables. Therefore, efficacy is measured in randomised, placebo-controlled, double-blind trials [RCTs], where all these possible confounding factors are ruled out by the study design.
The Efficacy Paradox
Outcome studies provide answers to the question ‘What helps the patient in clinical practice?’, whereas RCTs give information about the causal relationship between a therapeutic effect and a substance. That’s not the same, because some interventions might exhibit large specific effects yet prove less useful in routine care than other treatments that show smaller specific effects in RCTs. This difference forms the so-called efficacy paradox. Because of this paradox, it has proven useful to evaluate medical interventions in different types of studies. A multi-perspective view reveals more features of an intervention, enabling more detailed answers to heterogeneous questions.
Results From Outcome Studies
Outcome studies of homeopathy document more or less uniform results: in patients undergoing homeopathic treatment in routine care, relevant clinical improvements are observed. Compared with conventional care, the outcomes are often similar, or even better, but with fewer adverse effects and, in the majority of all health-economic studies, lower costs. Long-term results of a cohort study with 3,981 participants document that even patients who didn’t get satisfying results from conventional treatments benefit from homeopathy.
Results From Randomised Controlled Trials [RCTs]
When it comes to RCTs, the results are more heterogeneous: some trials demonstrate the efficacy of the tested potentised remedies and some don’t. Many of the latter try to evaluate the effect of one single substance in one disease, as in conventional medicine. This approach is not really appropriate for homeopathy, which is a form of individualised medicine. Nevertheless, 44 per cent of all RCTs carried out render positive results, 5 per cent negative and 47 per cent are inconclusive. These numbers are nearly the same as in conventional medicine.
Dr N Radványi: Homeopathy is a personalised, individual therapy. Is it appropriate to measure the effectiveness of homeopathy with quantitative methods?
Homeopathy In Evidence-Based Medicine
Dr Behnke: Of course it’s possible to evaluate the effectiveness of homeopathy by means of clinical trials. When you perform observational studies under real-life circumstances, there is no problem at all. We have consistent data from quite a lot of trials observing the results already mentioned above for different groups of diseases, e.g., musculoskeletal disorders, mental disorders, or upper respiratory tract infections. When it comes to RCTs, you have to consider important caveats. Genuine homeopathy needs individualised prescriptions; otherwise, you may get bad results. But this problem is solvable: randomise the patients to an experimental and a control group, take the homeopathic anamnesis and choose remedies. One group gets the remedy, the other one placebo.
Effect Size In RCTs
In some RCTs of homeopathy we see only small effects, even if they employ individualised remedy selection. This may be because the choice of a suitable remedy is sometimes not easy for homeopathic doctors: you may need a second or a third try to get satisfying results. In routine care and the corresponding outcome studies, that’s not a big problem. But if, in an RCT, you don’t know whether your patient is on verum or on placebo, you are unable to decide whether you have to change your prescription. However, the latest meta-analysis of RCTs of individualised homeopathy observes significant results above placebo at all levels of methodological quality, despite the fact that many included trials are not, from a methodological point of view, perfectly designed to measure best practice in homeopathy.
Dr N Radványi: Is it possible to manipulate the results of research? If yes, how?
Summary Of High Level Evidence
Dr Behnke: Meta-analyses of RCTs comprise the highest degree of evidence in the framework of evidence-based medicine. In 4 out of 5 global meta-analyses of homeopathy [all indications, all remedies] published to date, potentised medicines tend to reveal specific efficacy in excess of placebo. According to an in-depth statistical analysis, the overall outcome is only negative if a large amount [90 to 95 per cent] of the available data is excluded and/or dubious statistical methods are employed.
Shang 2005
The only existing negative publication in this field, by Shang et al [2005], relies on just 8 out of 110 included trials to draw the conclusion that the effects of homeopathy are completely due to placebo. The authors identified 21 trials of higher methodological quality, but they restricted their analysis to ‘larger trials’ without any scientific justification. Statistical analysis of Shang’s data revealed that, in this meta-analysis too, the 21 high-quality studies showed potentised remedies performing significantly better than placebo [Source: https://www.britishhomeopathic.org/evidence/meta-analysis-shang-et-al-2005/].
The Australian Report [NHMRC]
Another example of putative manipulation of this kind is the report on homeopathy produced by the Australian National Health and Medical Research Council [NHMRC]. Mainstream media often refer to this work to substantiate the claim that homeopathy isn’t any better than placebo. The NHMRC used a method similar to Shang’s to cut off most of the data.
The scientists decided that, for trials to be ‘reliable’, they had to have at least 150 participants. This is despite the fact that the NHMRC itself routinely conducts studies with fewer than 150 participants, and that existing Cochrane reviews consider 10 participants in each study arm [= 20 in total] to suffice. This arbitrary criterion resulted in 171 trials being classified as ‘unreliable’, leaving only 5 trials considered to be ‘reliable’. As the NHMRC assessed all 5 of these trials as negative, the reviewers could conclude that there was no ‘reliable’ evidence for homeopathy performing better than placebo. The NHMRC report is being investigated in a formal governmental procedure because of possible deception of the public.
Plausibility Bias
Critics of homeopathy often assume that an effect of potentised remedies contradicts certain physical theories, and they interpret the empirical data from clinical trials according to this prejudice. As Kleijnen et al put it: ‘The amount of evidence even among the best studies came as a surprise to us. Based on this evidence we would be ready to accept that homeopathy can be efficacious, if only the mechanism of action were more plausible’ [Source: Kleijnen et al: Clinical Trials of Homeopathy: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1668980/pdf/bmj00112-0022.pdf].
Kleijnen: Clinical Trials Of Homeopathy
This reasoning about data is influenced by plausibility bias. It directly opposes the fundamental principles of the scientific method. If you encounter a phenomenon you can’t explain sufficiently by means of established theories, your theories have to be modified, or new ones have to be found. Denying the existence of something you can observe in methodologically sound experiments, merely because you lack a model for its mode of action, is dogmatism.
This type of ideologically clouded science is especially not in line with the principles of evidence-based medicine.
- This article is ©Dr Noémi Radványi & Dr Jens Behnke. It is republished with grateful thanks to them.