Health Studies 301 Complementary and Alternative Therapies

Study Guide: Unit 2

Evaluating the Effectiveness of Complementary and Alternative Therapies

The validity of all kinds of therapies is evaluated in labs like this one, the Heidelberger Life-Science Lab.

When evaluating information as to the effectiveness of medical therapies, it is important to critically examine the validity of the evidence. In particular, evidence from randomized double-blind clinical trials is more likely to be accurate than other types of evidence. This unit will focus on how to make an objective appraisal of the effectiveness of therapies used in complementary and alternative medicine (CAM).


Learning Objectives

Upon completion of Unit 2, you should be able to

  • discuss observation as a methodology for providing evidence as to the effectiveness of a medical treatment.
  • define placebo effect and discuss its role in CAM therapies.
  • describe research methodology used to investigate the effectiveness of CAM therapies.
  • identify reliable information sources on CAM therapies.
  • discuss the role of the media and the internet in the promotion of CAM therapies.

Learning Activities

Study Questions

As you complete the activities for Unit 2, keep the following questions in mind. You may want to use the Personal Learning Space wiki on the course home page and answer these questions as a way of keeping notes to focus your learning.

  1. What is meant by having a “scientific” understanding of health and medicine versus other ways of knowing?
  2. What is the placebo response? Investigate several ways of understanding it in terms of human physiology—in terms of allopathic mainstream medicine and in terms of complementary and alternative medicine.
  3. Consider the limitations of using clinical research methods designed for testing drugs in the evaluation of the effects of CAM modalities.

Unit 2 Discussion Forum

When you have completed the other activities for this unit, answer at least one of the questions in the Unit 2 Discussion Forum and respond to at least one post by a fellow learner.

The more questions you answer, the better prepared you will be for the final exam!


The Science of Complementary and Alternative Therapies

From a scientific point of view, trying to determine the effectiveness of a CAM therapy is often difficult because of the confusing nature of the available evidence. For most therapies, there has been relatively little medical research and, consequently, no base of independent trials appearing in the medical literature. Instead, assertions about the effectiveness of a CAM therapy are frequently based on claims or testimonials, often made by those with vested interests in the therapy, or on a rationale that is highly implausible according to scientific concepts.

Nevertheless, health practitioners must be careful not to reject potentially valuable forms of therapy, no matter how tempting this may be. It is unlikely that therapies that have been widely practised for thousands of years, as is the case with herbalism and acupuncture, can be of no value. Other therapies, however, may be of dubious value.

Vincent and Furnham (1997) provided the following list of questions for research into CAM therapies:

  1. Does the therapy have a beneficial effect on any individual disease or disorder?
  2. Does the therapy have any advantage over existing therapies in terms of efficacy, safety, patient preference, cost, and availability?
  3. Is the effect of the therapy a placebo, or is there some specific treatment effect?
  4. What mechanisms underlie the therapy’s action?

To establish the scientific basis of a therapy, one should show that the therapy is effective in a controlled trial and, if possible, that symptoms relate to an objective measure. For instance, if a disorder is caused by hypoglycemia or a yeast infection, then the symptoms should be there when blood sugar is low or when yeast level is high, and the symptoms should disappear when the objective measure shows normal levels of blood sugar or yeast.


Methods for Investigating CAM Therapies

There are two main methodologies used to investigate medical therapies, including CAM therapies:

  1. Observational research—simple description of the apparent effectiveness of the therapy in practice.
  2. Controlled clinical trials.

The first method provides most of the available evidence on the effectiveness of CAM therapies. Large numbers of claims, often written by practitioners of CAM therapies, report success. How reliable are such claims?

Observational Research

First we look at observational research, also known as descriptive research. An investigator using observational research methodology looks for clues to gain insight into a situation or phenomenon. The investigator collects data through observation. At its simplest level, the study may describe a single case. This is called a case report. Observation of more than one subject is a case series. Subjects can be studied through interviews and surveys. They are selected according to their experience with the phenomenon being explored (known as a convenience sample), but they cannot be considered typical or representative of the whole population.

In the case of research on CAM therapies, observational research might take the form of asking a practitioner to describe his or her experiences of treating patients with a particular condition. The practitioner might report on experiences with 100 patients. These findings can then be compared to what typically happens when patients receive either no treatment or conventional treatment.

If the group of subjects studied is representative of the target population, obtained in a random fashion, then the results may have reasonably good reliability. However, observational evidence has many pitfalls. One drawback is bias. A practitioner who has been involved in a particular therapy for a number of years will likely be convinced of its effectiveness and will likely have a vested interest in demonstrating this. It is quite easy for bias to enter a study, even if the practitioner is attempting to be honest and accurate. For example, only those patients who showed signs of recovery while being treated might be selected. And if the focus is on those symptoms where positive changes were seen, the degree of improvement can easily be exaggerated.

A second drawback of observational evidence is the distortion of learning from a single experience. Clinicians are well known for anecdotes and stories about individual patients. For example, a patient with severe migraine shows a major reduction in severity and frequency of migraines after a herbalist treats her using a particular herbal preparation. The herbalist then becomes convinced of the effectiveness of the herbal treatment.

Here is another example: A medical doctor may have achieved a high reputation by diagnosing a rare disorder. The diagnosis was first ridiculed by the attending physicians but confirmed by surgery. Subsequently, the physician continued to make the same diagnosis in similar patients, but was wrong in each case, leading to unnecessary interventions (Skrabanek & McCormick, 1990).

Buckman and Sabbagh (1993) outlined a number of factors to consider when evaluating anecdotal information or case reports. These factors include

  1. Natural history of disease. Diseases often follow a natural course. For example, the common cold typically lasts for approximately seven days and then naturally heals. By simply observing the course of the disease after treatment is given, one risks believing that a treatment made a difference when, in fact, the disease was simply running its natural course.
  2. Fluctuations in disease. With many diseases and conditions there can be periods of remission where patients may believe that their CAM therapy cured them. Conditions such as depression, migraine, arthritis, and asthma can worsen and then disappear for no apparent reason.

    Buckman and Sabbagh (1993) describe how the Freireich Experimental Plan takes advantage of this in its interpretation of the observed results of CAM therapies. According to Freireich, if a patient starts to improve, a practitioner will claim that the treatment was effective. If a patient stabilizes, treatment is considered to be working. If a patient gets worse, the initial dose was inadequate and must be increased or changed, and if the patient dies, then he or she came to the healer too late. By this means the practitioner achieves a failure rate of zero!

  3. Premature follow-up. A patient who appears to have been healed with no adverse effects or an improvement in his or her condition may at a later date get worse and even die.
  4. Spontaneous regression. This can occur on rare occasions with progressive or irreversible diseases such as cancer, where the disease disappears.
  5. Misinterpretation of information. Medicine is not a precise science, and patients may misinterpret information that they receive from their conventional practitioners, especially when the practitioners are obliged to share with the patient the degree of uncertainty in a particular medical prognosis. A patient may also misunderstand the practitioner and claim to have a disease that he or she never had. Such a patient may subsequently be “cured” of a disease that was never there.

    This problem often occurs with cancer. Patients sometimes claim “Five years ago the doctor told me I only had six months to live.” The patient then gives credit to a CAM therapy. However, predicting the prognosis of patients with cancer is well known to be far from exact. A doctor will seldom say to a patient with cancer, “You have six months to live.” A far more likely comment might be “Patients in your situation survive on average for six months.” This means, in practice, that many will die within three months while others will still be alive after five years.

  6. Wrong information. The patient may be diagnosed incorrectly by both the CAM practitioner and the conventional practitioner. It could be quite easy for a practitioner to think he or she had healed someone, when the person may never have had the disease. (This differs from misinterpretation of information where the patient was given correct information but misunderstood it; in this case the patient was given incorrect information and understood it.)
  7. Simultaneous conventional therapy. If a patient continues with conventional therapy, it may be impossible to determine whether it is the conventional therapy or the CAM therapy, or a combination of both, that ultimately improves the patient’s health.

Thus, observational (or descriptive) research is useful in exploring the nature of a particular condition or phenomenon. The findings may suggest that a CAM therapy is effective for a particular condition and should be further investigated. But the fact that several individuals showed clinical improvement in symptoms after a CAM treatment is seldom sufficient to demonstrate that the recovery was caused by the action taken and is not simply coincidental or due to confounding factors. Results must be interpreted with caution given the many sources of error listed above.

Placebo Effect

Another important factor that can lead to apparent healing success is the placebo effect. This refers to a therapeutic procedure that has an effect on a patient, symptom, or disease without any specific activity for the condition being treated. The placebo effect is especially important when examining CAM therapies, as practitioners typically interact with their clients in ways that are likely to boost the placebo effect.

For example, when a practitioner assures a client who has pain from arthritis that she or he will gain much relief from the treatment, then there is an excellent chance that the patient will indeed feel some relief, even if the treatment has no actual direct impact on the problem. In a clinical study of different treatments for patients with osteoarthritis, 60% reported relief from pain after being treated with a placebo. This demonstrates the huge power of the placebo effect, although random fluctuations in the course of the disease no doubt also played a role (Clegg et al., 2006).

It is noteworthy that witch doctors in Africa are well known for their flamboyant dress and behaviour. This is entirely consistent with having discovered by trial and error, over several centuries, how best to maximize the placebo effect.

Several myths surround the placebo response. One is that placebo responders had nothing really wrong with them in the first place; the second is that a fixed proportion of people, usually around 30%, are placebo responders. In fact, studies have shown the placebo effect can be successful in 70% of cases of angina (chest pain caused by a heart problem), bronchial asthma, and duodenal ulcer (Benson & Friedman, 1996).

There are many explanations for the placebo effect. These include

  • physiological mechanisms (where fear or anxiety increase production of adrenaline and noradrenaline, which modulate the pain response through feedback inhibition);
  • classical conditioning (where pairing conditioned and unconditioned stimuli eventually results in the conditioned stimulus eliciting the same response as the unconditioned);
  • autonomic nervous system activity (which affects neurohormone production such as endorphins); and
  • psychological effects such as mental imagery and the behaviour and attitudes of healthcare practitioners (Bienenfeld, Frishman, & Glasser, 1996).

Buckman and Sabbagh (1993) provide an excellent illustration of the placebo effect. Internal mammary artery ligation was once thought to improve angina. This procedure was standard medical practice until studies revealed that a sham operation produced the same results, by both subjective and objective measures.

Carver and Samuels (1988) reported that many of the symptoms of myocardial ischemia are subjective and several treatments such as chelation, which have been heralded as breakthroughs, may owe any benefit to the placebo effect. The supporting evidence for the treatments was based on testimonials (subjective) rather than on objective forms of investigation.

However, therapy with a placebo is not the same as no therapy. The placebo effect may be a consequence of many factors in the therapist–patient relationship, including the psychological state of the patient, the patient’s expectations and belief in the efficacy of the method of treatment, and the therapist’s biases, attitudes, expectations, and methods of communication (Bienenfeld et al., 1996).


Learning Activity

Read

Kienle, G.S., & Kiene, H. (1997). The powerful placebo effect: Fact or fiction? Journal of Clinical Epidemiology, 50(12), 1311–1318.

Micozzi, M. (2019). Fundamentals of Complementary, Alternative, and Integrative Medicine. Pages 94–95 (section “Psychoneuroimmunology and the Placebo Effect”), 100–109, and 59–60 (section “Energy, Expectation, Intention, and Placebo”).

The paper by Kienle and Kiene expresses a very different viewpoint to that found in the textbook. It seems remarkable that experts can come to such markedly different conclusions. Neither viewpoint can be easily dismissed. For the purposes of this course, we shall assume that the weight of evidence is much more solidly on the side of the textbook. Perhaps, the one take-away lesson from this (as with so much in the whole field of CAM) is to be cautious before coming to firm conclusions.

Randomized Controlled Trials

Randomized controlled trials (RCTs) are a type of experimental research that is of great importance. Let us first examine the key features of experimental research studies. In this method, a hypothesis is tested regarding a causal relationship. A hypothesis is a supposition that a specific cause produces the observed phenomenon. The hypothesis must be specific and testable by experiments in which all other variables that might also cause the observation are ruled out.

A scientific experiment has certain essential features. In particular, there must be a control group. All variables remain constant in the control group. In the experimental group, one particular variable is manipulated, and the effect is measured. Such experiments produce results that either support or refute the hypothesis; thus the researcher is able to draw a conclusion about the validity of the hypothesis. For example, in the 1600s, Francesco Redi used the scientific method to demonstrate that flies do not arise spontaneously from rotting meat. Before Redi, the appearance of maggots was considered to be evidence of spontaneous generation. He followed these steps for the controlled experiment:

  1. He observed that flies swarm around meat left in the open.
  2. He formulated a hypothesis that keeping flies away from meat will prevent the appearance of maggots.
  3. He conducted an experiment using two identical pieces of meat and identical jars. The experimental variable included placing gauze over the opening of one jar and not over the other. Other variables that may affect the results, such as time, temperature, and location, were kept as similar as possible between the two jars.
  4. He observed that when flies were kept away from meat, no maggots appeared.
  5. He concluded that spontaneous generation of maggots from meat does not occur.

Thus, by carrying out this controlled experiment, Redi disproved the age-old belief of spontaneous generation (Audesirk & Audesirk, 1996).

Controlled experimental studies are used widely in all areas of medicine. A randomized controlled trial is a study where subjects are randomized into two or more groups, one of which is the control group. So, for example, to test whether sodium increases the blood pressure, two groups of subjects might be given diets with different amounts of salt. In the first step of conducting such a study, volunteers are recruited who meet the selection criteria (e.g., healthy young adults; this is based on inclusion and exclusion criteria). Subjects are then randomly assigned to the different groups. After a few weeks, the changes in blood pressure are determined. The experiment may reveal that subjects given the extra salt (the experimental group) had an increase in blood pressure, whereas those given their usual salt intake (the control group) had very little change in blood pressure. Assuming the investigators conducted the study carefully, then the findings would strongly suggest that adding salt to the diet raises the blood pressure.
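The random-assignment step in such a trial can be sketched in a few lines of Python. The subject names, group labels, and fixed seed below are purely illustrative, not part of any real study protocol:

```python
import random

def randomize(subjects, seed=42):
    """Randomly assign each subject to an experimental or a control group."""
    rng = random.Random(seed)  # fixed seed so the example is reproducible
    shuffled = subjects[:]     # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

# 20 hypothetical volunteers who met the inclusion criteria
volunteers = [f"subject_{i}" for i in range(1, 21)]
groups = randomize(volunteers)
print(len(groups["experimental"]), len(groups["control"]))  # prints: 10 10
```

Because assignment is random, neither group differs systematically from the other before treatment begins, which is what allows any later difference to be attributed to the intervention.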

When evaluating CAM therapies or conventional medicine, it is essential to carry out properly conducted trials. For that reason RCTs are widely used for clinical studies. Here we can use the more specific phrase randomized controlled clinical trials. Drugs are regularly tested using this methodology.

When planning a trial, it is essential to select inclusion criteria (features that subjects must have in order to be included in the study) and exclusion criteria (features that eliminate subjects from the study). For example, a study on herbal treatment of depression may decide to confine the study to people with mild depression.

In such trials, subjects are randomly assigned to either the experimental group (given the drug or CAM therapy) or the control group. The control group can be of two types:

  • A control group may be given the standard treatment. For example, we may wish to determine if herb X is superior to a standard drug for the treatment of depression. The experimental group would be given the herb, while the control group receives the drug. The results will indicate how effective the herb is in comparison with the drug.
  • A control group may be given no active treatment. This approach is often used where no effective treatment is available. Alternately, a treatment may exist, but we do not use it as we may simply wish to determine if the treatment has an effect. The control group will be given a placebo or “sham treatment.” This is usually an inert substance that is indistinguishable from the intervention treatment in appearance and mode of administration.

Use of a placebo is important: it helps us determine that the experimental group has not improved (in comparison to the control group) simply because of the placebo effect. As an example (in a variation of the previous study), we may give herb X to the experimental group and a placebo to the control group. The results will indicate whether or not the herb has any effect on the symptoms of depression. In the above examples, there is only one experimental group. Often there may be more than one. This may be done when, for example, the investigators wish to test different doses or different substances.

The test is called double-blind because neither the investigators nor the subjects know who is receiving which treatment. Only when the data from all subjects have been recorded do the investigators find out which subjects received the test substance and which subjects received the placebo. Blinding of patients is necessary because of the placebo effect and to ensure that patients are not biased in how they describe their response to treatment. Blinding of investigators is necessary to ensure that they do not impart an extra dose of placebo effect to patients, which could happen unintentionally, and also to prevent biases from interfering with accurate recording of results.

It should now be clear that double-blind trials are far more reliable than uncontrolled studies. The double-blind controlled clinical trial is ideal for investigating the effectiveness of drugs because a placebo is easily substituted for the drug. Research studies on vitamin therapy, homeopathy, and herbal treatments can be conducted in much the same way.

However, double-blind trials are not always feasible. Massage, for example, cannot be delivered in a sham form that would have no effect on the patient. But it is possible for the assessor not to know whether the patient had massage. Such a study is called single-blind.

We shall consider acupuncture as an example of a CAM therapy that is less amenable to being investigated by using double-blind trials. There are several ways to approach this challenge. Many trials to investigate the effectiveness of acupuncture use needling in sites away from classical point locations. Depth of insertion and stimulation are the same; only the locations differ. This procedure, which is termed sham acupuncture, has been shown to have an analgesic effect in 40% to 50% of patients, in comparison with 60% for real acupuncture (Vincent & Lewith, 1995).

Other forms of a controlled trial might include randomizing subjects to receive CAM therapy or conventional therapy. An investigator can subsequently evaluate the subjects and compare the progress of the two groups using standard criteria that allow objective evaluation.

For example, if acupuncture is being studied for the control of pain, half of the randomly assigned subjects can be given acupuncture and the other half drugs only. The subjects can be evaluated by having them complete a questionnaire describing how much pain they now have. An investigator who is blind to the treatment that each subject received can then analyze the questionnaires.

This is another example of a single-blind study. However, it still leaves open to question whether acupuncture is inducing a placebo effect. A variation of this trial would be that all subjects receive drug treatment, while half receive real acupuncture and the other half receive sham acupuncture.

Ethical Review

Approval by an ethics committee is required for all research that involves humans. This is done to protect the patient or subject from harmful practice and ensure that the patient is not denied any essential treatment. Ethical considerations include

  • informed written consent
  • right to privacy
  • right to self-dignity
  • assured confidentiality
  • freedom from harm
  • the right to withdraw from the study at any time

Other Types of Research

Clinical interventions are but one methodology used to investigate the effectiveness of CAM therapies. Much research is conducted where the measurement is not the number of people whose symptoms are relieved but, rather, the effect of treatment on the functioning of the body. This can be carried out on healthy people or even on animals.

For example, investigators may observe the effect of acupuncture on brain activity or pain tolerance, the effect of a herbal treatment for hypertension on the blood pressure of people with normal blood pressure, the effect of a treatment that is claimed to prevent infections on the blood level of antibodies, or the effect of a cancer therapy on cancer cells in a test tube or in rats with artificially induced cancer.

No one type of study proves that a therapy prevents or cures a disease. The studies do, however, provide us with insight into the action of CAM therapies on the body. This information, in turn, supports clinical observations and helps form conclusions as to the effectiveness of a particular therapy.


Evaluation of Research

As mentioned earlier, clinical research studies can be divided into two categories: observational studies and randomized controlled clinical trials. For reasons explained earlier, observational research has serious weaknesses. Only under the rigorous conditions of RCTs is it possible to prove that the intervention is (or is not) efficacious. Key features of such a study include dividing subjects into groups randomly and (whenever possible) conducting it double blind.

The first step in evaluating a study is to determine whether the author is biased in any way. For instance, does he or she receive funding from a company, and could this influence how the data are reported? This is a serious problem in research, as much evidence has shown that when research is sponsored by commercial enterprises, such as pharmaceutical or food companies, the conclusions are likely to be favourable to the commercial interests of the company paying for the research (Fraser, 2007; Lesser, Ebbeling, Goozner, Wypij, & Ludwig, 2007). For example, very few RCTs paid for by a pharmaceutical company result in a published paper reporting that the company's drug is ineffective.

Another important point is whether the study is published in a reputable journal that requires peer review of the study by experts in the field. Such research has far more credibility than, for example, a book in which the author describes his findings.

Before drawing any conclusions from a study, consider whether the population being studied is comparable to the one you are interested in. For example, the findings of a study mainly on older men may not be applicable to younger women. Or, if the group studied includes patients with chronic and mild symptoms of a particular disease or condition, the results may not be applicable to patients with more acute and severe symptoms. A study may suggest that a treatment is helpful for people with early stage colon cancer, but we cannot assume that the treatment will be of any value for patients with advanced colon cancer.

Whenever possible, researchers evaluate the results of a study by making a statistical analysis. If the treatment achieves positive results, this should be reflected in a statement that the difference was significant. This means that the probability that a difference of this size would occur by chance alone is less than one in 20. This is written as p < 0.05.

For example, let us suppose that the results of a study on herbal treatment for hypertension reveal that systolic blood pressure fell by 3 mm Hg and diastolic blood pressure by 2 mm Hg. The investigators then compare these changes with the changes in the control group. If there was no change in the control group, then we may infer that the herbal treatment was responsible for the fall in blood pressure. Statistical analysis might indicate that the fall in blood pressure was significant: p < 0.01 in the case of systolic pressure and p < 0.05 for diastolic pressure. This means that the chances of such differences arising by chance alone are less than one in 100 for systolic pressure and less than one in 20 for diastolic pressure.

Three points need to be borne in mind:

  1. The larger the difference between the test and control groups, the lower the p-value. In the above example, the effect on systolic pressure was larger. Reflecting that, the p-value was smaller for the systolic pressure than for the diastolic pressure.
  2. The larger the sample size, the less likely it is that the findings are due to mere chance and, therefore, the lower the p-value. For this reason, research studies generate more reliable results when they use a large number of subjects.
  3. The reliability of the statistical analysis can be no greater than the quality of the data. A badly designed study might give results that indicate a significant effect of treatment, but one must treat such findings with skepticism. In plain English: garbage in, garbage out.
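Point 2 can be checked with a small calculation. Suppose 60% of treated patients recover when the chance (null) rate is 50%: with 20 subjects that result is unremarkable, but with 200 subjects the same recovery rate would be highly significant. A sketch using an exact one-sided binomial test; the recovery figures are hypothetical:

```python
from math import comb

def binom_p_value(successes, n, p0=0.5):
    """One-sided exact binomial p-value: the probability of seeing at least
    this many successes if the true success rate were p0 (the null hypothesis)."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

small = binom_p_value(12, 20)    # 12/20 recover: 60% rate, small sample
large = binom_p_value(120, 200)  # 120/200 recover: same 60% rate, large sample
print(small > 0.05, large < 0.01)  # prints: True True
```

The observed effect is identical in both cases; only the sample size differs, yet one result could easily be chance while the other almost certainly is not.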

Other errors that can occur with respect to statistical analysis are as follows:

  • Type I errors can occur where a significant difference was reported (i.e., the p-value was < 0.05), but in actual fact the difference was due to chance. This error can arise from the investigator making multiple comparisons until a statistically significant relationship between two variables is obtained.
  • Type II errors can occur when a real difference exists between two groups, but the number being studied is inadequate to demonstrate statistical significance. In this case, absence of proof is not proof of absence. Thus, the number of patients in the study needs to be assessed. Even if a statistically significant result was reported, caution should be used before jumping to the conclusion that the effect is real.

    Publication bias further distorts the published record. This refers to the tendency for investigators to seek, and journals to accept, studies in which a positive result is observed and a significant difference is demonstrated. For example, suppose that of 20 trials with a small number of patients, 19 find no difference and do not submit their data for publication, while one finds a difference and publishes it. Results from that one trial may make it appear that an important new therapy has been described when, in reality, the result is nothing more than random chance.
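The arithmetic behind this scenario is worth checking: if each truly ineffective treatment has a 5% chance of producing a "significant" result, then across 20 small trials the chance that at least one appears significant is roughly 64%. A one-line sketch, using the trial count and threshold from the example above:

```python
# Chance that at least one of 20 truly null trials reports p < 0.05 by luck alone
n_trials = 20
alpha = 0.05  # conventional significance threshold
p_at_least_one = 1 - (1 - alpha) ** n_trials
print(round(p_at_least_one, 2))  # prints: 0.64
```

In other words, if only the "winning" trial reaches print, the literature can look persuasive even when every trial tested an ineffective therapy.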

Another important factor when evaluating a study is to take into account how the data were collected. Measurements and recording of information must be accurate. If a practitioner claims to be successful in treating people afflicted with a deadly disease such as cancer or AIDS, then the most significant evidence would be patient survival. Reliable clinical evidence demonstrating that the disease is in remission, such as blood levels of HIV markers, is also a valuable form of evidence. Statements that the patient is “recovering” carry much less weight.

A key feature of a scientific claim that has achieved wide acceptance is that different groups of investigators have repeated the study and come to the same conclusion. For example, many separate studies have reported that a class of drugs called statins lower the blood cholesterol level. We can therefore state with a high degree of confidence that this finding is correct.

But the reality is that inconsistent findings are regularly reported in all areas of medicine. This is not really surprising: with so many variables and possible sources of error, it is predictable that different studies will come to quite different conclusions. This problem is very common in all areas of medical research. Quite often, the early reports make a claim but later studies appear to refute that claim. Sometimes the source of the contradictory findings may be fairly obvious, such as that the later studies were done much more carefully with larger numbers of subjects. But often there is no obvious explanation for the lack of consistency. The accepted method used to handle this problem is that experts in the field review all the published work and make an overall assessment.

Let us take an imaginary example:

Observational studies have reported that persons with a higher dietary intake of vitamin X are less likely to develop disease Y. The researchers then propose the hypothesis that supplements of vitamin X prevent disease Y. After a number of years, results are published from six separate RCTs, each by a different group of researchers. Five of the studies report that people given supplements of vitamin X do indeed have a lower risk of developing disease Y (in comparison to subjects given a placebo). The decrease in risk ranges from 6% (non-significant) to a high of 35%. However, in one study subjects given vitamin X had a 4% higher risk of the disease. After carefully evaluating the studies, the experts draw two conclusions:

  1. First, the weighted average reveals that vitamin X lowers risk by 19%. “Weighted average” means that the RCTs with more subjects are given more weight than smaller studies. This is, in effect, the same as pooling all six studies and treating them as if they were one very large study.
  2. However, this approach ignores methodological differences between the studies that might be responsible for the different outcomes. This possibility is examined in a separate analysis. Here the experts might carefully evaluate the six studies and conclude that vitamin X is most likely to prevent disease Y among those with a poor diet and therefore a low intake of vitamin X, and also that a study needs to last at least three years before the impact on risk becomes clear.
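The weighted average in conclusion 1 can be sketched in a few lines. The effect sizes and sample sizes below are hypothetical (chosen to match the range in the example: a 6% to 35% decrease in risk, with one study showing a 4% increase), and real meta-analyses typically weight by inverse variance rather than raw sample size:

```python
# Pooling six hypothetical RCTs, weighting each by its number of subjects.
# A negative value means the risk was higher in the supplemented group.
risk_reductions = [0.06, 0.35, 0.18, 0.25, 0.20, -0.04]
n_subjects      = [400,  600,  700,  600,  500,  400]

weighted_sum = sum(r * n for r, n in zip(risk_reductions, n_subjects))
pooled = weighted_sum / sum(n_subjects)
print(f"Pooled risk reduction: {pooled:.0%}")  # -> Pooled risk reduction: 19%
```

Treating the six studies as one large study of 3,200 subjects, the pooled estimate is a 19% reduction in risk, even though the individual results range from a 35% decrease to a 4% increase.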

The above example deals with a vitamin supplement that may prevent a disease. We can use the same approach for a therapy that treats a disease. So, for example, the results of several studies that examined whether herb H relieves the symptoms of depression can be reviewed by the same approach.

As we go through the course, we will come across numerous cases where different studies have given inconsistent results; that is a regular feature of research on CAM therapies. It is often very difficult to identify the factors responsible for these inconsistencies. However, the principles explained in this unit help us make informed judgments.

One of these vitally important principles is to evaluate the evidence as a whole before coming to a conclusion. This is often done using an approach called a systematic review, in which the investigators search for all published studies on a particular topic. We can illustrate this by taking another look at the above example of whether people with a higher dietary intake of vitamin X are less likely to develop disease Y. How did the experts find the six separate RCTs? By carrying out a computer search of biomedical journals for all RCTs that had reported relevant results.

The AU Library Search Block on the course home page makes it easy for you to start a search by typing a term into the textbox.


Learning Activity

Read

Read the section “Levels of Evidence” on pages 56–57 of the textbook. Figure 6-2 provides a clear summary of many of the points made in this unit.

One further point needs to be made. CAM therapies range across the spectrum from entirely plausible to some that fly in the face of accepted scientific knowledge. A much greater weight of supporting evidence is demanded of the latter than of the former before they can be accepted as effective. This principle applies in all areas of science.

For example, in 2011, results of an experiment were reported suggesting that neutrinos, subatomic particles, can travel faster than the speed of light. If true, this would break a long-held law of physics based on Einstein’s theory, which states that nothing can go faster than the speed of light. Accordingly, physicists demanded that the experiment be repeated several times and that every possible effort be made to look for errors. This is a far higher level of proof than might be demanded for, say, the report of a new species of fish discovered in the Pacific Ocean.

We shall now apply this principle to the evaluation of the results of studies on CAM therapies. Compare the following two randomized controlled clinical trials:

Study 1. A new herbal treatment is tested on persons with insomnia. The dose used is one gram a day.

Study 2. A homeopathic treatment is tested on persons with arthritis. The dose used is five drops of a drug that has first been diluted by a factor of one part in a billion.

Key details were similar between the two trials, including the number of subjects and the proportion of them who reported that their symptoms were relieved. Does that mean that the two therapies are equally likely to be effective? The answer is a firm no. This is because of a huge difference between the two therapies:

  • Based on our current knowledge of biomedical science, it is entirely possible that a herbal treatment may help alleviate insomnia. (The herb may contain an unknown but potent sleep-inducing chemical).
  • However, in the case of the homeopathic treatment, we would need to rethink everything we thought we knew about biochemistry if we were to accept that the therapy really did work. (This is because drugs only work when the dose is sufficiently high that it can affect body functions, which is simply not possible if the drug has been ultra-diluted).
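To see why ultra-dilution poses such a problem, consider the arithmetic for a typical homeopathic potency. The "30C" potency used below (thirty successive 1:100 dilutions, a factor of 10^-60) is far more extreme than the one-in-a-billion example above; it is offered here as an illustration, not as part of the study comparison:

```python
# Expected number of original drug molecules remaining after a 30C dilution.
AVOGADRO = 6.022e23          # molecules per mole

moles_start = 1.0            # assume we begin with 1 mole of active substance
dilution_30c = 10.0 ** -60   # "30C": thirty serial 1:100 dilutions

molecules_left = moles_start * AVOGADRO * dilution_30c
print(molecules_left)        # ~6e-37: effectively zero molecules remain
```

At such dilutions the expected number of remaining molecules is vastly less than one, which is why accepting that the remedy works would require rethinking basic biochemistry.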

For this reason, the weight of supporting evidence required before we can accept that a proposed new therapy is effective is much lower for a herbal treatment than for a treatment such as homeopathy. To put it bluntly, many CAM therapies, such as homeopathy, suffer from a serious credibility problem.

In the case of homeopathy, the principles behind the therapy are clearly in conflict with accepted scientific laws. For that reason, we must show extreme caution before accepting that the therapy has any validity. However, we must distinguish between the rationale used to support an intervention and the evidence demonstrating whether or not it actually works in practice. We need to ask two key questions:

Question 1. Do the advocates for the intervention argue that it has a beneficial effect on the body when scientific laws argue that this is simply not possible?

Question 2. Does the intervention appear to be of benefit when put to the test?

In the case of homeopathy, the “theory” used to support the therapy makes no sense. Nevertheless, as we shall see later in the course, several studies have generated apparently positive results when homeopathic remedies have been tested (this is still hotly debated). Acupuncture provides an even more provocative example. While the “theory” used to support the therapy is clearly contrary to our knowledge of the human body, it does appear to work in practice, at least for certain conditions. Many medical interventions reveal the precise opposite: they are based on a solid scientific rationale, but fail miserably when tested on real people.

What these examples tell us is that we should always be cautious before dismissing a proposed intervention solely on the grounds that it is based on a very weak supporting rationale. It is entirely possible that a therapy was discovered more or less by accident and then came into popular use because it was found to work. Later, attempts were made to explain how it works, but seriously unscientific reasoning was used. This seems to have been the case with acupuncture. We can summarize the above argument as follows: The proof of the pudding is in the eating, not in whether the recipe appears to have been cleverly designed. Likewise, the critical supporting evidence for an intervention is whether it actually works, even if the scientific rationale is deeply flawed.

There is another very valuable principle that helps us when evaluating CAM therapies, namely Ockham’s razor. It is often summarized as follows: simpler explanations are, other things being equal, generally better than more complex ones. We accept the more complex explanation only if it is clearly more consistent with the observations. Here is an example of the application of Ockham’s razor: Suppose one hears the sound of horses coming from the street. A person might speculate that a herd of zebras is migrating through one’s neighbourhood. But anyone with an ounce of common sense would assume that a far more likely explanation is that horses are making the sound.

Ockham’s razor can be of service when we have a choice of interpretations for some observations. Let us suppose that a CAM therapy has been tested and the results of the study appear to indicate that it actually works. However, the therapy is based on an unscientific rationale. How should we deal with this dilemma? We should seek an explanation that does not demand that we reject accepted scientific principles. This means that we focus our search for an explanation in such areas as the placebo effect, the laws of chance, and error in the recording of observations.


Media Coverage

Nutrition and CAM therapy continue to be popular topics in newspapers and on radio and television. Findings about improving health or extending lives that appear in such medical journals as the New England Journal of Medicine and the Lancet are often immediately reported by the media. Soon the results of new studies conflict with the results of others. For example, vitamin E and beta-carotene were being recommended for their antioxidant and cancer preventive actions, yet large studies found them to be no better, and possibly worse, than a placebo (Unit 12).

The public is confused by reading such contradictory reports and becomes cynical. Single studies can rarely stand alone as definitive evidence: there may have been unrecognized biases, confounding factors may not have been adequately controlled, results found in one population may not apply to another, or the results could have been due to chance (Angell & Kassirer, 1994).

A study by Moynihan et al. (2000) showed that media reports (newspaper and television) were often overly enthusiastic and contained insufficient information about the risks and cost of medications. Angell and Kassirer (1994) recommended caution in response to clinical research news: every study reported in the media does not require an all-or-nothing response, and judgment should be reserved until the results of similar studies are published.

Internet

There are many websites that provide accurate and reliable information on CAM therapies. It is critically important to rely only on information that comes from a credible source. For example, a reliable source of information is likely to be a website maintained by a professional organization whose members have appropriate qualifications. Likewise, websites of government agencies are, in general, also trustworthy. Unfortunately, a great many websites are untrustworthy. In particular, websites run by commercial organizations have the primary goal of selling products; honesty is often the first casualty on such websites, and they generally have little credibility. Other websites are run by organizations promoting a particular point of view, such as a specific type of CAM therapy. Information from all of these sources must be critically evaluated.

Several reliable websites are listed at the end of this unit.


Summary

Many factors can help determine whether or not an intervention achieves a beneficial effect on symptoms. It is essential to evaluate the evidence supporting any type of conventional or CAM treatment. A double-blind randomized clinical trial provides more reliable evidence of the apparent effectiveness of a therapy than does observation. However, some CAM therapies are difficult to evaluate using conventional research methodologies. Conventional practitioners need to keep an open mind when evaluating CAM therapies and to critically analyze new research. Users of any type of therapy need to remember that the burden of proof should be with the claimant (Beyerstein, 1997).

Many conventional therapies that have not been fully proven are still used due to a lack of alternatives. If a conventional therapy has nothing better to offer and a CAM therapy does no harm and provides a patient with hope, then supporting a patient through the CAM treatment may be the best medicine a conventional practitioner can offer a patient (Rosenfeld, 1996).


Learning Activity

Self-test Quiz

Do the self-test quiz for Unit 2 as many times as you wish to check your recall of the unit’s main points. You will get a slightly different version of the quiz each time you try it. (This quiz does not count toward your final grade).

If you have trouble understanding the material, please contact your Academic Expert.


References

Angell, M., & Kassirer, J. (1994). Clinical research—What should the public believe? New England Journal of Medicine, 331(3), 189–190.

Audesirk, T., & Audesirk, G. (1996). Biology: Life on earth (4th ed.). Upper Saddle River, NJ: Prentice Hall.

Benson, H., & Friedman, R. (1996). Harnessing the power of the placebo effect and renaming it “remembered wellness.” Annual Review of Medicine, 47, 193–199. doi: 10.1146/annurev.med.47.1.193.

Beyerstein, B. (1997). Alternative medicine: Where’s the evidence? Canadian Journal of Public Health, 88(3), 149–150.

Bienenfeld, L., Frishman, W., & Glasser, S.P. (1996). The placebo effect in cardiovascular disease. American Heart Journal, 132(6), 1207–1221.

Buckman, R., & Sabbagh, K. (1993). Magic or medicine: An investigation of healing and healers. Toronto: Key Porter Books.

Carver, J. R., & Samuels, F. (1988). Sham therapy in coronary artery disease and atherosclerosis. Practical Cardiology, 14, 81–86.

Clegg, D.O., Reda, D.J., Harris, C.L., Klein, M.A., O’Dell, J.R., Hooper, M.M., et al. (2006). Glucosamine, chondroitin sulfate, and the two in combination for painful knee osteoarthritis. New England Journal of Medicine, 354(8), 795–808. doi: 10.1056/NEJMoa052771.

Fraser, J. (2007). Conflict of interest: A major problem in medical research. In N.J. Temple & A. Thompson (Eds.), Excessive medical spending: Facing the challenge (pp. 20–35). Oxford: Radcliffe.

Lesser, L.I., Ebbeling, C.B., Goozner, M., Wypij, D., & Ludwig, D.S. (2007). Relationship between funding source and conclusion among nutrition-related scientific articles. PLoS Medicine, 4(1), e5.

Moynihan, R., Bero, L., Ross-Degnan, D., Henry, D., Lee, K., Watkins, J., et al. (2000). Coverage by the news media of the benefits and risks of medications. New England Journal of Medicine, 342(22), 1645–1650. doi: 10.1056/NEJM200006013422206.

Rosenfeld, I. (1996). Dr. Rosenfeld’s guide to alternative medicine. New York: Random House.

Skrabanek, P., & McCormick, J. (1990). Follies and fallacies in medicine. New York: Prometheus Books.

Vincent, C., & Furnham, A. (1997). Complementary medicine: A research perspective. Toronto: Wiley.

Vincent, C., & Lewith, G. (1995). Placebo controls for acupuncture studies. Journal of the Royal Society of Medicine, 88(4), 199–202.


Websites

The following websites are reliable sources of information:

Healthfinder. A source of health information on many topics. The website is run by the U.S. Department of Health and Human Services.

PubMed. This website provides direct access to a database of more than ten million articles published in thousands of scholarly journals in all areas of the biomedical sciences. Operated by the U.S. National Library of Medicine.

MedlinePlus. This website is operated by agencies of the U.S. government and provides extensive information on many aspects of health and medicine, including CAM therapies. Operated by the U.S. National Library of Medicine.

The National Center for Complementary and Integrative Health provides information on a wide variety of CAM therapies including many foods and dietary supplements. This agency is a branch of the National Institutes of Health, which is a part of the U.S. government.

https://healthfinder.gov/

https://www.ncbi.nlm.nih.gov/pubmed

https://medlineplus.gov/

https://nccih.nih.gov/

The following sites give reliable information on various health frauds:

https://www.ncahf.org/

https://www.quackwatch.org/