Wednesday, May 12, 2010

Medical Research: Part 2

In this second part on medical and epidemiologic research, I'll be going over different types of studies that are conducted, why they're done, and how they're implemented. I will also consider certain concerns that arise when conducting research.

The goal of research is reasonably straightforward: to gain knowledge with the hope that it is generalizable. In other words, the goal is to learn something from a particular sample that can be applied to a much larger group. To do this, a researcher must first have a theory [for example, 'people who drink a lot of coffee get more headaches than people who don't'], devise an experiment to test that theory [find people, divide them into categories based on coffee consumption, and see which group gets more headaches], and then analyze the results [see whether the difference between the groups was statistically significant and how big the difference was, a.k.a. the effect size].
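To make the 'analyze the results' step concrete, here is a minimal sketch in Python using the hypothetical coffee/headache example. The counts are entirely made up for illustration; the point is that the analysis produces both a p-value and a simple effect size (the difference in headache risk between the groups).

```python
from scipy.stats import chi2_contingency

# 2x2 table of made-up counts: rows are coffee drinkers / non-drinkers,
# columns are headache / no headache.
table = [[45, 55],   # 100 heavy coffee drinkers, 45 reported headaches
         [30, 70]]   # 100 non-drinkers, 30 reported headaches

chi2, p_value, dof, expected = chi2_contingency(table)

# A simple effect size: the difference in headache risk between the groups.
risk_difference = 45 / 100 - 30 / 100

print(f"p-value: {p_value:.3f}, risk difference: {risk_difference:.2f}")
```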

If that description sounded much too simple, that is absolutely correct. There are many issues and considerations that go into conducting research, and I'm afraid I wouldn't be able to do it justice here. However, if you came in knowing little to nothing about research, I can hopefully give you enough of a grasp that you will be able to read a medical research paper and know what it is trying to say and how it came to those conclusions. To start off I'll describe some of the concerns one might have in conducting a study.

Confounding: From the first part of this series, it is easy to see that statistics has a great deal to do with associations. Confounding is a problem that creates the appearance of a false causal relationship because of a variable that is associated with both what the researcher is looking for [headaches] and where they are looking for it [people who drink a lot of coffee]. These are known as the dependent and independent variables, respectively. An example of a confounding variable in the given scenario might be lack of sleep. People who drink more coffee may be doing so because they get less sleep, and the lack of sleep may be what is leading to the headaches. What is important is that lack of sleep has a relationship with both the independent and dependent variables. There are ways of controlling for confounding in the analysis, but if no one was looking for it, it might be assumed that the coffee itself was causing the headaches, when in fact it was the lack of sleep (this is a purely hypothetical example).
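A small simulation can show how this plays out. The probabilities below are invented for illustration: lack of sleep raises both the chance of heavy coffee drinking and the chance of headaches, while coffee itself does nothing. The crude comparison makes coffee look harmful; stratifying by sleep (one simple way of controlling for the confounder) makes the apparent effect disappear.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical probabilities: 30% of people are short on sleep, and short sleep
# makes both heavy coffee drinking and headaches more likely. Coffee has no
# effect on headaches in this toy model.
short_sleep = rng.random(n) < 0.3
coffee = rng.random(n) < np.where(short_sleep, 0.8, 0.2)
headache = rng.random(n) < np.where(short_sleep, 0.5, 0.1)

# Crude comparison: coffee drinkers appear to get far more headaches.
print("crude:", headache[coffee].mean(), "vs", headache[~coffee].mean())

# Stratified by sleep, the coffee 'effect' vanishes within each stratum.
for s in (True, False):
    grp = short_sleep == s
    print("short sleep" if s else "enough sleep",
          headache[grp & coffee].mean(), "vs", headache[grp & ~coffee].mean())
```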

Bias: Unlike confounding, bias is a systematic error in the study that can't be corrected for in the analysis. Examples include selection bias and misclassification bias. If, for example, people who drink a lot of coffee were much more likely to remember their headaches than people who don't drink coffee, this misclassification of headaches would give researchers results that underestimate the number of headaches in people who don't drink coffee.
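Here is a similarly hypothetical sketch of that recall problem. Both groups truly get headaches at the same rate, but non-drinkers report only a fraction of theirs, so the recorded data make it look as though coffee drinkers get more headaches (the recall probabilities are assumptions, not measurements).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

true_headache = rng.random(n) < 0.3   # same true headache rate in both groups
coffee = rng.random(n) < 0.5

# Assumed recall: coffee drinkers report 95% of their headaches, non-drinkers 60%.
recall = np.where(coffee, 0.95, 0.60)
reported = true_headache & (rng.random(n) < recall)

print("reported headaches, coffee drinkers:", reported[coffee].mean())
print("reported headaches, non-drinkers:   ", reported[~coffee].mean())
```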

Types of research:

Randomized Controlled Trial: Often considered the gold standard, randomized controlled trials are typically used to test some sort of intervention, such as a medication. These trials are able to control many elements of the research and minimize both confounding and bias. How do they do this? When researchers want to test the effect of some factor, the ideal situation would be one in which two groups of completely identical people are compared while varying only the factor of interest. Obviously, this is impossible, but randomized controlled trials come about as close as possible. 'Randomized' means that the intervention is assigned randomly; this helps to eliminate selection bias by the researchers and, with a sufficiently large sample, will also result in two groups that are very similar. Very often, these trials are also blinded, meaning the participants do not know which group they are in. Trials can also be 'double blinded', so that the researchers conducting the study do not know either. The purpose of blinding is to prevent bias.
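As a rough illustration of why random assignment helps, the sketch below randomly splits a hypothetical sample in half and checks a made-up baseline characteristic (age). With a large sample the two arms end up nearly identical on it, and the same balancing tends to happen for characteristics nobody thought to measure.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

age = rng.normal(50, 12, n)   # a made-up baseline characteristic

# Randomly assign half the participants to each arm.
arm = rng.permutation(np.repeat(["treatment", "placebo"], n // 2))

print("mean age, treatment:", age[arm == "treatment"].mean())
print("mean age, placebo:  ", age[arm == "placebo"].mean())
```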

Observational research: Often, despite its usefulness, conducting a randomized controlled trial is not feasible. This is especially true of epidemiologic research. If you wanted to test the effects of smoking on some outcome, you wouldn't be able to 'assign' smoking to the participants in your study. Instead, you would have to gather a group of people, some of whom already smoke, and make observations about the results. The coffee example offered above would be an observational study because we did not assign coffee drinking to the participants.

*All of the above information is available in "Basic & Clinical Biostatistics" by Beth Dawson.

A few final points about statistics and research:
  • Don't overestimate the worth of a small p-value or be misled by the use of the term 'significant'. What statisticians mean when they say a difference is significant based on a small p-value (e.g. < .05) is that there is likely to be a difference; it says nothing about how big the difference is. Among coffee drinkers there may be only 5 more people with headaches than among non-coffee drinkers, and there is a difference between that result being statistically significant and being pragmatically significant (see the sketch after this list).
  • Keep in mind that correlation does not imply causation. Certain studies, such as randomized controlled trials and cohort studies, can be mindful of direction (whether the exposure came before the outcome), but other studies can't.
  • Reading critically is a good way of spotting possible errors made by the researchers, including those of confounding and bias.
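The sketch below (again with invented numbers) makes the first point concrete: with a million people per group, a difference of only 0.1 percentage points in headache rates can still produce a p-value below .05, even though the difference is of little practical importance.

```python
from scipy.stats import chi2_contingency

n = 1_000_000                                          # people per group (made up)
headaches_coffee, headaches_none = 100_500, 99_500     # 10.05% vs 9.95%

table = [[headaches_coffee, n - headaches_coffee],
         [headaches_none, n - headaches_none]]

chi2, p_value, dof, expected = chi2_contingency(table)
risk_difference = headaches_coffee / n - headaches_none / n

print(f"p-value: {p_value:.4f}")                  # comes out below .05
print(f"risk difference: {risk_difference:.4f}")  # only 0.001 (0.1 percentage points)
```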
Some links for your consideration:
  • PubMed Home
  • Cohort and Case-Control Studies
  • Randomized Controlled Trials