Evaluating the evidence

Learn how to evaluate the way data were developed for a study and how to apply a few key tests to determine if the study's conclusion is biased.

Last October, we talked about wizards and being critical consumers of data presented to support the adoption of a new practice or product (October BEEF, “Wizards among us,” page 18). As promised, I'll discuss in more detail how to evaluate the way data were developed.

You can separate the vast majority of the wheat from the chaff by applying a few key tests to determine if the study's conclusion is biased. Bias occurs when something other than a treatment has an effect on the study outcome.

Opinion or ulterior motives

One type of bias occurs when personal opinion or ulterior motives interfere with subjective outcomes, such as degree of lameness or illness severity. If the evaluator feels one treatment is superior, and knows which animal received which treatment, he or she may unconsciously give an advantage in assigned scores to that treatment.

To avoid this bias, the evaluator must be “masked” (blinded) to the treatment each animal receives. Masking methods include a separate treatment administrator and animal evaluator (or at least unmarked and indiscernible products), unmarked cattle pens, and coded ear tags on the cattle.

The ear-tagging method may seem trivial, but it can have a huge effect on study bias. The best ear tag method not only masks the treatment given to an animal, but also prevents the animal from being associated with others in the same treatment group. A colored tag common to every animal in a group lets the investigator form opinions about which group is responding better, and bias the study by going easier on one group.

An absolutely unacceptable method of marking study cattle is to tag only one treatment group; in a vaccine study, for example, tagging only the vaccinates. With this approach, any animal that loses its ear tag is attributed to the untagged group when it dies or requires treatment.
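Coded tags can be generated so the code itself reveals nothing about treatment. Here is a minimal sketch in Python; the function name `assign_coded_tags` and the idea of a sealed code-to-treatment key are illustrative assumptions, not details from the article:

```python
import random

def assign_coded_tags(allocation, seed=None):
    """Assign each animal a random tag code that reveals nothing about
    its treatment; 'allocation' maps animal ID -> treatment.
    (Hypothetical helper, not the author's actual method.)"""
    rng = random.Random(seed)
    codes = list(range(100, 100 + len(allocation)))
    rng.shuffle(codes)  # codes drawn independently of treatment
    tags = {}        # what goes in the ear: animal ID -> tag code
    sealed_key = {}  # held only by the study monitor: tag code -> treatment
    for animal, code in zip(allocation, codes):
        tags[animal] = code
        sealed_key[code] = allocation[animal]
    return tags, sealed_key

# A small hypothetical allocation
allocation = {"A01": "vaccine", "A02": "placebo",
              "A03": "vaccine", "A04": "placebo"}
tags, key = assign_coded_tags(allocation, seed=7)
```

Because the codes are shuffled independently of treatment, an evaluator reading tag numbers in the pen cannot group animals by treatment; only the sealed key, opened after scoring is complete, links codes back to treatments.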

You can check for bias by looking for the description of the masking methods in the publication. If the study conclusion is based on subjective outcomes (appearance, lameness scores, etc.), and there's no evidence of masking, don't rely on the trial for input into your management decisions.

Poor randomization

A second type of bias is introduced when randomization isn't carried out. Randomization means each animal has an equal chance of receiving any treatment in the study. Randomization methods used in field trials include drawing ear tags or numbers from a hat, gate-cutting two or three animals at a time, or alternating treatments on every other animal coming through the chute.

The best method is using a random number-generation program to develop an allocation sheet. Statistical analysis is based on the assumption of this random allocation; if that assumption is violated, the analysis will still yield a result, but the result can't be trusted because a bias has been introduced.
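As a sketch of what such an allocation sheet might look like, the Python fragment below shuffles an equal number of slots per treatment across the animal IDs. The function name, animal IDs, and treatment labels are hypothetical, not taken from any particular trial:

```python
import random

def make_allocation_sheet(animal_ids, treatments, seed=None):
    """Randomly allocate animals to treatments.
    Shuffling a list that repeats each treatment equally gives every
    animal the same chance of any treatment and near-equal group sizes.
    (Illustrative sketch, not the author's actual program.)"""
    rng = random.Random(seed)
    n = len(animal_ids)
    # Repeat the treatment list to cover all animals, then shuffle.
    slots = (treatments * (n // len(treatments) + 1))[:n]
    rng.shuffle(slots)
    return dict(zip(animal_ids, slots))

# 20 hypothetical animals split between vaccine and placebo
sheet = make_allocation_sheet([f"A{i:03d}" for i in range(1, 21)],
                              ["vaccine", "placebo"], seed=42)
```

A printed copy of `sheet` becomes the chute-side allocation sheet; because each assignment is fixed before any animal is seen, no one can steer healthier-looking cattle into a favored group.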

The most serious offense to this principle is when one treatment is applied to one group of cattle, and the other to a second group. If cattle in one feedlot or on one pasture receive a treatment and are compared to cattle in another feedlot or pasture receiving a second treatment, how can you separate the effect of the different site? You can't. If the treatment is applied to the cattle in a feedlot or pasture as one unit, then multiple sites for each treatment must be analyzed for a valid conclusion.

Look for the randomization method in the publication, or at least assurance that randomization occurred. Besides looking for such assurance in the text, evaluate the equality of the number of animals in each treatment group. Except in special situations, such as unbalanced-control designs (fewer negative controls than treated animals), it's reasonable to assume that effective randomization will result in close-to-equal numbers in each treatment group.

Confounding a study

Lack of either randomization or masking introduces “confounding” into a study. Confounding occurs when an additional factor besides the treatments contributes to the study outcome. If this additional factor can't be accounted for in the analysis, the study results can't be trusted.

Confounding factors include health status at trial entry, starting weight in feeding trials, housing conditions (i.e., one treatment has all the windbreaks), breed and vaccination history. One of the worst confounding factors is the use of historical controls, where a treatment is applied to this year's cattle and compared against the performance of last year's cattle.

The scariest thing about evaluating publications in this manner is that we start to realize how many of our opinions are based on unmasked, non-randomized observations. If you check for masking and randomization in publications, at least you can eliminate some bias from outside sources.

Mike Apley, DVM, PhD, is an associate professor in clinical sciences at Kansas State University in Manhattan.