
HSC 343 Research for Evidence Based Practice: Evidence Appraisal

Evidence Appraisal Criteria

When appraising research, "three things to bear in mind are quality, validity, and size:

Trials that are randomised and double blind, to avoid selection and observer bias, and where we know what happened to most of the subjects in the trial.

Trials that mimic clinical practice, or could be used in clinical practice, and with outcomes that make sense. For instance, in chronic disorders we want long-term, not short-term trials. We are [also] ... interested in outcomes that are large, useful, and statistically very significant (p < 0.01, a 1 in 100 chance of being wrong).

Trials (or collections of trials) that have large numbers of patients, to avoid being wrong because of the random play of chance. For instance, to be sure that a number needed to treat (NNT) of 2.5 is really between 2 and 3, we need results from about 500 patients. If that NNT is above 5, we need data from thousands of patients.

These are the criteria on which we should judge evidence. For it to be strong evidence, it has to fulfil the requirements of all three criteria."

Source: Critical Appraisal. Bandolier.
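The NNT figures quoted above can be computed from event rates. A minimal sketch in Python; the event rates below are illustrative assumptions, not data from the source:

```python
def absolute_risk_reduction(control_event_rate, treated_event_rate):
    """ARR: how much the treatment lowers the event rate."""
    return control_event_rate - treated_event_rate

def number_needed_to_treat(control_event_rate, treated_event_rate):
    """NNT = 1 / ARR: patients treated for one additional good outcome."""
    return 1.0 / absolute_risk_reduction(control_event_rate, treated_event_rate)

# Hypothetical rates: 50% of controls and 10% of treated patients have the event.
arr = absolute_risk_reduction(0.50, 0.10)   # about 0.40
nnt = number_needed_to_treat(0.50, 0.10)    # about 2.5, the range discussed above
print(arr, nnt)
```

The smaller the absolute risk reduction, the larger the NNT, which is why an NNT above 5 demands data from thousands of patients to pin down precisely.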

How to Read a Scholarly Article

For tips on the structure of scholarly articles, see this tutorial on the Anatomy of a Scholarly Article by librarians at NCSU.

Evaluating Online Information

Is there an author of the document? Can you determine their credentials? If you cannot determine the author of the site, then think twice about using it as a resource.

Is the site sponsored by a group or organization? If it is, does the group advocate a certain philosophy? Try to find and read "About Us" or similar information.

Is there any bias evident in the site? Is the site trying to sell you a product? Ask why the page was put on the web.

Is there a date on the website? Is it sufficiently up-to-date? If there is no date, think twice about using it. Undated factual or statistical information should never be used.

How credible and authentic are the links to other resources? Are the links evaluated or annotated in any way?


Sound evaluation of information and successful healthcare practice are based on:

"Maintaining a healthy skepticism about the quality and validity of all information." (AAMC)
"Making decisions based on evidence, when such is available, rather than opinion." (AAMC)

Evaluating Statistical Results

Many scholarly research articles include statistical analysis of numerical data gathered during a study or experiment. To understand these results, check out these explanations.

Is this a Quantitative Study? This video demonstrates how to determine whether an article reports a quantitative study.

Study Designs

Different study designs provide higher-level evidence depending on the type of clinical question to be answered.

Case series

A report on a series of patients with an outcome of interest. No control group is involved.

Case-Control Study

Case-control studies begin with the outcomes and do not follow people over time. Researchers choose people with a particular outcome (the cases) and a comparison group without that outcome (the controls), then interview them or check their records to ascertain what different exposures they had. They compare the odds of exposure among the cases with the odds of exposure among the controls.
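The odds comparison in a case-control study is usually summarized as an odds ratio from a 2×2 table. A minimal sketch; the counts are made up for illustration:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds of exposure among cases divided by odds of exposure among controls."""
    case_odds = exposed_cases / unexposed_cases
    control_odds = exposed_controls / unexposed_controls
    return case_odds / control_odds

# Hypothetical 2x2 table: 30 exposed / 10 unexposed cases,
# 20 exposed / 40 unexposed controls.
print(odds_ratio(30, 10, 20, 40))  # 6.0 -- cases had six times the odds of exposure
```

An odds ratio of 1 would mean exposure is equally common in cases and controls, i.e. no association.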

Cross-sectional study

The observation of a defined population at a single point in time or time interval. Exposure and outcome are determined simultaneously.

Cohort Study (Prospective Observational Study)

A clinical research study in which people who presently have a certain condition or receive a particular treatment are followed over time and compared with another group of people who are not affected by the condition.

Controlled Clinical Trial

A type of clinical trial comparing the effectiveness of one medication or treatment with the effectiveness of another medication or treatment. In many controlled trials, the other treatment is a placebo (inactive substance) and is considered the "control."

Randomized Controlled Trial

A controlled clinical trial that randomly (by chance) assigns participants to two or more groups. There are various methods to randomize study participants to their groups.
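Two common allocation methods, simple and block randomization, can be sketched as follows. The group labels and block size are illustrative assumptions, not part of any particular trial protocol:

```python
import random

def simple_randomization(n, groups=("treatment", "control"), seed=None):
    """Each participant is independently assigned, like a coin flip."""
    rng = random.Random(seed)
    return [rng.choice(groups) for _ in range(n)]

def block_randomization(n, block_size=4, groups=("treatment", "control"), seed=None):
    """Shuffle balanced blocks so group sizes stay nearly equal throughout enrolment."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n:
        block = list(groups) * (block_size // len(groups))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n]

print(block_randomization(8, seed=1))  # always 4 "treatment" and 4 "control"
```

Simple randomization can produce unequal group sizes by chance in small trials; block randomization guards against that, which is why many protocols prefer it.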

MedlinePlus offers tutorials and information about how clinical trials work.

Systematic Review

A summary of the clinical literature. A systematic review is a critical assessment and evaluation of all research studies that address a particular clinical issue. The researchers use an organized method of locating, assembling, and evaluating a body of literature on a particular topic using a set of specific criteria. A systematic review typically includes a description of the findings of the collection of research studies. The systematic review may also include a quantitative pooling of data, called a meta-analysis.


Meta-Analysis

A way of combining data from many different research studies. A meta-analysis is a statistical process that combines the findings from individual studies.

Study Designs. (2007, November 3). In NICHSR Introduction to Health Services Research: A Self-Study Course. Retrieved June 23, 2009.

Glossary of EBM Terms. Retrieved June 23, 2009.

Levels of Evidence

Medical, health, and scientific research relies on distinctive methods of data collection and study design.

When evaluating this research, ask additional questions:

  • What methodology is the author following?
  • What is their line of reasoning?

The pyramid below diagrams the levels of quality in scientific studies.