
How Scientists Evaluate Clinical Evidence

Every clinic claims to be 'evidence-based.' Every supplement says 'clinically proven' on the label. These phrases get repeated so often they have stopped meaning anything. That is a problem, not because evidence does not matter, but because the gap between strong evidence and weak evidence is invisible unless you know what to look for. The health industry relies on that invisibility. A published researcher with nearly a thousand citations explains how scientists actually evaluate clinical evidence, and hands you the same framework so you can assess health claims yourself.

By Dr. Mitchell Henry Wright | PhD (Microbiology), BBiotech (Hons) | Scientific Advisor | 7 min read | Science Education

Key Takeaways

  1. The phrase 'clinically proven' has no regulated definition in Australian advertising law and may reference weak study designs.

  2. The evidence hierarchy ranks research from anecdotes at the bottom to systematic reviews and meta-analyses at the top.

  3. Four questions help assess any health claim: was it studied in humans, how large was the sample, was it peer-reviewed, and who funded it?

  4. An evidence-based clinic cites specific studies, employs AHPRA-registered practitioners, and orders blood work before prescribing.

What "Clinically Proven" Actually Means (Usually Nothing)

The phrase 'clinically proven' has no regulated definition in Australian advertising law. Anyone can use it. A single uncontrolled pilot study with 12 participants technically qualifies something as 'clinically tested.' A manufacturer's in-house trial with no independent oversight can produce a statistic that ends up on a label. These are not lies, strictly speaking. They are marketing doing what marketing does: selecting the most favourable framing of the weakest acceptable evidence.

I learned this distinction early. In 2016, I co-authored a paper on antimicrobial plant extracts published in the International Journal of Food Science and Technology. We tested Australian culinary plants against Shewanella putrefaciens, a bacterium that causes fish spoilage. The laboratory screening showed genuine growth inhibition. Real data. Reproducible results, published in a peer-reviewed journal with every method documented.

But those results were in vitro, meaning they happened in a laboratory, not in a human body. They described what one plant extract did to one bacterial species under controlled conditions. Extrapolating from 'this extract inhibits bacterial growth on an agar plate' to 'this plant cures infections in humans' would have been irresponsible. The results were real. The extrapolation would have been fiction.

That gap between a real finding and a useful clinical conclusion is where most health marketing lives. A study exists. The study measured something. The marketing takes the measurement and stretches it until it sounds like a promise. 'Clinically tested' becomes 'clinically proven' becomes 'doctor recommended' becomes 'guaranteed results' in the span of a single social media post.

The antidote is not cynicism. It is knowing what to look for. The first step is understanding that not all evidence carries the same weight, and the hierarchy that separates strong evidence from weak evidence has been well established in scientific practice for decades.

The Evidence Hierarchy (What Sits Where)

Scientists do not treat all research as equal because research is not equal. The evidence hierarchy exists to separate what sounds convincing from what actually holds up under scrutiny.

At the bottom sit anecdotes and testimonials. Your friend says a supplement changed his life. That is a data point of one, with no controls, no blinding, no baseline measurements, and no way to isolate the supplement's effect from every other variable in his life. It is not nothing, but it is not evidence.

Above that sit case reports and cohort studies. A case report documents an unusual response to a treatment in a single patient. Useful for generating hypotheses. Insufficient for confirming them. Cohort studies follow groups of people over time and look for patterns. Stronger, but still observational. You can identify correlations, but you cannot confirm causation because the variables are not controlled.

Randomised controlled trials (RCTs) sit near the top. Participants are randomly assigned to either the treatment or a control group. In a double-blinded design (where neither the participants nor the researchers know who receives what), the study design itself reduces bias. This is where you start getting answers that warrant confidence.
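A toy simulation makes the point concrete. This is a sketch under invented conditions, not a model of any real trial: it gives 200 imaginary participants a hidden variable, assigns them at random, and shows the two arms end up comparable on something nobody measured.

```python
# A minimal sketch of why randomisation reduces bias. The 'confounder'
# values are simulated for illustration, not drawn from any real study.
import random

random.seed(1)  # fixed seed so the result is reproducible

# 200 imaginary participants, each with a hidden variable the
# researchers never measured (smoking, diet, genetics, anything)
participants = [{"confounder": random.gauss(0, 1)} for _ in range(200)]

# random assignment: shuffle, then split into two arms of 100
random.shuffle(participants)
treatment, control = participants[:100], participants[100:]

def mean(group):
    return sum(p["confounder"] for p in group) / len(group)

# With 100 per arm, both averages should land near zero: random
# assignment balanced a variable no one knew existed.
print(f"treatment: {mean(treatment):+.2f}  control: {mean(control):+.2f}")
```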

At the top sit systematic reviews and meta-analyses. These pool data from multiple high-quality RCTs and produce a combined statistical summary. When a systematic review from the Cochrane Library indicates something works, that claim stands on the combined evidence of thousands of participants across independent research groups. Even systematic reviews have known limitations: heterogeneity across included studies, publication bias that favours positive results, and varying quality among the trials being pooled can all affect the conclusions.
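The pooling step itself is plain arithmetic. Here is a hedged sketch of fixed-effect, inverse-variance pooling, the basic method behind a meta-analysis; the effect sizes and standard errors are invented for illustration, not taken from any real review.

```python
# A minimal sketch of fixed-effect, inverse-variance meta-analysis.
# The (effect, standard error) pairs below are hypothetical.
import math

trials = [(0.30, 0.15), (0.22, 0.10), (0.40, 0.20)]

# precise trials (small standard errors) get larger weights
weights = [1 / se**2 for _, se in trials]
pooled = sum(w * e for (e, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.2f} ± {1.96 * pooled_se:.2f} (95% CI)")
# The combined estimate is more precise than any single trial's,
# which is exactly why the top of the hierarchy sits where it does.
```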

Most health marketing cites evidence from the bottom half. Forum posts. Testimonials. Single case studies. Animal models where a mouse received a dose that would never translate to human physiology. In vitro studies (lab-based experiments) where cells in a dish responded to a concentration that could never reach those cells inside a living body.

A 2023 review in Expert Opinion on Drug Safety traced how this hierarchy evolved from its original three-level description by the Canadian Task Force in 1979 to the five-tiered pyramid used today. The framework has been refined over decades. The core principle has not changed: the higher the level, the more confidence you can place in the conclusion.

Recognising where a claim sits on this hierarchy is the single most useful skill for evaluating health information. You do not need a PhD. You need one question: what kind of study is this?

Four Questions to Ask About Any Health Claim

You do not need to read the full study to assess a health claim. Four questions will get you most of the way there.

First: was this study done in humans? The gap between in vitro results (laboratory experiments) and human outcomes is substantial. Cells in a dish do not have immune systems, circulatory systems, metabolic pathways, or competing physiological demands. An extract that kills cancer cells in a petri dish might do nothing in a human body because it never reaches those cells at a meaningful concentration. Animal studies are closer to human biology, but the translation remains imperfect. A result in mice does not automatically apply to men.

Second: how many people were in the study? A pilot trial with 8 participants may suggest something worth investigating. It cannot demonstrate that an intervention works. Statistical power requires adequate sample size, and most of the impressive-sounding statistics in health marketing come from studies too small to produce reliable conclusions. When someone quotes a percentage improvement, ask how many people that percentage represents.
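The arithmetic behind that warning is standard. Here is a sketch using the usual normal approximation for a two-arm comparison; the significance level, power, and effect sizes are conventional defaults, not values from any particular study.

```python
# A minimal sketch of the standard sample-size approximation for a
# two-arm trial comparing means. Alpha, power, and effect sizes are
# illustrative conventions (5% significance, 80% power).
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants per arm needed to detect a
    standardised mean difference (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # desired statistical power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

print(round(n_per_group(0.5)))  # medium effect: ~63 per arm, ~126 total
print(round(n_per_group(0.2)))  # small effect: ~393 per arm, ~786 total
# An 8-person pilot (4 per arm) can only reliably detect d ≈ 2.0,
# an effect larger than almost any real intervention produces.
```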

Third: was it peer-reviewed, and where was it published? You can check this yourself. Go to PubMed and search for the study. If it is indexed there, it has been published in a peer-reviewed journal. If the only place you can find it is on the company's own website, the evidence has not been through independent scrutiny. The Cochrane Library publishes systematic reviews that represent the highest tier of clinical evidence.
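That PubMed check can even be scripted. A minimal sketch against NCBI's public E-utilities search endpoint; the query string is just an example.

```python
# A minimal sketch: ask NCBI's E-utilities whether a study is indexed
# in PubMed. The search phrase below is an illustrative example.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_ids(query):
    """Return the PubMed IDs matching a title or keyword search."""
    resp = requests.get(
        ESEARCH,
        params={"db": "pubmed", "term": query, "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

# An empty list means the 'study' is not in a PubMed-indexed journal.
print(pubmed_ids("Shewanella putrefaciens culinary plants spoilage"))
```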

Fourth: who funded the study? A manufacturer-funded study is not automatically wrong. But funding creates incentive structures that influence study design, outcome selection, and how results are reported. Independent research conducted by academics with no financial stake in the outcome carries more weight. When evaluating a claim, check the funding disclosure at the end of the paper. It tells you who paid for the answer you are reading.

These four questions will not make you a scientist. They will make you a harder target for marketing. I use these same questions every time I review a paper, not because I am cynical, but because the questions reveal the architecture of the evidence. Good studies hold up. Weak studies fall apart at the first question. Most of the claims you encounter in health marketing do not survive past the second.

Why "Natural" and "Safe" Aren't the Same Thing

Arsenic is natural. Ricin, which comes from castor beans, is natural. The venom in a box jellyfish tentacle is entirely natural, and it can stop a human heart in under five minutes. Death cap mushrooms grow wild in Australian parks, and a single cap can destroy your liver.

As a microbiologist, I work with naturally occurring organisms that can kill you. The word 'natural' describes origin, not safety profile. It tells you where something came from. It tells you nothing about what it does at a given dose in a specific body.

The same logic applies to health products. 'Natural' is a marketing category, not a scientific one. What actually determines safety is dose, how the substance interacts with your body, individual variation in response, drug interactions, and whether someone qualified has assessed whether it is appropriate for you. A substance derived from a plant can be as dangerous as a substance synthesised in a factory. The molecular structure determines the effect, not the origin story.

When you see 'natural' on a label, it should tell you exactly one thing about the product's safety: nothing.

What to Look For in a Clinic That Claims to Be Evidence-Based

Most clinics call themselves evidence-based. Here is how to test whether they mean it.

Do they cite specific studies? Not 'research shows' as a hand-wave, but actual citations you can verify: a DOI link, a PubMed ID, a named author, a journal title. If a clinic cannot point you to the specific evidence behind their approach, the claim is decorative.

Are their practitioners AHPRA-registered? This sounds basic, but it is the regulatory floor. Registration means the practitioner meets minimum standards of qualification, is subject to professional oversight, and can be held accountable. You can verify registration on the AHPRA website in about 30 seconds.

Do they order blood work before prescribing? A practitioner who writes a prescription without reviewing your pathology (blood tests) is making clinical decisions without clinical data. Evidence-based practice requires evidence about you, not just evidence from published literature.

Do they say 'no' when the evidence does not support what you are asking for? This question matters most. A clinic that will prescribe whatever you request is not evidence-based. It is a vending machine with a medical licence. Genuine evidence-based practice means sometimes the answer is: the evidence does not support this approach for your situation.

For the full framework, read our detailed explainer on what evidence-based actually means in Australian healthcare. To see how one clinic applies these standards, learn about how Regeniq's clinical process works.

References

  1. Sackett DL. Evidence-based medicine. Seminars in Perinatology. 1997;21(1):3-5. PMID: 9190027
  2. Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Crotty K. Low Health Literacy and Health Outcomes: An Updated Systematic Review. Annals of Internal Medicine. 2011;155(2):97-107. PMID: 21768583
  3. Jansen MS, Dekkers OM, le Cessie S, et al. Real-World Evidence to Inform Regulatory Decision Making: A Scoping Review. Clinical Pharmacology and Therapeutics. 2024;115(6):1269-1276. PMID: 38390633
  4. Wright MH, Adami Salminen T, Baker AK, Greene AC, Cock IE. Prevention of fish spoilage by high antioxidant Australian culinary plants: Shewanella putrefaciens growth inhibition. International Journal of Food Science and Technology. 2016;51(3):801-813.
