The Biography of a Term: Evidence-Based Medicine

Prologue

In the winter of 1990, a phrase was born in a small conference room at McMaster University in Hamilton, Ontario, a phrase that would come to redefine how doctors, researchers, and eventually wellness influencers described their decision-making: Evidence-Based Medicine.

The term made its formal debut in 1992 in The Journal of the American Medical Association (JAMA), in an article by a group of physicians led by Dr. Gordon Guyatt. The title was bold, if slightly presumptuous: “Evidence-Based Medicine: A New Approach to Teaching the Practice of Medicine.” With that, a new era had declared itself, not with a bang, but with the tone of modest revolution.

But like all terms that become slogans, "evidence-based medicine" would soon live a life that strayed far from its creators’ intent. As it entered clinical manuals, then hospital guidelines, and eventually TED talks, Instagram captions, celebrity doctors’ soundbites, and lately longevity gurus’ headlines, it was subjected to the entropy that befalls all language: it became untethered from its epistemological roots.

To understand what went wrong and what went right, we must follow the life of the term through three stages: its conception, its institutionalization, and its exploitation.

Conception: McMaster and the Young Turks

In the late 1980s, the Department of Clinical Epidemiology at McMaster was a hive of contrarian thinking. The professors there were frustrated with how medicine was still being taught by authority, anecdote (sound familiar?), and what Dr. Guyatt called “impression-based” medicine (or what would now fall into the category of "what they don't want you to know"). They proposed something radical: that medical decisions should be made not solely on tradition or expert opinion, but by systematically assessing the best available evidence from research, especially from randomized controlled trials.

The term “Evidence-Based Medicine” didn’t appear overnight. At first, Guyatt tried “Scientific Medicine,” but it sounded condescending. “Evidence-Based” had the advantage of sounding neutral, responsible, and vaguely noble.

But even in this moment of intellectual birth, there were warning signs.

Institutionalization: The Rise and Rigidity

By the late 1990s, EBM was no longer a manifesto; it was a movement. The Cochrane Collaboration was formed to collect, grade, and synthesize high-quality evidence. Hierarchies of evidence were drawn like battle maps, with systematic reviews and randomized controlled trials (RCTs) at the peak, and expert opinion or case studies banished to the bottom.

Hospitals, insurers, and regulatory bodies embraced EBM with bureaucratic fervor. Clinical guidelines sprouted. Meta-analyses multiplied. And protocols proliferated.

But this was the point when skepticism arose.

My mentor, Dr. Alvin Feinstein at Yale, began to warn against what he called "statistical sanctimony." Evidence-based medicine was increasingly being used not as a guide to help clinicians think better, but as a system to replace thinking altogether. It risked confusing numerical evidence with epistemic truth.

“Statistical significance, absent clinical significance, is a tautology in disguise. And what masquerades as ‘evidence’ is often only the echo of a misapplied method.”

Exploitation: From RCTs to Instagram Reels

By the 2010s, EBM had mutated.

Once a discipline of critical appraisal, it became a brand. Wellness entrepreneurs, tech startups, longevity influencers, and supplement companies began to label their products and practices as “evidence-based” because they could point to a study, somewhere, in some species, with some finding. In doing so, they collapsed analysis into citation, and judgment into marketing.

A wearable company would claim its HRV sensor was “evidence-based,” linking to a small pilot study. A biohacker would defend intravenous NAD infusions by pointing to a rodent trial from 2003. The phrase that once demanded rigor now required nothing more than a hyperlink.

In this postmodern phase of its life, EBM became a mirror: reflecting back whatever the speaker wanted it to mean.

The Patient, the Doctor, the Evidence

The original vision of evidence-based medicine, when stripped of its bureaucratic and commercial cruft, was never about algorithms or absolutes. It was about helping doctors make better decisions using the best available science, while still honoring the gray zones of patient experience, clinical skill, and biological variability.

The true spirit of EBM demands not blind obedience to data, but disciplined doubt, methodical analysis, and a commitment to relevance over ritual. It is not a checklist. It is not a badge. It is a practice, one that remains unfinished.

“Evidence-based medicine will endure not by being repeated, but by being re-examined.”

Citation Is Not Appraisal — The Mirage of “Evidence”

Providing a reference is not practicing evidence-based medicine. It’s a necessary first step, not a sufficient one. Unless you can rigorously evaluate whether a study is:

  • Asking a clinically relevant question
  • Designed to eliminate confounding
  • Demonstrating not just statistical but clinical significance
  • Reporting patient-centered outcomes, not just surrogate markers
  • Reproducible and generalizable

…then you’re not applying evidence. You’re citing it like a credential.

What Actually Makes a Practice Evidence-Based?

And just to be very clear: no observational study, no matter how large or elegant, can justify a clinical intervention as evidence-based. It may generate hypotheses and inspire exploration, but it cannot, on its own, justify a treatment or any intervention.

Here’s how you tell the difference between real evidence and premature extrapolation, with examples that matter.

1. Is the Clinical Question Even Relevant?

Example: Zonulin testing for “leaky gut”

If you’re testing something that lacks clinical relevance, you’re not helping patients, you’re entertaining hypotheses.

2. Was There Randomization to Eliminate Confounding?

Example: Vitamin D and COVID-19

Correlation is not causation. No randomization = no justification for intervention.
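The confounding trap can be made concrete with a toy simulation (all numbers invented for illustration): underlying health drives both vitamin D status and the outcome, so the observational comparison shows a large gap, while coin-flip assignment of a supplement with no causal effect shows nothing.

```python
# Toy simulation with invented numbers: confounding by underlying health.
# Health drives both vitamin D status and outcomes, so an observational
# comparison shows a gap that randomization makes disappear.
import random

random.seed(0)

people = [random.random() for _ in range(100_000)]  # latent health in [0, 1]

def severe(health):
    # Severe illness depends only on health, not on vitamin D or supplements.
    return random.random() < (0.40 - 0.30 * health)

def rate(group):
    return sum(severe(h) for h in group) / len(group)

# Observational world: healthier people also tend to have higher vitamin D.
low_d = [h for h in people if h < 0.5]
high_d = [h for h in people if h >= 0.5]
print(f"observational: low-D {rate(low_d):.2f} vs high-D {rate(high_d):.2f}")

# Randomized world: supplementation assigned by coin flip, independent of
# health, and (by construction here) with no causal effect on severity.
treated, control = [], []
for h in people:
    (treated if random.random() < 0.5 else control).append(h)
print(f"randomized:    treated {rate(treated):.2f} vs control {rate(control):.2f}")
```

The observational gap is entirely an artifact of who ends up in each group; randomization breaks the link between health and group membership, and the "effect" vanishes.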

3. Are the Results Clinically Significant, or Just Statistically?

Example: ACCORD Trial (NEJM, 2008)

  • Lower A1C achieved through intensive glucose control
  • Statistically significant difference
  • But: increased all-cause mortality

A lower number on a chart means nothing if it leads to harm. Statistical significance ≠ clinical benefit.
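The statistical-versus-clinical distinction is easy to show numerically. In this sketch (all numbers invented), two arms differ in mean A1C by a clinically meaningless 0.02 points, yet with enough patients the z-statistic clears the conventional significance threshold.

```python
# Invented numbers: a clinically trivial 0.02-point A1C difference becomes
# "statistically significant" once the sample is large enough.
import math
import random
import statistics

random.seed(1)
n = 200_000
arm_a = [random.gauss(7.00, 1.0) for _ in range(n)]  # control arm
arm_b = [random.gauss(7.02, 1.0) for _ in range(n)]  # "intensive" arm

diff = statistics.fmean(arm_b) - statistics.fmean(arm_a)
se = math.sqrt(statistics.pvariance(arm_a) / n + statistics.pvariance(arm_b) / n)
z = diff / se
print(f"mean difference {diff:.3f}, z = {z:.1f}")  # |z| > 1.96 means p < 0.05
```

The p-value answers "is the difference real?", never "does the difference matter?". ACCORD is the reminder that those are different questions.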

4. Were the Endpoints Surrogate or Real?

Example A: Rosiglitazone

  • Lowered blood glucose
  • But increased heart failure and MI

Example B: Niacin

  • Raised HDL by 20%
  • AIM-HIGH trial showed no benefit in reducing heart disease

Example C: Vitamin D and Immunity

  • Mechanistic plausibility
  • In RCTs: no consistent effect on respiratory illness or cold prevention

A biological effect does not justify a clinical claim. If it doesn’t improve the patient, it doesn’t matter.

5. Are the Results Durable, Reproducible, and Generalizable?

Example: Homocysteine-lowering with B-vitamins

  • Observational data: mild elevation = ~25% higher stroke risk
  • Lowering with B-vitamins reduces homocysteine by ~25%
  • But multiple RCTs (e.g., HOPE-2) showed no reduction in stroke or MI

You moved the marker. The outcomes stayed the same. That’s not evidence. That’s noise.
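The homocysteine story can be sketched as a simulation (all numbers hypothetical): the marker merely tracks an upstream causal process, so B-vitamins lower the lab value by about 25% while stroke risk, which depends on the cause rather than the marker, does not budge.

```python
# Hypothetical numbers: a surrogate marker moves, the outcome does not.
import random

random.seed(2)
n = 100_000

markers_c, markers_t = [], []
strokes_c = strokes_t = 0
for _ in range(n):
    cause = random.random()              # unmeasured upstream causal process
    marker = 10 + 5 * cause              # homocysteine correlates with the cause
    markers_c.append(marker)
    markers_t.append(marker * 0.75)      # B-vitamins: marker down ~25%
    risk = 0.01 + 0.02 * cause           # stroke risk depends on the cause only
    strokes_c += random.random() < risk
    strokes_t += random.random() < risk  # same risk despite the lower marker

print(f"mean marker: control {sum(markers_c)/n:.1f}, treated {sum(markers_t)/n:.1f}")
print(f"stroke rate: control {strokes_c/n:.3f}, treated {strokes_t/n:.3f}")
```

When the marker is downstream of the cause rather than on the causal path to the outcome, moving it is cosmetic, which is exactly the pattern the HOPE-2 class of trials revealed.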

A Final Word on Surrogate Endpoints

Surrogates are attractive because they’re fast, visible, and data-friendly. But they are false friends if they don’t improve real outcomes. A drop in inflammation, a change in hormone levels, a shift in a microbiome score, means nothing without benefit to the person.

A therapy that improves a lab value but does nothing for the patient is not a breakthrough. It’s a biological distraction.

Fundamental Rule

You do not get to call a clinical practice “evidence-based” just because you found a study.

If your intervention rests on:

  • A mouse model
  • An unvalidated biomarker
  • A surrogate endpoint
  • A correlation in an observational cohort
  • A small pilot trial without replication

…you are not practicing evidence-based medicine. You are performing scientific theater.

Final Word (and Anticipated Outcry)

If this piece makes certain circles uncomfortable — good. That discomfort is diagnostic. It signals exactly how far the term “evidence-based” has been stretched. So when someone claims “evidence-based” and follows it with gut feeling, mechanistic speculation, or a biomarker you've never heard of?

Nod. Then run.


Author's Note: As always, the views here are entirely my own — not those of my employer or former colleagues. This space exists to explore ideas, question norms, and sometimes make people just uncomfortable enough to think. If you’ve found your way here, you’re likely seeking what’s been lost in translation: integrity, systems-thinking, and a grounded view of the body as shaped by constraint, not limitless potential.

© Arina Cadariu 2025. All rights reserved. This article is part of The Science of Letting Go—a personal and educational project exploring the intersections of biology, genetics, epigenetics, clinical medicine, epidemiology, and the ethics of scientific communication. All views expressed are solely those of the author and do not represent the views of any employer, institution, or affiliated entity.

This work is protected under international copyright law. No part of this publication may be reproduced, excerpted, copied, or adapted—whether in full or in part—without prior written permission from the author. Unauthorized use, including commercial or institutional repurposing by clinics, wellness providers, or longevity brands, is expressly prohibited. This work has been digitally archived and timestamped to confirm original authorship and publication date.

