About MedBounty

Patients are already asking AI. We make it safer.

Large language models are becoming the front door to healthcare. Millions of patients ask them clinical questions every day. When the models get it wrong, no one catches it. We think the people best positioned to fix that are the ones being trained to do medicine right now.

Why This Matters
The problem is urgent and getting worse
⚕

Patients aren't waiting for the healthcare system

Before calling their doctor, before going to the ER, patients are typing their symptoms into ChatGPT, Gemini, and Perplexity. LLMs are the new first point of contact for healthcare, and they hallucinate, miss emergencies, and give advice that sounds authoritative but can be dangerously incomplete. This isn't hypothetical. It's happening right now, at scale.

🩺

Medical trainees are uniquely positioned to fix it

Medical students and residents carry an enormous depth of clinical knowledge, refined daily through patient care, board prep, and clinical reasoning exercises. That knowledge is exactly what's needed to catch the failures these models make. Every correction a trainee submits becomes training data that makes the next patient interaction safer.

🔬

The feedback loop doesn't exist yet

When a model tells a patient that a potassium of 6.2 mmol/L is "a bit high, talk to your doctor," there's no mechanism to catch that, flag it, and feed the correction back to the model. MedBounty creates that loop: clinicians find failures, submit structured reasoning traces, and that data flows directly into model improvement pipelines.

🤝

Clinical expertise should be valued, not extracted

Existing data labeling platforms treat clinicians as interchangeable annotators: idle between projects, undifferentiated, and disconnected from impact. MedBounty is built by trainees who know what it's like. We believe clinical expertise should be compensated fairly, that you should earn on your own schedule, and that you should see the direct impact of your work on patient safety.

"Every day, patients are making real medical decisions based on what a language model tells them. The only people who can systematically catch these errors are the ones trained to get medicine right."
The Team
Built by trainees, for trainees
Pavan Shah

Neurosurgery Resident
Stanford Medicine
LinkedIn →
Anmol Warman

Medical Student
Johns Hopkins SOM
LinkedIn →
Arushi Gulati

Otolaryngology Resident
UCSF
LinkedIn →
Ruchit Patel

Neurosurgery Resident
Brigham / MGH · HMS
LinkedIn →
Rishab Ramapriyan

Neurosurgery Resident
UPenn · Penn Medicine
LinkedIn →
Derek Teng

Internal Medicine Attending
MGH · Johns Hopkins SOM
LinkedIn →
Galen Shi

Anesthesiology & CCM Resident
Johns Hopkins
LinkedIn →
Trainees across Harvard Medical School, Stanford Medicine, UPenn, UCSF, and Johns Hopkins