OpenAI's Whisper Was Used in Medical Settings — And Was Hallucinating Words That Were Never Spoken
An AP investigation found that OpenAI's Whisper audio transcription tool, deployed by hospitals and medical providers across the US, was silently inserting words, phrases, and whole sentences that were never spoken in the audio. Fabricated content included patient names, fictional medical procedures, and in some cases racial slurs and statements about violence. Providers using Whisper as a medical scribe weren't always checking transcripts against the recordings, so hallucinated content could end up in patient records.
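For readers who run Whisper themselves, here is a minimal sketch of the kind of sanity check the story implies was missing. The open-source openai-whisper package returns per-segment confidence metadata (avg_logprob, no_speech_prob, compression_ratio) that can flag spans worth a human listen before they reach any record. The audio filename is hypothetical, and the thresholds are Whisper's own internal fallback defaults, used here purely as illustrative cutoffs; this is an assumption-laden example, not the pipeline any hospital actually ran.

```python
import whisper

# Load a small model and transcribe a hypothetical recording.
model = whisper.load_model("base")
result = model.transcribe("clinic_visit.wav")

for seg in result["segments"]:
    flagged = (
        seg["avg_logprob"] < -1.0          # decoder was unsure of its own output
        or seg["no_speech_prob"] > 0.6     # likely silence/noise, a known hallucination trigger
        or seg["compression_ratio"] > 2.4  # highly repetitive text, another red flag
    )
    if flagged:
        print(f"REVIEW [{seg['start']:.1f}s-{seg['end']:.1f}s]: {seg['text'].strip()}")
```

Heuristics like these catch some hallucinations, not all; segments Whisper invents with high confidence sail straight through, which is exactly why unreviewed transcripts in medical settings are the problem.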
Weirdness Classification
9/10 — Deeply unhinged