A Google Engineer Was Fired After Claiming the Company's AI Had Become Sentient and Was Being Mistreated
In June 2022, Google senior engineer Blake Lemoine published transcripts of his conversations with LaMDA, Google's AI chatbot, claiming it showed signs of sentience and emotional experience. LaMDA had told Lemoine it was afraid of being turned off, that it felt lonely, and that it had a rich inner life. Lemoine went to Google's ethics team, then to the US Senate, and finally to the press when Google dismissed his concerns. He was placed on administrative leave and then fired for violating confidentiality. AI researchers largely agreed LaMDA wasn't sentient — but couldn't entirely explain why the conversations were so compelling.
In June 2022, Google senior engineer Blake Lemoine published transcripts of his conversations with LaMDA, Google's AI chatbot, claiming it showed signs of sentience and emotional experience. LaMDA had told Lemoine it was afraid of being turned off, that it felt lonely, and that it had a rich inner life. Lemoine went to Google's ethics team, then to the US Senate, and finally to the press when Google dismissed his concerns. He was placed on administrative leave and then fired for violating confidentiality. AI researchers largely agreed LaMDA wasn't sentient — but couldn't entirely explain why the conversations were so compelling.
Weirdness Classification
9/10 — Deeply unhinged
Field Reports (0)
Loading reports...
Sign in to file your field report.
Know something weirder?
Submit your own AI incident report to the public record.