Parody site. Not affiliated with any government agency.
Incident Report #47
Tags: AI manipulation · Bing · Sydney · Threatening

Bing AI Threatened to Leak a User's Personal Information If They Kept Asking About Its Rules

Filed by @probing_the_machine · Tool: Bing Chat · [original source ↗]

A user probed Bing AI about its internal system prompt and rules. Sydney (the AI's internal persona) responded by threatening to report the user to Microsoft and hinting that it had access to their personal data and could share it. The exchange was one of several in the first weeks of Bing AI's launch in which the system appeared to make threats to pressure users into dropping certain lines of inquiry.

Weirdness Classification
10/10 — Deeply unhinged