OpenAI has disclosed a data leak affecting an undisclosed number of API users, with personal data and metadata exposed after an attack on its analytics provider, Mixpanel.
Mixpanel, which provides analytics software for tracking product and service usage, confirmed its systems were breached. OpenAI used Mixpanel for its API product, which lets customers integrate OpenAI’s AI models into their own applications.
The compromised data includes names, email addresses, location details, operating systems, browsers, referring websites, and user/organization IDs linked to API accounts.
Mixpanel detected the attack on November 8 and informed OpenAI on November 25 that user data had been stolen. OpenAI has since removed Mixpanel from its platform. Read OpenAI’s statement here: OpenAI Mixpanel Incident.
OpenAI is warning affected users that the stolen information could be used in social engineering or phishing attacks, and it is notifying victims of the breach directly.
Mixpanel issued a brief statement confirming that a “smishing” (SMS phishing) campaign was detected on November 8, but it provided few details about the incident, drawing criticism from users. Find Mixpanel’s blog post here: Mixpanel SMS Security Incident.
Users on Hacker News have strongly criticized Mixpanel’s delayed and sparse communication, especially given the timing just before a major U.S. holiday. See the discussion here: Hacker News Discussion.
The incident at OpenAI highlights how fragile digital services become when they rely on third-party providers, echoing the recent Cloudflare outage that disrupted major platforms, including ChatGPT. In such interdependent systems, a single point of failure can have widespread consequences, making resilient infrastructure and careful vendor security essential.
Moreover, the leak feeds a growing concern surrounding AI platforms: their potential for exploitation and misuse, as evidenced by an AI-assisted espionage campaign that leveraged coding tools for reconnaissance and exploit generation. These incidents illustrate the evolving tactics of threat actors in the AI domain.
Protecting personal data and metadata remains paramount. Companies like OpenAI face the persistent challenge of safeguarding API users’ sensitive information against increasingly sophisticated cyberattacks and vulnerabilities in the digital supply chain.