Українська правда

OpenAI has fixed a ChatGPT vulnerability that could have opened access to users' Gmail accounts

OpenAI has fixed a ChatGPT vulnerability that could have opened access to users' Gmail accounts
0

OpenAI has fixed a bug in ChatGPT’s Deep Research tool that could have allowed attackers to gain access to users’ Gmail accounts. Using the mail integration, an attacker could have discreetly embedded hidden instructions into a regular email and thus forced the agent to read the contents of the mailbox, Bloomberg reports.

The vulnerability was discovered by researchers at cybersecurity firm Radware. OpenAI released a patch on September 3 and said it had found no evidence of actual exploitation of the bug.

Experts explain that the attack belongs to the category of indirect "prompt injections": dangerous commands are hidden in permitted data (for example, in the text of an email or a calendar invitation). As soon as ChatGPT with Gmail connected processes such an email, it executes a hidden script — for example, it outputs full names and addresses from the email archive.

Radware warns that if the vulnerability were used against corporate email, the leak could go unnoticed for weeks by account owners. Experts advise companies to carefully check which applications are granted permanent access to Google services and whether it really needs it.

The incident was one of the first public examples of not attackers using AI tools for attacks, but rather the agent functions of AI themselves being turned against their users. Incidentally, such a scenario was previously discussed in Guardio research.

OpenAI emphasizes that it welcomes the work of "white hat" hackers, thanks to which it quickly closed the vulnerability, and calls on researchers to continue testing systems in "hostile" scenarios to increase the resilience of models.

Share:
Посилання скопійовано
Advert:
Advert: