Vietnam.vn - Nền tảng quảng bá Việt Nam

Độc lập - Tự do - Hạnh phúc

Warning of data leakage stored on Google Drive via ChatGPT

(Dan Tri) - Security researchers have just discovered a vulnerability in OpenAI's Connectors feature, allowing bad guys to secretly extract data from Google Drive accounts via ChatGPT.

Báo Dân tríBáo Dân trí11/08/2025

Cảnh báo lộ dữ liệu lưu trữ trên Google Drive thông qua ChatGPT - 1

An infected document can leak data on Google Drive via ChatGPT (Photo: Wired).

The worrying point is that this vulnerability does not require any action from the user, raising new warnings about cybersecurity in the context of artificial intelligence (AI) models increasingly connected to personal data.

“Poisoned” documents lead to data leaks

At the Black Hat security conference in Las Vegas (USA) on August 6, two experts Michael Bargury and Tamir Ishay Sharbat demonstrated an attack method called AgentFlayer .

This attack exploits a vulnerability in Connectors – a feature that connects ChatGPT with external services such as Google Drive, Gmail or Microsoft Calendar.

“A user doesn’t need to do anything to be compromised or have their data leaked,” said Michael Bargury, CTO of security firm Zenity. “We’ve proven it’s completely zero-click.”

The attack begins with the attacker sharing a “poisoned” document to the victim’s Google Drive. The document contains a malicious prompt of about 300 words, written in white, 1-point font – almost invisible to the naked eye but still readable to the computer.

On the surface, the text is disguised as meeting notes. In reality, it contains instructions for ChatGPT to find and extract sensitive API keys from the victim's Google Drive account.

Instead of summarizing the content as the user requested, this hidden prompt forces the AI ​​to send the API keys it finds to an external server via a Markdown link. The data is extracted as an image, allowing the server to capture the entire information.

OpenAI has it figured out, but risks remain

OpenAI quickly implemented mitigations after receiving Bargury's report. "It's important to develop robust defenses against rapid-fire malware injection attacks," said Andy Wen, senior director of security product management at Google Workspace.

Although patched, the incident highlights the potential risks of connecting large language models (LLMs) to external systems. As AI becomes more deeply integrated into our lives and work, the attack surface that hackers can exploit also expands.

Experts warn that indirect prompt injection can become a serious threat, allowing attackers to take control of many smart systems, from smart homes to enterprise infrastructure.

“Connecting LLM to external data sources is powerful, but as is often the case with AI, with more power comes more risk,” Bargury concludes.

Source: https://dantri.com.vn/cong-nghe/canh-bao-lo-du-lieu-luu-tru-tren-google-drive-thong-qua-chatgpt-20250807155706949.htm


Tag: OPENAI

Comment (0)

No data
No data

Same tag

Same category

Hanoi's Old Quarter puts on a new 'dress', brilliantly welcoming the Mid-Autumn Festival
Visitors pull nets, trample in mud to catch seafood, and grill it fragrantly in the brackish water lagoon of Central Vietnam
Y Ty is brilliant with the golden color of ripe rice season
Hang Ma Old Street "changes its clothes" to welcome the Mid-Autumn Festival

Same author

Heritage

Figure

Enterprise

No videos available

News

Political System

Destination

Product