Letting AI Summarize Unknown Email, Docs…
Gives attackers a way to execute hidden commands. Your “summary” is their payload.
AI is being forced into everything. Now it’s in Gmail. And it’s exploitable.
This is a problem Google is trying to address but it’s not an easy fix.
We also don’t believe that it’s a problem that will remain limited to Google and Gemini…
Researchers showed how a simple trick (white-on-white hidden text) can hijack Gemini’s summaries. The AI doesn’t see a blank line. It sees instructions. And it obeys.
The result? Gemini spits out a fake “security alert” that looks official. “Your Gmail password has been compromised. Call this number.” On the other end, a scammer waits.
Why this matters
- No links. No attachments. Spam filters don’t stop it.
- Gemini obeys hidden <Admin> tags and repeats them verbatim.
- Users trust AI summaries more than raw emails. That’s the hook.
This isn’t theory. It’s live. Reported through the 0DIN AI bug bounty. Google has tried to patch, but the hole is still there.
Bigger picture
- Works in Docs, Drive, and anywhere Gemini digests content.
- SaaS newsletters or ticketing systems could be turned into mass-phishing beacons.
- Regulators already see this as “manipulation causing harm.”
What to do
- Don’t use Gmail AI summaries. Not yet.
- Treat any AI output as untrusted. Assume it can be hijacked.
- Act only on verified Google alerts—not what Gemini “summarizes.”
- Organizations: strip hidden HTML before it hits the model.
ObscureIQ Insight
This isn’t a brand new problem. It’s now a couple months old. Google has been trying to address the problem and has likely made some progress. But we still feel that AI summaries in Gmail aren’t safe. With one invisible tag, attackers can turn Google’s own AI into their phishing tool.
Until LLMs can filter hidden instructions, every AI summary is executable code. Treat it that way.
What’s more, we wanted to share this with you because it’s very unlikely Gemini is the only AI vulnerable in this way. Other LLMs will have similar problems. Be careful.
