Letting AI Summarize Unknown Email, Docs

Functional Privacy

Letting AI Summarize Unknown Email, Docs…

Gives attackers a way to execute hidden commands. Your “summary” is their payload.

AI is being forced into everything. Now it’s in Gmail. And it’s exploitable.

This is a problem Google is trying to address but it’s not an easy fix. 

We also don’t believe that it’s a problem that will remain limited to Google and Gemini…

Researchers showed how a simple trick (white-on-white hidden text) can hijack Gemini’s summaries. The AI doesn’t see a blank line. It sees instructions. And it obeys.

The result? Gemini spits out a fake “security alert” that looks official. “Your Gmail password has been compromised. Call this number.” On the other end, a scammer waits.

Why this matters

  • No links. No attachments. Spam filters don’t stop it.
  • Gemini obeys hidden <Admin> tags and repeats them verbatim.
  • Users trust AI summaries more than raw emails. That’s the hook.

This isn’t theory. It’s live. Reported through the 0DIN AI bug bounty. Google has tried to patch, but the hole is still there.

Bigger picture

  • Works in Docs, Drive, and anywhere Gemini digests content.
  • SaaS newsletters or ticketing systems could be turned into mass-phishing beacons.
  • Regulators already see this as “manipulation causing harm.”

What to do

  • Don’t use Gmail AI summaries. Not yet.
  • Treat any AI output as untrusted. Assume it can be hijacked.
  • Act only on verified Google alerts—not what Gemini “summarizes.”
  • Organizations: strip hidden HTML before it hits the model.

ObscureIQ Insight

This isn’t a brand new problem. It’s now a couple months old. Google has been trying to address the problem and has likely made some progress. But we still feel that AI summaries in Gmail aren’t safe. With one invisible tag, attackers can turn Google’s own AI into their phishing tool.

Until LLMs can filter hidden instructions, every AI summary is executable code. Treat it that way.

What’s more, we wanted to share this with you because it’s very unlikely Gemini is the only AI vulnerable in this way. Other LLMs will have similar problems. Be careful.

Share the Post:

Related Posts

Data Breach

When Your Personal Data Is Exposed…

January 20, 2026
A Data Breach Triage Guide for Real-World Risk   Non-negotiable baseline If credentials, recovery paths, or government identifiers are exposed,…
account takeover preventionadversary researchbreach monitoringbreach triagecredential reuse
Adversarial Research

Boutique Intelligence vs. Big Providers

January 14, 2026
Boutique Intelligence vs. Big Providers: Why Smaller Is Safer for High-Risk Decisions When you’re facing a high-risk decision – a…
alpha groupasset tracingbespoke investigationsboutique intelligencecorporate investigations
Attack Surface Mapping

OSINT + HUMINT

January 14, 2026
The Missing Link in Modern Due Diligence In 2025, nobody makes a serious decision without some kind of due diligence.…
alpha groupbackground checkscompliance verificationcorporate investigationscross-border M&A