ProgrammerHumor

promptInjectionViaMail

promptInjectionViaMail
https://i.redd.it/11jc10g2w2ef1.png
Reddit

Discussion

nihat-xss OP

https://www.bleepingcomputer.com/news/security/google-gemini-flaw-hijacks-email-summaries-for-phishing/

3 hours ago
1T-context-window

We live in strange times lol

3 hours ago
WrapKey69

Does it work?

3 hours ago
nihat-xss OP

https://www.bleepingcomputer.com/news/security/google-gemini-flaw-hijacks-email-summaries-for-phishing/ . According to news yess

3 hours ago
RareRandomRedditor

Even if it would, you just would get an email back with the message you sent in the first place. What use would this have aside from learning that the person you wrote the email to uses a Gemini LLM bot to answer their emails? 

3 hours ago
cooltop101
:cs:

The concern is that Gemini auto summarizes emails. Someone that isn't very tech savvy could read the Gemini summary and see it warning the user to change their password. The user thinks "Gemini said it, so it must be true" and calls the number asap

3 hours ago
kooshipuff

I think it looks more like a test than an attack, just with a kinda theatric (but still harmless) payload. Which is extra silly because it's extremely visible if someone actually reads this email that you sent them.

Though if it works, they can try a follow-up with other stuff in it and potentially compromise anything that particular bot has access to (which might just be the email it's replying to, in which case all is good, but if it has tools that can, say, access the rest of your inbox, send other emails as you, or do other Gemini things like accessing Google Drive documents, it could get crazy really fast.)

3 hours ago
cowski_NX

I suspect the highlighted text is formatted white as to be invisible to the human reader.

2 hours ago
kooshipuff

Ah, yeah, probably

2 hours ago
the_flying_condor
:py::ftn:

It looks to me like the point is to send a long wall of text that your target doesn't want to read. I think they are trying to bait people into using AI to summarize a wall of text.

2 hours ago
nihat-xss OP

most people have many unread messages and also that message is pretty long. if someone forget to read till the end or delete message it can be start of attack

3 hours ago
Prematurid

... why on earth are people using LLMs to summarize emails? Are you unable to figure out if an email has useful information?

I tinker with LLMs, but I sure as fuck don't trust them to give me information I need.

Edit: Mostly Ollama with webui in docker. Testing out different LLMs and seeing how they preform.

2 hours ago
Gorexxar

The old internet meme:
"Great, now that I have AI I can write formal emails quickly."

"Great, now that I have AI, I can summarize these formal emails quickly."

2 hours ago
Prematurid

It is so easy to see too... Personally I find it disrespectful to use LLMs to write emails; if you can't be arsed writing one, why on earth should I read it?

2 hours ago
Nick0Taylor0
:j::cs::ts::re::bash:

The only time I understand it is if the recipient insists on formal and verbose emails. I know some people who get butthurt when they get a "hey can you do xyz please, thanks" email and expect fucking paragraphs and shit, if you can't respect my time enough to be satisfied with an email like that then I don't have to respect yours and will absolutely go ahead and let some AI generate a needlessly long email.

1 hour ago
Prematurid

That is an use case I can see it being useful. I luckily haven't had any issues with people like that.

I think there is one person I have contacted that insisted on formal language. He wanted my services. He did not get my services.

1 hour ago
Alexander_The_Wolf
:c:

I use it for like the corpo work type stuff.

I really don't understand how that whole dynamic works, and im so nervous I'll say something taboo and end up fired.

So I have chatgpt draft something for me then I fill in the rest.

1 hour ago
Solid-Package8915

What? Summarizing is one of the few things LLMs are actually good at…

1 hour ago
Prematurid

Good is a relative term in this context.

As I said, I have tinkered with LLMs, and as I said, I wouldn't trust them to give me important information.

Are they better at summarizing emails than writing emails? Yeah.

Would I trust that summary? No. I would read the summary, and then read the email. Since I am already reading the email, I'll skip on the reading of the summary.

1 hour ago
Solid-Package8915

I still don’t understand. Are you saying that Gemini isn’t reliable enough at summarizing emails? Or that you only trust yourself to interpret emails?

In my experience it consistently summarizes emails very well so I’m wondering if you had different experiences. I’m not sure what having “tinkered” with LLMs has to do with any of this.

1 hour ago
ward2k
:sc:

I don't understand how that would work at all

3 hours ago
Moraz_iel

technically, if you asked gemini to sum up an email with ctrl+a, ctrl+c, ctrl+v, without checking, gemini might try to act on it.

Even further, from my admittedly very limited understanding, to an LLM everything is prompt. you can try to add rules, and checks, but even those are fed via prompt. Which means there is no way to escape part of it completely reliably, so if you find the right angle, even an auto-summarize function from google could be susceptible to attack. Though in this case, all you should be able to do is fuck up the summary outputted for a mail. Unless they start to give it the ability to auto-populate your calendar, or auto-add peoples to your contacts, then victims might be tricked into adding random people into their signal thread or something.

Still not really a big deal, but I'm sure people with more imagination than me can come up with really impacting stuff.

3 hours ago
RepublicofPixels

You don't need to ctrl anything, Google is shipping Gemini as part of gmail, so there's a "summarise" or similar button you can use.

3 hours ago
Moraz_iel

yeah, I though so, that"s the second paragraph (a bit too wordy, i admit, i also was not sure about the extent of the availability yet).
First was more of an hypothetical "what if you want to make absolutely sure your LLM of choice will pick it up and potentially act on it"

3 hours ago
splinterize

Just read the article they explain how it works

3 hours ago
ward2k
:sc:

OP hadn't linked the article until 15 minutes after I made that comment

3 hours ago
splinterize

Ah my bad

3 hours ago
v3gg

just tested this, seems it has already been fixed

34 minutes ago
PennyFromMyAnus
:cp:

“The number you have reached has been disconnected”

25 minutes ago