Except for Google Gemini, AI assistants are giving away your secrets

Passively stealing private packets

Artificial intelligence assistants have been in the news lately, but not in the way their designers hoped. If the Morris II self-replicating AI worm wasn't enough to make you question their purpose, perhaps realizing that people can read the supposedly encrypted responses to your queries might give you pause. The attack is easy to mount, surprisingly effective, and completely undetectable: anyone on the network path between you and the AI assistant can observe your traffic, with no malware to install and no credentials to steal or forge. The problem is a side channel in how the encrypted responses are streamed. Because assistants send their replies token by token, the sizes of the encrypted packets leak the length of each token, and LLMs can be trained on those length sequences to reconstruct the assistants' answers to your questions; Google's Gemini is the only exception.
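To see why encryption alone doesn't help, here is a minimal sketch of the side channel. It assumes a toy stream cipher (real deployments use TLS, but stream-style ciphers likewise produce ciphertext the same length as the plaintext) and a hypothetical streamed reply; the point is that a passive observer recovers every token's length without decrypting a single byte.

```python
import os

def xor_stream_encrypt(token: bytes, keystream: bytes) -> bytes:
    # Toy stream cipher: ciphertext is exactly as long as the plaintext,
    # so packet size == token length.
    return bytes(p ^ k for p, k in zip(token, keystream))

# An assistant streaming a (hypothetical) reply, one token per packet.
tokens = ["The", " diagnosis", " is", " likely", " migraine", "."]
packets = [xor_stream_encrypt(t.encode(), os.urandom(len(t))) for t in tokens]

# A passive eavesdropper never decrypts anything, yet the packet sizes
# reveal the length of every token in the response.
observed_lengths = [len(p) for p in packets]
print(observed_lengths)  # [3, 10, 3, 7, 9, 1]
```

The researchers' insight was that a sequence of token lengths like this is enough context for a trained LLM to guess the underlying text; an assistant that batches many tokens into one packet (as Gemini reportedly does) destroys the per-token signal.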

The researchers who discovered the flaw were able to identify the subject of your query more than 50% of the time and extract the complete message 29% of the time. Since the attack only requires passively observing your traffic, there's no way to know whether your conversations were eavesdropped on. Worse, an LLM trained to decode the traffic may only become more accurate as it gets more training data.

Ars Technica delves into the details of the attack, or if you dare, you can ask your AI assistant.

