"Hackers Can Read Private AI-Assistant Chats Even Though They're Encrypted"
"Hackers Can Read Private AI-Assistant Chats Even Though They're Encrypted"
Researchers at Ben-Gurion University's Offensive AI Research Lab have presented an attack that can decipher AI assistant responses. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. The attack then refines the fairly raw results from the side channel with large language models (LLMs) trained specifically for the task.
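The sketch below illustrates the general shape of such a side channel, under an assumption not stated in the summary above: that the assistant streams its reply token by token, so the sizes of the encrypted records leak each token's length even though the content itself stays encrypted. All names, sizes, and overhead values here are hypothetical, and the refinement step is only a placeholder for the kind of specially trained model the researchers describe.

```python
# Minimal sketch of a token-length side channel on a streamed, encrypted reply.
# Assumption: each streamed token arrives in its own encrypted record, so
# (record size - fixed overhead) reveals the token's plaintext length.

TLS_RECORD_OVERHEAD = 29  # hypothetical fixed per-record overhead, in bytes


def token_lengths_from_records(record_sizes):
    """Recover plaintext token lengths from observed ciphertext record sizes."""
    return [size - TLS_RECORD_OVERHEAD for size in record_sizes]


def reconstruct_response(token_lengths):
    """Placeholder for the refinement step.

    A model trained to map a token-length sequence back to likely assistant
    text would go here, e.g. prompting a fine-tuned LLM to generate a sentence
    whose tokenization matches the observed lengths.
    """
    raise NotImplementedError("requires a model trained for the task")


if __name__ == "__main__":
    # Hypothetical capture: record sizes for a short streamed reply.
    observed = [32, 30, 31, 33, 34]
    print(token_lengths_from_records(observed))  # -> [3, 1, 2, 4, 5]
```

The point of the sketch is that the first step is purely mechanical traffic analysis; the hard part, and the researchers' contribution, is turning the resulting length sequence back into readable text, which is where the specially trained LLMs come in.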