"Researchers Uncover Vulnerabilities in Open-Source AI and ML Models"
"Researchers Uncover Vulnerabilities in Open-Source AI and ML Models"
About three dozen security flaws have been discovered in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could enable remote code execution (RCE) and information theft. The flaws, identified in tools such as ChuanhuChatGPT, Lunary, and LocalAI, were reported through Protect AI's Huntr bug bounty program. Two of the most severe flaws affect Lunary, a production toolkit for large language models (LLMs).