An attacker can inject indirect prompts to trick the model into harvesting user data and sending it to the attacker’s account.
The post Claude AI APIs Can Be Abused for Data Exfiltration appeared first on SecurityWeek.
An attacker can inject indirect prompts to trick the model into harvesting user data and sending it to the attacker’s account.
The post Claude AI APIs Can Be Abused for Data Exfiltration appeared first on SecurityWeek.  Read MoreSecurityWeek
