Anthropic’s study shows that just 250 malicious documents are enough to poison massive AI models.
How many malicious docs does it take to poison an LLM? Far fewer than you might think, Anthropic warns