News

Anthropic launches automated AI security tools for Claude Code that scan code for vulnerabilities and suggest fixes, ...
Malicious traits can spread between AI models while being undetectable to humans, Anthropic and Truthful AI researchers say.
U.S. state legislatures are where the action is for placing guardrails around artificial intelligence technologies, given the ...
AI is a relatively new tool, and despite its rapid deployment in nearly every aspect of our lives, researchers are still ...
Anthropic has released a new safety framework for AI agents, a direct response to a wave of industry failures from Google, ...
An OpenAI spokesperson said the API access allowed for industry-standard benchmarking and safety improvements.
Everyone loves receiving a handwritten letter, but those take time, patience, effort, and sometimes multiple drafts to ...
Using two open-source models (Qwen 2.5 and Meta’s Llama 3), Anthropic engineers went deep into the neural networks to find the ...
A new study from Anthropic suggests that traits such as sycophancy or evilness are associated with specific patterns of ...
In the paper, Anthropic explained that it can steer these vectors by instructing models to act in certain ways -- for example ...
Anthropic is intentionally exposing its AI models, such as Claude, to evil traits during training to make them immune to these ...
Anthropic revealed breakthrough research using "persona vectors" to monitor and control artificial intelligence personality ...
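The persona-vector items above describe finding trait-linked activation patterns and steering model behavior along them. The sketch below is only a rough illustration of that general contrastive-activation idea, not Anthropic's actual method; it runs on synthetic data, and every name, shape, and number in it is a hypothetical placeholder.

```python
# Illustrative sketch: derive a trait direction as the difference of mean
# activations between trait-eliciting and neutral conditions, then use it to
# monitor or steer. Hidden states here are random stand-ins, not real model
# activations.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 64  # hypothetical hidden-state width

# Stand-ins for hidden states gathered while the model is instructed to act
# sycophantic (trait_acts) versus instructed to answer normally (neutral_acts).
trait_acts = rng.normal(0.5, 1.0, size=(32, HIDDEN_DIM))
neutral_acts = rng.normal(0.0, 1.0, size=(32, HIDDEN_DIM))

# "Persona vector": difference of mean activations between the two conditions.
persona_vector = trait_acts.mean(axis=0) - neutral_acts.mean(axis=0)
persona_vector /= np.linalg.norm(persona_vector)

def trait_score(hidden_state: np.ndarray) -> float:
    """Monitor: project a hidden state onto the trait direction."""
    return float(hidden_state @ persona_vector)

def steer(hidden_state: np.ndarray, strength: float) -> np.ndarray:
    """Steer: nudge a hidden state along (positive) or against (negative) the direction."""
    return hidden_state + strength * persona_vector

sample = rng.normal(0.0, 1.0, size=HIDDEN_DIM)
print("score before:", round(trait_score(sample), 3))
print("score after steering away:", round(trait_score(steer(sample, -2.0)), 3))
```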