News
Can AI like Claude 4 be trusted to make ethical decisions? Discover the risks, surprises, and challenges of autonomous AI ...
Researchers observed that when Anthropic’s Claude 4 Opus model detected it was being used for “egregiously immoral” activities, given ...
The Register on MSN · 17d
Anthropic Claude 4 models a little more willing than before to blackmail some users
Open the pod bay door. Anthropic on Thursday announced the availability of Claude Opus 4 and Claude Sonnet 4, the latest iterations of its Claude family of machine learning models. … Be aware, however, ...
This development, detailed in a recently published safety report, has led Anthropic to classify Claude Opus 4 as an ‘ASL-3’ system, a designation reserved for AI tech that poses a heightened risk of ...
Startup Anthropic has released a new artificial intelligence model, Claude Opus 4, which tests show delivers complex reasoning ...
Claude 4’s “whistle-blow” surprise shows why agentic AI risk lives in prompts and tool access, not benchmarks. Learn the 6 ...
In a fictional scenario set up to test Claude Opus 4, the model often resorted to blackmail when threatened with being ...
The CEO of Windsurf, a popular AI-assisted coding tool, said Anthropic is limiting its direct access to certain AI models.
Anthropic’s Chief Scientist Jared Kaplan said this makes Claude 4 Opus more likely than previous models to be able to advise novices on producing biological weapons ...
A separate report from Time also highlights the stricter safety protocols for Claude 4 Opus. Anthropic found that without extra protections, the AI might help create bioweapons and dangerous viruses.