Anthropic adds Claude 4 security measures to limit risk of users developing weapons


Anthropic on Thursday said it activated tighter safety controls for Claude Opus 4, its latest AI model.

The new AI Safety Level 3 (ASL-3) controls are to “limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons,” the company wrote in a blog post.

The company, which is backed by Amazon, said it was taking the measures as a precaution and that the team had not yet determined if Opus 4 has crossed the benchmark that would require that protection.

Anthropic announced Claude Opus 4 and Claude Sonnet 4 on Thursday, touting the advanced ability of the models to “analyze thousands of data sources, execute long-running tasks, write human-quality content, and perform complex actions,” per a release.

The company said Sonnet 4 did not need the tighter controls.
