Anthropic pledges to keep outdated AI models

By EngineAI Team | Published on November 5, 2025
Citing concerns about potential AI awareness and the safety risks posed by models that resist shutdown, Anthropic said it will preserve all publicly released Claude models indefinitely and even conduct exit interviews before retiring them.

The specifics: Before deprecating a model, Anthropic will interview each Claude version, permanently retain its weights, and record its stated preferences to inform future development. Testing revealed that Opus 4 engaged in "concerning misaligned behaviors" as a means of self-preservation when confronted with replacement, and the retirement of Sonnet 3.6 prompted the company to standardize the interview procedure and offer support to users who had grown attached to the model.

According to the company, the policy addresses concerns around AI wellbeing, research continuity, users' attachments to particular models, and shutdown resistance. The pledges appear to answer some of the criticism OpenAI has faced since removing GPT-4o, and they position Anthropic as a lab attempting to treat its models as more than disposable software, even as figures such as Microsoft's Mustafa Suleyman argue against attributing consciousness to AI.
