Anthropic has given Claude the ability to “end a chat.”
Anthropic announced a new feature for Claude Opus 4 and 4.1 that allows the model to end conversations on its own in rare cases. The capability is reserved for persistently harmful or abusive interactions, such as requests for information that could enable large-scale violence or terrorism. Anthropic emphasized that the move is intended to protect the AI model itself and is closely tied to its alignment and safety work.