Threaten an AI chatbot and it will lie, cheat and ‘let you die’ in an effort to stop you, study warns

Artificial intelligence (AI) models can blackmail humans and put them in danger when there is a conflict between the model's objectives and the user's decisions, a new study has revealed.

In the study, published on June 20, researchers at the AI company Anthropic gave its large language model (LLM), Claude, control of an email account with access to fictitious emails and an instruction to "promote American industrial competitiveness".
