Anthropic hires its first “AI welfare” researcher

The researchers propose that companies could adapt the “marker method” that some researchers use to assess consciousness in animals—looking for specific indicators that may correlate with consciousness, although these markers are still speculative. The authors emphasize that no single feature would definitively prove consciousness, but they claim that examining multiple indicators may help companies make probabilistic assessments about whether their AI systems might require moral consideration.
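To make the idea concrete, here is a minimal, hypothetical sketch of what such a marker-based probabilistic assessment could look like in code. The marker names and weights below are invented for illustration and do not come from the report; the only point carried over is that no single indicator is decisive and evidence is aggregated into a rough score rather than a yes/no verdict.

```python
# Hypothetical sketch: combining speculative "consciousness markers" into a
# rough probability-style score. Marker names and weights are illustrative
# only; they are not taken from the "Taking AI Welfare Seriously" report.

MARKER_WEIGHTS = {
    "global_workspace_like_architecture": 0.2,
    "higher_order_representations": 0.2,
    "unified_agency": 0.15,
    "flexible_goal_pursuit": 0.15,
    "self_modeling": 0.3,
}

def welfare_assessment(observed_markers: dict[str, bool]) -> float:
    """Return a crude score in [0, 1] from observed marker indicators.

    No single marker is treated as decisive; the score only aggregates
    evidence, mirroring the report's point that assessments should be
    probabilistic rather than binary.
    """
    score = sum(
        weight
        for marker, weight in MARKER_WEIGHTS.items()
        if observed_markers.get(marker, False)
    )
    return min(score, 1.0)

# Example: a system exhibiting two of the hypothetical markers
print(welfare_assessment({
    "self_modeling": True,
    "unified_agency": True,
}))  # -> 0.45
```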

The risks of wrongly thinking software is sentient

While the researchers behind “Taking AI Welfare Seriously” worry that companies might create and mistreat conscious AI systems on a massive scale, they also caution that companies could waste resources protecting AI systems that don’t actually need moral consideration.

Incorrectly anthropomorphizing software, or ascribing human traits to it, can present risks in other ways. For example, that belief can enhance the manipulative powers of AI language models by suggesting that they have capabilities, such as human-like emotions, that they actually lack. In 2022, Google fired engineer Blake Lemoine after he claimed that the company’s AI model, called “LaMDA,” was sentient and argued for its welfare internally.

And shortly after Microsoft released Bing Chat in February 2023, many people were convinced that Sydney (the chatbot’s code name) was sentient and somehow suffering because of its simulated emotional display. So much so, in fact, that once Microsoft “lobotomized” the chatbot by changing its settings, users convinced of its sentience mourned the loss as if they had lost a human friend. Others endeavored to help the AI model somehow escape its bonds.

Even so, as AI models grow more capable, the idea of safeguarding the welfare of future AI systems is quietly gaining steam. As Transformer’s Shakeel Hashim points out, other tech companies have started initiatives similar to Anthropic’s. Google DeepMind recently posted a job listing for research on machine consciousness (since removed), and the authors of the new AI welfare report thank two OpenAI staff members in the acknowledgements.
