The researchers suggest that companies could adapt the “marker method” some scientists use to assess consciousness in animals, looking for specific indicators that may correlate with consciousness, although these markers remain speculative. The authors emphasize that no single feature would definitively prove consciousness, but they argue that examining multiple indicators could help companies make probabilistic assessments about whether their AI systems might require moral consideration.
The risks of wrongly believing that software is sentient
While the researchers behind “Taking AI Welfare Seriously” worry that companies could create and mistreat conscious AI systems at scale, they also warn that companies could waste resources protecting AI systems that don’t actually require moral consideration.
Incorrect anthropomorphization, or attributing human traits to software, carries risks in other ways. For example, it can enhance the manipulative power of AI language models by suggesting that the models possess capabilities, such as human-like emotions, that they actually lack. In 2022, Google fired engineer Blake Lemoine after he claimed that the company’s AI model, called “LaMDA,” was sentient and advocated internally for its welfare.
And shortly after Microsoft launched Bing Chat in February 2023, many people became convinced that Sydney (the chatbot’s codename) was sentient and suffering in some way because of its simulated emotional displays. So much so that after Microsoft “lobotomized” the chatbot by changing its settings, users convinced of its sentience mourned the loss as if they had lost a human friend. Others sought to help the AI model somehow escape its constraints.
Nonetheless, as AI models grow more capable, the idea of safeguarding the welfare of future, more advanced AI systems appears to be gaining momentum, albeit quietly. As Transformer’s Shakeel Hashim points out, other tech companies have launched initiatives similar to Anthropic’s. Google DeepMind recently posted a job listing for machine consciousness research (since removed), and the authors of the new AI welfare report thank two OpenAI staff members in the acknowledgments.