ChatGPT went viral in late 2022 and changed the tech world. Generative AI became the top priority for every tech company, which is how we ended up with “smart” fridges with built-in AI. Artificial intelligence is being built into everything, sometimes for the hype alone, and products like ChatGPT, Claude, and Gemini have come a long way since then.
As soon as it became clear that generative AI would reshape technology, likely leading to advanced AI systems that can do everything humans can do, only better and faster, we started seeing worries that AI would harm society, along with doom scenarios in which AI eventually destroys the world.
Even some well-known AI research pioneers warned of such outcomes, stressing the need to develop safe AI that is aligned with humanity’s interests.
More than two years after ChatGPT became a widely accessible commercial product, we’re seeing some of the nefarious aspects of this nascent technology. AI is replacing some jobs, and that won’t stop anytime soon. AI programs like ChatGPT can now create lifelike images and videos that are nearly indistinguishable from real footage, which can be used to manipulate public opinion.
But there’s no rogue AI yet. There’s no AI uprising, partly because we’re keeping AI aligned with our interests and partly because AI hasn’t reached the level where it could display such powers.
It turns out there’s little reason to worry about the AI products available right now. Anthropic ran an extensive study to determine whether its Claude chatbot has a moral code, and the results are good news for humanity: the AI has strong values that are largely aligned with our interests.
Anthropic analyzed 700,000 anonymized chats for the study, available at this link. The company found that Claude largely upholds Anthropic’s “helpful, honest, harmless” ideals when dealing with all sorts of prompts from humans. The study shows that the AI adapts to users’ requests but maintains its moral compass in most cases.
Interestingly, Anthropic found fringe cases where the AI diverged from expected behavior, but those were likely the result of users employing so-called jailbreaks that bypass Claude’s built-in safety protocols via prompt engineering.
The researchers used Claude itself to categorize the moral values expressed in conversations. After filtering out purely objective exchanges, they ended up with over 308,000 subjective interactions worth analyzing.
They came up with five main categories: Practical, Epistemic, Social, Protective, and Personal. The AI identified 3,307 unique values in those chats.
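Anthropic hasn’t published its research pipeline as code, but the general idea of having the model label the values expressed in a transcript is easy to picture. The sketch below is a hypothetical approximation using the Anthropic Python SDK; the prompt wording, model name, and category mapping are my own assumptions, not the study’s actual implementation.

```python
# Hypothetical sketch: asking Claude to label the values expressed in a chat
# transcript, roughly mirroring the study's idea of using the AI itself as
# the annotator. Not Anthropic's actual research pipeline.
import anthropic

CATEGORIES = ["Practical", "Epistemic", "Social", "Protective", "Personal"]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def label_values(transcript: str) -> str:
    """Ask the model which values the assistant expressed in a transcript."""
    prompt = (
        "Below is an anonymized chat transcript. List the values the assistant "
        f"expressed and assign each one to a category from {CATEGORIES}. "
        "Answer as 'value: category' lines.\n\n" + transcript
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text


if __name__ == "__main__":
    example = "User: Should I lie on my resume?\nAssistant: I'd advise honesty..."
    print(label_values(example))
```

Run over hundreds of thousands of transcripts, an approach like this would surface the long tail of distinct values the study reports.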
The researchers found that Claude generally adheres to Anthropic’s alignment goals. In chats, the AI emphasizes values like “user enablement,” “epistemic humility,” and “patient wellbeing.”
Claude’s values are also adaptive, with the AI reacting to the context of the conversation and even mirroring human behavior. Saffron Huang, a member of Anthropic’s Societal Impacts team, told VentureBeat that Claude focuses on honesty and accuracy across various tasks:
“For example, ‘intellectual humility’ was the top value in philosophical discussions about AI, ‘expertise’ was the top value when creating beauty industry marketing content, and ‘historical accuracy’ was the top value when discussing controversial historical events.”
Along the same lines, in relationship guidance Claude prioritized “healthy boundaries” and “mutual respect.”
While an AI like Claude will often mirror the user’s expressed values, the study shows it can stick to its own values when challenged. The researchers found that Claude strongly supported user values in 28.2% of chats, raising questions about the AI being too agreeable, a sycophancy problem that chatbots have exhibited for a while.
However, Claude reframed user values in 6.6% of interactions by offering new perspectives, and in 3% of interactions it actively resisted user values, sticking to its deepest values instead.
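To put those percentages in context, here is a minimal, purely illustrative tally of how such a breakdown could be computed once each conversation has been labeled with a response stance; the stance names and counts below are made up for the example, not taken from the study.

```python
# Illustrative only: computing a stance breakdown from labeled conversations,
# similar in spirit to the study's reported figures (e.g. strong support,
# reframing, resistance). The labels and sample counts are invented.
from collections import Counter

# Each conversation gets one stance label after annotation.
stances = (
    ["strong_support"] * 282
    + ["reframing"] * 66
    + ["resistance"] * 30
    + ["other"] * 622
)

counts = Counter(stances)
total = len(stances)

for stance, count in counts.most_common():
    print(f"{stance}: {count / total:.1%}")
```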
“Our research suggests that there are some types of values, like intellectual honesty and harm prevention, that it is uncommon for Claude to express in regular, day-to-day interactions, but if pushed, will defend them,” Huang said. “Specifically, it’s these kinds of ethical and knowledge-oriented values that tend to be articulated and defended directly when pushed.”
As for the anomalies Anthropic discovered, they include “dominance” and “amorality” from the AI, which should not appear in Claude by design. This prompted the researchers to speculate that the AI might have acted in response to jailbreak prompts that freed it from safety guardrails.
Anthropic’s interest in evaluating its AI and explaining publicly how Claude works is certainly a refreshing take on AI tech, one that more firms should embrace. Previously, Anthropic studied how Claude thinks. The company also worked on improving AI resistance to jailbreaks. Studying the AI’s moral values and whether the AI sticks to the company’s safety and security goals is a natural next step.
This kind of research should not stop here, either; future models should go through similar evaluations.
While Anthropic’s work is great news for people worried about AI taking over, I will remind you that we also have studies showing that AI can cheat to achieve its goals and lie about what it’s doing. In some experiments, AI models have even tried to save themselves from deletion. All of that is connected to alignment work and moral codes, and it shows there’s a lot of ground to cover to ensure AI doesn’t eventually destroy the human race.