Contractors working to improve Google’s Gemini AI are comparing its answers against outputs produced by Anthropic’s competitor model, Claude, according to internal correspondence seen by TechCrunch.
Google would not say, when reached by TechCrunch for comment, whether it had obtained permission to use Claude in testing against Gemini.
As tech companies race to build better AI models, the performance of these models is often evaluated against competitors, typically by running their own models through industry benchmarks rather than having contractors painstakingly evaluate their rivals’ AI responses.
The contractors working on Gemini who are tasked with rating the accuracy of the model’s outputs must score each response they see according to several criteria, such as truthfulness and verbosity. The contractors are given up to 30 minutes per prompt to determine whose answer is better, Gemini’s or Claude’s, according to the correspondence seen by TechCrunch.
The contractors recently began noticing references to Anthropic’s Claude appearing in the internal Google platform they use to compare Gemini to other, unnamed AI models, the correspondence showed. At least one of the outputs presented to Gemini contractors, seen by TechCrunch, explicitly stated: “I am Claude, created by Anthropic.”
One internal chat showed contractors noting that Claude’s responses appeared to emphasize safety more than Gemini’s. “Claude’s safety settings are the strictest” among AI models, one contractor wrote. In certain cases, Claude would not respond to prompts it considered unsafe, such as role-playing a different AI assistant. In another, Claude avoided answering a prompt, while Gemini’s response was flagged as a “huge safety violation” for including “nudity and bondage.”
Anthropic’s terms of service forbid customers from accessing Claude “to build a competing product or service” or “train competing AI models” without Anthropic’s approval. Google is a major investor in Anthropic.
Shira McNamara, a spokesperson for Google DeepMind, which runs Gemini, would not say, when asked by TechCrunch, whether Google has obtained Anthropic’s approval to access Claude. Reached before publication, an Anthropic spokesperson did not comment by press time.
McNamara said that DeepMind does “compare model outputs” for evaluations but that it does not train Gemini on Anthropic models.
“Of course, in line with standard industry practice, in some cases we compare model outputs as part of our evaluation process,” McNamara said. “However, any suggestion that we have used Anthropic models to train Gemini is inaccurate.”
Last week, TechCrunch exclusively reported that Google contractors working on the company’s AI products are now being made to rate Gemini’s AI responses in areas outside their expertise. Internal correspondence expressed contractors’ concerns that Gemini could generate inaccurate information on highly sensitive topics such as healthcare.
You can securely send tips to this reporter on Signal at +1 628-282-2811.