Software engineers, developers, and university researchers have serious concerns about transcriptions produced by OpenAI's Whisper, according to a report in the Associated Press.
While there has been no shortage of discussion around generative AI's tendency to hallucinate – basically, to make things up – it is a bit surprising that this is a problem in transcription, where you would expect the transcript to closely follow the audio being transcribed.
Instead, researchers told the AP that Whisper has introduced everything from racial commentary to imagined medical treatments into transcripts. And that could be particularly disastrous as Whisper is adopted in hospitals and other medical contexts.
A University of Michigan researcher studying public meetings found hallucinations in eight out of every 10 audio transcriptions. A machine learning engineer studied more than 100 hours of Whisper transcriptions and found hallucinations in more than half of them. And a developer reported finding hallucinations in nearly all of the 26,000 transcriptions he created with Whisper.
An OpenAI spokesperson said the company is "continually working to improve the accuracy of our models, including reducing hallucinations," and noted that its usage policies prohibit the use of Whisper "in certain high-stakes decision-making contexts."
"We thank the researchers for sharing their findings," they said.