As the researchers note, not all safety research comes in the form of a public research paper. Tech companies would argue that AI safety is built into the work they do. Their counterintuitive argument is that researchers have to build advanced AI in order to understand how to protect against it.
In a recent interview with Lex Fridman, OpenAI CEO Sam Altman said that at some point in the future, AI safety will be “mostly what we think about” at his firm. “More and more of the company thinks about those issues all the time,” he said. Still, OpenAI did not show up as a major contributor to AI safety research in the Georgetown study.
The Effective Accelerationist argument is that the risks of AI are overblown, and that 30,000 AI safety papers over five years sounds significant given how nascent this technology is. How many papers on automobile safety were written before the Model T was invented and sold?
What makes less sense is proposing stringent AI regulations without also advocating for a massive increase in grant money for AI research, including funding for the compute power academics need to study massive new AI models.
President Joe Biden’s executive order on AI does include provisions for AI safety research. The Commerce Department’s new AI Safety Institute is one example. And the National Artificial Intelligence Research Resource pilot program aims to add more compute power for researchers.
But these measures don’t even begin to keep up with the advances being made in industry.
Big technology companies are currently constructing supercomputers so enormous they would have been difficult to contemplate a few years ago. They will soon find out what happens when AI models are scaled to unfathomable levels, and they will likely treat what they learn as trade secrets, kept close to the vest.
To get their hands on that kind of compute power, AI safety researchers will have to work for those companies.
As the CSET study points out, Google and Microsoft are among the biggest contributors of published papers on AI safety research.
But much of that research came out of an era before ChatGPT. Consumer interest in generative AI has changed the commercial landscape, and we’re now seeing fewer research papers come out of big technology companies, which are mostly keeping breakthroughs behind closed doors.
If elected officials really care about AI safety, they will likely accomplish more by allocating taxpayer dollars to basic AI research than by passing a comprehensive AI bill, given how little we know about how this technology will change society even five years from now.