Arvind Narayanan, professor of computer science and director of the Center for Information Technology Policy (CITP), began working on artificial intelligence (AI) five years ago with a focus on bias and discrimination. Around the same time, Sayash Kapoor GS left his job as a software engineer at Facebook to begin his Ph.D. at Princeton.
Their recent book, “AI Snake Oil,” sparked a global conversation about the limits and potential misuse of artificial intelligence. But it is their approach to interdisciplinary collaboration in the fields of computer science, ethics and policy that has been hailed as innovative.
Narayanan and Kapoor have conducted research on open-source technology policy that has reached other academics, industry researchers, lawyers, and policymakers.
“These (disciplinary) walls that we put up have never made much sense to me,” Narayanan said. “Nature does not distinguish between different disciplines. If you want to discover the truth, you must forget these disciplinary boundaries.”
Zachary Siegel ’25, a computer science student who conducted research with Narayanan and Kapoor, said the book was a great opportunity for non-practitioners to understand how AI works and what it can and cannot do.
“For someone who is not an AI researcher, it can really be difficult to understand how different AI technologies work in different ways,” Siegel said. “The book does an excellent job of breaking down the distinctions between different types of AI technologies and taking an evidence-based approach to measuring AI capabilities.”
Narayanan and Kapoor’s journey toward AI skepticism
Narayanan’s early work as a computer scientist scrutinizing tech companies’ claims focused on digital privacy in apps and websites and on cryptocurrency. A dominant theme of his research has been the power of the technology industry and the need for social checks and balances, which has informed his views on AI.
“While the tech industry faces a lot of criticism, few critics have the technical expertise to empirically identify instances where companies’ claims might be exaggerated or false,” Narayanan said.
“I’ve seen articles about recruiting automation products claiming to infer job suitability from 30-second videos of candidates discussing their hobbies or qualifications,” he continued. “It seemed too ridiculous to be true: an elaborate random number generator, appealing to busy HR departments.”
“About five years ago, I started to become interested in the question of whether, beyond bias, AI worked at all,” he said.
This use of AI inspired Narayanan to give a 2019 talk at MIT, “How to Recognize AI Snake Oil,” in which he explained how to spot companies’ false promises about the potential of their AI. His slides went viral, were downloaded thousands of times, and his tweets were viewed by millions.
After graduating from college, where he focused on AI theory and the societal impact of AI, Kapoor began working as a software engineer at Meta (then Facebook). At Meta, he saw how AI was used to make consequential decisions, for example, detecting non-consensual images of people and child sexual abuse material.
At Facebook, Kapoor also witnessed the impact on the company of the European Union’s General Data Protection Regulation (GDPR), a data privacy law that came into effect in 2018. “I really saw the impact that a single set of laws can have on an entire multi-billion-dollar company, and it got me thinking about how I wanted to make change,” he said. “I realized that one of the best ways to do that was from the outside.”
In 2021, Kapoor started at Princeton as a Ph.D. student under Narayanan.
On-campus mentoring, advising, and education
Beyond their joint research, Narayanan and Kapoor have made the societal impact of AI a focus of their mentoring and teaching.
In fall 2023, Narayanan taught the first iteration of the Computer Ethics course, which combines philosophical inquiry and practical programming work. Narayanan and Kapoor also teach a graduate seminar titled Limits to Prediction with sociology professor Matthew Salganik.
“It’s not just about teaching students how to code or how to build AI systems,” Narayanan said. “It’s about teaching them to think critically about the implications of these technologies. We want our students to be able to ask the right questions, challenge assumptions, and consider the ethical implications of their work.”
Mihir Kshirsagar, a lecturer in the School of Public and International Affairs (SPIA) with a legal background, will teach chapters from the book in his upcoming spring course, Big Data in Society.
“(AI Snake Oil) works in seminars with both computer science and SPIA students because the authors write very clearly, avoiding jargon, to identify the fundamental problems,” he said. “It’s an effective teaching tool.”
Narayanan and Kapoor also collaborate with the broader community of Princeton undergraduate and graduate students.
Varun Rao GS was a software engineer at Amazon before pursuing a Ph.D. at Princeton. He has worked with Narayanan and Kapoor on various projects, including helping teach the computer ethics course in fall 2023. As part of a working group informing the European Union AI Act’s Code of Practice for AI providers, Rao has studied the impact of AI on layoffs, concluding that even where jobs are lost, workers are also adapting to the changes brought about by AI.
Rao said he most admired the constructive nature of Narayanan and Kapoor’s approach.
“Not only are they criticizing the current state of things, but they are providing concrete, actionable suggestions on how to fix it, and I think that’s the hardest thing to do,” Rao said. “I’ve worked in the industry, and one of the criticisms made about this whole area of fairness and transparency and bias is, ‘well, people just criticize, but don’t really offer solutions and don’t tell us what to do or how to do it better.’”
Siegel said he enjoys working on projects probing the capabilities of AI agents, such as assessing whether AI agents can reproduce published scientific articles.
“Many researchers often end up working on their own projects, but Arvind and Sayash both collaborate across universities and in different fields,” Siegel said. “We held a workshop on AI agents and invited many speakers from academia and industry.”
Unmasking the AI Snake Oil: Beyond the Orange Bubble
Narayanan and Kapoor have shaped policy discussions around AI, offering a nuanced view of its capabilities and limitations.
Sujay Swain ’25, an electrical and computer engineering student, worked last summer with the Federal Trade Commission, funded by the CITP Siegel Public Interest Technology Fellowship. He mentioned that Narayanan and Kapoor’s Substack blog on AI, which predates the book, was widely referenced during his internship.
“It was a resource that people everywhere were really interested in and using as a reference point,” Swain said. “I think it was really cool to see the broader influence of the work Princeton is doing on technology policy.”
Rao recalled that, in his experience, industry professionals have criticized the AI fairness field for not presenting clear solutions to the issues they raise.
“A lot of Arvind and Sayash’s work and the material in the book addresses solutions,” he said. “I found it really fascinating.”
Narayanan and Kapoor are also involved in the Princeton AI Dialogues and AI Policy Precepts programs, offered in Washington in collaboration with the School of Public and International Affairs. Both nonpartisan programs foster conversations with federal policymakers about the fundamental concepts, opportunities, and risks underlying the future of AI.
“Arvind and Sayash are really pushing the boundaries of what we think of as computer science education,” said Steven Kelts, a CITP lecturer who teaches technology ethics at Princeton. “They show that it is possible to combine rigorous technical training with a broader understanding of societal impacts.”
Looking to the long-term future of AI, Narayanan said he was optimistic, but stressed that it is crucial to understand its limitations.
“Many AI success stories surround us. Things like autocomplete and spell check used to be cutting-edge AI,” he said. “Self-driving cars were once overhyped, but now they are real, with taxis transporting millions of people. Ultimately, everyone will have access to them, which will reduce the road accidents that cause around a million deaths per year worldwide.”
Chloe Lau is a features editor for the “Prince”.
Please direct any corrections to correction@dailyprincetonian.com