One topic to be discussed: big tech companies pursuing “smarter-than-human” AI (Art by Zeke Barbaro, a human (with photos from Getty Images))
Donald Trump and Elon Musk have created an almost impenetrable news cycle. And they seem intent on bringing artificial intelligence to the forefront. Coverage has ranged from Trump announcing a $500 billion private investment in AI infrastructure to Musk trying to buy OpenAI from his rival Sam Altman to Musk’s Department of Government Efficiency working to automate the federal government. That list doesn’t even include China-based DeepSeek’s Sputnik moment, which raised doubts about the United States’ dominance in the industry.
In other words, through a chaotic media environment, AI has maintained a foothold.
For Emilia Javorsky, who began working in AI in 2018, the technology going mainstream is all a bit surreal. “Back in the day … you would have to be so careful on talking about AI, because everyone would be like, ‘What’s this science-fiction thing you’re talking about?’” she said.
Javorsky is the director of the Futures Program at the Future of Life Institute and will be speaking on the “AI and Power Concentration” panel at South by Southwest. She has spent time looking into both how AI can be used for good and how it is being misused. Right now, she is concerned about big tech companies pursuing “smarter-than-human” AI.
“This pursuit of power concentration and this pursuit of developing something smarter than us is all kind of hanging on the assumption that you’ll be able to control it or anticipate how it’ll work,” she said. “Our understanding of how these models work, our understanding of how to evaluate them, our understanding of safety engineering practices, is just pretty much in its infancy and needs to come a long way.”
In her view, the goal should be augmenting what humans are able to do. “I think we’re still in the early days of this AI takeoff,” she said. “The next frontier is not just making the models, but how do we actually apply those models in the real world?”
And there are some examples of those models being applied in the real world as a force for good. Javorsky sees AI as a tool that can be used to solve climate, education, and health care problems. Google’s AlphaFold, which predicts protein structures, is the “poster child” for this, she added. The technology could have profound implications for pharmaceutical drugs. Javorsky pointed out that AlphaFold not only benefits humanity, but it also unlocked a new business for Google.
Suhair Khan, a tech company founder who will be speaking on the “Emotional Machines: AI, Feeling & The Human Body” panel, also thinks scientific research is an area where AI could benefit humanity. Khan, who said she served as an adviser for the European Union’s AI Act, would like to see the government play a role in guiding the technology in the right direction.
For example, she has environmental concerns about how AI data centers impact carbon emissions, water evaporation, and physical degradation. The government could implement environmental regulations on the AI industry. It could also help bring a range of perspectives to an industry dominated by men in Silicon Valley. “I think government can support investing right now, very specifically in R&D labs and incubators at scale that are bringing together a diversity of perspectives,” Khan said.
Diversity of perspectives is important to her in part because how emotions are interpreted can vary from culture to culture. “That’s the issue of having generative AI platforms that talk to us in English,” she said. “It’s a translation of a translation.”
From an application in health care to ChatGPT, Khan said AI platforms “are connecting with your emotions, your senses, your mind, your lived experience every single day.” She added that “we don’t really have any kind of code or language for dealing with this.” In Khan’s view, we need that code to ensure that we are creating AI that isn’t harmful, addictive, or destructive to our emotions. But how do we do that?
“I think we should actually be thinking about how you’re creating cross-disciplinary spaces, where it’s not just engineers working with the data, it’s people coming to question, not just the ethics of it, but to bring their expertise,” Khan said.
SXSW Panels About AI Changing Our Lives
Are Whistleblowers Going to Save Us From the Harms of Tech?
Friday 7, 11:30am, Hilton Austin Downtown, Salon K
Emotional Machines: AI, Feeling & The Human Body
Sunday 9, 10am, Hilton Austin Downtown, Salon C
AI and Power Concentration: Building Tech to Empower Us All
Monday 10, 4pm, Hilton Austin Downtown, Salon C
From Cages to the Real World: The Dawn of Physical AI
Tuesday 11, 2:30pm, Hilton Austin Downtown, Salon H