Highlights
- SFLC.In hosted an AI dialogue titled “AI in Focus: Navigating Risk, Regulation and Responsibility” on Monday.
- Meta backs a multiplicity of large and small models for greater accessibility and efficiency.
- The Misinformation Combat Alliance’s Deepfake Analysis Unit flags deepfake-manipulated media.
- NASSCOM recommends stakeholder-focused solutions to regulatory uncertainties.
- SFLC.In will soon launch a white paper, “Leveraging Open Source in AI,” to shape future dialogues.
On Monday, the Software Freedom Law Center, India (SFLC.In) organized a dialogue on artificial intelligence at the India Habitat Centre, bringing together industry leaders, academics and technology experts.
The event, titled “AI in Focus: Navigating Risk, Regulation and Responsibility,” included two panel sessions examining the challenges and opportunities of generative AI and open-source technologies, and their impact on technological and social frameworks.
Mishi Choudhary, founder of SFLC.In, set the tone with a compelling opening statement. She remarked, “SFLC.In has been working on AI since 2018, when generative AI was relatively non-existent. We’ve seen systems evolve from simple pattern-matching software to ones that can mimic human behavior. This raises urgent questions about ethics. We must move from reactive measures to proactive, innovation-friendly regulations, but not at the expense of human rights, equity and environmental sustainability.”
As panelists shared their insights, the discussion explored the proliferation of deepfakes, the pursuit of responsible AI practices, and the need to navigate India’s unique socio-cultural complexities.
Saikat Saha, Chief Technology Officer, NASSCOM, highlighted, “At NASSCOM AI, we develop technical charters and address collective risks to drive responsible adoption of AI in India. AI can change the economic picture for priority sectors such as SMEs. While challenges persist, we focus on promoting open consultations between businesses, MSMEs and stakeholders.”
Pamposh Raina, Head of the Deepfake Analysis Unit (DAU) at the Misinformation Combat Alliance, said, “AI-generated misinformation, particularly audio and video manipulation, is a growing concern. We analyzed more than 2,200 pieces of media over eight months, revealing significant misuse in elections, growing health misinformation, and financial fraud. Our efforts focus on AI and digital literacy, and on ensuring platforms flag false content.”
The conversation then turned to the balance between innovation and responsibility in AI, with panelists highlighting the nuanced interplay of technology, ethics and regulation.
Sunil Abraham, policy director, Meta India, said, “LLMs are not deterministic, and engineers often get ahead of scientists with this black box. At Meta, we believe the future lies in a multiplicity of large and small models that are more accessible, affordable and efficient. The ecosystem must catch up with the regulatory landscape, deploy responsibly, and focus on model explainability, especially for sensitive use cases like healthcare.”
Udbhav Tiwari, Director of Global Product Policy at Mozilla, added, “Open-source AI has immense potential, but it must be approached with responsibility. Clear definitions and safeguards are important to avoid risks such as openwashing and to ensure alignment with shared standards and values. The deliberate design of these systems and their data comes with accountability that laws and institutions must encourage.”
The roundtables highlighted the urgent need for clear and inclusive AI governance frameworks tailored to India’s unique context and aligned with global frameworks such as the EU’s GDPR. The event reinforced the importance of innovation-friendly regulation, digital literacy and stakeholder collaboration in shaping an AI ecosystem. Building on the discussions, SFLC.In will soon launch its research paper, “Leveraging Open Source in AI,” which will serve as a cornerstone for future dialogues and initiatives aimed at shaping responsible AI practices.
The event brought together a group of experts and innovators, including Aindriya Barua, founder and CEO of Shhor AI; Charles Brecque, co-founder and CEO of TextMine; Saikat Saha, Chief Technology Officer at NASSCOM; Pamposh Raina, head of the Deepfake Analysis Unit at the Misinformation Combat Alliance; Chaitanya Chokkareddy, CTO of Ozonetel Communications; Udbhav Tiwari, director of global product policy at Mozilla; Smita Gupta, curator of OpenNyAI; Sunil Abraham, policy director at Meta India; Vukosi Marivate; and Professor Eben Moglen.