AI companies are embracing agentic AI, billed as the next evolution of generative AI. As innovation continues, AI developers argue that current safety processes and existing state and federal rules can protect businesses and consumers starting to use the technology.
Social media giant Meta is one of many AI companies rolling out agentic AI capabilities. Agentic AI is an AI system composed of multiple AI agents that can act autonomously to complete tasks. Though AI tools are constantly evolving, existing consumer protection laws, contracts and sector-specific regulation, as well as an enterprise's own safety and security tools, can serve as guardrails for new capabilities like agentic AI, according to Erica Finkle, AI policy director at Meta.
“Seeing how all of that comes to play with respect to agents and AI more generally is a really important part of understanding where to go from here and applying what exists in the best ways possible for the new and developing technology,” Finkle said during an online panel discussion hosted by the Center for Data Innovation on Thursday.
Panelist A.J. Bhadelia, AI public policy leader at AI company Cohere, echoed Finkle and said it will be critical to assess where current laws can be applied to products like AI agents and where there might be gaps to fill with additional laws and regulations. President Donald Trump’s administration is currently developing an AI action plan to guide U.S. policy on AI.
Bhadelia said it's also important to focus on individual use cases, an approach the European Union's AI Act takes by categorizing AI systems into tiers of risk. He said not all agentic AI applications carry the same level of risk. For example, building an AI agent to operate within an enterprise business carries a different level of risk than an AI agent developed to be consumer-facing.
“A consumer agent might operate in an uncontrolled environment, might have to handle unpredictable content, have limited assurances of security,” he said. “In an enterprise use case, the agent essentially becomes an extension of the enterprise’s IT system, subject to the same security and audit requirements.”
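To make Bhadelia's distinction concrete, here is a minimal sketch of how a deployer might encode use-case risk tiers and attach stricter controls to consumer-facing agents. The tier names, control list and `AgentDeployment` type are illustrative assumptions, not drawn from the EU AI Act's actual categories or from Cohere's products.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Hypothetical risk buckets, loosely mirroring a tiered, use-case-based approach."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"


@dataclass
class AgentDeployment:
    """Illustrative deployment record for an AI agent."""
    name: str
    consumer_facing: bool
    handles_untrusted_input: bool

    def risk_tier(self) -> RiskTier:
        # A consumer agent in an uncontrolled environment warrants the
        # strictest controls; an enterprise-internal agent inherits the
        # company's existing security and audit regime.
        if self.consumer_facing and self.handles_untrusted_input:
            return RiskTier.HIGH
        if self.consumer_facing:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

    def required_controls(self) -> list[str]:
        controls = ["audit_logging"]  # assumed baseline for every deployment
        if self.risk_tier() is RiskTier.HIGH:
            controls += ["input_sanitization", "human_review", "rate_limiting"]
        elif self.risk_tier() is RiskTier.LIMITED:
            controls += ["human_review"]
        return controls
```

The point of a scheme like this is that the same underlying model can carry very different obligations depending on where it is deployed, which is the distinction Bhadelia drew between enterprise and consumer agents.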
AI agents need standards, human involvement
A standard vocabulary needs to be established on an architectural level for AI agent communication, Finkle said.
Different vendors are developing specific AI agents specialized to complete tasks in individual sectors such as healthcare or energy. Maintaining an open and interoperable AI infrastructure, as well as a standardized vocabulary, will be “critical towards achieving multi-agent interactions,” Finkle said.
“If two agents are interacting on a task, they need to have a standard vocabulary,” she said.
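As a rough illustration of what a "standard vocabulary" could look like at the architectural level, the sketch below shows two agents exchanging task messages through a shared, versioned schema. The field names and `TaskMessage` type are hypothetical; Finkle did not point to any specific format or protocol.

```python
import json
from dataclasses import dataclass, asdict

SCHEMA_VERSION = "1.0"  # assumed version tag for the shared vocabulary


@dataclass
class TaskMessage:
    """Hypothetical shared message schema: both agents agree on these
    field names and their meanings before any task-level interaction."""
    schema_version: str   # lets an agent reject vocabularies it doesn't speak
    sender: str           # agent identifier
    intent: str           # e.g. "request", "result", "error"
    task: str             # the task being coordinated
    payload: dict         # intent-specific content

    def to_wire(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_wire(cls, raw: str) -> "TaskMessage":
        data = json.loads(raw)
        if data.get("schema_version") != SCHEMA_VERSION:
            raise ValueError("unknown vocabulary version")
        return cls(**data)


# A scheduling agent asks a calendar agent for free slots.
request = TaskMessage(SCHEMA_VERSION, "scheduler-agent", "request",
                      "find_meeting_slot", {"duration_minutes": 30})
received = TaskMessage.from_wire(request.to_wire())
```

Keeping the schema open and versioned is one way to preserve the interoperability Finkle described: agents from different vendors can cooperate as long as they speak the same message vocabulary.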
In an environment crafted for AI agents to talk to one another, human control needs to be implemented and maintained for safety and security reasons, said panelist Avijit Ghosh, a researcher at AI platform Hugging Face. The company recently released a paper arguing that AI companies should not develop fully autonomous agentic AI due to risks such as agents overriding human control, malicious use and loss of data privacy.
Ghosh said it's important to maintain "human control at every level of agentic workflow."
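One way to read Ghosh's point in engineering terms is a human-in-the-loop gate: the agent proposes actions, but a person must confirm anything consequential before it runs. The sketch below is a minimal, assumed pattern; the `requires_approval` heuristic and action names are hypothetical and not Hugging Face's implementation.

```python
# Minimal human-in-the-loop gate: high-impact steps need explicit sign-off.

HIGH_IMPACT_ACTIONS = {"send_email", "transfer_funds", "delete_records"}  # assumed list


def requires_approval(action: str) -> bool:
    return action in HIGH_IMPACT_ACTIONS


def execute_with_oversight(action: str, run_action) -> str:
    """Run `run_action` only after a human approves high-impact steps."""
    if requires_approval(action):
        answer = input(f"Agent wants to run '{action}'. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return f"'{action}' blocked by human reviewer"
    return run_action()


if __name__ == "__main__":
    result = execute_with_oversight("send_email",
                                    lambda: "email sent to mailing list")
    print(result)
```

A fully autonomous agent, by contrast, would skip the approval prompt entirely, which is the design choice the Hugging Face paper argues against.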
How to implement multi-agent communication protocols and how to delegate responsibility and liability to humans will be important questions for agentic AI, said panelist Helen Toner, director of strategy and foundational research grants at Georgetown University's Center for Security and Emerging Technology.
As companies continue to develop agentic AI, Toner said transparency and disclosure about how AI agents are trained will help provide a clear roadmap for policymakers as they consider whether to create new rules for the technology.
“It doesn’t directly solve any particular problem, but it puts policymakers, the public and civil society in a much better position to understand and respond as the space changes, as it tends to do very quickly,” she said.
Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining Informa TechTarget, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.