While many of the ethical issues stemming from agentic AI relate to misbehavior, other concerns arise even when autonomous AI technology performs exactly as expected. For example, much discussion has focused on AI applications like OpenAI’s ChatGPT replacing human labor and eliminating livelihoods.
But even when AI is deployed to augment (rather than replace) human labor, employees might face psychological consequences. If human workers perceive AI agents as better at their jobs than they are, they could experience a decline in their self-worth, Varshney explains. “If you’re in a position where all of your expertise seems no longer useful—that it’s kind of subordinate to the AI agent—you might lose your dignity,” he says. In some discussions of AI ethics, such a loss of dignity is considered a human rights violation.8
In an August 2024 research paper, Varshney and several university-based researchers proposed an organizational approach to addressing the dignity concern: adversarial collaboration. Under their model, humans would remain responsible for providing final recommendations, while AI systems would be deployed to scrutinize the humans’ work.
“The human is ultimately making the decision, and the algorithm isn’t designed to compete in this role, but to interrogate and, thus, sharpen the recommendations of the human agent,” the researchers wrote.9 Such adversarial collaboration, Varshney says, “is a way of organizing things that can keep human dignity alive.”