Employers must be aware of the ethical considerations of using artificial intelligence (AI) technology – even in its current nascent stage – in the workplace, said Kerry Wang, speaking at the HR Technology Conference & Exposition in Las Vegas.
HR and business leaders are torn between the competitive advantage the technology can provide and concerns about negative implications such as unintentionally biased predictions, said Wang, CEO and co-founder of Searchlight, a San Francisco-based technology company that uses AI to help employers recruit talent and measure quality of hire.
“Imagine that you have implemented an AI tool to proactively source candidates,” she said. “Recruiters are happy because they can now spend less time screening resumes. Candidates are happy because recruiters respond to them faster. But one day you notice the technology recommends more men than women for interviews. Do you keep using the technology, or do you shut it down?”
Something similar happened at Amazon when the company built an experimental AI recruiting tool in 2015. Amazon scrapped that particular system, but since then there has been an explosion of vendors touting AI for HR functions, from sourcing and screening to turnover forecasting and workforce analytics.
“Whether we like it or not, AI is everywhere,” said Wang. “But AI is only as good as the rules that program it, and machine learning is only as good as the data it is trained on.”
She explained that AI encompasses any computer system that mimics human intelligence to accomplish a task. For example, a simple chatbot powered by an algorithm – a set of rules or lines of code – uses AI. More complex AI uses machine learning.
“This is where modeling comes into play,” she said. “Modeling is when you find patterns in a huge set of data, then encode those patterns as rules. In the chatbot example, if you give it 10 million transcripts of human conversations that start with someone saying ‘hello,’ the AI would learn a multitude of ways to respond to that greeting.”
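Wang's chatbot example can be sketched in a few lines of code. This is a deliberately tiny, hypothetical illustration – a handful of transcript pairs standing in for the "10 million transcripts" – showing the idea of finding patterns in data and encoding them as a rule (here, a simple frequency count of observed replies):

```python
from collections import Counter
import random

# Hypothetical mini-corpus standing in for the "10 million transcripts":
# each entry is a (greeting, reply) pair observed in a human conversation.
transcripts = [
    ("hello", "hi there!"),
    ("hello", "hey, how are you?"),
    ("hello", "hi there!"),
    ("hello", "good morning!"),
    ("hello", "hi there!"),
]

# "Modeling": find the pattern in the data and encode it as a rule.
# The learned "rule" here is simply how often humans used each reply.
reply_counts = Counter(reply for greeting, reply in transcripts if greeting == "hello")
total = sum(reply_counts.values())

def respond(greeting: str) -> str:
    """Answer a greeting by sampling replies in proportion to how often
    humans used them in the transcripts."""
    if greeting != "hello":
        return "sorry, I only learned greetings"
    replies = list(reply_counts)
    weights = [reply_counts[r] / total for r in replies]
    return random.choices(replies, weights=weights)[0]

print(respond("hello"))  # any reply observed in the transcripts is possible
```

A real chatbot would learn far richer patterns than reply frequencies, but the principle is the same: the behavior comes from the data, which is why Wang stresses that machine learning is only as good as the data it is trained on.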
Wang said AI is not meant to be “a miracle solution” but rather to assist human decision-making. “AI can make us smarter and more effective – research shows that offloading more technical tasks frees people up to do more strategic things.”
The appeal of introducing AI to HR comes down to abundance versus scarcity, said Ann Watson, senior vice president of people and culture at Verana Health in San Francisco, also speaking at the conference.
“How can we do more?” she asked. “How can we increase productivity? How can we better develop talent pipelines? How can we bring in more people and be more inclusive? The advantage of AI technology means having more time to do the things I want to do.”
Maisha Gray-Diggs, vice president of global talent acquisition at Eventbrite, said her team uses AI for recruiting and onboarding.
“The advantage of AI for me is gaining an edge, saving time and resources,” she said, speaking at the conference. “I am very aware that we don’t want AI to replace people, but AI can be used to augment people. HR can’t keep doing more and more; it has to do things smarter.”
The ethical use of AI
Wang said there are two main areas of concern when it comes to using AI in employment: privacy and bias.
“I am very uncomfortable with the idea of employee surveillance,” said Watson. “I think of AI as finding ways to do more, not finding ways to catch people doing less.”
She gave the example of a certain technology that can predict whether an employee is about to quit based on their behavior at work. But research has found that it only works if employees don’t know it is there.
“For it to work, it would have to be kept secret from the workforce,” she said. “And that is not something I am willing to do, even if it can accurately predict turnover.”
Wang said that when Searchlight takes on a new client, the company first sends a communication to employees detailing what is happening, why, and what to expect.
“When we do this, 70 to 80 percent of employees opt in to data collection,” she said. “When you give people the choice and explain the advantages of using AI, the majority will opt in rather than opt out.”
Another major ethical question that arises when considering AI for the workplace is its potential to discriminate. Bias can be built into the technology intentionally or unintentionally.
“Bias already exists in human judgment,” said Wang. “The potential for biased technology is there. But the more aware we are of it – and the more we ensure that the data we feed into the models we use to make our decisions is as holistic as possible – the better off we are.”
Wang mentioned the first-of-its-kind law set to take effect in New York City on January 1, 2023, which prohibits employers from using AI- and algorithm-based technologies for recruiting, hiring or promotion unless those tools have first been audited for bias.
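One common first-pass check in a bias audit is an adverse-impact test such as the four-fifths rule: compare each group's selection rate to the highest group's rate and flag any group below 80 percent of it. The sketch below uses entirely hypothetical screening counts and is not the audit methodology the law itself specifies:

```python
# Four-fifths (80 percent) rule: a common first-pass adverse-impact check.
# A group whose selection rate falls below 80% of the highest group's rate
# is flagged for possible adverse impact. All counts are hypothetical.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def four_fifths_check(groups: dict) -> dict:
    """groups maps group name -> (selected, applicants).
    Returns whether each group passes the 80% threshold."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    best = max(rates.values())
    return {g: (rate / best) >= 0.8 for g, rate in rates.items()}

# Hypothetical outcomes from an AI resume screener:
results = four_fifths_check({
    "men":   (48, 100),   # 48% selected
    "women": (30, 100),   # 30% selected -> 30/48 = 62.5% of the top rate
})
print(results)  # {'men': True, 'women': False} -> flags the tool for review
```

A failed check like this does not by itself prove discrimination, but it is exactly the kind of signal that would have surfaced the gender skew in Wang's hypothetical sourcing tool before it reached production.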
“All of us who use these tools will have to commit to asking questions, to make sure we are not discriminating,” said Gray-Diggs. “We are moving so fast in the technology space that I think we need to spend more time and do more research to understand the technology. And before bringing in a new tool, an organization must first recognize its own biases. Think first about women and other underrepresented people, who may not sell themselves as well because they haven’t had the chance.”
Choosing an AI vendor
Wang said that before approaching an AI vendor, you should pick a problem to solve that the company truly cares about. “It’s already hard enough to make the case for any new technology, and even harder to convince leadership around a problem they aren’t interested in,” she said.
When you engage with vendors, ask them how they think about bias in their systems, she explained. “Can they talk about how they use their data, how they train their model, how they validate it against adverse impact? I love it when an employer asks me questions about bias, because it shows we are philosophically aligned.”
You have to ask hard questions, Watson said. “Push harder than you feel comfortable pushing. If you need to find someone else in the organization who has a better understanding of the technology, bring them into that conversation.”
Gray-Diggs agreed, saying that “if HR is uncomfortable or out of its depth evaluating a new product, bring in data or computer science people. Bring in business leaders to make sure you don’t miss things.”
The makeup of the vendor’s own team can be telling, said Gray-Diggs. “I look at the team that presents the product to me. If it is a diverse team, that tells me they are already thinking about potential bias and discrimination.”
Pilot programs are your friend, said Watson: “Especially if you plan to implement a tool that could be disruptive, even if it creates efficiencies. Find a department of friendly supporters to pilot the product. Work with them to deploy it in a test scenario, learn, and build buy-in in a measured way before impacting the whole organization at once.”
Wang added that AI technology pilot programs are useful, but only if the sample size is large enough. “Aim for a pilot of at least 100 people, or the patterns your models find won’t be as accurate,” she warned.
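Wang's 100-person floor tracks basic sampling math: the margin of error of a rate estimated from a pilot shrinks with the square root of the sample size. A quick sketch with illustrative numbers only, using the standard normal approximation for a proportion:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an observed proportion p
    from a pilot of n people (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Suppose a pilot observes a 50% adoption rate:
for n in (25, 100, 400):
    print(n, round(margin_of_error(0.5, n), 3))
# A 25-person pilot gives roughly +/-0.196; 100 people tighten it to
# about +/-0.098; quadrupling again to 400 halves it to about +/-0.049.
```

In other words, below roughly 100 participants the uncertainty around any pattern the pilot surfaces can be as large as the effect you are trying to measure, which is the intuition behind Wang's threshold.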