HR executives are increasingly relying on AI-powered processes to customize background checks, ensure diversity in hiring, train employees, and monitor employee happiness.
For many executives, the hope is that AI allows them “to cut out interviewing, eliminate recruiters, [and] just run the algorithm,” says Peter Cappelli, director of the Center for Human Resources at the University of Pennsylvania’s Wharton School of Business.
A recent ServiceNow and ThoughtLab survey, which polled 900 executives in 13 countries, supports that thesis. Two-thirds of CHROs said that they were responsible for finding ways to simplify company processes in HR and across the organization, and 9 in 10 reported leading efforts to train new employees to optimize workflows.
The promise of AI
The people behind the development of HR platforms promise better, faster, and smarter decision-making.
Gretchen Alarcon, vice president and general manager of HR service delivery at ServiceNow, says AI can deliver training materials to employees in real time. “An employee working on a presentation, for instance, can get rehearsal tips or best practices as they’re working,” says Alarcon.
Other uses of AI include identifying and preventing discriminatory work practices. Researchers at Penn State and Columbia universities developed an AI tool in 2019 to flag racial and gender discrimination in hiring, pay, education, policing, and consumer finance. They said they created the tool to highlight how implicit bias can influence organizational decision-making.
Daniel Greene, an assistant professor of information studies at the University of Maryland, says using AI to evaluate soft skills is “the most major development in automated hiring in the past five years.”
Avoiding AI pitfalls
But some are raising red flags. Greene says that because AI relies on human-inputted data to make decisions, there's the potential for bias to infect the algorithms. Removing humans from decision-making doesn't ensure less discrimination or bias. Because humans are often unaware of how their biases shape their decisions, researchers like Cappelli and Greene worry that AI tools trained on those decisions will simply mirror their flaws.
Greene says there isn’t a lot of science underpinning hiring platforms like HireVue, for example, which records an interview and uses AI to evaluate a candidate’s voice or expression to score them on attributes like friendliness and assertiveness. “This, unfortunately, has led to the resurrection of physiognomy,” says Greene, referring to the pseudoscientific and often racist practice of assessing someone based on physical characteristics such as their facial features.
Other AI-powered recruiting tools rely on past hiring data to identify optimal candidates, so if companies have mostly hired men, says Greene, the algorithm will continue to choose men for available jobs. “Bias in past hiring might cause bias in current hiring,” he says. Amazon’s AI-powered recruiting tool, for instance, made headlines in 2018 after teaching itself to systematically favor male job candidates over female candidates based on past hiring practices.
Regulation of AI in hiring
Governments are stepping in to address such transparency and fairness concerns. Over the past few years, Illinois and Washington passed laws requiring employers to inform potential hires when AI is used in the decision-making process, and to create transparency whenever AI is marketed to consumers. New York City passed a law last year that, when it goes into effect in 2023, will require local employers to disclose the use of AI in hiring and to ensure those AI systems undergo annual bias audits.
Some researchers think government oversight doesn’t go far enough. In a 2019 paper co-written by Wharton’s Cappelli, the authors argue that organizations building or buying automated HR tools should test those systems for bias before deployment, invite employee input on how the tools are used, and create an appeals process that brings a human back into the decision-making loop.
Goals first, tools second
Cappelli recommends that before HR teams invest in a new tool, they should streamline existing HR workflows and employee-management communication channels. “Employers could get very close to what a good algorithm would do just by standardizing their processes,” he says.
AI-enabled HR platforms can reinforce the illusion that a perfect potential employee exists who could show up ready to work on their first day, says Greene. Instead of searching for such perfection, Greene suggests that hiring managers remain open to people who are a good fit for a role, and then invest in their long-term success.
If companies do decide to invest in AI, experts advise HR teams to think first about goals and then about tools. ServiceNow’s Alarcon says that AI can help teams who want to invest in employee growth. Instead of necessarily using AI to look for someone who perfectly matches a job description, HR teams can use the technology to train people with potential.