Employers are using AI in hiring practices. What could go wrong?

In recent years, Artificial Intelligence (AI) has made its way into virtually every industry, from the technology on your phone, to the cameras at your city’s traffic lights, to the drones used by the military. Employment and hiring practices are no exception.

AI systems are created by humans and then learn on their own by analyzing data. Over time, an AI system is supposed to improve its efficiency and results. In the employment context, AI is used in most steps of the hiring process, including advertising for the job, scanning resumes and job applications, selecting applicants for interviews, and even analyzing applicants’ facial expressions and behavior during recorded interviews.

Proponents of the use of AI in hiring claim it speeds up the hiring process, more accurately identifies the right candidates for the position, and eliminates human bias and subjectivity. In a LinkedIn survey, 67% of recruiters said AI saved them time, and 43% said it removed human bias from the hiring process. I can agree that AI saves time. But removing bias? Not so fast.

At the end of the day, an AI system is only as good as the human who built it and the data it’s trained on. And that opens the door for many things to go wrong. For example, an employer could optimize its AI system to favor only applicants who have social media accounts (yes, AI can scan your resume, then scan the internet for your social media accounts, all within a matter of seconds). A hiring practice like that could have a discriminatory impact on older job applicants who aren’t on social media. That is why employers should be required to check their AI’s algorithms for factors that are neutral on their face but could have a discriminatory impact on certain groups. Consider two more examples: an AI system that isn’t sophisticated enough could reject an applicant with Tourette syndrome because of his facial expressions during a recorded interview, or an applicant with PTSD and an anxiety disorder because of his body language.

Therefore, employers should carefully examine the algorithms their AI systems use. But that alone won’t be enough. Employers should also periodically review the AI’s performance and its results. After all, AI “learns.” That means it can just as easily “learn” illegal lessons that end up discriminating against one group or another. A real-life example is Amazon’s AI hiring system. After analyzing ten years’ worth of resumes submitted to Amazon, the system “learned” to identify resumes submitted by women and to discriminate against them. Amazon was forced to scrap the entire project. Or take this hypothetical: an AI system scans tens of thousands of resumes over a certain period. Over time, it “learns” that most of the successful candidates chosen for the jobs have common American names. To improve its efficiency and results, the system could then begin to overlook resumes from people with foreign names. In other words, the AI could teach itself that applicants with foreign names aren’t as qualified as those with domestic names.
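What might a periodic review of an AI system’s results look like in practice? One widely used benchmark is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80% of the highest group’s rate is generally regarded as evidence of adverse impact. Here is a minimal sketch of such a check; the function names and the applicant numbers are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of a periodic adverse-impact audit using the
# EEOC four-fifths rule. The data below is made up for illustration;
# outcomes maps each group to (number selected, total applicants).

def selection_rates(outcomes):
    """Compute each group's selection rate (selected / total applicants)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the top rate.

    Returns a dict mapping each group to True (passes) or False
    (possible adverse impact worth investigating).
    """
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: (rate / top_rate) >= threshold
            for group, rate in rates.items()}

audit = four_fifths_check({
    "group_a": (50, 100),  # 50% selection rate
    "group_b": (30, 100),  # 30% rate: 0.30 / 0.50 = 0.6, below 0.8
})
print(audit)  # {'group_a': True, 'group_b': False}
```

A failing check is not proof of illegal discrimination, but it is exactly the kind of red flag that should trigger a closer look at what the system has “learned.”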

Will an employer using these “biased” AI systems be liable for illegal discriminatory practices? The answer isn’t straightforward. That is why the law needs to catch up with the use of this technology in the workplace. Illinois was one of the first states to address the issue. In 2019, it passed the Artificial Intelligence Video Interview Act (AIVIA), which requires employers who use AI to notify applicants before the interview that AI may be used, explain to them how the AI works, and obtain their consent. The law, however, doesn’t define “artificial intelligence” and is silent on enforcement, remedies, and penalties for violations. It is a good start, but it is not enough. In 2020, a bill to amend AIVIA failed in Illinois. The bill would have required employers to report certain demographic information to the Department of Commerce and Economic Opportunity, and would have required the department to analyze the data and report to the governor if it disclosed a racial bias. That is why we need Congress to establish a federal, national standard sooner rather than later. Of course, using AI in the workplace raises numerous other concerns, such as privacy concerns over the use of personal and biometric information, but that is a discussion for another day.
