By: Lisanne L. Mikula, Esquire
“I’m sorry, Dave. I’m afraid I can’t do that.”
These are the chilling words of HAL 9000, the sentient computer controlling the Jupiter-bound Discovery One, as it refuses an astronaut re-entry to his spacecraft in Stanley Kubrick’s film “2001: A Space Odyssey.” What makes HAL a great science fiction villain is that he taps into the audience’s greatest fear about artificial intelligence (“AI”): that AI will one day supplant human beings, replacing human judgment with an amoral algorithm that works to humanity’s detriment.
While today’s AI lacks HAL’s malevolence, there is little question that AI is being used with increasing frequency to perform tasks which once were solely the domain of humans, including those which require the exercise of sound judgment and consideration of moral, ethical, and legal parameters in decision-making. In the employment context, the use of AI is growing among employers seeking the most qualified job applicants in an increasingly competitive marketplace. In a 2017 survey conducted by CareerBuilder, approximately 55 percent of U.S. human resources managers said AI would be a regular part of their work within the next five years.
AI can be useful when crafting a targeted advertisement to fill a job opening. AI can analyze information from internet searches and determine the terms most commonly used by job seekers, creating an advertisement that is more likely to appear at the top of a search engine’s results. AI can also effectively sort out duplicative job applications, as well as resumes that are unmistakably unresponsive to the job opening, such as one reflecting no prior work experience where the position requires extensive experience. There are also helpful AI “bots” which use text messaging and email to coordinate job interviews and ask simple follow-up questions to gather additional information from applicants, freeing up human resources personnel for more complex tasks.
But what happens when AI is used to read resumes and job applications and select the “best qualified” applicants? Recently, reports surfaced that Amazon had quietly discontinued use of an AI system it had used to screen job applicants after it was discovered that the computer program was systematically excluding female candidates.
In 2014, an Amazon team in Edinburgh began developing a computer program that would review job applicants’ resumes, in the hope that the search for top talent could be mechanized. The experimental recruiting engine used AI to rate candidates on a “1 to 5” scale, similar to the way Amazon’s shoppers can rate purchases. Amazon’s goal was to be able to rely on the AI’s “5 star” rating of an applicant in making hiring decisions. By 2015, however, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.
The recruitment engine was trained by its human developers to observe patterns in the resumes of candidates who had successfully obtained technology positions with Amazon over the previous 10 years, and to use those patterns to select the best-qualified candidates. Because of the gender gap in the technology field, however, most of these “successful” resumes came from men. Therefore, while the human developers did not train the system to exclude female applicants, the recruitment engine taught itself to recognize men as “preferred” job candidates, and it began to exclude resumes which indicated that an applicant was female. Amazon’s AI recruiter learned to penalize resumes that included the word “women’s,” as in “women’s lacrosse team” or “women’s studies,” and downgraded applicants who were graduates of all-women’s colleges.
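This mechanism can be illustrated with a toy example. The sketch below uses entirely hypothetical data and a deliberately simple learning rule (a perceptron with one weight per resume keyword); it is not Amazon’s actual system. Because the token “womens” appears only in the historically rejected resumes, the model teaches itself to treat it as a negative signal, even though no rule ever mentions gender.

```python
# Toy illustration (hypothetical data; NOT Amazon's actual system) of how a
# resume scorer trained on biased historical outcomes learns to penalize a
# gendered token, even though no rule ever mentions gender.

# "Historical" resumes as keyword lists, labeled 1 = hired, 0 = rejected.
# Because past hires were mostly men, the token "womens" appears only in
# rejected resumes.
history = [
    (["java", "python"], 1),
    (["python", "cpp"], 1),
    (["java", "womens", "college"], 0),
    (["python", "womens", "lacrosse"], 0),
]

def train(data, epochs=10):
    """Train a simple perceptron: one weight per resume keyword."""
    weights = {}
    for _ in range(epochs):
        for tokens, label in data:
            total = sum(weights.get(t, 0.0) for t in tokens)
            predicted = 1 if total > 0 else 0
            if predicted != label:
                # Nudge each keyword's weight toward the historical outcome.
                for t in tokens:
                    weights[t] = weights.get(t, 0.0) + (label - predicted)
    return weights

def score(weights, tokens):
    """Score a new resume with the learned keyword weights."""
    return sum(weights.get(t, 0.0) for t in tokens)

weights = train(history)
print(weights["womens"])  # the model assigned "womens" a negative weight
# Two otherwise identical resumes now score differently:
print(score(weights, ["python", "lacrosse"]))
print(score(weights, ["python", "womens", "lacrosse"]))
```

The point of the sketch is that the discrimination is emergent: the developers only ever asked the model to imitate past hiring outcomes, and the gender penalty fell out of the skewed data on its own.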
Although Amazon edited the recruitment engine’s software in an attempt to make it gender-neutral, the company could not be confident that its AI would not devise other ways of sorting candidates in a discriminatory fashion. Amazon ultimately abandoned development of the recruitment engine last year.
Amazon’s experience highlights the potential dangers of allowing artificial intelligence to screen job candidates based upon the patterns of past successful candidates, particularly in fields striving to correct longstanding gender, racial, or ethnic imbalances among job candidates. While human beings are certainly capable of allowing prejudice or bias to creep into their decision-making, a well-functioning workplace should provide a system of human checks and balances that helps prevent discrimination. There is something chilling in the possibility that AI may develop a facially neutral algorithm that screens job applicants in an unlawfully discriminatory fashion, and that this discrimination could go undetected if humans fail to recognize and correct the problem.
The Law Firm of DiOrio & Sereni, LLP is a full-service law firm in Media, Delaware County, Pennsylvania. We strive to help people, businesses and institutions throughout Southeastern Pennsylvania solve legal problems – and even prevent legal problems before they occur. To learn more about the full range of our specific practice areas, please visit www.dioriosereni.com or contact Lisanne L. Mikula, Esquire at 610-565-5700 or at email@example.com