On January 31, 2023, the U.S. Equal Employment Opportunity Commission (EEOC) held a public hearing to examine the use of artificial intelligence in employment decisions. Topics included the use of artificial intelligence in recruiting and hiring, as well as in monitoring and firing workers. As employers increasingly adopt artificial intelligence in the workplace, the EEOC seeks to educate them on the technology’s pros and cons while also identifying ways to prevent discrimination when these tools are used. Just last month, the EEOC published its draft 2023-2027 strategic enforcement plan, which places a strong focus on the role of artificial intelligence in employment decisions.

Overview of the Public Hearing

During the EEOC’s hearing on artificial intelligence in employment decisions, the agency heard testimony from twelve expert witnesses, including computer scientists, civil rights advocates, legal experts, employer representatives, and an industrial-organizational psychologist. Witnesses discussed how artificial intelligence decision-making may inadvertently discriminate against protected classes, as well as ways the technology might help or hinder diversity, equity, inclusion, and accessibility in the workplace. Overall, the hearing builds on the EEOC’s AI and Algorithmic Fairness Initiative, which examines the civil rights implications of artificial intelligence in employment settings.

Pros and Cons of Artificial Intelligence in Employment Decisions

As part of that initiative, the EEOC also released a technical assistance document examining how artificial intelligence in hiring may disadvantage applicants with disabilities. While these tools can save employers time and effort during the application process, they can also lead to inadvertent discrimination, unfairly screening out applicants with disabilities who might otherwise be capable of performing the job. The EEOC’s recent public hearing further examined these pros and cons of using artificial intelligence in employment decisions.

Ways Artificial Intelligence May Hinder Equal Employment Opportunity

Witnesses brought up many ways artificial intelligence in employment may lead to bias and discrimination. Discussions yielded the following points:

  • Overrepresentation of people of color in certain “negative or undesirable” data, such as credit histories, arrest records, and evictions.
  • Underrepresentation of people of color in the training data used to build artificial intelligence tools.
  • Limited access to reliable internet among some protected groups, which can prevent a qualified applicant from engaging with an employer digitally in the first place.

Ways Artificial Intelligence May Help Promote Diversity and Equality

Many witnesses also pointed out potential benefits of using artificial intelligence in employment decisions. According to witnesses, these tools are not inevitably discriminatory, and employers should explore how artificial intelligence hiring tools may actually help remove human bias. For example, when hiring criteria are set in advance and applied by an algorithm rather than left to an individual manager’s discretion, artificial intelligence can ensure those criteria are applied uniformly. Still, witnesses agreed that employers should continue to self-audit and ensure their hiring procedures comply with federal and state equal employment opportunity laws.

Suggestions to Improve Artificial Intelligence in Employment

During the discussion, the twelve expert witnesses offered several suggestions for how developers may improve the automated artificial intelligence tools employers use. One suggestion was that developers test hiring algorithms for discrimination before deployment. Briefly, such discrimination may occur when a tool’s binary “reject/advance” decisions or numerical scores disproportionately disadvantage certain protected classes. If the tool fails a discrimination test based on an accurate model of the applicant population, the developer can rebuild or modify it.
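One long-standing way to run this kind of discrimination test is the “four-fifths rule” adverse impact check from the federal Uniform Guidelines on Employee Selection Procedures: compare each group’s selection rate to the highest group’s rate and flag ratios below 0.8. The sketch below is an illustrative assumption of how a developer might apply that check to a tool’s “reject/advance” outputs; the group names and counts are hypothetical, and the hearing did not prescribe any particular implementation.

```python
# Illustrative four-fifths-rule adverse impact check.
# Group labels and applicant counts are hypothetical examples.

def selection_rate(advanced, applied):
    """Fraction of applicants the tool marked 'advance'."""
    return advanced / applied

def adverse_impact_ratio(group_rate, reference_rate):
    """Ratio of a group's selection rate to the highest group's rate.
    Values below 0.8 traditionally flag possible adverse impact."""
    return group_rate / reference_rate

# Hypothetical counts per group: (applicants, advanced by the tool).
groups = {"group_a": (200, 120), "group_b": (150, 60)}
rates = {g: selection_rate(adv, app) for g, (app, adv) in groups.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    status = "flagged" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(group, round(ratio, 2), status)
```

In this hypothetical, group_b advances at 40% versus group_a’s 60%, giving a ratio of about 0.67 and flagging the tool for rebuilding or modification.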

Meanwhile, other witnesses suggested ongoing monitoring of artificial intelligence in employment decisions. Automated systems tend to “drift” away from their initial training over time, so developers should periodically retrain a tool’s underlying model on new data. Finally, witnesses recommended highly transparent, easy-to-follow algorithms, along with human oversight and contingency plans should the system fail.
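The monitoring witnesses described could take many forms; one minimal sketch, assuming a developer tracks a tool’s selection rates over time, is to compare current behavior against the rate observed when the tool was last validated and trigger a retrain or re-audit when the gap grows too large. The threshold and figures below are illustrative assumptions, not anything specified at the hearing.

```python
# Hypothetical drift check: flag retraining when a tool's selection
# rate moves too far from its validated baseline. Tolerance is an
# illustrative assumption.

def drift_exceeded(baseline_rate, current_rate, tolerance=0.05):
    """True when the current selection rate differs from the
    baseline by more than `tolerance`."""
    return abs(current_rate - baseline_rate) > tolerance

# Hypothetical: one group's selection rate has slipped from 0.45 to 0.33.
if drift_exceeded(0.45, 0.33):
    print("drift detected: retrain or re-audit the model")
```

A real monitoring program would track many more signals (score distributions, outcomes by protected class), but the design point is the same: a fixed baseline, a periodic comparison, and a human-reviewed response when the check fails.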