In the latest development concerning artificial intelligence (AI) in employment, the Equal Employment Opportunity Commission (EEOC) has released guidance on using AI in hiring decisions. The guidance again focuses on preventing discrimination against job seekers and employees, explaining how established principles under Title VII of the Civil Rights Act of 1964 (Title VII) apply when AI tools are incorporated into employment decision-making. Earlier this month, the EEOC and other federal agencies released a joint statement on AI and automated systems in employment.

Background of the Guidance

The EEOC’s recent guidance builds upon the joint agency statement and a previous technical assistance document on AI and the Americans with Disabilities Act. The agency released the guidance as part of its Artificial Intelligence and Algorithmic Fairness Initiative, which aims to ensure that AI systems used in automated hiring and other employment decisions comply with the federal civil rights laws the EEOC enforces. As part of the initiative, the EEOC also held a public hearing in February 2023 to discuss the implications of AI in employment.

While using AI in hiring decisions can streamline a variety of employment tasks, it carries a risk of discrimination. Automated systems that evaluate and filter candidates, monitor employee performance, or determine pay and promotions can inadvertently screen out otherwise qualified candidates based on protected characteristics. Without independent audits and other safeguards, AI systems risk violating federal civil rights laws.

Guidance on Using AI in Hiring Decisions

The EEOC’s guidance on using AI in hiring decisions, titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” discusses the concept of adverse impact in greater depth. Briefly, adverse impact refers to facially neutral practices that, in effect, discriminate against a protected class. The guidance is organized in a question-and-answer format; notably, the following examples address how employers can use AI in hiring decisions without violating Title VII.

Q: Is an employer responsible under Title VII for using algorithmic decision-making tools even if the tools are designed or administered by another entity, such as a software vendor?
A: In many cases, yes. If an employer administers a discriminatory selection procedure, it may be responsible under Title VII. This is true even if the test was developed by an outside vendor. In addition, employers may be responsible for the actions of software vendors if the employer has given them authority to act on the employer’s behalf.

Q: If an employer discovers that using an algorithmic decision-making tool would have an adverse impact, may it adjust the tool or decide to use a different tool to reduce or eliminate that impact?
A: Generally, if an employer discovers that the use of the tool would have an adverse impact, it can reduce the impact or select a different tool to avoid violating Title VII. Employers should conduct self-analyses on an ongoing basis to determine whether their employment practices have a disproportionately large negative effect on protected classes under Title VII.
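For illustration, one general rule of thumb for such a self-analysis is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures, which the guidance discusses: a selection rate for one group that is less than 80 percent of the rate for the most-selected group may indicate adverse impact. Below is a minimal sketch of that calculation in Python; the group labels, applicant counts, and function names are hypothetical, and the rule is a screening heuristic, not a legal safe harbor.

```python
# Hypothetical self-analysis sketch: the four-fifths (80%) rule.
# All counts below are invented for illustration only.

def selection_rates(applicants, selected):
    """Return the selection rate (selected / applicants) for each group."""
    return {g: selected[g] / applicants[g] for g in applicants}

def four_fifths_check(rates, threshold=0.8):
    """Flag any group whose rate is below `threshold` of the highest group's rate."""
    top = max(rates.values())
    return {g: (rate / top, rate / top < threshold) for g, rate in rates.items()}

if __name__ == "__main__":
    applicants = {"Group A": 200, "Group B": 150}  # hypothetical applicant pools
    selected = {"Group A": 60, "Group B": 30}      # hypothetical selections

    rates = selection_rates(applicants, selected)
    for group, (ratio, flagged) in four_fifths_check(rates).items():
        print(f"{group}: selection rate={rates[group]:.2f}, "
              f"ratio to top group={ratio:.2f}, potential adverse impact={flagged}")
```

In this example, Group B’s selection rate (0.20) is only about 67 percent of Group A’s (0.30), so the check flags a potential adverse impact that would warrant further review of the tool.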