As the capabilities of artificial intelligence (AI) continue to expand, the U.S. Equal Employment Opportunity Commission (EEOC) released a joint statement with other agencies on AI and automated systems. Joining the EEOC on the statement were the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the Federal Trade Commission (FTC). The joint statement discusses how the agencies will address the proliferation of AI and automated systems with respect to “fairness, equality, and justice.” It follows the EEOC’s January 2023 public hearing on the implications of using artificial intelligence in employment decisions.

The EEOC’s Initiative on AI and Automated Systems

While AI and automated systems offer new opportunities and conveniences to employers, they also carry the risk of discriminating against members of protected classes. In 2021, the EEOC introduced an initiative to ensure that emerging AI tools used in employment decision-making comply with the equal employment opportunity (EEO) laws the agency enforces. As part of the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative, the agency stated it will:

  • Issue guidance on algorithmic fairness (that is, ensuring that an AI tool’s algorithms do not unintentionally filter out otherwise qualified applicants because of protected characteristics);
  • Identify practices and controls that reduce the risk of discrimination when using AI in employment;
  • Organize public hearings with stakeholders about AI tools and their implications for employment; and
  • Continue to gather information on AI in the workplace.

How Other Agencies’ Enforcement Applies to AI

All four agencies have expressed concerns about the potentially negative effects of using AI and automated systems in the workplace without the necessary controls in place. Each agency has promised to exercise its authority to monitor the development and use of AI tools and systems in the employment realm. In the agencies’ Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, each agency identified its enforcement duties regarding AI as follows:

  • The CFPB confirmed that consumer financial laws and adverse action requirements apply to the use of AI tools. Notably, the CFPB stressed that the complexity or novelty of an AI tool is not a defense to violations of the laws the agency enforces.
  • The DOJ’s Civil Rights Division, which enforces laws prohibiting discrimination in many areas, including housing, applied the Fair Housing Act to AI algorithm-based screening.
  • Meanwhile, the EEOC applied the Americans with Disabilities Act (ADA) to AI and automated systems in employment decision-making.
  • Finally, the FTC warned that AI tools may violate the Federal Trade Commission Act if they show discriminatory biases or rely on invasive commercial surveillance.

Discrimination in AI and Automated Systems

AI and automated systems receive and organize data, find patterns or correlations within the stored dataset, and use those results to make recommendations or predictions. Applied to employment decision-making, these tools collect applicant information, use their algorithms to filter or rank candidates, and make hiring or promotion recommendations. However, those algorithms may filter out applicants or employees based on protected characteristics. Potential discrimination can stem from the following:

  • Unrepresentative or imbalanced data and datasets – AI tools trained on skewed or incomplete data may correlate seemingly neutral data points with protected classes, leading to discrimination (illustrated in the sketch following this list).
  • Lack of expertise or ability to adjust the AI tool – Employers, businesses, and even developers may not understand how the tool works well enough to ensure it is operating fairly.
  • Disconnect between design and use – AI tool developers may not understand or account for requirements under employment law, the tasks the tools will perform, or the employment context in which the tools will operate.
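To make the risk concrete, the sketch below shows one common way an automated screening tool’s output can be checked for disparate impact: comparing selection rates across groups against the “four-fifths rule” described in the EEOC’s Uniform Guidelines on Employee Selection Procedures. This is a minimal, hypothetical illustration in Python; the data, group labels, and function names are invented, and the check is not part of the joint statement or any agency’s prescribed compliance method.

    # Hypothetical illustration: auditing an automated screening tool's output for
    # adverse impact, using the EEOC's "four-fifths rule" as a rough benchmark.
    # All groups, records, and numbers below are invented for demonstration only.
    from collections import Counter

    # Each record: (demographic group, whether the tool advanced the candidate)
    screening_results = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    def selection_rates(results):
        """Return the share of candidates the tool advanced within each group."""
        advanced, totals = Counter(), Counter()
        for group, selected in results:
            totals[group] += 1
            if selected:
                advanced[group] += 1
        return {group: advanced[group] / totals[group] for group in totals}

    def four_fifths_check(rates):
        """Flag groups whose selection rate falls below 80% of the highest group's rate."""
        highest = max(rates.values())
        return {group: (rate / highest) >= 0.8 for group, rate in rates.items()}

    rates = selection_rates(screening_results)
    print(rates)                     # {'group_a': 0.75, 'group_b': 0.25}
    print(four_fifths_check(rates))  # {'group_a': True, 'group_b': False}

A selection rate below four-fifths of the highest group’s rate is conventionally treated as a red flag warranting closer review, not as conclusive proof of discrimination.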

In their statement, the agencies communicated a resolve to monitor the development and use of AI and automated systems. At the same time, the agencies promoted responsible innovation, encouraging developers to improve AI tools consistent with federal antidiscrimination, privacy, and fair competition laws.