Beware AI’s Potential to Trigger Discrimination Claims

The growing use of artificial intelligence in the workplace has prompted the Equal Employment Opportunity Commission (EEOC) to examine employers’ use of AI in hiring, promotion, and firing decisions. To that end, the EEOC recently released a technical assistance document titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” designed to help prevent the use of AI as a tool of discrimination.

The EEOC focused on whether the selection procedures employers use have a disproportionately negative effect on groups protected by Title VII. This type of discriminatory practice is referred to as “disparate impact,” which is generally understood to be unintentional discrimination. It occurs when an employer’s policies, practices, or procedures appear neutral on their face but, when put to use, result in a disproportionately negative impact on a group protected under Title VII. Title VII’s protected characteristics include gender, race, color, religion, and national origin.

The EEOC listed five examples of different types of software that may use algorithm-based decision-making in hiring and other employment decisions:

  • Resume scanners that prioritize applications using certain keywords.
  • Chatbots or virtual assistants that ask applicants about their qualifications and reject those who do not meet pre-defined requirements.
  • Video interviewing software that evaluates candidates based on their facial expressions and speech patterns.
  • Testing software that aggregates “job fit” scores for candidates or employees regarding their personalities, aptitudes, cognitive skills or perceived cultural fit based on their performance on a game or on a more traditional test.
  • Employee monitoring software that rates employees on the basis of their keystrokes or other factors.

If the employer’s selection procedure has a disparate impact based on gender, race, color, religion, or national origin, the employer is required to show that its selection procedure is job-related and consistent with business necessity. The employer can meet this standard by demonstrating that the criteria used by the selection procedure are necessary to the safe, efficient, and successful performance of the job. In that scenario, the selection procedure focuses on evaluating the individual’s skills as they relate to the particular job, rather than simply measuring the person’s skills in general. However, even after the employer shows that its selection procedure is job-related and consistent with business necessity, it must still determine whether a less discriminatory alternative is available.

The EEOC encourages employers to routinely conduct internal analyses to ensure their selection procedures do not use AI in a manner that could have a discriminatory impact. If an analysis leaves room for doubt, the employer should take measures to reduce the impact by using other evaluation methods. Failure to do so may subject the employer to liability.

As the use of AI grows, so do the risks. Employers should therefore consult with their human resources departments and employment law counsel to make sure all related practices are in compliance.

This article first appeared in The Journal Record on July 8, 2023, and is reproduced with permission from the publisher.


Practice Area:

Labor & Employment