Employers’ Use of Artificial Intelligence May Trigger Discrimination Claims

The Equal Employment Opportunity Commission (EEOC) recently released a technical assistance document, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” which is aimed at preventing discrimination against job applicants and employees.

The EEOC focused on whether the selection procedures that employers use to make hiring, promotion, and firing decisions have a disproportionately large negative effect on a basis prohibited by Title VII. This type of discriminatory practice is referred to as “disparate impact,” and it is generally unintentional. Disparate impact occurs when an employer’s policies, practices, or procedures that appear neutral on their face nevertheless result in a disproportionate negative effect on a group protected under Title VII. Title VII’s protected characteristics are race, color, religion, sex, and national origin. For instance, if an employer requires that all applicants pass a strength test, does the test disproportionately screen out women?
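
The EEOC’s technical assistance points to the “four-fifths rule” as a general rule of thumb for spotting possible adverse impact: if one group’s selection rate is less than 80 percent of the rate for the group selected most often, the procedure may warrant closer scrutiny. The short sketch below works through that arithmetic with hypothetical strength-test numbers; the pass counts and group sizes are invented for illustration only.

    # Illustrative four-fifths rule check using hypothetical numbers.
    def selection_rate(selected: int, applicants: int) -> float:
        """Fraction of applicants in a group who were selected."""
        return selected / applicants

    # Hypothetical strength-test results (invented for illustration):
    # 48 of 80 men pass (60%); 12 of 40 women pass (30%).
    rate_men = selection_rate(48, 80)    # 0.60
    rate_women = selection_rate(12, 40)  # 0.30

    # Impact ratio: the lower selection rate divided by the higher rate.
    impact_ratio = min(rate_men, rate_women) / max(rate_men, rate_women)

    print(f"Impact ratio: {impact_ratio:.2f}")  # 0.50
    if impact_ratio < 0.8:
        print("Below four-fifths; the test may have an adverse impact on women.")

As the EEOC cautions, the four-fifths rule is only a rule of thumb, not a legal safe harbor; smaller disparities can still violate Title VII, and courts may look to more rigorous statistical tests.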

The EEOC listed five examples of software that may use algorithmic decision-making in hiring and other employment decisions:

  • resume scanners that prioritize applications using certain keywords;
  • virtual assistants or chatbots that ask applicants about their qualifications and reject those who do not meet pre-defined requirements;
  • video interviewing software that evaluates candidates based on their facial expressions and speech patterns;
  • testing software that provides “job fit” scores for candidates or employees regarding their personalities, aptitudes, cognitive skills, or perceived cultural fit based on their performance on a game or on a more traditional test; and
  • employee monitoring software that rates employees on the basis of their keystrokes or other factors.

If an employer’s selection procedure has a disparate impact based on race, color, religion, sex, or national origin, the employer is required to show that the procedure is job-related and consistent with business necessity. The employer can meet this standard by showing that the procedure is necessary to the safe, efficient, and successful performance of the job. In other words, the selection procedure must evaluate the individual’s skills as they relate to the particular job, rather than simply measuring the person’s skills in general.

Even after the employer shows that its selection procedure is job-related and consistent with business necessity, it should determine whether a less discriminatory alternative is available. To illustrate: is there another test that would be comparably effective in predicting job performance but would not disproportionately exclude people on the basis of race, color, religion, sex, or national origin?

The EEOC encourages employers to conduct ongoing self-analyses to ensure that their selection procedures do not use AI in a manner that could result in discrimination. If an employer discovers that its use of an algorithmic decision-making tool would have a disparate impact, it should take steps to reduce that impact or select a different tool so that it does not engage in employment practices that violate Title VII. Failure to do so may expose the employer to liability.
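
As a rough illustration of what such an ongoing self-analysis might look like, the sketch below applies the same four-fifths screen across several groups at once. The group labels and counts are hypothetical, and a real audit would involve larger samples, appropriate statistical testing, and review with experienced counsel.

    # Sketch of a periodic self-audit across several groups.
    # Group labels and counts are hypothetical, for illustration only.
    FOUR_FIFTHS = 0.8

    # Output of an algorithmic screening tool: group -> (selected, applicants).
    results = {
        "Group A": (90, 150),
        "Group B": (40, 100),
        "Group C": (55, 90),
    }

    rates = {group: sel / total for group, (sel, total) in results.items()}
    benchmark = max(rates.values())  # highest group selection rate

    for group, rate in rates.items():
        ratio = rate / benchmark
        flag = "REVIEW" if ratio < FOUR_FIFTHS else "ok"
        print(f"{group}: rate={rate:.2f} ratio={ratio:.2f} [{flag}]")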

The use of AI by employers is growing, and so are the legal risks. Employers should be prudent when conducting self-analyses and audits, which includes working with experienced counsel to identify and address any issues while minimizing interruptions to the employer’s business. For assistance or questions related to artificial intelligence in the workplace, contact Tanya Bryant or another member of the firm’s Labor & Employment Practice Group.
