EEOC Issues Guidance Highlighting the Risks to Employers Using AI in Employment Selection Procedures

BACKGROUND

Title VII prohibits employers from using neutral selection procedures that disproportionately exclude individuals on the basis of race, color, religion, sex or national origin unless the employer can show the procedures are “job related for the position in question and consistent with business necessity.” In 1978, the U.S. Equal Employment Opportunity Commission (EEOC) adopted the Uniform Guidelines on Employee Selection Procedures to help employers determine whether selection procedures commonly used in making employment decisions run afoul of Title VII’s protections.

In May 2023, in response to the increased use of algorithmic decision-making tools (commonly referred to as artificial intelligence or “AI”) to assist with a wide array of employment decisions, the EEOC issued new guidance entitled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” While the EEOC’s guidance does not have the force of law and is not binding upon employers, it serves as a warning that the EEOC will be monitoring AI use to ensure that these decision-making tools do not adversely impact protected groups in violation of Title VII.

WHAT EMPLOYERS NEED TO LEARN FROM THE GUIDANCE

With the rush to embrace the efficiencies of AI to assist with HR decisions, many employers fail to consider the legal implications of employing these tools. The recent guidance discusses new and long-standing principles that will apply when assessing AI-assisted selection procedures for compliance with Title VII.

  • The use of AI to assist in employment decisions is a “selection procedure” that may violate Title VII. The EEOC cautioned that the use of AI when making employment-based decisions may trigger violations of Title VII if individuals in protected classes are disproportionately screened out or otherwise adversely affected. The agency provided the following examples of AI technology that could land an employer in the agency’s crosshairs:
    • resume scanners that prioritize applications using certain keywords;
    • monitoring software that rates employees based on their keystrokes or other factors;
    • virtual assistants or chatbots that ask candidates about their qualifications and reject those who do not meet pre-defined requirements;
    • video software that evaluates candidates based on facial expressions or speech patterns; and
    • testing software that provides “job fit” scores for applicants on their personalities, aptitudes, cognitive skills or “cultural fit.”

As explained in the guidance, employers can assess whether AI decision-making tools have an adverse impact on a protected class by checking whether use of the procedure causes a selection rate for individuals in that group that is “substantially” less than the selection rate for individuals in another group.
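
The selection-rate comparison the guidance describes is simple arithmetic. The sketch below is a minimal illustration of the concept, not a compliance tool; the group labels and applicant counts are entirely hypothetical.

```python
# Minimal sketch of the selection-rate comparison described in the guidance.
# Group labels and counts are hypothetical, for illustration only.

applicants = {
    # group: (number selected, number who applied)
    "group_a": (48, 80),
    "group_b": (12, 40),
}

def selection_rate(selected: int, applied: int) -> float:
    """Selection rate = candidates advanced / candidates considered."""
    return selected / applied

for group, (selected, applied) in applicants.items():
    print(f"{group}: {selection_rate(selected, applied):.0%} selected")
# group_a: 60% selected
# group_b: 30% selected
```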

  • Employers may be liable under Title VII for AI decision-making tools even if the tools are designed or administered by an outside software vendor or other third party. The guidance warns that employers cannot escape liability for a discriminatory selection procedure by shifting blame to a third-party vendor that developed or administered it. It further cautions that employers who rely on software vendors to develop AI decision-making programs “may want to ask the vendor, at a minimum, whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII.” If the vendor confirms that a lower selection rate can be expected, the employer must assess whether use of the tool is “job related and consistent with business necessity, or whether there are alternatives that may meet the employer’s needs with less of a disparate impact.” Even if the vendor incorrectly concludes that its tool has no adverse impact, the employer may still be liable.

  • Questionable application of the “four-fifths rule.” The four-fifths rule is a long-standing rule of thumb for determining whether the selection rate for one group is “substantially” lower than the selection rate for another. Under the rule, one rate is substantially lower than another if the ratio between the two is less than four-fifths (80%). However, the guidance notes that reliance on the four-fifths rule is not always appropriate, particularly where it is not a reasonable substitute for a test of statistical significance. Thus, the EEOC might not consider compliance with the rule sufficient to pass muster under Title VII.

The guidance suggests that employers may want to ask their AI vendors whether they relied on the four-fifths rule in determining whether the tool might have an adverse impact on a protected group, or instead applied the statistical-significance standards that courts often use. Unfortunately, the guidance does not provide further clarity for assessing the legitimacy of AI selection procedures.
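
To make the contrast between the two standards concrete, the short sketch below applies the four-fifths rule alongside a conventional two-proportion z-test of the sort courts often credit. The counts are hypothetical and deliberately small, and the two_proportion_z_test helper is our own illustration built on the Python standard library, not a method the guidance prescribes; with small samples the two checks can point in opposite directions, which is precisely the EEOC’s caution.

```python
from statistics import NormalDist

# Hypothetical small-sample outcomes of an AI screening tool.
sel_a, app_a = 6, 10   # comparison group: 60% selection rate
sel_b, app_b = 4, 10   # protected group: 40% selection rate

rate_a, rate_b = sel_a / app_a, sel_b / app_b

# Four-fifths rule: flag if the lower rate is under 80% of the higher rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"impact ratio: {ratio:.2f} -> {'flag' if ratio < 0.8 else 'no flag'}")

def two_proportion_z_test(s1: int, n1: int, s2: int, n2: int) -> float:
    """Two-sided z-test that two selection rates differ; returns the p-value."""
    pooled = (s1 + s2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (s1 / n1 - s2 / n2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = two_proportion_z_test(sel_a, app_a, sel_b, app_b)
print(f"p-value: {p:.3f} -> {'significant' if p < 0.05 else 'not significant'}")
# The 0.67 impact ratio flags under the four-fifths rule, yet p = 0.371 is not
# statistically significant -- the two standards can disagree.
```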

  • Employers should conduct ongoing self-analyses to validate the legitimacy of AI decision-making tools. Finally, the EEOC encourages employers to conduct ongoing self-analyses to determine whether their employment practices have a disparate impact. When an employer discovers that its AI tool would adversely impact a protected group, it can take steps to reduce the impact or develop a different tool to avoid Title VII violations. The guidance adds that failing to adopt a less discriminatory algorithm that was considered during the development process may give rise to liability.
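
As a rough sketch of what such a recurring self-analysis might look like, the snippet below simply reruns the impact-ratio check from the earlier examples after each hiring cycle and flags any cycle that warrants review. The quarterly figures are hypothetical; a real audit would draw from the employer’s HR systems and, as discussed above, supplement the ratio with significance testing.

```python
# Sketch of a recurring adverse-impact audit across hiring cycles.
# Cycle data are hypothetical; a real audit would pull from HR systems.
cycles = {
    "2023-Q1": {"group_a": (48, 80), "group_b": (12, 40)},
    "2023-Q2": {"group_a": (40, 80), "group_b": (22, 40)},
}

for cycle, groups in cycles.items():
    rates = {g: selected / applied for g, (selected, applied) in groups.items()}
    ratio = min(rates.values()) / max(rates.values())
    status = "review the tool" if ratio < 0.8 else "no flag"
    print(f"{cycle}: impact ratio {ratio:.2f} -> {status}")
# 2023-Q1: impact ratio 0.50 -> review the tool
# 2023-Q2: impact ratio 0.91 -> no flag
```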

NEXT STEPS

Not surprisingly, many employers are ill-prepared for the legal impacts of using emerging AI technology to assist with employment decisions. One thing, however, is perfectly clear: the EEOC has placed employers on notice that they cannot blindly select AI vendors without taking steps to validate that these tools do not have a discriminatory impact. Employers should adopt effective oversight protocols, including designating a qualified chief AI officer with the skill set to understand these emerging technologies and conducting ongoing adverse impact analyses of any selection programs.