2024 Vol. 108 No. 3

ARTIFICIAL INTELLIGENCE

LEGAL RISKS WITH USE OF AI FOR EMPLOYMENT PURPOSES

BY DEBRA A. MASTRIAN, AMUNDSEN DAVIS LLC

The use of artificial intelligence and algorithmic decision-making tools by employers in connection with employment matters, such as recruiting, hiring, evaluating productivity and performance, and monitoring employees, is becoming more prevalent. What are some of the ways the technology is being used? An employer may, for example, scan a resume or application for certain keywords and exclude those that do not contain the specified criteria, or use a chatbot or other AI to conduct an initial screening before determining whom to interview. Employers are also using technology to track productivity and to rank applicants or employees based on identified criteria. Similar to personality assessments, various technologies are also designed to select applicants or employees for preferred personality traits, aptitudes, physical or cognitive abilities, and the like.

Last May, the Equal Employment Opportunity Commission (EEOC) issued a technical assistance bulletin, "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964," to provide guidance to employers on how anti-discrimination laws apply to the use of such technology. Discrimination based on protected characteristics, such as race, age, gender, disability, national origin, and gender identity, is prohibited by federal and state law. The EEOC has made clear that it is not trying to prevent the use of AI and other computer-assisted technologies, but it warns employers about the legal risks and the potential for liability. Employers will be held liable if AI or any other tools they use have a disparate impact on the basis of those protected characteristics.
Disparate impact is a way of proving discrimination absent evidence of intentional discrimination. It is often referred to as "unintentional discrimination." The criteria being used by the employer are not discriminatory on their face but have a "disproportionately large negative effect" on the basis of a protected characteristic. In other words, the criteria exclude people with a certain protected characteristic at a disproportionately higher rate. Unless an employer can show that criteria having a disparate impact are job-related and consistent with business necessity, and that no similarly effective but less discriminatory alternatives are available, the practice is unlawful. The practice is also unlawful if the criteria are designed to reveal a protected characteristic (e.g., a mental or physical impairment), screen for personal traits connected with some other protected characteristic, or make decisions based on those traits.

These same issues have been raised when employers use personality tests or assessments, and employers have been sued under the Americans with Disabilities Act, Title VII and the Age Discrimination in Employment Act. Those lawsuits involved allegations over:

▶ questions that implicated protected characteristics;
▶ bias in administration and scoring;
▶ impermissible medical questions;
▶ whether the test suggested a disability or perceived disability based on the results; and
▶ whether administration of the test was discriminatory based on age.

As with personality assessments or tests, AI and algorithmic decision-making tools must not be used to weed out persons based on protected characteristics, and the use of the technology (based on the instructions or criteria utilized) must not result in excluding applicants or employees based on a protected characteristic.
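To illustrate what a "disproportionately higher rate" of exclusion means in practice: the EEOC's technical assistance bulletin discusses comparing selection rates between groups, with the "four-fifths rule" as a general rule of thumb (a selection rate for one group that is less than 80% of the rate for the highest-scoring group may indicate adverse impact). The sketch below uses hypothetical group names and applicant counts solely for illustration; it is not legal advice or a substitute for a formal validation study.

```python
# Illustrative sketch of comparing selection rates under the EEOC's
# "four-fifths" rule of thumb. All counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who were selected (e.g., advanced past a screen)."""
    return selected / applicants

def impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical results of an automated resume screen for two applicant groups
rate_group_a = selection_rate(48, 80)  # 0.60
rate_group_b = selection_rate(12, 40)  # 0.30

ratio = impact_ratio(rate_group_a, rate_group_b)
print(f"Selection-rate ratio: {ratio:.2f}")  # 0.50, below the 0.80 rule of thumb
flagged = ratio < 0.8  # possible adverse impact warranting closer scrutiny
```

A ratio below 0.80 does not by itself establish unlawful discrimination, but it is the kind of disparity that can trigger the job-relatedness and business-necessity analysis described above.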
In one seminal case, the Seventh Circuit (the federal court of appeals which covers Indiana) found that the use of the MMPI test by an employer had the likely effect of excluding

HUMAN RESOURCES MAY/JUNE 2024 59
