effective — and by far, the key benefit has been increased efficiency. Businesses also report improved accuracy and cost savings. You should note, however, that federal enforcement agencies are keenly aware of the uptick in AI popularity. “We have come together to make clear that the use of advanced technologies, including artificial intelligence, must be consistent with federal laws,” said EEOC Chair Burrows.

2. CONSIDER CONDUCTING AN AUDIT TO IDENTIFY AND ADDRESS POTENTIAL BIAS

While many excellent tools are available for streamlining recruiting and other workplace processes, relying on such technology to make employment decisions might unintentionally lead to discriminatory employment practices.

Although we’re sure to see new laws at the federal, state and local levels regarding the use of AI in the workplace, federal authorities have stated that existing laws already apply to potential AI biases. “There are very important discussions happening now about the need for new legal authorities to address AI,” Burrows said on a press call, as reported by Law360. “But in the meantime, I want to be absolutely clear that the civil rights laws already on the books govern how these new technologies are used.”

So be sure to review your recruiting and other workplace tools for possible bias before you use them, and continue to do so periodically thereafter. Like hiring managers, AI algorithms do not intentionally screen out candidates based on a protected category, but an algorithm may still unintentionally screen out a disproportionate number of qualified candidates in a protected category. This could happen, for example, if the screening is based on the qualities of the employer’s top-performing employees and those workers are primarily from a specific demographic group. As another example, if your system automatically rejects candidates who live more than 20 miles from the worksite, you may be unintentionally limiting the ethnic and racial diversity of the candidates you consider, depending on the demographics of the area.

Although many technology vendors may claim that their tool is “bias-free,” you should take a close look at which biases the technology claims to eliminate. For example, it may be focused on eliminating race, sex, national origin, color or religious bias, but not necessarily on eliminating disability bias. You should also review the vendor’s contract carefully (specifically the indemnification provisions) to determine whether your company will be liable for any disparate impact claims.

Additionally, employers that use third-party vendors to conduct background investigations have certain obligations under the Fair Credit Reporting Act (which is enforced by the Consumer Financial Protection Bureau). “Technology marketed as AI has spread to every corner of the economy, and regulators need to stay ahead of its growth to prevent discriminatory outcomes that threaten families’ financial stability,” said CFPB Director Rohit Chopra in a statement. He added that the CFPB “will work with its partner enforcement agencies to root out discrimination caused by any tool or system that enables unlawful decision-making.”