
The Growth of AI in Hiring: Is It Actually Intelligent?
By Joey K. Wright, Amundsen Davis LLC

Across all sectors and industries, artificial intelligence is rapidly transforming the hiring process, enticing hiring managers with the promise of streamlining recruitment, eliminating inefficiencies and uncovering hidden talent that is not apparent to the human eye. From scanning resumes to conducting virtual interviews, AI tools are beginning to play a central role in how companies identify and evaluate job candidates. However, while these technologies can offer considerable benefits, they also bring significant risks, predominantly around bias, transparency and accountability. So when the question inevitably arises, whether in a board meeting or a weekly staffing video conference, hiring managers must be prepared to answer: How can our financial institution legally and ethically make AI work in our hiring processes?

How AI Tools Are Used in Hiring

AI-powered hiring tools use artificial intelligence, including machine learning and natural language processing (NLP), to streamline, automate and enhance parts of the recruitment process. These NLP models and machine learning algorithms are typically trained on an institution’s historical data to identify patterns associated with successful employees. The tools can analyze thousands of resumes in seconds, then score and rank candidates based on how well their qualifications match a job description (a simplified sketch of this scoring step appears at the end of this discussion). They can also weed out unqualified candidates through chatbots and resume screening. Some platforms go further, using NLP to evaluate communication skills in video interviews or to predict job performance based on facial expressions and tone of voice. These tools can also recommend individuals for promotion or internal mobility. Employers see AI as a way to reduce human error and make the hiring process more efficient for all involved.

Potential Pitfalls

Despite these advancements, AI in hiring is far from risk-free. One of the most pressing concerns is the potential for algorithmic bias. AI systems learn from historical data, which may reflect past implicit or explicit discriminatory practices, such as favoring candidates from certain schools, neighborhoods or demographic groups. If those biases are embedded in the training data, the AI will likely perpetuate them.

For example, in 2018, Amazon infamously scrapped its AI recruiting tool after it became apparent that the tool was biased against women. Amazon had trained its algorithms to rate resumes based on patterns in past applicant history, but because women were historically underrepresented in that dataset, the algorithm learned to prefer male candidates and rated resumes associated with women poorly. This was not a technical glitch or the storyline of a sci-fi movie in which robots take on a discriminatory personality of their own; it was an extension of historical inequalities in the hiring process, ones that many companies have sought to change.

Another critical issue is transparency. Many AI systems used in hiring operate opaquely, making it difficult to understand the rationale behind their decisions; for this reason, they are often described as “black boxes.” Employers may not fully understand how an algorithm weighs different factors, making it difficult to justify or explain hiring decisions. That opacity becomes especially problematic when candidates are rejected or when legal questions arise regarding discrimination or equality.
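To make the resume-scoring step described above concrete, here is a minimal sketch of the kind of matching such tools perform at their simplest: it ranks a few invented resume snippets against a job description using TF-IDF cosine similarity, a common text-matching technique. The candidate names, resume texts and job description are all hypothetical, and commercial platforms use far more elaborate (and often more opaque) models than this.

```python
# A toy illustration of resume-vs-job-description scoring.
# All texts and names below are invented for illustration;
# real vendor tools are far more elaborate than this.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = (
    "Commercial lender with credit analysis experience, "
    "strong communication skills and knowledge of banking regulation."
)

resumes = {
    "Candidate A": "Five years of commercial lending and credit analysis at a community bank.",
    "Candidate B": "Retail associate with strong communication skills and customer service focus.",
    "Candidate C": "Compliance officer experienced in banking regulation and credit risk.",
}

# Build one vocabulary over the job description plus all resumes,
# then score each resume against the job description.
texts = [job_description] + list(resumes.values())
matrix = TfidfVectorizer(stop_words="english").fit_transform(texts)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

# Print candidates from best match to worst.
for (name, _), score in sorted(zip(resumes.items(), scores),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```

Note that the scorer knows nothing beyond the words it is given: whatever patterns, or biases, the input text and training data carry, the resulting ranking will reflect.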
Legal compliance is also evolving quickly. Jurisdictions across the U.S. and abroad are beginning to regulate the use of AI in employment. New York City’s Local Law 144, for example, requires employers that use automated employment decision tools to subject those tools to an annual independent bias audit, publish a summary of the audit results and notify candidates that such a tool is being used.
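As a rough illustration of the arithmetic underlying such bias audits, the sketch below computes selection rates and impact ratios for two hypothetical applicant groups and flags ratios below the four-fifths (80%) benchmark the EEOC has traditionally used as a screen for adverse impact. The counts are invented, and the four-fifths threshold is an illustrative rule of thumb; an actual Local Law 144 audit must follow the statute’s own definitions and be performed by an independent auditor.

```python
# A minimal sketch of the selection-rate math behind a bias audit.
# All counts are invented; this is not a substitute for a real,
# independent audit under Local Law 144 or EEOC guidance.

applicants = {"Group 1": 200, "Group 2": 150}   # hypothetical applicant counts
selected   = {"Group 1": 60,  "Group 2": 27}    # hypothetical counts advanced by the tool

# Selection rate per group, and each group's rate relative to the
# most-selected group (the "impact ratio").
rates = {g: selected[g] / applicants[g] for g in applicants}
best = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best
    flag = "review" if impact_ratio < 0.8 else "ok"   # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({flag})")
```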
