Pub. 6 2024 Issue 3

AI CAN BE A WEAPON FOR HACKERS: WHAT BUSINESSES SHOULD DO
BY MATTHEW P. MILLER, PRINCIPAL, ADVISORY, CYBER SECURITY SERVICES, KPMG

Artificial intelligence has transformed the way we live and work, offering immense potential for innovation and progress. However, as with any technology, AI also has its drawbacks. Deepfakes, AI-generated synthetic media that can convincingly manipulate or fabricate audio, video, and images, have rapidly gained popularity among cybercriminals as a potent tool for cyberattacks. By leveraging deepfakes, criminals can easily manipulate information, deceive individuals, and exploit vulnerabilities within organizations. The consequences of these attacks can be severe, ranging from financial losses to reputational damage.

As technology continues to advance at a rapid pace, both the opportunities and the risks in detecting and defending against deepfakes are expanding. Many businesses now have access to the same technology and can potentially use it to defend themselves against an attack. Implementing these tools can be challenging, however, due to external regulations and internal barriers such as workforce skill gaps and financial constraints, giving the advantage to malicious actors who may exploit the opportunity first. A May 2024 KPMG survey found that 76% of security leaders are concerned about the increasing sophistication of new cyber threats and attacks.

Hackers have found various ingenious ways to use deepfakes to make their existing cyberattack strategies more credible, including business email compromise (BEC) scams, insider threats, and market manipulation. BEC scams involve impersonating high-ranking executives or business partners to deceive employees into transferring funds or sharing sensitive information, while phishing attacks trick individuals into revealing sensitive information.
Deepfakes make these scams even more convincing: hackers can manipulate audio or video to mimic the voice and appearance of a targeted individual. This increases the likelihood of victims falling for the scam, leading to data breaches, financial fraud, and identity theft.

As for insider threats, deepfakes can be used to create fake videos or audio recordings of employees, which can then be used to blackmail or manipulate them. Hackers exploit these deepfakes to gain unauthorized access to sensitive information or to compromise the integrity of a business or financial entity. Insider threats pose a significant risk, as they can cause substantial financial and reputational damage.

Deepfakes can also be employed to spread false information or manipulate stock prices, generating financial gains for hackers. By creating realistic videos or audio recordings of influential figures, hackers can create panic or generate hype, causing significant fluctuations in the market. This can lead investors to make uninformed decisions and suffer financial losses.

As the threat continues to rise, acquiring the funding needed to detect advanced deepfake technology, which requires maintaining the necessary computing power, forensic algorithms, and audit processes, has been a major challenge.
