From an accuracy perspective, we often forget that AI is as inherently biased as the humans who created it. When fed distorted datasets, especially small ones, the precision of AI suffers greatly. Even more alarming is when faulty conclusions are drawn in industries where lives and livelihoods are at stake. One algorithmic error has the potential to produce the false negative that gets your organization breached. Assets and users can be tagged as business-critical or high-risk to aid with risk scoring, but there is little nuance beyond this weighting.

A Model for AI and Human Coexistence

Thankfully, effective models for AI/human coexistence are available. Human-in-the-loop (HITL) reinforcement learning works like a call center: AI is confined to a handful of predetermined actions, beyond which the situation escalates to an analyst.2 AI does the heavy lifting when it comes to monitoring, alerting human analysts when a possible threat is detected, and the analysts can then address and resolve issues more efficiently.

The benefits speak for themselves. MIT researchers found that when HITL was included in a security system, detection rates more than tripled and false positives were reduced fivefold compared to an unsupervised security system.3

Cybersecurity Foundations for AI With a Human Touch

Follow these recommendations when incorporating AI into your cybersecurity program:

1. Build a strong foundation. "If you're not already doing something, you can't automate it," says John Pescatore, Director of Emerging Security Trends at SANS. Ensure the basics of your cybersecurity program are strong before entertaining the idea of AI.

2. Select use cases wisely. Pick the low-hanging fruit by opting for large, high-quality data sets with readily available subject matter experts who can train the algorithm well. Capgemini Research Institute recommends the following to start: malware detection, intrusion detection, risk scoring, fraud detection and user/machine behavior analytics.4

3. Deploy SOAR workflows. Security orchestration, automation and response (SOAR) tools allow analysts to implement automated workflows with manual decision points where appropriate (see the sketch after this list).

4. Educate cyber analysts on AI technology. It's essential for your staff to understand basic machine learning concepts and how your specific AI tool functions before going live.
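To make the idea of automated workflows with manual decision points concrete, here is a minimal Python sketch of a triage step that automates clear-cut alerts and escalates ambiguous or business-critical ones to a human analyst. It is illustrative only: the Alert fields, triage_alert function and thresholds are hypothetical assumptions, not drawn from any particular SOAR product or from the tools cited in this article.

# Minimal sketch of a SOAR-style workflow with a human-in-the-loop decision point.
# All names and thresholds below are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str              # e.g., "ids" or "edr"
    risk_score: float        # 0.0 (benign) to 1.0 (critical), from an upstream model
    business_critical: bool  # asset/user tagged as business-critical

AUTO_CLOSE_BELOW = 0.2    # assumed threshold: clearly benign, close automatically
AUTO_CONTAIN_ABOVE = 0.9  # assumed threshold: clearly malicious, contain automatically

def triage_alert(alert: Alert) -> str:
    """Route an alert: automate the clear-cut cases, escalate everything else."""
    if alert.business_critical:
        # Business-critical assets always get human review, regardless of score.
        return "escalate_to_analyst"
    if alert.risk_score < AUTO_CLOSE_BELOW:
        return "auto_close"
    if alert.risk_score > AUTO_CONTAIN_ABOVE:
        return "auto_contain_and_notify"
    # Ambiguous middle ground: this is the manual decision point.
    return "escalate_to_analyst"

if __name__ == "__main__":
    for a in [Alert("ids", 0.05, False), Alert("edr", 0.95, False), Alert("ids", 0.55, True)]:
        print(a.source, a.risk_score, "->", triage_alert(a))

The design point is simply that the automation handles the high-confidence ends of the spectrum, while the uncertain middle, and anything touching critical assets, is routed to a person.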
With the buzz surrounding new AI technologies like ChatGPT, it's easy to get carried away with all the good AI could do for cybersecurity. But, like most things in this industry, the smartest course is often the most boring: balanced investment across the sacred triad of people, process and technology is still critical.

Sources

1 Gerald Parham, "4 Ways AI Capabilities Transform Security," Security Intelligence, August 25, 2022. https://securityintelligence.com/posts/ai-capabilities-transform-security/

2 Alessandro Civati, "Human-in-the-loop Model – Why AI Needs Human Intervention," July 5, 2022.

3 K. Veeramachaneni, I. Arnaldo, V. Korrapati, et al., "AI2: Training a Big Data Machine to Defend," 2016 IEEE 2nd International Conference on Big Data Security on Cloud (BigDataSecurity). https://people.csail.mit.edu/kalyan/AI2_Paper.pdf

4 "Reinventing Cybersecurity with Artificial Intelligence: The new frontier in digital security," Capgemini Research Institute, 2019. https://www.capgemini.com/wp-content/uploads/2019/07/AI-in-Cybersecurity_Report_20190711_V06.pdf

Loraine Laguerta is a Technical Writer at SEI Sphere.