Bloomberg BNA – Companies seeking a quick fix with artificial intelligence (AI)-enabled cybersecurity systems shouldn’t neglect the essential role that humans play in protecting computers, data and networks from attack or unauthorized access, industry professionals told Bloomberg BNA.
“While we are certainly moving toward more automation across all industries, there will always be a need for human intervention in cybersecurity,” Symantec Corp. Chief Technology Officer Hugh Thompson told Bloomberg BNA.
AI systems are excellent at increasing efficiency and productivity by processing large data sets, but they aren’t a cybersecurity panacea, the pros said. Companies concerned with cybersecurity legal compliance and effective real-world solutions should understand that cybersecurity and information technology professionals are best suited for tasks requiring a human touch, such as risk analysis, policy formulation and cyberattack response.
With cyberattacks multiplying, “it is not humanly possible to keep pace with a threat growing at that pace,” Bloomberg Intelligence Senior Analyst Anurag Rana told Bloomberg BNA. Using AI-based systems to combat cyberthreats is part of the “natural evolution of software,” he said.
However, AI-based cybersecurity is still in its infancy, and it is difficult to gauge the size of the market for such AI tools at this point, Rana said.
Sixty-two percent of all enterprises will use AI technologies by 2018, according to a 2016 research report by Chicago-based technology company Narrative Science Inc. Cybersecurity professionals in various industries have started to adopt AI systems to help them catch and prevent threats. Mastercard Inc., for instance, uses AI systems to monitor and scan for “abnormal transactions,” and then cybersecurity professionals assess the gravity of the threat, Ron Green, the company’s chief security officer, said during an FT Cyber Security Summit in Washington.
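Green did not describe Mastercard’s internals, but the flag-then-triage pattern he outlines can be sketched in a few lines. Everything below — the z-score rule, the threshold, the sample amounts — is invented for illustration; the point is that the automated step only routes suspicious items to a human, it does not act on them:

```python
from statistics import mean, stdev

def flag_abnormal(history, amount, threshold=3.0):
    """Flag a transaction for analyst review if it deviates
    sharply (by z-score) from the customer's spending history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# The system only *flags*; a human analyst assesses the gravity.
review_queue = []
history = [20.0, 35.0, 25.0, 30.0, 28.0]  # hypothetical past amounts
for txn in [27.0, 5000.0]:
    if flag_abnormal(history, txn):
        review_queue.append(txn)  # routed to a human, not auto-blocked

print(review_queue)  # → [5000.0]
```

A real deployment would score many features per transaction, but the division of labor is the same: the model narrows millions of events to a reviewable queue, and people make the judgment calls.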
Need for Human Touch
As companies deal with larger data sets, the ability of AI “to find the cyberattack-needle in the big-data-haystack outstrips the ability of programmers to manually create the code that performs this analysis,” Oliver Tavakoli, chief technology officer of Vectra Networks in San Jose, Calif., told Bloomberg BNA.
AI systems can identify bad situations, but “the question of what it all means in the context of the business that the company transacts still requires human analysis and judgment by employees of the company,” Tavakoli said.
Ofer Amitai, CEO of network access control company Portnox in Herzliya, Israel, agreed, saying, “an AI system that makes an automatic decision might not take into consideration all the information a human would consider.”
‘Only as Good as Its Teacher’
In addition to acting as the safety net in assessing AI systems’ judgments, human intervention is also necessary for AI systems to learn and evolve, the cybersecurity pros said. “AI is only as good as its teacher,” Amitai said.
According to Uday Veeramachaneni, CEO of PatternEx, an AI and cybersecurity company, AI systems are “successfully learning from humans to figure out the difference between malicious attacks and normal behavior.”
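PatternEx’s actual models aren’t public; the following is a minimal, hypothetical sketch of what “learning from humans” can mean in practice — analysts attach ground-truth labels to alerts, and a toy nearest-centroid classifier is rebuilt from those labels. All class names and feature values here are invented:

```python
def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class AnalystTaughtClassifier:
    def __init__(self):
        self.labeled = {"malicious": [], "normal": []}

    def teach(self, features, label):
        """A human analyst supplies the ground-truth label."""
        self.labeled[label].append(features)

    def predict(self, features):
        """Classify an event by its nearer class centroid."""
        return min(
            self.labeled,
            key=lambda lbl: dist(features, centroid(self.labeled[lbl])),
        )

clf = AnalystTaughtClassifier()
# Analyst-labeled events: [failed_logins, bytes_out_mb] (made up)
clf.teach([50, 900], "malicious")
clf.teach([40, 700], "malicious")
clf.teach([1, 5], "normal")
clf.teach([2, 8], "normal")
print(clf.predict([45, 800]))  # → malicious
```

The sketch also shows why the “teacher” matters: every `teach` call shifts the centroids, so mislabeled or missing analyst feedback degrades every prediction that follows.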
But the AI evolution will be embraced by bad actors as well as those trying to protect their systems and data.
“The bad guys will invariably utilize AI to automate their attacks as well,” Tavakoli warned. In the future, “these skirmishes will become a matter of dueling AI foot soldiers being directed by their human generals.”
As long as humans are directing threats, fully automated cybersecurity is—“by definition”—not possible, Idan Tendler, CEO of Fortscale in San Mateo, Calif., told Bloomberg BNA.
Philip Tully, senior data scientist at Baltimore-based social media security company ZeroFOX, agreed. “AI is far better equipped to increase the detection rate,” but “human penetration testers will be needed to try to poke holes in and subsequently fortify AI defenses,” he said.
Others, including Veeramachaneni, Tavakoli, Amitai and CybeRisk CEO Eyal Harari in Tel Aviv, said fully automated cybersecurity may be possible one day, but not anytime soon. Even if it is eventually achievable, “it may not be the best idea,” said Rebekah Brown, threat intelligence lead at the Portland, Ore., office of Boston-based Rapid7, a security data and analytics software and services company.
“Fully-automated means that we not only rely on automation to prevent and detect attacks, but we also use the technology to make decisions about what to do when things are identified,” Brown told Bloomberg BNA.