On one hand, artificial intelligence can improve cybersecurity in many ways. On the other hand, it's a devastating tool in the hands of malicious hackers. What's the truth?
Artificial intelligence is already a great tool for the cybersecurity professionals currently in the field, and it will only become more useful. The first and most intuitive reason AI will be critical in the battle against cyberattacks is that it will reduce the workload of the cybersecurity workforce. IT professionals work an average of 52 hours a week, but automation will take over many menial tasks, giving them some breathing room between one attack and the next.
Machine learning-based algorithms will also adapt to new threats faster than humans can, quickly spotting the similarities between new generations of malware and cyberattacks and other, more familiar threats. And as the COVID-19 pandemic pushed remote working from 6% to 35% of employees, enterprises' attack surfaces have expanded dramatically.
Only the rapid deployment of AI-driven data analysis can provide a full enough understanding of such diversified user behavior and data activity to stanch these attacks. In due time, an AI that has "learned" enough will be able to detect and deal with the vast majority of relatively simple threats on its own, freeing up an enormous amount of time for tech employees.
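To make the idea concrete, here is a deliberately tiny sketch of the kind of statistical anomaly detection such systems build on. Real platforms use far richer models; the function name, the failed-login numbers, and the threshold below are all illustrative assumptions, not taken from any specific product.

```python
from statistics import median

def flag_anomalies(counts, threshold=3.5):
    """Return the indices of counts whose modified z-score exceeds `threshold`.

    The modified z-score uses the median absolute deviation (MAD), so a
    single extreme spike cannot inflate the baseline it is measured against.
    """
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:  # all values (nearly) identical: nothing stands out
        return []
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# Hypothetical hourly failed-login counts; the spike at index 5
# is the sort of deviation a brute-force attempt would produce.
failed_logins = [12, 9, 11, 10, 13, 240, 11, 12]
print(flag_anomalies(failed_logins))  # → [5]
```

A human analyst reviewing eight numbers is trivial; the point is that the same comparison scales to millions of events per hour, which is where the time savings come from.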
Finally, AI-based analytics platforms that learn from both structured and unstructured data are more flexible and more efficient at correlating and making sense of information detected by several tools at once. Talk to a few cybersecurity professionals and you will likely find they know very well that their current tools lack the cohesion and accuracy needed to produce data they can trust.
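The cross-tool correlation problem can be sketched in a few lines: group alerts from independent feeds by a shared indicator and surface only the overlaps. The feed contents, field names, and time window below are invented for illustration; production platforms correlate far more signals than a source IP.

```python
from collections import defaultdict

# Hypothetical alerts from two separate tools; field names are illustrative.
firewall_alerts = [
    {"ip": "203.0.113.7", "event": "port_scan", "ts": 100},
    {"ip": "198.51.100.2", "event": "blocked_conn", "ts": 300},
]
endpoint_alerts = [
    {"ip": "203.0.113.7", "event": "suspicious_process", "ts": 160},
]

def correlate(*alert_feeds, window=120):
    """Group alerts from independent feeds by source IP, keeping only IPs
    seen in more than one feed within `window` seconds of each other."""
    by_ip = defaultdict(list)
    for feed_id, feed in enumerate(alert_feeds):
        for alert in feed:
            by_ip[alert["ip"]].append((feed_id, alert))
    correlated = {}
    for ip, hits in by_ip.items():
        feeds = {f for f, _ in hits}
        times = [a["ts"] for _, a in hits]
        if len(feeds) > 1 and max(times) - min(times) <= window:
            correlated[ip] = [a["event"] for _, a in hits]
    return correlated

print(correlate(firewall_alerts, endpoint_alerts))
# → {'203.0.113.7': ['port_scan', 'suspicious_process']}
```

An alert that looks like noise to the firewall alone, and like noise to the endpoint agent alone, becomes a coherent incident once the two feeds are joined; that joining is exactly what analysts say their disconnected tools fail to do.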
The widespread use of AI comes with its own risks to cybersecurity, as a panel of 26 British and American experts explained in the 101-page report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation."
First, it's easy to see that the same benefits cybersecurity experts will enjoy from machine learning algorithms apply to hackers and scammers as well. Attackers can use automation to make finding exploitable vulnerabilities easier and quicker, for example.
AI can also "level the playing field" for attackers, who usually rely on a much smaller workforce to coordinate their attacks. By easing the existing trade-off between the scale and the efficacy of an attack, automation will make labor-intensive attacks such as spear phishing both more efficient and more frequent.
Some of AI's benefits, however, are specific to attackers, such as using speech synthesis for impersonation. The deep learning language model GPT-3 can let cybercriminals simulate the nuances and behavior of a real person far more realistically, generating much more believable phishing attacks.
More generally, AI-based bots and malware can, right now, pose a much greater threat to the average user than to cybersecurity experts. AI can be used to steal users' data, coordinate large botnets and poke through even the best VPNs a user can hope to buy. The domino effect of exploiting ordinary people's vulnerabilities can be truly devastating.
The (Not So) Ugly Truth
The bottom line is that AI is going to keep changing the cybersecurity landscape as it evolves. It doesn't matter much whether it's more effective for attackers or defenders right now. All of cyberwarfare already revolves around it, so much so that even the U.S. Department of Defense has acknowledged that AI cyberdefense is the best answer to AI cyberattacks.
It's neither "good" nor "evil"; it's just a new weapon that, once introduced and established, revolutionizes the field of battle. It's the equivalent of the introduction of firearms during the Renaissance: Things are never going to be the same.