Are Hackers Using AI with Malicious Intent?

KEY TAKEAWAYS

AI has the potential to protect data like never before, but it also has the potential to steal data like never before. Security pros and hackers alike are trying to make the most of this technology.

Cybersecurity professionals are looking at artificial intelligence (AI) with both enthusiasm and trepidation. On the one hand, it has the potential to add entirely new layers of defense for critical data and infrastructure, but on the other, it can also be used as a powerful weapon to thwart those defenses without leaving a trace.

Like any technology, AI has both strengths to be leveraged and weaknesses that can be exploited. The challenge for today’s security experts is to stay one step ahead of the bad guys, an effort that should begin with a clear understanding of exactly how AI can be used as an offensive data weapon.

Hacking AI

For one thing, says Wired’s Nicole Kobie, we should recognize that, just like any data environment, AI itself can be hacked. At the heart of every intelligent process is an algorithm, and algorithms respond to the data they receive. Researchers have already shown how neural networks can be tricked into classifying a picture of a turtle as a rifle, and how a simple sticker on a stop sign can cause an autonomous car to drive straight into an intersection. This kind of manipulation is possible not only after AI is deployed but also while it is being trained, potentially giving hackers the ability to wreak all kinds of havoc without having to touch the client enterprise’s infrastructure.
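
To make the turtle-and-stop-sign trick concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one widely published technique for generating such adversarial inputs. The tiny untrained model below is a stand-in for a real image classifier, not the setup the researchers actually used:

```python
import torch
import torch.nn as nn

# A toy stand-in for a real image classifier; untrained here, since
# the attack mechanics are the same either way.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()

def fgsm(image, label, epsilon=0.1):
    """Return a copy of `image` perturbed to increase the model's loss."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel slightly in the direction that raises the loss;
    # the result can look unchanged to a human yet flip the prediction.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 32, 32)            # stand-in for a 32x32 photo
y = model(x).argmax(dim=1)              # the model's original prediction
x_adv = fgsm(x, y)
print("before:", y.item(), "after:", model(x_adv).argmax(dim=1).item())
```

The unsettling part is how little the attacker needs: access to the model’s gradients (or even just its outputs, for black-box variants) is enough to craft an input that humans and machines see completely differently.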

While there is certainly no shortage of malcontents whose only goal is to hurt people and cause terror, the real prize in the hacking game is password cracking and all the theft/extortion possibilities that come with it. Last year, the Stevens Institute of Technology created a program to demonstrate the power that AI brings to this process. Researchers infused a number of known password-cracking programs with intelligent algorithms trained to guess likely letter-number-special character combinations, and within minutes they had acquired more than 10 million LinkedIn passwords. As more passwords are discovered, of course, they can be used to train these learning algorithms, so they become more effective over time even when common defense measures, such as routinely changing a password, are employed. (For more on passwords, see Simply Secure: Changing Password Requirements Easier on Users.)
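
The underlying idea is simple enough to sketch. Below is a toy character-bigram model, far cruder than the research tools described above: it learns character patterns from a placeholder list of breached passwords and ranks candidate guesses by how human-like they look. All data here is invented for illustration:

```python
import math
from collections import defaultdict

# Placeholder "breach" corpus; a real system would train on millions
# of leaked passwords.
leaked = ["password1", "letmein!", "qwerty123", "sunshine9", "dragon88"]

# Count character-to-character transitions, with ^ and $ marking the
# start and end of each password.
counts = defaultdict(lambda: defaultdict(int))
for pw in leaked:
    for a, b in zip("^" + pw, pw + "$"):
        counts[a][b] += 1

def score(guess):
    """Log-likelihood under the bigram model: higher = more human-like."""
    total_score = 0.0
    for a, b in zip("^" + guess, guess + "$"):
        row = counts.get(a, {})
        total = sum(row.values())
        # Add-one smoothing: unseen transitions are unlikely, not impossible.
        total_score += math.log((row.get(b, 0) + 1) / (total + 2))
    return total_score

# Human-style candidates rank above random strings, so trying guesses
# in this order yields far more hits per attempt.
for guess in sorted(["password9", "letmein1", "x7#Qz!Lp"], key=score, reverse=True):
    print(f"{guess:10s} {score(guess):7.2f}")
```

The point of the toy is the feedback loop: every newly cracked password becomes fresh training data, which is why these systems keep improving with each breach.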

Is it possible that these tools are already being used by the criminal underground? With cloud-based AI services readily available and the dark web acting as a clearing house for all manner of crypto software, it would be surprising if this were not the case. Threat analysis firm Darktrace says it is seeing early signs of popular malware programs like TrickBot exhibiting contextual awareness in their quests to steal data and lock down systems. They seem to know what to look for and how to find it by studying target infrastructure, and then decide for themselves the best way to avoid detection. This means the program no longer needs to maintain contact with the hacker through command and control servers or other channels, which is usually one of the most effective means of tracking the perpetrator.

Meanwhile, traditional phishing scams are starting to look more and more genuine, in large part because AI tools can make the initial email appear to come from a trusted source. Natural language generation, for instance, can mimic a person’s writing style, and when combined with readily available data like executive names and email addresses, it can produce a missive so realistic that it fools even close associates. The average consumer is equally susceptible, given AI’s ability to mine all manner of data to infuse a fraudulent email with personalized information.
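
The defensive flip side is worth sketching here, since it is the same statistical machinery pointed the other way: mail filters counter such messages with learned classifiers of their own. Below is a deliberately tiny text classifier in that spirit, trained on an invented four-message corpus (real filters train on millions of labeled emails and far richer features than raw text):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# An invented four-message corpus: 1 = phishing, 0 = legitimate.
emails = [
    "Please verify your account password immediately via this link",
    "Your invoice is overdue, confirm wire transfer details today",
    "Lunch meeting moved to 1pm, see the updated agenda",
    "Here are the slides from yesterday's design review",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus a linear classifier: a bare-bones version of
# the statistical filtering that mail providers run at scale.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

test = ["Urgent: confirm your login details for the payroll portal"]
print(clf.predict(test))         # likely [1], i.e. flagged as phishing
print(clf.predict_proba(test))   # confidence for each class
```

The arms-race dynamic is obvious in miniature: the better the generator gets at sounding like a trusted colleague, the less signal a word-frequency filter like this one has to work with.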

Fighting Back

As mentioned above, however, AI is a two-way street. While it may allow hackers to run circles around traditional security systems, it also makes current security systems much more effective. According to the Insurance Journal, Microsoft recently foiled an attempted hack of its Azure cloud when its AI-infused security regime flagged a suspicious intrusion from a remote site. The attempt would have gone unnoticed under earlier rules-based protocols, but AI’s ability to learn and adapt to new threats should dramatically improve the enterprise’s ability to protect itself, even as data and infrastructure push past the traditional firewall into the cloud and the internet of things. All of the top hyperscale cloud providers are aggressively deploying AI across their security footprints, since the sooner it is put into action, the more it will have learned by the time it encounters AI-empowered hacks. (To learn more, see How AI Advancements Are Affecting Security, Cybersecurity and Hacking.)
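
What separates such a system from a rules engine is that nobody writes the rule. Here is a minimal sketch of the idea using an isolation forest on invented login telemetry; the feature choices and numbers are illustrative assumptions, not Microsoft’s actual pipeline:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented login telemetry for training: [hour of day, MB transferred,
# failed logins before success]. This baseline defines "normal."
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(10, 2, 500),     # logins cluster around business hours
    rng.normal(50, 15, 500),    # typical session transfer volume
    rng.poisson(0.2, 500),      # the occasional mistyped password
])

# No hand-written rules: the forest learns the shape of normal activity.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal)

# A hypothetical intrusion: 3 a.m. login, bulk transfer, brute-forced entry.
suspicious = np.array([[3, 900, 12]])
print(detector.predict(suspicious))   # -1 = flagged as anomalous
print(detector.predict(normal[:3]))   # mostly 1 = consistent with baseline
```

A hand-written rule catches only the patterns someone thought to write down; a model like this flags anything that falls outside the learned baseline, including attack shapes nobody anticipated.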

In this way, AI is merely the latest escalation in the tit-for-tat security war that has been ongoing for decades. As new threats emerge, new defenses rise to meet them, with the same underlying technologies fueling both sides.

If anything, AI will likely speed up this process while removing many of the hands-on activities from human operators. Will this be a good thing or a bad thing for today’s cyber warriors? Probably a mix of both, as white hats and black hats alike give up the nuts and bolts of coding their attacks and defenses and concentrate on the more strategic aspects of modern-day cyber warfare.

Arthur Cole
Technology Writer

Arthur Cole is a freelance technology journalist who has been covering IT and enterprise developments for more than 20 years. He contributes to a wide variety of leading technology web sites, including IT Business Edge, Enterprise Networking Planet, Point B and Beyond and multiple vendor services.