Artificial intelligence (AI) is arguably the most influential technology advancement currently reshaping the digital world. Developers and companies around the globe are engineering new ways to build machine-learning-based features into every piece of software, every platform and every tool out there.
It's a rather obvious consequence, then, that AI is affecting security (and cybersecurity) in both positive and negative ways: it's a potent tool in the hands of security specialists and hackers alike, in a never-ending game of cops and robbers.
The Good vs Evil AI Cybersecurity Battle
Being a cybersecurity professional is anything but simple. IT professionals are some of the most hardworking employees around, with strenuous work shifts of up to 52 hours a week. Anything that can automate their complicated and tiresome tasks, especially the most menial and repetitive ones, is a welcome boon, and smart AI solutions fit that description perfectly. Machine-learning-based software is also particularly efficient at spotting similarities between different cyber-threats, especially when the attacks are coordinated by other automated programs. The icing on the cake is that newer AI-based algorithms are becoming better at making sense of the data coming from various tools, and at spotting the critical correlations that humans might miss.
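To give a flavor of the kind of automated threat-spotting described above, here is a minimal sketch: flagging hours whose failed-login counts deviate strongly from the norm. The data and the two-standard-deviation threshold are purely hypothetical; production systems use trained models, not a simple z-score.

```python
# Minimal sketch of automated anomaly spotting (hypothetical data).
# Real security tools use trained ML models; a z-score stands in here.
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices of values whose z-score exceeds the threshold."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma and abs(c - mu) / sigma > threshold]

# Hourly failed-login counts; the spike at index 5 mimics a coordinated attack.
hourly_failed_logins = [12, 9, 11, 10, 13, 250, 12, 8]
print(flag_anomalies(hourly_failed_logins))  # -> [5]
```

A tool like this would run continuously over event streams, surfacing only the outliers so that analysts are not buried in routine noise.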
It sounds like AI is helping the "good guys" win their battle against the evil hackers, doesn't it?
Well, that's only half the truth, as these perfectly neutral machines are actually helping both sides equally. A panel of 26 experts from the United Kingdom and the United States recently published an interesting paper: "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation." In it, the authors explain how AI can easily become a threat in the wrong hands, as it is a potent weapon for piercing even the most seemingly unbreakable cyber-defenses. Attackers have traditionally relied on small workforces to coordinate their attacks, but AI-driven automation lets them operate at a far larger scale: if they can recruit vast armies of machine-learning-powered bots, IoT botnets will become a much bigger threat. "Smart" malware powered by the newest algorithms can become far less detectable, and labor-intensive attacks such as spear phishing can be carried out efficiently even by small teams.
Weaponized AI can also pose a much more serious threat to the average user than to the cybersecurity expert, making the digital world a far less secure place to roam. As an example, how many people know that even some of the best VPNs leak their DNS requests through Chrome extensions? If all the data leaked every day by millions of users is collected through automation, an efficient AI-powered tool can make all the correlations needed to coordinate massive numbers of attacks against defenseless users. The domino effect of these strategies could have truly devastating consequences, with cybercrime already costing the world about $650 billion per year. (For more on VPN worries, see Using a Free VPN? Not Really. You're Most Likely Using a Data Farm.)
Fraud Detection and Security
AI-powered biometrics can identify, measure and analyze not just physical and facial features, but also specific human behaviors that could raise a red flag. They can help identify a potential criminal who is planning, say, a bank robbery or a theft, and help local security forces prevent it before it even happens. Biometrics can work side by side with text analytics and natural language processing (NLP), two technologies that use machine learning to analyze complex text, understanding both the structure of sentences and the intentions behind them.
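As a highly simplified stand-in for the text analysis just described, the sketch below scores a message against a hypothetical watch-list of fraud-related terms. Real NLP systems use trained language models rather than keyword matching; the term list and scoring scheme here are illustrative assumptions only.

```python
# Toy stand-in for NLP-based fraud text analysis (hypothetical term list).
import re

FRAUD_TERMS = {"wire", "transfer", "urgent", "password", "verify"}

def fraud_score(text):
    """Fraction of watch-list terms appearing in the text (0.0 to 1.0)."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return len(words & FRAUD_TERMS) / len(FRAUD_TERMS)

msg = "URGENT: please verify your password and wire the transfer today"
print(fraud_score(msg))  # -> 1.0
```

Even this crude approach illustrates the pipeline: normalize the text, extract features, and reduce them to a score a downstream system can act on.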
But humans can be understood even beyond their verbal and physical features. Emotion recognition is a fascinating new technology that allows software to "read" human emotions through a mix of advanced image and audio processing. Facial expressions are deeply intertwined with mood, personality and human communication, and even "micro-expressions" can be captured by machines to anticipate what a person is going to do.
Together, all these systems can be integrated into security and fraud detection platforms. Law enforcement can use them to gather information during interrogations, predict behaviors, limit risky situations and even fight terrorism. AI and machines are becoming the new "watchdogs" that will assist all kinds of security forces. Beware, though – AI can also be used by people with malicious intent, for example by exploiting speech synthesis software for impersonation. (For more on fraud detection, see Machine Learning & Hadoop in Next-Generation Fraud Detection.)
Disaster Response
When a disaster or an emergency occurs, security personnel need to react with flexibility and agility, and speed is of the utmost importance. A management system must be in place to process all the available information, separate the most relevant pieces from the useless ones, and collect data coming from multiple sources quickly and reliably. Personnel must then be provided with a safe, actionable picture that is the sum of all this information.
It's easy to understand how hard it is for a human, or even a team of humans, to do all this under the pressure of knowing that a wrong split-second decision may cost many lives. Artificial intelligence technologies can be applied to disaster response to ease the burden of dealing with emergency situations, and they help make emergencies quicker and easier to handle for several reasons.
First, AI is great at making predictions, and at analyzing and assessing the extent of damage and risk in a given area. That way, teams can prioritize their interventions, helping first those who need it most. Image recognition, data extrapolation and classification can be done by AI at much higher speeds, using pictures and data coming, for example, from satellites or crunched from crowd-sourced mapping material.
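The prioritization step above can be sketched very simply: rank areas by a risk score combining a damage estimate with population. The district names, damage fractions and populations below are invented placeholders for the kind of values a model might derive from satellite imagery; a real system would use far richer inputs.

```python
# Hedged sketch of intervention prioritization (all values hypothetical).
areas = [
    {"name": "district-a", "damage": 0.9, "population": 1200},
    {"name": "district-b", "damage": 0.4, "population": 9000},
    {"name": "district-c", "damage": 0.7, "population": 3000},
]

def risk_score(area):
    """Simple proxy: estimated damage fraction times people affected."""
    return area["damage"] * area["population"]

# Highest-risk areas first: district-b, district-c, district-a.
for area in sorted(areas, key=risk_score, reverse=True):
    print(area["name"])
```

Note how the ranking differs from sorting by damage alone: a moderately damaged but densely populated district outranks a badly damaged but sparsely populated one.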
Speech-to-text and analytics systems such as IBM’s Watson are already being employed to listen to emergency calls and ease the workflow of contact centers during disasters. AI helps reduce call times, provides accurate information to emergency response teams, and can plan the quickest routes. Even images from social networks such as Facebook, Instagram, YouTube and Twitter can be analyzed by AI to filter real information from fake, or to find missing people.
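To illustrate what happens after speech-to-text, here is a minimal sketch of routing a call transcript to a response team. The keyword table and team names are hypothetical; deployed systems classify transcripts with trained models rather than keyword lookups.

```python
# Minimal sketch of transcript triage (hypothetical routes and keywords).
ROUTES = {
    "fire": "fire_department",
    "flood": "water_rescue",
    "injured": "medical",
}

def route_transcript(transcript):
    """Send a transcript to the first matching team, else general dispatch."""
    text = transcript.lower()
    for keyword, team in ROUTES.items():
        if keyword in text:
            return team
    return "general_dispatch"

print(route_transcript("There is a flood on Main Street"))  # -> water_rescue
```

Automating even this first routing step frees human dispatchers to handle the ambiguous calls that genuinely need judgment.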
AI is already being incorporated into many security tools and solutions, from video surveillance cameras to intrusion alarms and even the mobile chipsets that provide access control; soon it will be everywhere. Rather than a trend of some distant future, the integration of AI software into security has already become the new market standard.