2023 has been a massive year for generative AI, with GPT-4, GPT-4V, Google Bard, PaLM 2, and Google Gemini all launching this year as part of a fast-brewing arms race to automate day-to-day workflows.
To get some insight into what’s next, Techopedia asked some of the top CEOs in enterprise tech to find out how they believe AI will impact organizations in 2024 and the top AI trends they see emerging. The comments below have been edited for brevity and clarity.
The Rise of AI-Fueled Malware
We will soon see the rise of generative AI-fueled malware that can essentially think and act on its own. This is a threat the U.S. should be particularly concerned about coming from nation-state adversaries.
We will see attack patterns become more polymorphic, meaning the artificial intelligence (AI) carefully evaluates the target environment, thinks on its own to find the ultimate hole in the network or the best area to exploit, and transforms accordingly.
Rather than having a human crunching code, we will see self-learning probes that can figure out how to exploit vulnerabilities based on changes in their environment.
Patrick Harr, CEO at SlashNext
Passkey Adoption Will Increase
There’s a dark side of the AI boom that not many consumers or businesses have realized: cybercriminals are now able to make their phishing attacks more credible, frequent, and sophisticated by leveraging the power of generative AI, such as WormGPT. As we enter 2024, this threat will grow in size and scale.
Against this backdrop, we’ll reach the tipping point for mass passkey adoption (although there will still be a significant period of transition before we reach a truly passwordless future).
However, passkeys will ultimately surpass passwords as the status quo technology once the consequences of not adopting a more secure, phishing-resistant form of authentication become clear in the wake of increasingly harmful and costly cyberattacks.
John Bennett, CEO at Dashlane
Adding Safeguards to AI Models
Safety and privacy must continue to be a top concern for any tech company, regardless of whether it is AI-focused or not. When it comes to AI, ensuring that the model has the necessary safeguards, feedback loop, and, most importantly, mechanism for highlighting safety concerns is critical.
As organizations continue to rapidly adopt AI in 2024 for its efficiency, productivity, and data-democratization benefits, it's important to ensure that as concerns are identified, there is a reporting mechanism to surface them, in the same way a security vulnerability would be identified and reported.
David Gerry, CEO at Bugcrowd
LLMs Will Reshape Cloud Security
In 2024, the evolution of generative AI (GenAI) and large language models (LLMs), initiated in 2023, is poised to redefine the cybersecurity chain, elevating efficiency and minimizing manpower dependencies in cloud security.
One example is detection tools fortified by LLMs. We’ll see LLMs bolster log analysis, providing early, accurate, and comprehensive detection of both known and elusive zero-day attacks.
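As a toy illustration of the kind of LLM-assisted log triage described here, the sketch below batches log lines through a classifier and surfaces the suspicious ones. The `classify_line` keyword heuristic is a hypothetical stand-in for a real LLM call (e.g., a prompt asking the model to label a line as benign or suspicious), not a production detector:

```python
# Sketch of an LLM-assisted log-triage pipeline. classify_line() is a
# placeholder for a real LLM classification call; the keyword list is
# purely illustrative.

SUSPICIOUS_MARKERS = ("failed login", "privilege escalation", "unknown binary")

def classify_line(line: str) -> str:
    """Stand-in for an LLM call that labels a log line."""
    lowered = line.lower()
    return "suspicious" if any(m in lowered for m in SUSPICIOUS_MARKERS) else "benign"

def triage(log_lines: list[str]) -> list[str]:
    """Return only the lines flagged for analyst review."""
    return [line for line in log_lines if classify_line(line) == "suspicious"]

logs = [
    "2024-01-05 10:02:11 user alice logged in",
    "2024-01-05 10:02:45 failed login for user root from 203.0.113.7",
    "2024-01-05 10:03:01 scheduled backup completed",
]
print(triage(logs))  # only the failed-login line is flagged
```

In a real deployment, the per-line heuristic would be replaced by batched model inference over log windows, which is where the pattern- and anomaly-detection gains described above would come from.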
The analytical prowess of LLMs will uncover subtle, intricate patterns and anomalies, allowing for the identification and mitigation of complex threats and enhancing the overall security posture.
We’re going to see GenAI intensify the sophistication of both cyber attacks and defense mechanisms, necessitating innovative strategies and fostering the creation of agile, responsive security frameworks.
The amalgamation of AI and LLMs will streamline security operations and protocols, enabling professionals to concentrate on strategic analysis and innovation while ensuring robust detection and counteraction of threats. This will fortify the integrity, confidentiality, and availability of information in the cloud.
Chen Burshan, CEO of Skyhawk Security
Data Security ‘Risk Reduction’ Will Evolve
The concept of ‘risk reduction’ in data security will evolve in the next few years, in line with the rise in the use of Generative AI technologies.
Until recently, organizations implemented data retention and deletion policies to ensure minimal risk to their assets. As GenAI capabilities become more widespread and valuable for organizations, they will become more motivated to hold on to data for as long as possible in order to use it for training and testing these new capabilities.
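A minimal sketch of the kind of retention-and-deletion policy described above, assuming a simple per-category maximum age (the categories and windows are hypothetical, not any specific compliance rule):

```python
from datetime import datetime, timedelta

# Hypothetical per-category retention windows in days; real policies vary
# by jurisdiction and data type.
RETENTION_DAYS = {"logs": 90, "customer_records": 365}

def should_delete(category: str, created_at: datetime, now: datetime) -> bool:
    """True if a record has outlived its retention window (default 30 days)."""
    max_age = timedelta(days=RETENTION_DAYS.get(category, 30))
    return now - created_at > max_age

now = datetime(2024, 1, 1)
print(should_delete("logs", datetime(2023, 9, 1), now))   # True: older than 90 days
print(should_delete("logs", datetime(2023, 12, 1), now))  # False: within the window
```

The shift the author describes is that GenAI training needs push organizations away from rules like this one and toward retaining data indefinitely, which moves the risk-reduction burden elsewhere.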
Data security teams will, therefore, no longer be able to address risk by deleting unnecessary data, since the new business approach will be that any and all data may be needed at some point. This will bring about a change in how organizations perceive, assess, and address risk reduction in data security.
Liat Hayun, CEO and co-founder at Eureka Security
An Erosion of Trust Surrounding AI Decision-Making
In a rapidly evolving technological landscape, the parallels between the adoption of cloud services and the current surge in artificial intelligence (AI) implementation are both striking and cautionary.
Just as organizations eagerly embraced cloud solutions for their transformative potential in innovation, the haste of adoption outpaced the development of robust security controls and compliance tools.
Consequently, this created vulnerabilities that malicious actors were quick to exploit, leaving enterprises grappling with unforeseen challenges.
As we witness a similar trajectory in the adoption of AI technologies, it becomes imperative to draw lessons from the past and proactively address the looming concerns. The rapid integration of AI into various facets of business operations is undeniably transformative, but the lack of comprehensive visibility and enterprise control raises red flags.
Much like in the early days of cloud adoption, organizations are navigating uncharted territories with AI, often without the necessary safeguards in place. The consequences of insufficient controls are twofold: first, a heightened risk of security breaches, and second, a potential erosion of trust as stakeholders question the ethical implications and transparency surrounding AI decision-making.
Varun Badhwar, CEO and co-founder at Endor Labs
Developers Will Be More Efficient
This is a two-pronged topic for leadership to really think about in 2024. On one hand, CISOs and IT leaders need to think about how we're going to securely consume generative AI into our own source code "kingdoms" within the enterprise.
With the likes of Copilot and ChatGPT, developers and organizations will be a lot more efficient, but this also introduces more potential vulnerabilities we need to worry about.
On the other side, we need to think about how application security vendors will allow CISOs and IT leadership to leverage generative AI in their tools to run their programs more efficiently and drive productivity: using AI to speed up security outcomes like security policy generation, identifying patterns and anomalies, finding and prioritizing vulnerabilities faster, and assisting with the incident response process.
Lior Levy, CEO and co-founder at Cycode
Video Generation Goes Mainstream
Over the past year, video generative models (text-to-video, image-to-video, video-to-video) became publicly available for the first time.
In 2024, we’ll see the quality, generality, and controllability of those models continue to improve rapidly, and we’ll end the year with a non-trivial percentage of video content on the internet incorporating them in some capacity.
Additionally, as large models become faster to run and we develop more structured ways of controlling them, we'll start to see more kinds of novel interfaces and products emerge around them that go beyond the standard prompt-to-X or chat-assistant paradigms.
Much of the focus of conversation over the past year has been on the capabilities of individual networks trained end-to-end. In practice, however, a pipeline of models usually powers AI systems deployed in real-world settings, and more frameworks will appear for building modular AI systems.
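The modular-pipeline idea can be sketched as a chain of stages, each an ordinary function wrapping one model. The stage names below (`transcribe`, `summarize`) are hypothetical placeholders standing in for separate models, not any particular framework's API:

```python
from typing import Callable

# Each stage maps one payload to the next; in practice, each would wrap
# a separately trained model rather than a toy string transform.
Stage = Callable[[str], str]

def build_pipeline(*stages: Stage) -> Stage:
    """Compose independent model stages into a single callable pipeline."""
    def run(payload: str) -> str:
        for stage in stages:
            payload = stage(payload)
        return payload
    return run

def transcribe(audio_ref: str) -> str:
    return f"transcript of {audio_ref}"

def summarize(text: str) -> str:
    return f"summary({text})"

pipeline = build_pipeline(transcribe, summarize)
print(pipeline("meeting.wav"))  # summary(transcript of meeting.wav)
```

The point of the composition is that stages can be swapped, reordered, or upgraded independently, which is what the emerging frameworks for modular AI systems aim to formalize.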
We will also see AI powering research itself. While LLM code assistants like Copilot have seen wide adoption, there hasn't been much tooling that targets speeding up AI research workflows specifically, e.g., automating the repetitive work involved in developing and debugging model code, training and evaluating models, and so on. We'll likely see more of those tools emerge in the coming year.
Anastasis Germanidis, CTO and co-founder at Runway
If even just a few of this panel's top AI trends come true, the next 12 months will be an interesting and transformative time for enterprises and individuals alike.
While organizations are still getting to grips with the security concerns of AI-generated malware and phishing attempts, there is also plenty of opportunity for employees and stakeholders to automate workflows and unlock value in 2024.