How Cybercriminals Think in 2024 — What We Have Learned

Why Trust Techopedia

Proactive cybersecurity — that which thinks like attackers —  while gaining ground is also transforming. As cybercriminals reverse engineer artificial intelligence and generative AI systems to launch attacks, security disciplines, such as pen-testers, red teams, and bug bounty programs, adapt to the new ways of the digital criminal underworld.

One main battlefront in this evolving threat landscape is identity security, an attack vector rising in popularity among bad actors.

When exploited, identity security failures can lead to countless attacks, including file-less attacks, privilege escalation attacks, and social engineering.

Key Takeaways

  • Traditional security tools are ineffective against AI-powered attacks, which are becoming increasingly common.
  • A new approach to security is needed,  focusing on identifying and eliminating potential attack paths before they can be exploited (AI-powered security platforms).
  • Identity breaches are a major vulnerability, so organizations need to continuously monitor activity for suspicious behavior.
  • Ethical hackers and security professionals using AI tools can help organizations identify weaknesses in their AI systems before they are exploited by malicious actors.

Proactive Defense in the Age of Automation

A recent research, “Cybersecurity in the age of offensive AI” found that two-thirds (65%) of security leaders expect offensive AI to be the norm for cybercriminals and used in most cyber attacks.

Techopedia talked to Sunil Gottumukkala, CEO and Co-Founder of Averlon, a proactive cybersecurity company, to get the inside story on how the industry is changing.

“In contrast to traditional tools that inundate teams with reactive alerts, while overlooking the true root cause, Averlon’s AI-powered platform pinpoints specific cloud security issues that pave the way for real-world attacks.”

The End of Red Team Blue Team?

Gottumukkala said that CISOs historically relied on the red team-offensive security teams within their organization or hired professionals to perform offensive security exercises against critical services.

Advertisements

“While this can be beneficial in discovering potential threats in the critical services, it doesn’t scale well. You cannot hire enough offensive security engineers to target a large enterprise.”

Gottumukkala warned that when the scale of attacks increases, it becomes exponentially harder for humans to comprehend and reason across many data points.

Graphs and Lists: Here Comes AI

As the digital attack surface expands and bad actors turn to AI to scale their attacks, a big question emerges: How can CISOs and cybersecurity teams get inside the head of an AI-focused criminal?

Responding to the question, Gottumukkala said that there is a well-known saying in the cybersecurity field that ‘Defenders think in lists and attackers think in graphs’.

Graphs are much better at representing the current state of an environment, whether it is the relationships among assets on the basis of network connectivity, access policies, and more. Cybersecurity teams can also use graphs to represent vulnerabilities using graphs, Gottumukkala explained.

“AI can play a major role in reasoning among different aspects of these relationships within this large graph.

“If we can teach the AI the basic fundamentals of the breach (MITRE ATT@K is a good framework for this), it can discover all the paths the attackers can take in the graph that would result in a successful breach,” Gottumukkala revealed.

“This means that defenders with the benefit of AI can discover potential attacks that exist in their environment and eliminate them.”

Identity Security Trends and Risks Evolving

On May 28, the Identity Defined Security Alliance (IDSA), a nonprofit that provides vendor-neutral resources to help organizations reduce the risk of a breach by combining identity and security strategies, released the 2024 Trends in Identity Security report.

The report found that 84% of identity stakeholders reported direct business impacts, an increase from 68% in 2023. Additionally, 90% of companies reported an identity-related incident in the last year, with phishing being the highest at 69%, followed by stolen credentials at 37%.

The types of attacks listed in the report can be extremely challenging to spot, remediate, or patch, as cybercriminals are effectively executing illegal actions using “authorized” IT tools and resources such as stolen credentials. From a digital forensics and live threat analysis perspective, these techniques raise no red flags for suspicious behaviors.

Seth Geftic, Vice President of Product Marketing at Huntress, a company that provides managed detection and response for endpoints and identities, recognized the value of the report and highlighted the need to focus on preventing identity compromises to reduce risk.

“The report would be more comprehensive if it also touched on the need to go beyond protecting the identity perimeter and start monitoring suspicious activity by implementing an identity threat detection and response strategy (ITDR).”

Geftic added that challenges like top identity security challenges, including complexity, lack of budget, lack of people, and lack of expertise (mentioned in the report) are the exact same challenges they are seeing in other areas of security, like endpoint security. These are not necessarily unique to identity.

“Identity security is starting to go through a similar evolution to that experienced with endpoint security over the past decade,” Geftic explained, referring to the transformation of endpoint security to EDR and later MDR.

“Identity security is now changing similarly, starting with the assumption that credentials and sessions can be compromised.

 

“By leveraging an identity threat detection and response strategy, defenders can identify indicators of compromise related to identity-related hacker tradecraft, such as account takeover and business email compromise.”

Bug Bounty Programs, Ethical Hackers, and HackerOne

One of the most cost-efficient tools that companies use to gain to access a large pool of cybersecurity expertise is through bug bounty programs. These programs have become incredibly popular among big tech.

For example, Netflix recently announced that since its program began they have already paid out over $1 million via bug bounty prices and rewards.

Michiel Prins, Co-Founder and Director of Solutions Architecture at HackerOne, the largest community of ethical hackers in the world, spoke to Techopedia to explain how AI is changing the “hacker mentality”.

“The concept and value of the hacker mentality haven’t changed for AI, just the techniques to find flaws are slightly different.”

Prins spoke about how new tech like GenAI brings unknown risks associated with its implementation. Even if organizations actively work to maximize security and adopt a “secure by design” mindset, catching everything is challenging when you aren’t fully aware of its risks and edge cases.

“Most security mature organizations understand this and work with ethical hackers to help find those unknown risks to anticipate vulnerabilities before they become a real problem for their business,” Prins said.

AI-Red Teaming Attacks

Prins spoke of the new technical concept of AI red-teaming. AI red-teaming and adversarial testing of AI systems help mitigate AI-related risks.

A HackerOne survey found that 37% of organizations have already implemented AI red teaming initiatives to fortify AI systems against malicious attacks.

The White House, playing catch up with AI risks, released in October 2023 an Executive Order that endorses red teaming. Prions explained that red teaming is enabled through programs like vulnerability disclosure and bug bounty programs, to increase the secure adoption of GenAI.

“Ethical hackers bridge the gap between the capabilities of automation and the nuanced, ever-evolving challenges of cybersecurity, ensuring a more comprehensive and adaptive defense strategy.”

As Prins explained, ethical hackers’ human intuition, ingenuity, and adaptability make them indispensable in security initiatives, especially with the introduction of GenAI. This talent, combined with new technologies, is proving to be the path forward.

The Bottom Line

Traditional security tools are struggling to keep up with the ever-increasing sophistication of AI-powered attacks. To combat this evolving threat, a new approach is emerging that integrates several key elements.

AI-powered security platforms can analyze vast amounts of data to identify potential attack paths and proactively address them before they become a vulnerability, while human expertise is the main differentiator.

AI red teaming incorporates ethical hackers who utilize AI tools to test an organization’s AI systems for vulnerabilities. This approach mimics the tactics malicious actors might use, helping organizations identify and address weaknesses before they can be exploited in a real attack.

By combining these techniques, organizations can build a more comprehensive and adaptable defense strategy that can keep pace with the ever-changing threats of the future.

Advertisements

Related Reading

Related Terms

Advertisements
Ray Fernandez
Senior Technology Journalist
Ray Fernandez
Senior Technology Journalist

Ray is an independent journalist with 15 years of experience, focusing on the intersection of technology with various aspects of life and society. He joined Techopedia in 2023 after publishing in numerous media, including Microsoft, TechRepublic, Moonlock, Hackermoon, VentureBeat, Entrepreneur, and ServerWatch. He holds a degree in Journalism from Oxford Distance Learning, and two specializations from FUNIBER in Environmental Science and Oceanography. When Ray is not working, you can find him making music, playing sports, and traveling with his wife and three kids.