AI agents are no longer a far-off idea reserved for keynote speeches delivered by futurists and visionaries. They’re here, taking on business-critical roles and acting with increasing autonomy. But as enterprises embed these systems, a new kind of identity crisis emerges.
In our recent conversation with David Higgins, Senior Director at CyberArk, Techopedia discussed how identity security is under growing strain as machine identities now outnumber human ones in the workplace by a staggering 100 to 1.
Key Takeaways
- Machine identities, including AI agents, now outnumber humans 100 to 1 in enterprise environments.
- Traditional authentication methods can’t detect behavior-based identity threats.
- Deepfakes and impersonation tactics are bypassing even tech-savvy users.
- Identity-first strategies must include machine credentials, not just employee logins.
- Misconfigured AI agents may trigger the next breach, not external attackers.
Machines Behaving Like Humans
“We’re not talking about the future. We’re living it,” David Higgins told Techopedia. That one line set the tone for the entire discussion. What happens when these agents start talking to each other, accessing sensitive systems, and making decisions with minimal oversight?
The answer isn’t as simple as deploying more security tools. It means rethinking the way identity is understood, managed, and protected.
David didn’t hesitate when asked what has changed. He said:
“Two years ago, it was 45 machine identities to one human. Now, it’s 100 to 1.”
These aren’t just stats for internal reports. They represent a massive shift in how digital environments operate.
Machine identities used to be simple. Static. Predictable. They did one job repeatedly. Now, AI agents operate with more flexibility, more goals, and less predictability.
“They’re not flesh and blood, but they learn and adapt. You give them a goal, and they figure out how to get it done,” Higgins explained.
This flexibility means they don’t fit into the traditional models of user or machine identities. They exist in a grey space, part user, part automation. And that means existing security frameworks can no longer account for the risk they bring.
Why We Still Reuse Passwords
CyberArk’s latest research shows that 71% of UK workers experienced a cyberattack in the past year. But despite increased awareness, risky behaviors like password reuse and skipping updates continue.
David chalked it up to convenience. He said:
“We know password reuse is a problem. We’ve talked about it for years. But the reason it keeps happening is because it’s the path of least resistance.”
When employees face clunky logins or lack alternatives, security shortcuts happen. It’s not necessarily laziness; often there simply isn’t an option that is both more secure and just as fast.
And when 80% of people use personal devices for work but delay security updates, that creates an open door. It’s not enough to tell people to be more careful. Organizations need to build systems that encourage secure behavior by design.
“We saw about 45 machine identities per human user on average in 2024. With the new AI bots and agents, it will exponentially grow. You need to govern them,” warns Omer Grossman, Global CIO, @CyberArk.
— CDOTrends (@CDOTrends), February 23, 2025
Deepfakes and Smarter Social Engineering
The phishing email from a fake prince is long gone. Today’s scams are laser-focused, often aided by artificial intelligence (AI). Higgins shared how generative AI has taken these attacks to another level:
“It’s not just fake emails anymore. It’s fake voices, fake faces. You’ve got North Korean hackers landing remote jobs using deepfakes. That’s the level we’re operating at.”
Even if an attacker can’t trick the end user directly, they may still find a way around them. Helpdesks and third parties are being targeted as softer entry points. David warned:
“If you’re tech-savvy, attackers may try to get around you by targeting someone else – a third-party helpdesk, for example. Get them to reset your MFA token. Suddenly, they’re in.”
This is the reality of modern identity threats. It’s not just about credentials anymore. It’s about context, behavior, and anticipating how attackers will adapt.
From Static Rules to Behavioral Signals
Traditional methods of verifying identity, such as passwords, tokens, and even multi-factor authentication, aren’t enough on their own. David advocated for behavior-based detection:
“They might have your credentials, but they can’t mimic your behavior. Where do you log in from? What time of day? Which systems do you normally access?”
In this new model, identity isn’t just about access. It’s about patterns. Anomalies. It’s about knowing when something feels off, even if the credentials appear to be in order. This requires investment in adaptive authentication and AI-powered analytics, not to create friction but to add a layer of intelligence where it’s needed most.
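To make that concrete, here is a minimal sketch of how behavioral signals might feed a risk score. Everything in it is an illustrative assumption: the baseline profile, the signal weights, and the 0.5 step-up threshold are invented for the example, not drawn from any vendor’s product.

```python
from dataclasses import dataclass

# Illustrative baseline for one user, learned from past logins.
BASELINE = {
    "countries": {"GB"},           # where this user normally logs in from
    "active_hours": range(7, 20),  # typical working hours (UTC)
    "systems": {"crm", "email"},   # systems this user normally accesses
}

@dataclass
class LoginAttempt:
    country: str
    hour_utc: int
    system: str

def risk_score(attempt: LoginAttempt, baseline: dict) -> float:
    """Return a 0.0-1.0 risk score; each out-of-pattern signal adds weight."""
    score = 0.0
    if attempt.country not in baseline["countries"]:
        score += 0.4  # unfamiliar location is the strongest signal here
    if attempt.hour_utc not in baseline["active_hours"]:
        score += 0.3
    if attempt.system not in baseline["systems"]:
        score += 0.3
    return score

# Valid credentials, but 3 a.m. from a new country into an unusual system:
attempt = LoginAttempt(country="KP", hour_utc=3, system="hr-database")
if risk_score(attempt, BASELINE) >= 0.5:
    print("Step-up authentication required")  # e.g., re-verify MFA
```

In practice, a high score would trigger an adaptive response such as step-up MFA or a session review rather than a hard block, keeping friction low for normal logins.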
Building Culture, Not Just Compliance
Too often, identity discussions end with users being blamed. David pushed back against the “weakest link” narrative. He said:
“Everyone hates those click-through training sessions. You do them once a year. You don’t remember any of it. That’s not education. That’s compliance theater.”
He highlighted running awareness weeks, interactive sessions, and internal phishing tests as more effective strategies for enhancing cybersecurity awareness. “No one wants to be the one who falls for the test phish,” he said, smiling.
However, he was clear: this isn’t just a problem for users. Security teams also bear responsibility.
Higgins said:
“Even with the best training in the world, someone will eventually slip. That’s why security teams need to own part of that, too. Why did that malicious link even reach their inbox?”
The Agentic AI Dilemma
Our conversation turned toward the future and, arguably, the most uncharted area of identity security. Agentic AI systems are already taking on tasks that span multiple workflows and systems.
David offered a hypothetical scenario: one agent screens CVs, another shortlists candidates, and a third sends follow-ups to recruiters. Each interaction involves access. Permissions. Decision-making. “You’ve got a new type of insider threat. One you trained and gave access to,” he said.
These agents blur boundaries. They don’t have fixed behaviors. They learn. They optimize. That makes securing them far more complex than securing traditional software bots.
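One common way to contain that risk is to give each agent its own identity and a deny-by-default policy scoped to its single step in the workflow. The sketch below is hypothetical: the agent names echo David’s recruiting example, and the policy structure and resource names are assumptions made for illustration.

```python
# Hypothetical least-privilege policy: each AI agent in the recruiting
# pipeline gets its own identity and only the scopes its step requires.
AGENT_POLICIES = {
    "cv-screener":   {"read": ["applications"], "write": []},
    "shortlister":   {"read": ["applications", "scores"], "write": ["shortlist"]},
    "recruiter-bot": {"read": ["shortlist"], "write": ["outbound-email"]},
}

def is_allowed(agent: str, action: str, resource: str) -> bool:
    """Deny by default; an unknown agent or out-of-scope action is refused."""
    policy = AGENT_POLICIES.get(agent)
    return policy is not None and resource in policy.get(action, [])

# The CV screener may read applications, but it cannot email candidates:
assert is_allowed("cv-screener", "read", "applications")
assert not is_allowed("cv-screener", "write", "outbound-email")
```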
“What happens when thousands of agents have read/write access to systems? A new way is needed – the race is on. @CyberArk gets it… lots of new cos emerging to address this problem. Investor day pres: https://t.co/KKJPG3uXcD”
— Ed Sim (@edsim), February 25, 2025
Identity-First Strategy, Not Just Security Controls
Many organizations still treat identity as a technical function rather than a strategic priority. David thinks that needs to change. He said:
“We patch the symptom. We don’t fix the root cause. That user may have too many standing privileges. Maybe they were dropped into five admin groups on day one and never reviewed again.”
He recommends just-in-time privilege models that assign access only when necessary. It reduces the attack surface and improves auditability without slowing things down.
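As a rough sketch of the just-in-time idea, the snippet below issues privileges with an expiry and re-checks them at time of use, so no access stands indefinitely. The in-memory store and the 15-minute default TTL are simplifying assumptions; a real deployment would use an audited vault or PAM system.

```python
import time

# In-memory store of temporary grants: (user, privilege) -> expiry timestamp.
_grants: dict[tuple[str, str], float] = {}

def grant_jit(user: str, privilege: str, ttl_seconds: int = 900) -> None:
    """Issue a privilege that expires automatically (default: 15 minutes)."""
    _grants[(user, privilege)] = time.time() + ttl_seconds
    # In a real system, this grant would also be logged for audit.

def has_privilege(user: str, privilege: str) -> bool:
    """Check a grant at time of use; expired grants are swept out."""
    expiry = _grants.get((user, privilege))
    if expiry is None:
        return False
    if time.time() >= expiry:
        del _grants[(user, privilege)]  # lapse, never linger
        return False
    return True

grant_jit("alice", "db-admin")
print(has_privilege("alice", "db-admin"))   # True, within the window
print(has_privilege("alice", "root-prod"))  # False, never granted
```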
“Don’t wait. Secure what’s being created now. Don’t let the sprawl get worse,” Higgins urged.
Beyond the agents, there’s a bigger question of what happens when the AI infrastructure is compromised. David raised a concern that sticks:
“AI tools are influencing how people think. You ask a chatbot for advice, and you act on it. But how do you know the data feeding that model hasn’t been poisoned?”
He pointed out that the next wave of manipulation may not target users directly but rather the models that those users rely on. “Fake news was the problem ten years ago. Now, it’s fake facts baked into your assistant,” he said.
This isn’t a fringe concern. It’s a clear and present risk that security leaders must address if AI is to be trusted within their walls.
The trajectory is clear. More agents. More automation. More identities. David warned:
“It’s not just AI agents. It’s IoT, APIs, and background services. Every smart sensor, every connected camera, that’s another identity. Another access point. Another potential risk.”
Companies need to start with visibility. Map identities and track permissions. Set guardrails. Because the reality is this: the next breach may not come from a hacker at all. It may come from an AI agent acting on flawed instructions, outdated policies, or simply too much freedom.
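Even a simple inventory pass can kick-start that visibility. The hypothetical sketch below flags unowned, over-permissioned, and standing-admin identities; the data shape, the guardrail threshold, and the permission naming are all assumptions for illustration, since a real inventory would be exported from IAM, cloud, and CI/CD systems rather than hard-coded.

```python
# Hypothetical identity inventory spanning human and machine identities.
identities = [
    {"name": "alice", "type": "human", "owner": "alice",
     "permissions": ["crm:read"]},
    {"name": "report-agent", "type": "machine", "owner": "data-team",
     "permissions": ["warehouse:read", "warehouse:write", "email:send"]},
    {"name": "legacy-svc", "type": "machine", "owner": None,
     "permissions": ["prod:admin"]},
]

MAX_PERMISSIONS = 2  # illustrative guardrail; tune per environment

for ident in identities:
    flags = []
    if ident["owner"] is None:
        flags.append("no accountable owner")
    if len(ident["permissions"]) > MAX_PERMISSIONS:
        flags.append("over-permissioned")
    if any(p.endswith(":admin") for p in ident["permissions"]):
        flags.append("standing admin access")
    if flags:
        print(f"{ident['name']} ({ident['type']}): {', '.join(flags)}")
```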
When asked what keeps him thinking on the way home, David said:
“Honestly, it’s not the attack vectors. It’s the misinformation angle. If someone corrupts the model that your AI assistant runs on, how long before bad advice starts to feel like truth?”
The Bottom Line
Every enterprise is facing not just a security challenge but a societal one now that identity extends beyond access. It extends into influence, intent, and trust.
In the age of AI agents, it’s not just about logging in anymore. It’s about knowing exactly who or what is on the other end of the connection. If you’re not building an identity-first foundation now, you’re not ready for what comes next.
FAQs
What is identity security?
Identity security is the practice of securing every identity in an organization, human and machine alike, along with the credentials, permissions, and access each one carries.
What makes AI agents a security risk in modern enterprises?
They act autonomously across multiple systems, hold their own credentials and permissions, and don’t fit traditional user or machine identity models. A misconfigured or over-privileged agent can effectively become an insider threat.
Why aren’t passwords and multi-factor authentication enough anymore?
Attackers can steal credentials or socially engineer resets through phishing, deepfakes, and helpdesk impersonation. Catching them also requires behavioral signals: where, when, and how an identity normally operates.
How should companies rethink identity security in the age of AI?
Treat identity as a strategic priority: map every human and machine identity, replace standing privileges with just-in-time access, monitor for behavioral anomalies, and govern AI agents before the sprawl gets worse.
References
- CyberArk’s David Higgins on the Real Risks Behind AI in the Enterprise (Apple Podcasts)
- 2025 Identity Security Landscape (CyberArk)