Fighting Deepfakes With ZK & Biometrics: 2025 Insights

As generative AI blurs the line between real and fake, AI-generated scams are a growing threat.

This article looks at the accelerating threat of identity-based scams powered by AI-generated personas and deepfakes, and how digital identities using zero-knowledge proofs (ZK proofs or ZKPs) and biometric verification offer a potential solution.

Key Takeaways

  • Over a quarter (27.5%) of messaging app users report daily contact from suspected AI-powered bots, and over 43% have been hit by AI-driven scams personally.
  • AI-generated deepfakes are becoming increasingly convincing, advancing into voice cloning, face generation, and mimicking human behavior.
  • AI scams threaten the trustworthiness of popular Web2 messaging platforms, as traditional identity verification methods are not sufficient to fight deepfakes.
  • ZK proofs enable users to verify their identity using facial recognition or fingerprints, without disclosing their actual biometric data.
  • Combining ZK proofs with biometrics makes it difficult for bots or fake accounts using automated scripts to pass verification.

How Deepfakes Are Fueling a New Wave of Identity Scams

Scammers are increasingly using AI to create deepfakes, which combine deep learning with images, sounds, or videos to create convincing hoaxes. Deepfakes have been used to spread misleading information or propaganda, such as showing a world leader doing or saying something that is not real, to influence the public.

But AI-generated deepfakes are also being used in identity-based scams, and traditional identity tools like static IDs, CAPTCHAs, email verification, or even government IDs are proving insufficient. Deepfakes can pass basic biometric security checks, such as facial or voice recognition.

Going beyond creating memes depicting politicians dancing, generative AI is rapidly evolving in voice cloning, face generation, and behavioral mimicry, creating personas that are indistinguishable from real humans. Fraudsters can use this ability to pose as trusted figures to trick people into giving up sensitive information or even sending them money.

Exploiting Trust

Popular messaging apps like Telegram and WhatsApp have become hotbeds for fraud, where scammers employ cloned voices and fake images to deceive users.

Over 43% of people say they’ve personally been hit by AI-driven deepfake scams, most of them financial in nature, according to the State of AI & Deepfakes 2025 report from Humanity Protocol, which is building an identity verification blockchain.

Respondents reported that scammers are primarily using:

  • Telegram – 61.7%
  • Email – 54%
  • WhatsApp – 38%
  • Meta – 19%

Some 67.1% of respondents are contacted by fake or bot identities at least monthly, and 27.5% report daily contact from suspected AI-powered bots.

Financial scams make up 62.5% of these encounters, in which scammers often impersonate authority figures or customer support agents.

Visual and audio deception is advancing rapidly: 60% of users flagged AI-generated images, while 39% caught imitated human voices.

One in ten respondents described the scams they encountered as “extremely convincing,” and 17% reported losing between $500 and $2,500.

Terence Kwok, CEO and Founder of Humanity Protocol, told Techopedia:

“These scams are thriving not just in niche corners of the internet but on mainstream Web2 platforms where trust is assumed. The accelerating sophistication of generative AI is pushing Web2 platforms into uncharted and dangerous territory. The tools that built the social internet, such as usernames, CAPTCHAs, centralized verification, and government IDs, can no longer guarantee that the person on the other side of the screen is human or who they say they are. And scammers know it.”

Current fraud prevention systems rely on phone numbers or emails as identifiers, but these are easy to spoof. Government-issued IDs are high-friction, privacy-invasive, and hard to scale. Centralized verification methods are honeypots for data breaches. And CAPTCHAs are increasingly solvable by bots and LLMs.

Humanity Protocol’s report notes that these systems are not only vulnerable to AI manipulation but also exclusionary to the billions of people around the world who do not have formal identification.

There is an urgent need for a scalable, privacy-preserving digital identity solution that separates real humans from machines, the report states. As attackers get smarter, the current “report and block” safeguards fail.

“The underlying issue is clear. These platforms have no scalable, privacy-preserving way to verify who is real. That absence of trust infrastructure opens the door to widespread exploitation. Traditional identity systems fail under this pressure. They are too easy to spoof, too centralized, or too invasive,” Kwok said.

“The danger for Web2 platforms goes far beyond bad headlines. If users cannot trust that they are interacting with real people, social media, commerce, and community platforms will become unusable. AI scams are not a fringe risk. They threaten the viability of the platforms that billions use every day.”

How Zero-Knowledge Proof Can Tackle AI Scams

Digital identity verification needs to evolve to meet the challenges posed by increasingly sophisticated AI-driven scams. One method is to incorporate ZK proofs into digital ID systems.

A zero-knowledge proof is a cryptographic protocol that can verify a statement about a piece of sensitive data without revealing the data itself. It has applications in blockchain transactions, identity verification, secure communications, data ownership, and more.
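To make the idea concrete, here is a minimal sketch of a classic zero-knowledge proof of knowledge: a Schnorr-style protocol, made non-interactive with the Fiat–Shamir heuristic. The prover convinces the verifier they know a secret x behind a public value y, while x itself never leaves their device. The tiny group parameters are for illustration only; real deployments use large standardized groups or elliptic curves.

```python
import hashlib
import secrets

# Demo parameters: a small prime-order subgroup of Z_p* (illustrative only).
q = 83            # prime order of the subgroup
p = 2 * q + 1     # 167, a safe prime
g = 4             # generator of the order-q subgroup

def challenge(*vals):
    # Fiat-Shamir: derive the challenge from a hash of the transcript.
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)
    t = pow(g, r, p)            # commitment
    c = challenge(g, y, t)      # non-interactive challenge
    s = (r + c * x) % q         # response
    return y, (t, s)

def verify(y, proof):
    # Check g^s == t * y^c, which holds iff the prover knew x.
    t, s = proof
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = 57                     # the witness: never sent to the verifier
y, proof = prove(secret)
print(verify(y, proof))         # True
```

The verifier learns only that the prover knows the secret behind y, which is the same pattern identity systems use to prove facts about credentials without exposing them.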

Users can prove they are human and unique through ZK proofs generated from their biometric authentication data without revealing any sensitive information, while validators and zkProofers can ensure the user is not registering multiple identities.

Once a user creates a human ID, they hold custody of their credentials, which can be verified off-chain using ZKPs, making them portable, interoperable, and revocable.
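A common pattern for enforcing one-identity-per-human is a nullifier: a one-way value derived from the biometric data that a registry can check for duplicates without ever seeing the underlying template. The sketch below is a conceptual illustration, not Humanity Protocol's actual scheme; in a real system the derivation happens inside a ZK circuit on the user's device.

```python
import hashlib

class Registry:
    """Toy uniqueness registry keyed by biometric-derived nullifiers."""

    def __init__(self):
        self.nullifiers = set()

    @staticmethod
    def nullifier(biometric_template: bytes) -> str:
        # One-way derivation: the registry stores this hash, never the template.
        return hashlib.sha256(b"unique-human:" + biometric_template).hexdigest()

    def register(self, biometric_template: bytes) -> bool:
        n = self.nullifier(biometric_template)
        if n in self.nullifiers:
            return False          # same human already registered
        self.nullifiers.add(n)
        return True

reg = Registry()
print(reg.register(b"alice-face-embedding"))   # True: first registration
print(reg.register(b"alice-face-embedding"))   # False: duplicate rejected
```

Because the registry only ever holds hashes, a breach leaks no raw biometrics, and duplicate sign-ups from the same person are still detectable.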

This means a self-sovereign identity (SSI) framework can use ZKPs to selectively disclose information and verify credentials while preserving privacy, limiting AI identity fraud.
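Selective disclosure can be sketched with salted commitments: the issuer signs commitments to each claim, and the holder later reveals only one claim (value plus salt) while the others stay hidden. This is a simplified stand-in for what SSI stacks achieve more robustly with schemes like BBS+ signatures or ZK circuits; the claim names and values are hypothetical.

```python
import hashlib
import secrets

def commit(value: str, salt: bytes) -> str:
    # Hiding, binding commitment to a single claim value.
    return hashlib.sha256(salt + value.encode()).hexdigest()

# A credential as a set of claims (hypothetical example data).
claims = {"name": "Alice", "dob": "1990-01-01", "country": "PT"}
salts = {k: secrets.token_bytes(16) for k in claims}
commitments = {k: commit(v, salts[k]) for k, v in claims.items()}
# The issuer would sign `commitments`; the holder keeps claims and salts.

# Later, the holder discloses only their country:
disclosed_key, disclosed_value, disclosed_salt = (
    "country", claims["country"], salts["country"]
)

# The verifier checks just that one claim against the signed commitment.
assert commit(disclosed_value, disclosed_salt) == commitments[disclosed_key]
```

The verifier learns the holder's country and nothing about the name or date of birth, which is the selective-disclosure property the article describes.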

As ZKPs are generated from cryptographically signed credentials, any change to the credentials invalidates the proofs. This ensures that the verifier can trust the integrity of the data and the holder cannot manipulate the claim. Since each ZKP is unique to a specific interaction, verifiers cannot link interactions together. This preserves the user’s anonymity across different platforms or services.

When it comes to tackling AI-driven deepfake scams, users can receive unforgeable verification that bots or deepfakes cannot replicate. ZK-enabled liveness checks, such as requesting live camera footage showing the users blinking or moving their head, ensure that the biometric input is provided by a live person rather than an AI-generated image or video.
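One reason liveness checks resist deepfakes is replay protection: each verification session issues a fresh random challenge, and the liveness evidence must be bound to that challenge, so a prerecorded deepfake clip captured for an earlier session cannot be reused. The sketch below is a generic illustration of that binding, not any specific product's protocol.

```python
import hashlib
import secrets

def issue_challenge() -> bytes:
    # Fresh per-session nonce from the verifier.
    return secrets.token_bytes(16)

def bind_evidence(evidence: bytes, nonce: bytes) -> str:
    # Liveness evidence (e.g., hashed blink/head-turn frames) is only valid
    # together with the nonce it was captured for.
    return hashlib.sha256(nonce + evidence).hexdigest()

session_nonce = issue_challenge()
receipt = bind_evidence(b"blink-frames", session_nonce)

# A replay against a new session gets a different nonce and fails to match.
new_nonce = issue_challenge()
print(bind_evidence(b"blink-frames", new_nonce) == receipt)   # False
```

Binding evidence to a one-time challenge is what forces the input to come from a live interaction rather than a stored AI-generated video.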

ZKPs are being used in this way to build decentralized ID systems in which individuals can control their digital identities and choose how they share their information.

Zero-knowledge transport-layer security (zkTLS) enables clients and servers to verify identity credentials and proofs without exposing them in transit. This is critical for verifying human identities in real-time, trustless contexts like messaging platforms or decentralized finance (DeFi) interfaces.

A decentralized blockchain identity stack combining ZKPs and zkTLS makes it impossible for bots or large language models (LLMs) to impersonate real users across platforms, from chat apps to marketplaces, according to Humanity Protocol.

This is key to building platforms that are resistant to Sybil attacks, in which attackers use multiple fake identities to gain control of a network, and re-establishing trust online.

For instance, rather than store biometric data on-chain, Humanity Protocol’s Proof of Humanity (PoH) system uses ZKPs to verify a user’s uniqueness without revealing any sensitive data, and zkTLS to prove the legitimacy of connections and identities during web interactions, which is essential for platform integrations across Telegram, Discord, and on-chain applications.

Kwok said:

“As deepfakes, voice cloning, and behavioral mimicry continue to evolve, the line between human and machine will only blur further. The platforms that survive will be the ones that can prove their users are real.”

The Bottom Line

With more than a quarter of messaging app users contacted by suspected AI bots every day, blockchain identity solutions are key to providing digital identity systems that are verifiable and resistant to AI manipulation while respecting users’ privacy.

ZK proofs offer a tamper-proof cryptographic method of identity verification that cannot be replicated by bots or deepfakes. They can ensure data integrity and trustworthiness so that platforms such as social media apps and e-commerce marketplaces do not become overrun by increasingly sophisticated scammers.

FAQs

What are digital identity solutions?

Can biometrics stop AI deepfake scams?

How is Humanity Protocol using blockchain for ID verification?

Nicole Willing
Technology Specialist

Nicole is a professional journalist with 20 years of experience in writing and editing. Her expertise spans both the tech and financial industries. She has developed expertise in covering commodity, equity, and cryptocurrency markets, as well as the latest trends across the technology sector, from semiconductors to electric vehicles. She holds a degree in Journalism from City University, London. Having embraced the digital nomad lifestyle, she can usually be found on the beach brushing sand out of her keyboard in between snorkeling trips.
