In early 2024, employees at a multinational company in Hong Kong joined what they believed was a video conference with their CFO, who instructed them to transfer tens of millions of US dollars to several accounts. The CFO was never on the call. Every face and voice in that meeting was a deepfake generated by AI. The employees transferred the money. This was not a scenario from a cybersecurity thriller; it was a documented, real-world attack that succeeded because the employees trusted what they saw and heard.

By 2026, the tools needed to execute this kind of attack are no longer restricted to well-funded nation-state actors. Open-source voice cloning models, accessible deepfake video generation platforms, and AI-powered real-time face-swapping are available to organized crime groups and individual attackers at minimal cost. Every small business whose executives have publicly available audio or video — conference talks, podcast appearances, YouTube interviews, social media videos — is potentially a target. This guide explains how deepfake-based social engineering works, how to recognize it, and what controls actually protect your business.

Understanding Deepfake Attack Techniques in 2026

Deepfake attacks against businesses take several distinct forms, each requiring different defenses.

AI Voice Cloning for Vishing

Voice cloning tools such as ElevenLabs, RVC (Retrieval-based Voice Conversion), and various open-source alternatives can replicate a person's voice from as little as 15 to 30 seconds of audio. Attackers train models on publicly available recordings — podcasts, interviews, conference presentations, TikTok videos, even a voicemail greeting — and use the cloned voice to make phone calls impersonating executives, IT staff, banks, or government agencies. The resulting audio is convincing enough to fool people who know the target personally.

Real-Time Deepfake Video Calls

Tools exist that can overlay a deepfake face onto a live video call from a regular webcam, in real time. Attackers use these to impersonate executives in Zoom, Teams, or Google Meet calls. The Hong Kong fraud described above used a more sophisticated version of this technique, but the underlying capability has since become accessible to a far wider range of attackers.

Synthetic Audio in Voicemail and Recordings

An attacker may not need real-time deepfake capability at all. Pre-generated voice clips can be played during a phone call or left as voicemails, giving the recipient a convincing audio message that appears to come from a trusted executive or colleague and asks for action: credentials, wire transfers, or sensitive information.

Synthetic Identity in Written Communications

AI-powered writing tools can generate correspondence that closely mimics a specific person's writing style. Attackers study email history (from a compromised account or leaked data) and generate messages that match the target's vocabulary, tone, and communication patterns, making written impersonation far more convincing than traditional phishing.

Industries and Roles Most at Risk

While deepfake attacks can target any business, certain roles and industries face elevated risk.

Roles with payment authority or privileged access, such as finance, accounts payable, IT, and HR, are the most common impersonation targets. Industries with frequent high-value transactions (financial services, legal, real estate, construction, and healthcare) are particularly attractive because the potential fraud amounts are large.

Building Human Defenses: Verification Protocols

Technology cannot be the sole defense against deepfakes — the attacks are specifically designed to exploit human trust in audio and video. Procedural controls are the most reliable protection.

Out-of-Band Verification

Any request for a financial transaction, credential change, or sensitive action that arrives via phone or video call must be verified through a separate, pre-established channel. Call the person back using the number you have on file — not a number provided in the suspicious communication. This single control stops the majority of vishing and deepfake fraud before it completes.

Pre-Established Code Words

High-risk roles (finance, IT, HR) should establish pre-agreed code words or phrases with key executives that must be included in any unusual request. This is a low-tech but highly effective control: a deepfake cannot know your internal code word. Rotate the code word monthly and share it only via a verified channel.
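A code-word check is simple enough to express directly. The sketch below is a hypothetical illustration (the word, rotation date, and storage approach are assumptions, not a recommended design); it shows the two properties that matter: comparison that does not leak partial matches, and a staleness check that enforces the monthly rotation.

```python
import hmac
from datetime import date, timedelta

# Hypothetical sketch of a code-word control. The word and date are
# invented for the example; real storage and rotation are up to you.

CURRENT_CODE_WORD = "blue-harbor-42"   # shared only via a verified channel
LAST_ROTATED = date(2026, 1, 5)

def code_word_is_stale(today: date, max_age_days: int = 31) -> bool:
    """Flag a code word that has outlived the monthly rotation policy."""
    return today - LAST_ROTATED > timedelta(days=max_age_days)

def verify_code_word(spoken: str) -> bool:
    """Constant-time comparison avoids leaking partial matches."""
    return hmac.compare_digest(spoken.strip().lower(), CURRENT_CODE_WORD)
```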

Video Call Anomaly Training

Train employees to look for deepfake artifacts in video calls: unnatural blinking or eye movement, slight lag between audio and lip movement, distortion around the edges of the face (especially when the person moves), and inconsistent lighting or shadow behavior. While deepfakes continue to improve, real-time deepfakes on video calls still show these artifacts under scrutiny.

Policy: Never Approve Sensitive Requests on First Contact

Build a policy that no sensitive request — regardless of the medium — is approved on the first contact. Any wire transfer, access change, or sensitive data disclosure requires a second touchpoint through a different verified channel before action is taken. This creates a natural pause that disrupts the urgency manipulation central to social engineering attacks.

Want help putting this into practice?

Book a free 30-minute strategy call — I'll review your current setup and map out the next 3 high-impact steps for your business.

Book a Free Strategy Call →

Technical Controls That Support Deepfake Defense

Technical measures complement procedural defenses and provide detection and verification capabilities.

Employee Training: What to Watch and What to Question

Training employees to maintain healthy skepticism about audio and video communications — without creating paralysis — is a delicate balance. The goal is not to make employees distrust everything, but to recognize the specific patterns that indicate a potential deepfake or social engineering attempt.

A key training practice: run annual tabletop exercises that simulate a deepfake vishing or video call fraud scenario. These exercises create muscle memory for the verification response and surface gaps in your policies before a real attack exploits them. For a broader look at social engineering defenses, see our guide on employee security awareness training.

Incident Response: What to Do If You Suspect a Deepfake Attack

If an employee suspects they have been targeted by a deepfake attack — or if a fraudulent transaction has already occurred — the response must be immediate and structured.

  1. Do not complete the requested action: If the transaction or credential change has not been executed, halt it immediately. Contact the apparent requestor through your verified channel to confirm the request was legitimate.
  2. Alert your IT team and leadership: Deepfake attacks targeting your business are a significant threat signal. Your IT team needs to know immediately so they can assess scope, check for other compromised channels, and brief the executive being impersonated.
  3. If fraud has occurred, call your bank immediately: Wire transfer fraud recovery has a narrow window. Call your bank's fraud line within minutes. Document everything: the call, the video recording if available, the time, the account numbers.
  4. Preserve all evidence: Record the suspicious call if possible, screenshot the video call, save all related communications. This evidence is critical for law enforcement, cyber insurance claims, and your internal post-incident review.
  5. Notify relevant authorities: In the US, report to the FBI's Internet Crime Complaint Center (IC3); in Canada, the Canadian Anti-Fraud Centre; in the UK, Action Fraud; elsewhere, your local cybercrime reporting body. Law enforcement can sometimes assist with fund recovery if notified quickly.

For a complete incident response framework, see our guide on building an incident response plan. If you want help assessing your deepfake exposure and putting the right verification protocols in place, contact us.

Frequently Asked Questions

How can I tell if a video call is using a deepfake?

Current real-time deepfakes typically show subtle artifacts: slight unnaturalness in eye blinking or gaze, minor audio-to-lip sync delays, distortion or blurring around the edges of the face (especially during movement), inconsistent lighting or reflections, and unnatural skin texture. Ask the person to perform an unexpected action: turn their head quickly, look down, or hold up a specific number of fingers. Sudden movements like these are harder for real-time deepfakes to render convincingly. However, deepfake technology continues to improve, so technical detection should always be paired with procedural verification for high-stakes communications.

Can deepfake attacks happen to small businesses, or only large corporations?

Deepfake attacks are increasingly targeting small and mid-sized businesses. The tools required are widely accessible and inexpensive, and small businesses are attractive targets precisely because they often lack the formal verification procedures of large enterprises. Any business whose executives have publicly available audio or video (even a brief LinkedIn video or podcast appearance) has the raw material for a voice clone. The verification protocols and training recommended in this guide are as important for a 20-person company as they are for a large corporation.

What is a reasonable verification protocol for wire transfer requests?

A practical protocol: any wire transfer above your defined threshold requires (1) an initiating email or invoice from the requestor, (2) a callback to a pre-registered phone number for that person — not a number provided in the request — and (3) a second approver sign-off in your accounting or ERP system. For very large transfers, consider adding a mandatory cooling-off period of several hours. Document this policy, train your finance team on it, and enforce it without exceptions — even for the CEO.

What should we do about executive social media presence given deepfake risks?

There is a balance between maintaining a legitimate public presence and limiting deepfake training data. Practical steps include: avoiding posting long continuous audio and video of executives (a 5-minute podcast clip provides ample voice training data), watermarking official video content, ensuring employees know what official channels your executives use, and monitoring for unauthorized use of executive likenesses. Complete removal from public-facing content is rarely practical or desirable, but being aware of what is available and training staff accordingly is important.

Are there tools that automatically detect deepfake audio on phone calls?

Real-time deepfake audio detection for live phone calls is an emerging capability. Some enterprise voice security platforms are beginning to offer liveness detection and synthetic voice indicators. For 2026, the most reliable defense remains procedural — callback verification through a pre-established number — rather than automated technical detection, which is still maturing. Expect real-time detection capabilities to improve significantly over the next 12 to 24 months.

Want to protect your business from AI-powered fraud?

Book a free 30-minute strategy call and we will assess your current deepfake and social engineering exposure, review your verification procedures, and give you a clear action plan to protect your team.

Book a Free 30-Minute Strategy Call →