AI Fraud Just Cost a Company $25M

Insurance Might Not Cover It

3/17/2026 · 3 min read

Cybercrime in 2026 is being reshaped by AI, and real companies are already seeing the impact. In one of the clearest examples, engineering firm Arup confirmed that an employee in its Hong Kong office was tricked into sending about $25 million after joining a video call filled with deepfake versions of senior executives.

The attackers used AI to copy faces and voices so convincingly that the employee believed they were speaking to real leaders inside the company. This wasn't a phishing email; it was a full simulation of a leadership meeting, and it led to one of the largest known AI-driven fraud losses tied to a single employee action.

Law enforcement says this is no longer rare. The FBI reported that cybercrime losses reached about $16.6 billion in 2024, and officials have warned that AI is pushing that number higher into 2026. A large share comes from business email compromise, where attackers impersonate executives, founders, or vendors.

With AI, these messages now match tone, writing style, and context, making them almost impossible for employees to spot. In many cases, finance staff, controllers, or operations managers are the ones approving transfers, which puts them directly in the line of attack.

Global agencies are also raising alarms. Interpol said in its 2026 Global Financial Fraud Threat Assessment that financial fraud is now one of the most serious global crime threats, driven in part by AI tools that can automate phishing, impersonation, and scam operations across multiple countries.

These are organized networks, not lone hackers. They target real roles inside companies such as CFOs, finance managers, and accounts payable teams because those roles control money movement.

These AI-driven attacks are exposing specific gaps in business insurance. In the Arup case, the loss would likely fall under crime insurance because the funds were willingly transferred by an employee. Many crime policies require strict verification steps such as callbacks or dual approvals.

If a finance manager or employee skips that step because the request looks real, the insurer can deny the claim. That means a single mistake by one employee can lead to a loss of $25 million with no payout.

Cyber insurance often doesn't respond to these cases. Policies are built to cover hacking, ransomware, and data breaches. But in AI fraud cases, there's often no system breach. The attacker convinces a real employee to act.

That means insurers may classify the loss as fraud and deny cyber coverage. This creates a clear gap between what companies think is covered and what actually pays.

Social engineering coverage is supposed to fill that gap, but it often falls short. Many policies cap this type of coverage at amounts far below real losses. A company could face a $10 million or $25 million fraud event and only have a few hundred thousand dollars in coverage.

As AI makes impersonation attacks more realistic, this gap is becoming more dangerous for companies of all sizes.

At the same time, AI-driven cyber attacks are increasing operational risk. In 2023, MGM Resorts was hit by a cyberattack that shut down systems across its hotels and casinos and led to estimated losses of over $100 million.

While that attack involved social engineering and ransomware rather than deepfake video, it shows how quickly operations can stop when attackers gain access. In 2026, security experts warn that AI can make these attacks faster to launch and harder to stop, increasing the chance of large-scale downtime and business interruption losses.

The pattern across all these real cases is clear. AI is helping attackers target specific roles like CFOs, finance teams, and employees with payment authority. Losses are happening through real actions taken by real people, not just system hacks.

Insurance policies weren't designed for this level of realism in fraud. As a result, companies are facing situations where tens of millions of dollars can be lost, and coverage is limited, disputed, or denied because the event doesn't fit cleanly into cyber or crime definitions.

Companies are tightening how money moves after cases like the $25 million deepfake scam at Arup. Many now require two people to approve large payments, call back using trusted phone numbers before sending money, and block any bank detail changes unless they're confirmed through a second channel.
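Those three controls map naturally onto a simple release check. Here is a minimal sketch in Python of how they might be enforced before a payment goes out; the PaymentRequest fields, the $10,000 threshold, and the function names are illustrative assumptions, not any real payment platform's API.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    # Hypothetical shape for illustration; real systems differ.
    amount_usd: float
    requested_by: str
    beneficiary_account: str
    account_on_file: str | None            # previously verified bank details, if any
    approvers: set[str] = field(default_factory=set)
    callback_done: bool = False            # verified by calling a trusted number
    detail_change_confirmed: bool = False  # bank change confirmed via second channel

DUAL_APPROVAL_THRESHOLD_USD = 10_000  # assumed cutoff; varies by company policy

def unmet_controls(req: PaymentRequest) -> list[str]:
    """Return the controls that still block this payment (empty = releasable)."""
    blocks = []

    # Control 1: two people must approve large payments, and the
    # requester cannot count as one of their own approvers.
    if req.amount_usd >= DUAL_APPROVAL_THRESHOLD_USD:
        independent = req.approvers - {req.requested_by}
        if len(independent) < 2:
            blocks.append("needs two independent approvers")

    # Control 2: call back on a trusted phone number before sending money.
    if not req.callback_done:
        blocks.append("callback on a trusted number not completed")

    # Control 3: block bank detail changes unless confirmed a second way.
    if req.beneficiary_account != req.account_on_file and not req.detail_change_confirmed:
        blocks.append("bank detail change not confirmed through a second channel")

    return blocks
```

Under this sketch, a deepfake "CEO" asking one employee to wire $25 million to a new account would trip all three checks. The point of the design is that no single person, however convinced, can release the funds alone.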

Finance staff, from CFOs to accounts payable teams, are being told to treat urgent requests, even ones that appear to come from executives, as risky. At the same time, insurers are tightening their rules by requiring things like multi-factor authentication and documented payment procedures, and they're lowering how much they will pay out for scams like this.

Because of that, companies are going through their policies line by line, checking how cyber and crime coverage work together, and testing real scenarios, like a fake CEO request, to see whether they would actually be covered.

Image credit: Elinaetly