
Countering CEO Deep Fake Fraud

The Emerging Risk

Artificial Intelligence (AI) is revolutionising industries, but its misuse creates unprecedented risks for businesses and the public sector. Cybercrime costs the UK economy an estimated £27 billion per year - more than half of what we spend on national defence.

CEO Fraud

For many years, criminals have targeted companies by hacking email accounts, gathering intelligence on individuals and corporate governance, and then impersonating executives to trick finance departments into authorising fraudulent payments. These schemes have cost businesses millions of pounds. The list of international organisations impacted shows that this criminality operates on an industrial and global scale:

FACC AG (2016): This Austrian aerospace parts manufacturer lost a staggering €50 million ($54 million) when scammers impersonated the CEO in emails to the finance department.

Ubiquiti Networks (2015): The U.S.-based tech company specialising in networking technology fell victim to a $46.7 million scam. In this case, the attackers went a step further – they not only impersonated executives, but also posed as employees from the company's outside legal counsel.

Crelan Bank (2016): This Belgian bank became the victim of one of the largest reported CEO fraud cases in Europe, losing €70 million ($75.8 million). The attack involved sophisticated social engineering tactics to impersonate top executives.

Toyota Boshoku Corporation (2019): The Japanese car parts manufacturer (a member of the Toyota Group) lost $37 million to a BEC attack. In this case, the scammers posed as business partners rather than company executives, showcasing how these attacks can exploit vulnerabilities in supply chain and vendor relationships.

Facebook and Google Wire Fraud (2013-2015): Even tech giants aren't immune to these scams. Over two years, Lithuanian national Evaldas Rimasauskas orchestrated an elaborate scheme that defrauded Google of $23 million and Facebook of $98 million.

Nikkei (2019): The Japanese media company Nikkei fell victim to a BEC scam that cost it $29 million. An employee of Nikkei's U.S. subsidiary was tricked into transferring the money to a bank account purportedly belonging to a business partner.

Pathé (2018): The French cinema company lost over €19 million ($21.5 million) to a BEC scam. Scammers impersonating the company's CEO convinced the CFO and another executive to make several large transfers for a supposed confidential acquisition.

Big corporations are not the prime target; small and medium-sized businesses are far easier to attack en masse. I have seen successful family businesses ruined overnight, and even the suicide of a solicitor who lost all of their clients' funds. No one should be complacent. Today's CEO has two questions to answer:

1. How to leverage AI.

2. How to defend the company from the next generation of AI-enabled CEO fraud, given the ease with which an executive's identity and personality can be cloned.

AI is Escalating the Scale of the Problem

What used to take months, even years, of shadowing and hacking is now possible in days. One alarming example is the rise in the use of AI to clone expense claims and invoices, and this is just the tip of the iceberg. Recent case studies highlight the urgency of addressing these risks. A 2024 Deloitte poll revealed that 25.9% of executives had experienced deepfake incidents targeting financial and accounting data, and the U.S. Financial Crimes Enforcement Network (FinCEN) has observed a rise in fraud schemes using deepfake media. These statistics, along with the Hong Kong case described below, underscore the need for a media and Web 3.0 AI avatar service to safeguard businesses.

The threat escalates with the emergence of powerful AI voice and video tools capable of creating realistic avatars of executives. These avatars mimic the appearance, voice, and even interactive behaviour of real individuals. Imagine a scenario where a fake executive appears in a video message, instructing the finance team to authorise transactions. The sophistication of such deepfake technology makes it increasingly difficult to distinguish between genuine and fraudulent communications.

A recent case in Hong Kong highlights the severity of this risk. In February 2024, a multinational firm fell victim to a sophisticated deepfake scam, losing $25.6 million. Scammers used AI technology to create convincing video deepfakes of the company's Chief Financial Officer (CFO) and other executives. During a video conference, the fake CFO instructed an employee to transfer funds to multiple accounts. The employee, believing the video call participants were genuine, authorised the transactions. This incident underscores the growing threat posed by deepfake technology in corporate environments.

Traditional Security is Not Enough

Traditional security systems and measures play a vital role in protecting organisations from a wide array of threats. However, every company listed above had such systems in place, and it is only a matter of time before a determined hacker finds a way around even the most robust platforms, such as Microsoft Azure.

One common technique is to place a rogue employee inside the organisation, opening a critical gap that sophisticated attackers then exploit through CEO impersonation scams.

Even the most secure government organisations fight a daily battle to prevent leaks and breaches and to enforce strict policy across their workforce. Yet spectacular breaches keep occurring, because it is only a matter of time before the so-called 'Human Firewall' fails.

Web 3.0 Assures Secure Delivery and Authentication of Sensitive Corporate Media

To tackle these risks, companies must apply security and total control over access to any sensitive AI-generated media. Web 3.0 technology offers a solution: blockchain-encrypted, tokenised delivery of all media, including AI avatars. Each asset carries a certificate of genuine authenticity; access is fully auditable, covering who can view the media and what they are permitted to do with it; and the content is guaranteed to be genuine and unaltered. This prevents criminals from cloning and spoofing voices and videos.
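As a rough illustration of the authentication idea only (not a description of any specific TVNF or MirrorMe implementation), the Python sketch below hashes a media file, signs the fingerprint with the publisher's private key, and lets a recipient verify the signature before trusting the content. In a full Web 3.0 deployment the signed fingerprint would typically also be anchored on a blockchain and wrapped in an access token; those steps are omitted here, and all names are illustrative.

import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def fingerprint(media: bytes) -> bytes:
    # SHA-256 digest that uniquely identifies this exact media file.
    return hashlib.sha256(media).digest()

def publish(media: bytes, private_key: Ed25519PrivateKey) -> bytes:
    # Sign the fingerprint; the signature travels with the file as its certificate of authenticity.
    return private_key.sign(fingerprint(media))

def verify(media: bytes, signature: bytes, public_key) -> bool:
    # Accept the media only if it still matches the signature issued by the publisher.
    try:
        public_key.verify(signature, fingerprint(media))
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()  # stands in for the publisher's long-term signing key
    video = b"...raw bytes of an executive video message..."
    sig = publish(video, key)
    print(verify(video, sig, key.public_key()))          # True: genuine and unaltered
    print(verify(video + b"x", sig, key.public_key()))   # False: tampered or spoofed

Any attempt to clone or alter the media invalidates the signature, which is what gives recipients a reliable basis for rejecting spoofed executive videos.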

Additionally, media production services creating and delivering such Web 3.0 AI-generated content must adhere to certified security standards such as Cyber Essentials Plus, validated by annual penetration testing, to safeguard communications and AI media.

Once these measures are in place, businesses can protect sensitive content and media securely. This opens limitless opportunities for executives to produce mission-critical personalised videos for investors, internal staff communications, security training, compliance, and major bids, without the time and cost of repeatedly filming executives.

To learn more about implementing these solutions, contact the author via MirrorMe for a consultation. Protect your company from emerging AI risks and embrace secure, innovative methods to enhance efficiency and communication.

If any of our recent projects resonate with your needs or if you have a similar project in mind, we'd love to chat! We're eager to collaborate and bring your ideas to life.
