Deepfakes
AI-generated synthetic media — video, audio, or images — that realistically portray a person doing or saying something they never did, used increasingly in fraud and social engineering.
What Are Deepfakes?
Deepfakes are synthetic media — video, audio, or images — created using deep learning techniques to realistically depict a person doing or saying something that never happened. The term combines "deep learning" and "fake." Early deepfakes required significant technical expertise and computational resources; modern AI tools have made them accessible to nearly anyone.
How Deepfakes Are Created
Generative adversarial networks (GANs) and diffusion models can synthesise convincing video and audio of real people by training on existing footage or voice samples. Voice cloning — creating a synthetic voice model from a few minutes of real audio — is now achievable with consumer AI tools.
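The adversarial training loop behind GANs can be illustrated with a deliberately tiny, self-contained sketch: a one-parameter generator learns to imitate a one-dimensional "real" distribution by playing against a logistic discriminator. Every value here is an illustrative assumption; real deepfake generators are deep networks trained on images or audio, and diffusion models work differently (by iteratively denoising), but the adversarial feedback loop below is the core GAN idea.

```python
import math
import random

random.seed(0)

REAL_MEAN = 5.0   # "real data" ~ N(5, 0.5); the generator starts far away, at 0
LR = 0.02         # learning rate for both players (illustrative)
STEPS = 5000

def sigmoid(x):
    x = max(-60.0, min(60.0, x))          # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

w, b = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + b): estimated P(x is real)
mu = 0.0          # generator G(z) = mu + z: a single learnable parameter

mu_history = []
for _ in range(STEPS):
    xr = random.gauss(REAL_MEAN, 0.5)     # a real sample
    xf = mu + random.gauss(0.0, 0.5)      # a fake sample from the generator

    # Discriminator step: gradient descent on -log D(xr) - log(1 - D(xf))
    dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
    w -= LR * (-(1 - dr) * xr + df * xf)
    b -= LR * (-(1 - dr) + df)

    # Generator step: gradient descent on the non-saturating loss -log D(G(z)),
    # i.e. the generator moves to make its sample look "more real" to D
    df = sigmoid(w * xf + b)              # re-evaluate after D's update
    mu -= LR * (-(1 - df) * w)
    mu_history.append(mu)

# GAN dynamics oscillate rather than settle, so average the trajectory's tail;
# the orbit is centred near the real distribution's mean.
avg_mu = sum(mu_history[-2000:]) / 2000
print(f"generator mean ~ {avg_mu:.2f} (real mean {REAL_MEAN})")
```

The same two-player structure scales up: swap the scalar parameters for neural networks and the 1-D samples for video frames or audio spectrograms.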
Security and Fraud Use Cases
- CEO fraud / BEC evolution: Attackers use cloned voices or live deepfake video to impersonate executives and authorise fraudulent wire transfers. In a widely reported 2024 incident, a finance employee in Hong Kong transferred $25 million after a video conference in which every other participant, including the apparent CFO, was a deepfake.
- Vishing attacks: Cloned voices of IT support staff, banks, or colleagues are used in phone-based social engineering.
- Identity verification bypass: Deepfake video is used to defeat video-based KYC (Know Your Customer) verification processes.
- Disinformation: Fabricated videos show public figures making statements they never made, manipulating markets, elections, or public opinion.
- Synthetic identity fraud: AI-generated faces and documents are used to create fraudulent identities.
Detecting Deepfakes
Detection is increasingly difficult as generation quality improves. Current detection signals include:
- Unnatural blinking or facial micro-expressions
- Inconsistent lighting or skin tone at edges
- Audio artefacts or unnatural speech rhythm
- Metadata inconsistencies
AI-based deepfake detection tools exist, but they are locked in an arms race with generation capabilities, so no single signal should be treated as conclusive.
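Because individual signals are weak on their own, a common pattern is to combine per-signal detector outputs into a weighted composite score. The sketch below assumes hypothetical upstream analysers that each emit a score in [0, 1]; the signal names mirror the list above, and the weights and threshold are illustrative placeholders, not a real detector.

```python
# Illustrative weights for hypothetical per-signal detectors (0 = clean, 1 = fake).
SIGNAL_WEIGHTS = {
    "blink_irregularity": 0.25,      # unnatural blinking / micro-expressions
    "edge_lighting_mismatch": 0.25,  # inconsistent lighting or skin tone at edges
    "audio_artefacts": 0.30,         # audio artefacts or unnatural speech rhythm
    "metadata_inconsistency": 0.20,  # metadata inconsistencies
}

def deepfake_score(signals: dict) -> float:
    """Weighted average over whichever recognised signals are available."""
    present = {k: v for k, v in signals.items() if k in SIGNAL_WEIGHTS}
    if not present:
        raise ValueError("no recognised signals provided")
    total_w = sum(SIGNAL_WEIGHTS[k] for k in present)
    return sum(SIGNAL_WEIGHTS[k] * v for k, v in present.items()) / total_w

# Usage: scores would come from (hypothetical) upstream analysers.
score = deepfake_score({
    "blink_irregularity": 0.8,
    "audio_artefacts": 0.6,
    "metadata_inconsistency": 0.9,
})
verdict = "likely synthetic" if score > 0.5 else "no strong indication"
```

Normalising by the weights actually present lets the score degrade gracefully when a signal (say, audio on a silent clip) cannot be computed.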
Defending Against Deepfake-Enabled Attacks
- Establish verbal verification protocols: For any request involving money, credentials, or sensitive data, use a callback to a known number or a pre-agreed code word
- Training and awareness: Educate staff that video and audio calls are no longer inherently trustworthy
- Multi-step approval: Financial transfers above a threshold should require multiple independent approvals
- Out-of-band verification: Confirm unusual requests through a separate, pre-established channel
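The code-word and multi-step-approval controls above can be sketched in a few lines. Everything here is an illustrative assumption rather than a prescribed policy: the code word, the $10,000 threshold, and the two-approver rule are invented for the example, and a real deployment would integrate with a payment platform and an identity provider.

```python
import hashlib
import hmac

# Pre-agreed code word, exchanged in person or via a trusted channel.
# Illustrative: store only a (preferably salted) hash, never the plaintext.
_CODE_WORD_HASH = hashlib.sha256(b"ossifrage").hexdigest()

APPROVAL_THRESHOLD = 10_000   # transfers above this need extra controls (assumed)
REQUIRED_APPROVERS = 2        # independent approvals required (assumed)

def code_word_matches(spoken: str) -> bool:
    """Constant-time comparison, so timing does not leak partial matches."""
    candidate = hashlib.sha256(spoken.encode()).hexdigest()
    return hmac.compare_digest(candidate, _CODE_WORD_HASH)

def transfer_allowed(amount: float, approvers: set, code_word: str) -> bool:
    """Apply the defences above: code word always, multi-approval above threshold."""
    if not code_word_matches(code_word):
        return False          # fails verbal verification
    if amount > APPROVAL_THRESHOLD and len(approvers) < REQUIRED_APPROVERS:
        return False          # fails multi-step approval
    return True

# A deepfaked "CFO" who cannot produce the code word is stopped outright,
# regardless of how convincing the video or voice is.
blocked = transfer_allowed(25_000_000, {"cfo"}, "please hurry")
```

The key design choice is that neither control relies on judging the media itself: the code word and the independent approvals live entirely outside the audio/video channel the attacker controls.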