A deep dive into deepfakes

Deepfake technology, which uses Artificial Intelligence (AI) to create highly realistic but fake images, audio and videos, is rapidly emerging as a significant threat in the cyber security landscape. As cyber criminals adopt this technology to exploit vulnerabilities, businesses must understand its implications and strengthen their defences.

What are deepfakes?

Deepfakes involve the use of AI algorithms to create convincing forgeries of individuals' likenesses.

This technology can fabricate realistic images, videos, and audio recordings, making it increasingly difficult to distinguish between authentic and fake content. Initially developed for entertainment and artistic purposes, deepfakes have quickly been co-opted by cyber criminals for malicious activities.

Cyber security implications of deepfakes

The adoption of deepfake technology by cyber criminals presents several alarming implications.

1. Sophistication of attacks

AI enables a higher level of sophistication in cyber attacks. Deepfakes can be used to create realistic phishing emails, voice messages, or video calls, making it harder for individuals to identify scams.

2. Enhanced social engineering

Deepfakes take social engineering to a new level. Cyber criminals can impersonate senior executives or trusted colleagues to deceive employees into divulging sensitive information or authorising large financial transactions.

For instance, an employee at a multinational firm was tricked into transferring $25 million to fraudsters who used deepfake technology to impersonate the company’s CFO in a video call.

3. Bypassing security measures

Deepfakes can bypass traditional security controls. For example, AI-generated voice deepfakes can fool voice recognition systems, and AI-manipulated images can deceive facial recognition software.

4. Rapid development

The speed at which AI technology evolves means that deepfakes will become even more convincing and harder to detect. Cyber criminals can continually improve their methods, making it essential for businesses to stay ahead of these developments.

Deepfakes in the real world

Several high-profile cases highlight the real-world impact of deepfake technology:

  • in the financial sector, deepfake incidents surged by 700% in 2023. Criminals are using AI to imitate vocal patterns, successfully issuing fraudulent instructions over the phone
  • the legal sector has also been targeted, with the Solicitors Regulation Authority (SRA) warning lawyers about the risks of using video calls for client identification due to the threat of deepfakes
  • the CEO of a leading advertising firm narrowly avoided falling victim to a deepfake scam. Cyber criminals used a fake WhatsApp account, voice cloning, and doctored YouTube footage to create a convincing virtual meeting. Thanks to the vigilance of the firm’s staff, the attack was unsuccessful
  • popular culture has not been spared either, with manipulated videos of celebrities like Taylor Swift being used to spread misinformation. These videos are widely shared on social media, illustrating the challenges in moderating such content
  • a deepfake video of Ukrainian President Volodymyr Zelenskyy was circulated, showing him supposedly telling Ukrainian troops to surrender. Zelenskyy quickly debunked the video, but the incident highlighted the potential use of deepfakes in war propaganda

Strengthening defences against deepfakes

To combat the threat of deepfakes, businesses must adopt a multi-faceted approach:

  • staff training: train your staff to stay vigilant so they can recognise and respond appropriately to suspected attacks
  • frequent simulated attacks: test your training by conducting regular simulated attacks that mimic techniques used by cyber criminals. This helps in identifying vulnerabilities and improving response strategies
  • enhanced authentication: implement stronger authentication measures, such as multi-factor authentication and conditional access, to reduce the risk of unauthorised access using stolen credentials
  • layered defence strategy: establish multiple layers of protection. If one control is breached, ensure that there are additional safeguarding measures and alerting mechanisms to prevent further progression of an attack
  • assessment and assurance: regularly assess and audit security measures to ensure their effectiveness. Engage independent experts to provide an unbiased evaluation of your security posture

Conclusion

Deepfake technology represents a considerable challenge in the realm of cyber security. However, by understanding the implications and adopting proactive measures, businesses can better protect themselves against the sophisticated threats posed by deepfakes.

Staying informed and vigilant, coupled with robust security practices, will be crucial in safeguarding against the evolving landscape.

Find out more

The Law Society has partnered with Mitigo to offer technical and cyber security services with exclusive discounts for our members.

Find out more about Mitigo’s cyber security services.

To book a free, no-obligation consultation, or for more information, contact Mitigo on 020 8191 9205 or email lawsociety@mitigogroup.com
