Boosted by AI, image and voice synthesis techniques can imitate reality with extreme precision. Their malicious use makes businesses particularly vulnerable to scams and cyberattacks. Experts share their advice on preventing these risks.

Increasingly realistic, "deepfakes" can faithfully replicate a person's appearance and voice using images, video recordings, or audio clips. A side effect of the large-scale deployment of artificial intelligence, this manipulated content has become a powerful tool for online fraud. For businesses, it is crucial to recognize the threats posed by such falsified content quickly and to implement protective measures, such as internal protocols or detection software.
"Today, the technology to create deepfakes is accessible to everyone," explains Petar Tsankov, a former researcher at ETH Zurich and founder of LatticeFlow, a spin-off specializing in detecting machine learning model errors. According to the expert, fraudulent phone calls pose a major risk to businesses. "Audio deepfakes are easy for fraudsters to create. They are also harder for the human brain to detect. In some cases, criminals even add background noise to make the deception more convincing."
Training staff
For the Zurich-based entrepreneur, the reliability of detection software is improving at the same pace as the threat itself. "Some fraudsters still manage to slip through the cracks. As with standard cybersecurity measures, raising employee awareness remains the best way for businesses to prevent potential scams or cyberattacks."
Companies must help their employees recognize the deceptive and particularly dangerous nature of deepfakes, which are still relatively unfamiliar to many. "People have already learned to be wary of suspicious emails. Now, they must understand that seeing someone on a screen or hearing them on the phone is no longer proof that they are actually communicating with that person," explains Touradj Ebrahimi, professor at EPFL and specialist in digital signal processing.
Revising governance
To counter these emerging threats, businesses must also adapt their governance structures. Touradj Ebrahimi recommends that every organization implement basic security measures and establish clear protocols to mitigate the most severe consequences of deepfake attacks.
"All sensitive operations, from money transfers to the exchange of confidential data, must undergo thorough verification. It is essential to ensure that such actions require at least two confirmations via two different communication channels – for example, by phone and email. A single-channel verification is no longer sufficient." In the absence of 100% reliable detection software, these fundamental procedures remain the most effective way to safeguard against threats.
Identifying electronic signatures
To strengthen security measures, businesses should also use software capable of detecting deepfake attack attempts. There are two main approaches to detection. The first, proactive approach relies on digital signatures voluntarily embedded by content creation software. For example, Google DeepMind places a watermark-like electronic signature on the content it generates. This trace can be detected by the public using Google's SynthID tool.
Comprising major tech companies such as Adobe, Arm, Intel, Microsoft, and Truepic, the Coalition for Content Provenance and Authenticity (C2PA) has developed a technical standard that enables the tracking of a digital file’s origin – whether a photo, video, or audio recording – through a publicly accessible history in the form of metadata. This method allows creators to label their content and users to verify its authenticity.
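For teams that want to check this provenance information themselves, the open-source c2patool command-line utility maintained by the Content Authenticity Initiative can display a file's C2PA manifest. The sketch below assumes the tool is installed and prints the manifest as JSON when given a file path; exact behaviour may vary between versions, and the file name is purely illustrative.

```python
import json
import subprocess

def inspect_provenance(path: str) -> None:
    """Print the C2PA provenance manifest of a media file, if one is present."""
    try:
        result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    except FileNotFoundError:
        print("c2patool is not installed on this system")
        return
    if result.returncode != 0 or not result.stdout.strip():
        print(f"{path}: no readable C2PA manifest found")
        return
    try:
        manifest = json.loads(result.stdout)
    except json.JSONDecodeError:
        print(f"{path}: manifest present but could not be parsed as JSON")
        return
    # The manifest records, among other things, which tool produced or edited the file.
    print(json.dumps(manifest, indent=2))

inspect_provenance("incoming_invoice_photo.jpg")  # hypothetical file name
```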
Enhancing digital detection capabilities
The reactive approach relies on specialized software to protect against unsigned synthetic multimedia content, which is more commonly used in fraud. Programs have been developed to identify the typical characteristics of manipulated files. "These solutions can now be integrated into companies' cybersecurity service offerings. However, some providers do not yet offer them, and it is up to client businesses to create demand so that supply follows," says Touradj Ebrahimi.
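How such a detection service could fit into a company's daily document intake can be sketched in a few lines. The example below uses a placeholder scoring function standing in for a third-party detection API; the threshold, the returned score, and the file name are assumptions for illustration and do not refer to any specific product mentioned in this article.

```python
import pathlib

# Hypothetical confidence threshold above which a file is escalated to a human reviewer.
ESCALATION_THRESHOLD = 0.7

def score_file_with_vendor(path: pathlib.Path) -> float:
    """Placeholder for a call to a third-party deepfake-detection service.

    A real integration would upload the file (or its hash) to the provider's API
    and return the manipulation score it reports. Returning a fixed value here
    keeps the sketch self-contained and runnable.
    """
    return 0.42  # stand-in value; a real service would analyse the file content

def triage(path: pathlib.Path) -> str:
    """Route an incoming file based on its manipulation score."""
    score = score_file_with_vendor(path)
    if score >= ESCALATION_THRESHOLD:
        return f"{path.name}: score {score:.2f} -> quarantine and escalate to the security team"
    return f"{path.name}: score {score:.2f} -> accept and keep an audit log entry"

print(triage(pathlib.Path("supplier_video_message.mp4")))  # hypothetical file name
```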
Once these detection tools are in place, regular updates are essential to maintaining a high level of security. "The sophistication of deepfakes is advancing rapidly, and the number of attacks will increase exponentially in the near future. Their nature will also evolve," the expert warns.
Deepfakes pose a real and growing threat to businesses, which should already be considering investments in detection technologies, training programs, or clear security protocols to mitigate risks. Far from being optional, cybersecurity has become an absolute necessity – one that only a comprehensive approach can effectively ensure.
Evaluating information sources
Currently, several free and paid tools allow businesses to assess the authenticity of the content they receive. The Swiss platform Q-Integrity uses artificial intelligence to detect digital manipulations. The tool analyzes photos and videos to determine whether they have been altered, assigning confidence scores. Other similar tools available on the international market include Microsoft Video Authenticator, Sensity AI, Reality Defender, and Pindrop.
Last modification 05.02.2025