
How can you make sure a voice call is truly authentic?

Interview with Dennis Buchinhoren, Account Director at Sectra Communications BV

While deepfake technology may seem harmless in itself, it can become a dangerous tool in the wrong hands. During a phone call, the human voice is one of the most natural ways to authenticate the person you are talking to, but as deepfake technology advances, that voice also becomes a growing security risk. In the near future, deepfake voice calls are expected to become interactive and to feel just as natural as speaking to a real, live person. Interesting for helpdesk services, you might say, but worrying at the same time in the case of malicious use. How can one really make sure the caller is authentic?

In this article, Dennis Buchinhoren, Account Director at Sectra Communications BV, explores the potential future threats of this technology, particularly when working with classified information, and also offers advice on how to protect your information.

What is voice deepfaking?

First of all, we need to clarify what voice deepfaking actually is: voice deepfaking refers to the practice of using generative artificial intelligence (AI) technology to create, recreate, or manipulate a person’s voice in a way that makes it sound authentic and indistinguishable from the original person’s voice. This technology works by analyzing and learning from a dataset of the target person’s speech, extracting patterns, and using them to generate synthetic speech that mimics their tonality, accent, pitch, and speaking style.
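
To make this concrete, the process can be thought of as a short pipeline: collect recordings of the target speaker, extract their speaking patterns, fit a generative model to those patterns, and synthesize new speech for arbitrary text. The Python sketch below is purely illustrative; the three stage functions are stand-ins, not any real library's API.

```python
# Purely illustrative outline of a voice-cloning pipeline.
# The three stage functions are stand-ins, not a real library's API.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class VoiceProfile:
    """Learned speaker characteristics: tonality, accent, pitch, speaking style."""
    speaker: str

def extract_voice_features(samples: list[Path]) -> VoiceProfile:
    # Stand-in for analyzing the dataset and extracting speaking patterns.
    return VoiceProfile(speaker="target")

def train_voice_model(profile: VoiceProfile) -> VoiceProfile:
    # Stand-in for fitting a generative model to the extracted patterns.
    return profile

def synthesize_speech(model: VoiceProfile, text: str) -> bytes:
    # Stand-in for generating audio that mimics the modeled voice.
    return f"[synthetic audio of {model.speaker} saying: {text}]".encode()

def clone_voice(sample_dir: Path, text: str) -> bytes:
    samples = sorted(sample_dir.glob("*.wav"))   # 1. dataset of the target's speech
    profile = extract_voice_features(samples)    # 2. analyze and extract patterns
    model = train_voice_model(profile)           # 3. fit a generative model
    return synthesize_speech(model, text)        # 4. synthesize mimicking speech
```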

Voice deepfakes can be used for various purposes, such as generating realistic-sounding voices for movies, video games, or virtual assistants. However, there are also potential malicious uses, like impersonating someone’s voice to spread false information, conduct scams, or discredit someone’s reputation.

As the technology behind voice deepfakes continues to advance, what future developments do you foresee?

Experts predict that advanced generative AI will make voice deepfakes interactive in the near future. A recipient of a deepfake voice call will be able to interrupt the caller and ask questions. The technology will evaluate the question, give an answer, and then return to the original message, so that the conversation feels very close to speaking with a real, live person. This might sound very helpful for helpdesk applications, but it also introduces serious new risks of malicious use. Imagine an employee in a finance department receiving a deepfake call that pretends to be their CEO asking them to transfer money to a bank account. The phone number matches the CEO's and the voice sounds just like the CEO, making it easy to believe that it really is the CEO calling. When confronted with confirmation questions ("Is this really the CEO?"), the AI interprets the question immediately and reformulates its output on the fly, simulating a live conversation with the victim. This can happen fully automatically or with a person typing text that is directly converted into the synthetic answer.
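
The scenario above boils down to a simple real-time loop: listen to the victim, transcribe the question, produce a plausible reply in the impersonated persona (automatically or typed by an operator), render it in the cloned voice, and steer the call back to the original request. The sketch below only illustrates that control flow; every component is passed in as a stand-in and does not refer to any real product or API.

```python
# Illustrative control flow of an interactive deepfake call.
# All components are passed in as stand-ins; this only shows why a
# confirmation question can be answered on the fly.
from typing import Callable

def interactive_call(
    script: str,                               # the prepared request, e.g. a payment
    listen: Callable[[], bytes],               # capture the victim's speech
    transcribe: Callable[[bytes], str],        # speech-to-text
    generate_reply: Callable[[str], str],      # automatic, or typed by an operator
    speak_as: Callable[[str], bytes],          # text-to-speech in the cloned voice
    play: Callable[[bytes], None],             # play audio into the call
    is_active: Callable[[], bool],             # call still ongoing?
) -> None:
    play(speak_as(script))                     # deliver the prepared request
    while is_active():
        question = transcribe(listen())        # e.g. "Is this really the CEO?"
        play(speak_as(generate_reply(question)))  # reformulated answer, on the fly
        play(speak_as(script))                 # steer back to the original message
```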

This is a worrying development that we need to be aware of in order to not fall into the trap in the future. It is vital for individuals working with classified information, who might be targeted for information collection, to be extra careful when communicating over the phone. At the same time, it is important not to place all the responsibility on the individual; the employer must provide their employees with the best possible conditions and equipment to avoid making mistakes.

How might these developments impact individuals handling classified information?

For individuals handling classified information over the phone, the security risks associated with voice deepfaking require special attention. Since voice deepfaking will soon reach a level that closely resembles talking to a real, live person, authentication of the caller is key. Security measures such as two-factor authentication, closed user groups, and stringent procedures for (re)activation of communication services handling classified information will become vital to keeping information secure.
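
To illustrate what authentication beyond the voice itself can look like, the sketch below shows a minimal out-of-band challenge-response: both parties hold a pre-shared secret, the recipient sends a random challenge over a second channel (for example, a corporate messaging app), and the caller must answer with the matching response code. It uses only Python's standard library, is not a description of Sectra's mechanism, and real deployments would combine it with closed user groups and managed key distribution.

```python
# Minimal out-of-band challenge-response sketch (Python standard library only).
# Not a description of any specific product; it illustrates the principle that
# proof of identity is a shared secret, not the sound of a voice.
import hashlib
import hmac
import secrets

def new_challenge() -> str:
    """Recipient generates a random challenge and sends it over a second channel."""
    return secrets.token_hex(8)

def response_code(shared_secret: bytes, challenge: str) -> str:
    """Caller computes a short response code from the pre-shared secret."""
    digest = hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()
    return digest[:8]  # short enough to read out loud

def verify_caller(shared_secret: bytes, challenge: str, answer: str) -> bool:
    """Recipient checks the answer; a cloned voice alone cannot produce it."""
    return hmac.compare_digest(response_code(shared_secret, challenge), answer)

# Example: the "CEO" calling about a payment must answer the challenge correctly.
secret = b"pre-shared-secret-from-enrollment"
challenge = new_challenge()
print(verify_caller(secret, challenge, response_code(secret, challenge)))  # True
```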

It is equally important to classify information and to continuously build end-user awareness of the measures required to handle it. In the case of SECRET classified information, end-user authentication mechanisms need to be set at the highest possible level. End users may perceive security measures as roadblocks to using technology, but by creating awareness of the potential risks, usage procedures, and security measures, those measures become easier to accept, and organizations can better protect their sensitive data against the threats posed by advances in voice deepfake technology.

How can users establish trust in the equipment they use when exchanging classified information?

Establishing trust is fundamental when working with classified information. You must feel confident that the equipment you use to exchange classified information provides the utmost protection for your sensitive data. Devices equipped with multiple layers of authentication can assist users in this crucial task, guaranteeing that only authorized individuals can access and use the equipment. It is also crucial to understand the actual security risks, such as voice deepfaking. A lack of awareness of these risks can lead to reluctance to adopt security measures, particularly when their functionality differs from everyday tools.

How to protect yourself against voice deepfaking

First and foremost, using the right technology for the right purpose is key. Sectra offers state-of-the-art secure communication solutions for encrypted speech, messaging, and data sharing. Our solutions are approved up to the classification levels NATO and EU SECRET, which assures our customers that the right security measures are implemented. When using our solutions, end users do not need to rely on the voice itself as proof of the caller's identity, because several authentication measures are built into the systems by design.

If you do not have a secure communication solution for exchanging classified information, here are four tips to achieve a higher level of protection:

  • Inform your organization about the risks of voice deepfaking. Make sure the entire staff is aware of the risks and agrees never to deviate from corporate communication methods and instruments.
  • Implement communication equipment for sharing classified information according to the classification level (ensuring caller authentication).
  • Always try to ask a personal question; an AI would likely struggle to provide a reliable answer to these types of questions.
  • If in doubt, hang up and call the person in question back using their known contact details.

If you want to discuss this further, feel free to reach out to us!


Author: Dennis Buchinhoren, Account Director at Sectra Communications BV
