Voice Cloning: A New Threat?
An Article Review
If the amount of media coverage over the last year is any indication, it would seem that the public has finally taken note of the latest incarnation of AI, a term that seems to resurface roughly every decade, always referring to some new technology (or technologies) that attempts to mimic in some way how humans think.
In its 2020s getup, AI usually refers to various machine learning algorithms with wide-ranging applications: chatbots can eerily mimic a human conversation partner, cars can be made to automatically identify and avoid obstacles on the road, and images can be generated from text prompts. All of this and more has become possible in such a relatively short period of time that it can be difficult to keep up with just what the latest AI technology is capable of doing.

The bad guys are keeping up with the technology, though, and have already applied it to conduct real-world attacks using AI voice cloning… and these tools are neither hard to come by nor difficult to operate, with several of them available to the public from companies like Microsoft. With low barriers to entry and rapidly evolving technology, the threat posed by various AI technologies can't be ignored any longer.
Unfortunately, the companies developing this technology didn't wait for regulatory agencies to come up with frameworks for securing it before unleashing it on the public, but fortunately there are steps that can be taken to minimize the risk. Financial institutions should carefully weigh the risks of voice banking services, and consider holding back on deploying such systems until their security can be thoroughly evaluated. Finally, multi-factor authentication should be used whenever possible, allowing someone's voice to be paired with an additional piece of information.
Original article by Neil C. Hughes writing for Cybernews
This Article Review was written by Vigilize.
Matt Jolley is the current Vigilize; he is also the recipient of the 2023 Cyb3rP0e+ designation!