“Deep Fake” Technology Is Behind the New Trend of AI-Assisted Fraud…
An article review.
It’s a scenario that probably happens far more often than we’d like to admit: an employee gets a phone call from a superior requesting that they perform some task that is out of the ordinary. There are policies in place requiring additional authorization before that task is performed, but when a superior is on the line saying they authorize it…well, many employees would probably do as they were asked. After all, they know what their boss sounds like!
Until now that employee would probably have been correct in identifying the voice on the phone as their boss, but a recent fraud case suggests that these days hearing is not believing. The case alleges that “deep fake” technology—which uses AI to fabricate video and audio of a targeted person—was used to trick an unnamed CEO into believing he was speaking with the President of his parent company…and into wiring funds to one of their overseas suppliers.
It wasn’t until the promised reimbursement for the transfer failed to arrive—and a call for additional transfers once again came from the “President”—that any suspicions were raised, and by then the fraudsters had cleared out the account and vanished. According to the insurance firm that covered the unnamed victim’s claim, it was the first reported loss due to an AI-related crime. Additionally, the software believed to have been used in the attack was evaluated by the insurer and found to produce convincingly realistic speech.
While your organization likely already has policies in place to verify the identity of callers, this case highlights how important it is that those policies not be set aside…even when the caller sounds like who they claim to be. Declining a request from what sounds like your frustrated boss may make for an awkward conversation, but these days you can’t be too careful.
Original article by Ephrat Livni, writing for Quartz.