New tools could allow unskilled attackers to launch increasingly sophisticated attacks…
An article review.
Imagine a world where you receive a call from your boss asking you to assist them with something… only it’s not your boss, but an AI being used by an attacker. This isn’t science fiction; it’s an actual attack that has been seen in the wild, according to an article in The Washington Post, captured by archive.is. Using audio clips from public speeches, attackers mimicked the voice of an executive at an Indian technology firm in an attempt to get an employee to transfer money to their bank account. In this case the attack was thwarted by an alert employee who suspected things were not as they seemed, but the incident shows the degree to which artificial intelligence can be used by malicious actors.
The potential attack vectors go beyond phone calls: one firm cited AI-generated phishing emails as the cause of a massive rise in the number of attacks seen over the last year. AI tools allow attackers to create more realistic, personalized phishing messages that lack the telltale signs employees have been trained to notice and that anti-phishing tools have been built to identify. Experts are also concerned that relatively unskilled attackers could use AI tools to write code that exploits security vulnerabilities in a targeted site, greatly increasing the risk organizations face.
Fortunately, AI tools can also help those looking to defend organizations against attacks. The National Science Foundation is working with numerous groups to study and respond to the threats posed by AI tools, though they may have their work cut out for them. In the meantime, organizations looking to secure themselves against cybercriminals will need to remain vigilant and focus on defense in depth, making sure the failure of any single system doesn’t lead to a successful attack.
Original article by Joseph Menn writing for The Washington Post, captured by archive.is.