Warning: Deepfake Attacks Are Set to Surge!

New deepfake products & services are emerging across the Dark Web.

The rise of AI-driven deepfake technology is something cyber-security researchers have cautioned about for years, & now it has officially arrived.

Cyber-criminals are increasingly sharing, developing & deploying deepfake technologies to bypass biometric security protections & to commit crimes including blackmail, identity theft, social engineering attacks & more, experts warn.

Recorded Future

A drastic increase in deepfake technology & service offerings across the Dark Web is the first sign that a new wave of fraud is about to appear, according to a new report from Recorded Future, which predicted that deepfakes are being adopted by threat actors with an enormous range of goals & interests.

“Within the next few years, both criminal & nation-state threat actors involved in disinformation & influence operations will likely gravitate towards deepfakes, as online media consumption shifts more into ‘seeing is believing’ & the bet that a proportion of the online community will continue to be susceptible to false or misleading information,” the Recorded Future report observed.

Like most new technology, deepfakes' first 'incubator' was pornography, the report pointed out, but now that the technology is 'bouncing around' the criminal corners of the internet, its development is being boosted by hardened cyber-criminals.

Dark Web

The researchers commented that discussions among threat actors about deepfake products & technologies are largely concentrated in English- & Russian-language crime forums, but related topics were also observed on Turkish-, Spanish- & Chinese-language forums.

Much of the talk in these underground forums is focused on how-tos & best practices, according to Recorded Future, which appears to demonstrate a widespread effort across the cyber-criminal underground to sharpen deepfake tools.

Free Software Downloads

“The most common deepfake-related topics on dark web forums included services (editing videos & pictures), how-to methods & lessons, requests for best practices, sharing free software downloads & photo generators, general interests in deepfakes, & announcements on advancements in deepfake technologies,” the report added.

“There is a strong Clearnet presence & interest in deepfake technology, consisting of open-source deepfake tools, dedicated forums, & discussions on popular messenger applications such as Telegram & Discord.”

Malicious Synthetic Media

Last summer, FireEye used the Black Hat USA 2020 event to warn attendees about how widely available open-source deepfake tools have become, complete with pre-trained natural language processing, computer vision & speech recognition models: just about everything a threat actor might need to develop what the company called malicious "synthetic media."

FireEye Staff Scientist Philip Tully said at the time that the world was in the "calm before the storm."

That storm now appears to be gathering just over the horizon.

Financial Cyber-Crime

Experian also recently released a report calling synthetic identity fraud the fastest-growing type of financial cyber-crime.

“The progressive uptick in synthetic identity fraud is likely due to multiple factors, including data breaches, dark web data access & the competitive lending landscape,” the Experian “Future of Fraud Forecast” suggested.

“As methods for fraud detection continue to mature, Experian expects fraudsters to use fake faces for biometric verification. These ‘Frankenstein faces’ will use AI to combine facial characteristics from different people to form a new identity, creating a challenge for businesses relying on facial recognition technology as a significant part of their fraud prevention strategy.”
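To make the "Frankenstein face" idea more concrete, the sketch below shows one way a blended identity can confuse embedding-based face matching, along with a simple defensive heuristic: flagging any probe that sits unusually close to several enrolled identities at once. This is a minimal illustration only; the embeddings are random placeholders standing in for the output of a real face-recognition model, & the similarity threshold is an assumption, not Experian's or any vendor's actual method.

```python
# Illustrative sketch only: random vectors stand in for real face embeddings.
import numpy as np

rng = np.random.default_rng(42)

def normalise(v: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length so dot products act as cosine similarity."""
    return v / np.linalg.norm(v)

# Hypothetical 128-dimensional embeddings for three enrolled customers.
enrolled = {name: normalise(rng.standard_normal(128))
            for name in ("alice", "bob", "carol")}

# A synthetic "Frankenstein" identity blended from two real faces.
frankenstein = normalise(0.5 * enrolled["alice"] + 0.5 * enrolled["bob"])

def close_matches(probe: np.ndarray, gallery: dict, threshold: float = 0.6) -> list:
    """Return every enrolled identity whose cosine similarity to the probe exceeds the threshold."""
    return [name for name, emb in gallery.items() if float(probe @ emb) >= threshold]

matches = close_matches(frankenstein, enrolled)
if len(matches) > 1:
    # A genuine face should strongly resemble at most one enrolled identity.
    print(f"Suspicious probe: closely resembles multiple identities: {matches}")
```

Because the blended vector lands roughly midway between the two source identities, it scores highly against both, which is exactly the kind of anomaly a detection layer could look for.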

Hao Li

The rising threat of deepfake technology has been a worry for years. In 2019, deepfake artist Hao Li sounded the alarm that AI in the hands of cyber-criminals would be a formidable security threat.

“I believe it will soon be a point where it isn’t possible to detect if videos are fake or not,” Li explained in the Autumn of 2019. “We started having serious conversations in the research space about how to address this & discuss the ethics around deepfake & the consequences.”

There have already been a few successful deepfake cyber-crimes. In September 2019, it was reported that cyber-criminals had used deepfaked audio of a chief executive's voice to trick a company into transferring $243,000 to a fraudulent bank account.

Protecting Against Deepfakes

Cyber-security expert Brian Foster (a strategic advisor to Awingu) recently explained that protecting against deepfakes will require a drastic re-think of the traditional approach. Foster envisioned an automated, zero-trust system that itself leverages AI & machine learning to analyse multiple security parameters.

“Overall, the more we can automate & use intelligence to accomplish verification processes, the better,” Foster advised. “This approach relies less on humans, who, let’s face it, make lots of mistakes, & more on innovative best practices & tools that can be implemented far faster & more successfully than any static corporate policy.”
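As a rough illustration of the kind of automated, multi-signal verification Foster describes, the sketch below combines several independent trust signals into a single score & only approves a request when it clears a threshold. The signal names, weights & threshold are assumptions made for illustration, not a description of any specific product or of Foster's own design.

```python
# Minimal sketch of multi-signal, zero-trust style verification (illustrative only;
# signal names, weights & threshold are assumptions, not a real product's API).
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    biometric_confidence: float   # 0.0-1.0, score from a liveness-aware face/voice check
    device_trust: float           # 0.0-1.0, managed & patched devices score higher
    behavioural_match: float      # 0.0-1.0, interaction patterns vs. the user's history
    network_reputation: float     # 0.0-1.0, known corporate network vs. anonymising proxy

# Assumed weights; in practice these would be tuned or learned from labelled fraud data.
WEIGHTS = {
    "biometric_confidence": 0.4,
    "device_trust": 0.25,
    "behavioural_match": 0.2,
    "network_reputation": 0.15,
}
APPROVAL_THRESHOLD = 0.75

def risk_score(signals: VerificationSignals) -> float:
    """Weighted combination of independent signals; no single factor is trusted on its own."""
    return sum(weight * getattr(signals, name) for name, weight in WEIGHTS.items())

def verify(signals: VerificationSignals) -> str:
    score = risk_score(signals)
    if score >= APPROVAL_THRESHOLD:
        return f"allow (score={score:.2f})"
    return f"step-up verification required (score={score:.2f})"

# Example: a convincing deepfake may pass the biometric check but fail on context.
spoof_attempt = VerificationSignals(
    biometric_confidence=0.95, device_trust=0.2,
    behavioural_match=0.3, network_reputation=0.1,
)
print(verify(spoof_attempt))  # -> step-up verification required
```

The point of the design is that a deepfake only defeats one of several signals: even a near-perfect biometric score cannot clear the threshold when the device, behaviour & network context do not corroborate it.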
