Deepfake Ransomware Threat Highlighted 

Multinational IT security company Trend Micro has highlighted the future threat of cybercriminals making and posting (or threatening to post) malicious 'deepfake' videos online in order to damage reputations and/or extract ransoms from their target victims.

What Are Deepfake Videos?

Deepfake videos use deep learning technology and manipulated images of target individuals found online, often celebrities, politicians, and other well-known people, to create an embarrassing or scandalous video, such as pornography or violent behaviour. The AI aspect of the technology means that even the facial expressions of the individuals featured in the video can be eerily accurate, and on first viewing, the videos can be very convincing.

An example of the power of deepfake videos can be seen in the WatchMojo top 10 (US) deepfake video compilation here: https://www.youtube.com/watch?v=-QvIX3cY4lc

Audio Too

Deepfake 'ransomware' can also involve using AI to manipulate audio in order to create a damaging or embarrassing recording of someone, or to mimic someone's voice for fraud or extortion purposes.

As an example, in March this year, a group of hackers was able to use AI software to mimic (i.e. create a deepfake of) an energy company CEO's voice in order to successfully steal £201,000.

Little Fact-Checking

Rik Ferguson, VP of security research, and Robert McArdle, director of forward-looking threat research at Trend Micro, recently told delegates at Cloudsec 2019 that deepfake videos have the potential to be very effective not just because of their apparent accuracy, but also because we live in an age when few people carry out their own fact-checking. This means that by simply uploading such a video, the damage to the person's reputation and public standing is done.

Scalable & Damaging

Two of the main threats of deepfake ransomware videos are that they are very flexible in terms of subject matter (anyone can be targeted, from teenagers for bullying to politicians and celebrities for money), and that they offer cybercriminals a very scalable way to launch potentially lucrative attacks.

Positive Use Too

It should be said that deepfakes don't serve only negative purposes: they can also help filmmakers reduce costs and speed up work, be used to make humorous videos and advertisements, and even assist in corporate training.

What Does This Mean For Your Business?

The speed at which AI is advancing means that deepfake videos are becoming more convincing, and more people now have the resources and skills to make them. This, coupled with the flexibility and scalability of the medium, and the fact that it is already being used for dishonest purposes, means that it may soon become a real threat in the hands of cybercriminals, e.g. to target specific business owners or members of staff.

In the wider environment, deepfake videos targeted at politicians in (state-sponsored) political campaigns could help to influence public opinion at the ballot box, which in turn could affect the economic environment in which businesses must operate.

Posted by Andrew Sewell