When AI goes awry

Little consideration has been given to the potential challenges and threats AI can pose to society

Once thought to exist only in the realm of fiction and futuristic movies, artificial intelligence (AI) has already established use cases across a host of industries, including the manufacturing and retail sectors. AI has ingrained itself into our operational procedures and business processes, creating efficiencies that we now deem necessary for everyday functions.

Despite its prevalence, little consideration has been given to the potential challenges and threats AI can pose to society. As a new technology with immense potential, there is still so much unknown about AI, and careful supervision is required to control and contain it. Even while the technology is in its infancy, AI presents opportunities to be abused in malware and leveraged in attacks that can impact businesses and society.

According to the Office of the Australian Information Commissioner's Notifiable Data Breaches Scheme 12-month Insights Report, there were more than 900 eligible data breaches from 1 April 2018 to 31 March 2019, 60 per cent of which were the result of malicious or criminal attacks. The top sectors reporting a breach were health, finance, legal, accounting and management services: industries that hold highly confidential information, making them ideal targets for AI-powered breaches.

With the threat of malware expected to heighten as AI evolves, businesses must be on high alert and bolster their defences. Malwarebytes expects AI to be implemented in malware or otherwise used for malicious purposes within the next one to three years. While there are currently no known examples of AI-enabled malware, a cyber threat that actively learns and evolves could be devastating, and it is only a matter of time before this becomes a reality.

Artificial intelligence – a weapon and weakness

AI has the potential to create new variants of malware that, when used with malicious intent, can be harder to detect and more precisely targeted, convincing, destructive, effective and widespread. AI-enabled malware could hinder detection by cybersecurity vendors by changing its behaviour and characteristics based on its environment. It could also delete itself when it suspects it is being analysed, changing shape and form along the way and making decisions to target specific files.

While AI-enabled malware can be used as a form of attack, it can conversely also be used as a defence. The same traits that make it a threat can be leveraged to combat the growing number of malware variants deployed each day, supporting short-staffed IT teams. AI can be used to evaluate, organise and condense threat variants, automating mundane tasks at scale. With these improved detection capabilities, processes can be automated to catch future versions of malware.
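As a rough illustration only, and not a description of any particular vendor's technology, the short Python sketch below shows the general idea behind ML-assisted detection: a classifier trained on labelled file features (the feature names and values here are hypothetical) can score a file it has never seen before, rather than relying on a static signature.

# Minimal sketch, assuming scikit-learn is installed; features and values are
# invented for illustration, not drawn from any real detection engine.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-file features: [size_kb, entropy, imported_api_count, is_packed]
X = [
    [120, 7.8, 3, 1],   # packed, high-entropy samples from a known malware family
    [450, 7.5, 5, 1],
    [300, 4.2, 40, 0],  # ordinary, unpacked business applications
    [80,  3.9, 25, 0],
]
y = [1, 1, 0, 0]        # 1 = malicious, 0 = benign

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# A never-before-seen variant is scored against the learned patterns
# rather than matched to a static signature list.
print(model.predict([[200, 7.6, 4, 1]]))   # e.g. [1] -> flag for analyst review

In practice such a model would be trained on millions of labelled samples and far richer features; the point is that detection can generalise to variants the model was never explicitly shown.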

Exploiting AI-enabled malware in cybercrime

Despite the opportunities to leverage AI to combat security concerns, the reality is that it opens up a new avenue for cybercriminals to profit, leading to the introduction of new forms of malware. An example is DeepFakes, a method of creating fake videos of real people using AI. Criminals simply feed a computer footage of a person's facial expressions and find someone who can imitate that person's voice. The AI algorithm then matches the mouth and face so they synchronise with the spoken words.

Recently, a DeepFake video emerged of Game of Thrones actor Kit Harington appearing to apologise for the final season of the hit television series. Many fans were fooled by the video, and we can only imagine the impact that maliciously crafted DeepFakes could have on businesses.

Already, machine learning (ML) has successfully solved CAPTCHA challenges, finding its way through websites as easily as a human would. AI and ML can also be used to scan social media platforms, identifying users associated with target organisations and gathering intelligence to create more effective spear-phishing campaigns. Spam, too, can become more convincing when ML is used to adapt its message to each recipient.

Looking to the future, other realistic possibilities include worms that avoid discovery because they learn from each detection event and change their characteristics before the next infection. Early trojan variants can already create new copies of themselves, a process that AI could make far more efficient.

Creating a smart and secure line of defence

To combat these constantly evolving and adapting threats, businesses need to be proactive and arm themselves against this new wave of cybercrime. It is important for businesses to understand the threat landscape, and even more so for those charged with an organisation’s cybersecurity.

Proactive planning is crucial; however, it can be hard to prepare for threats that are constantly evolving. As a result, businesses need to align themselves with cybersecurity vendors that acknowledge the threat of AI and are developing AI and machine learning-capable technologies as part of a defensive strategy. These vendors are already building into their security programs the possibility that cybercriminals will attempt to weaponise AI, so they can anticipate and combat adapting threats.

Evolving with the threat landscape

With AI-enabled malware expected to be fully formed in the next few years, it is crucial for businesses and those in the cybersecurity industry to be observant and keep on top of the threat landscape. It is important to have a 360-degree view of how AI can be used as a weapon, but equally as a defence. Preparing now, years ahead of its full impact, will be the best way to combat AI-enabled malware.

Jim Cook is regional director, Malwarebytes ANZ.

 
