Cybersecurity: how can artificial intelligence turn against you?

Artificial intelligence is positively impacting our world in previously unimaginable ways across many different industries. 

The use of AI is particularly interesting in the cybersecurity industry because of its unique ability to scale and prevent previously unseen, aka zero-day, attacks.

But this is only possible as long as hackers stick to classic attack techniques. What happens when they turn to artificial intelligence themselves? Most cybersecurity experts agree that cyberattacks based on artificial intelligence have already begun. This article details their possible modes of operation.

 

What is AI?

We must first develop a basic understanding of how AI technology works. The first thing to understand is that AI comprises a number of subfields. One of these subfields is machine learning (ML), in which a system learns patterns from data instead of following explicitly programmed rules, and does so at a scale and speed no human could match.

To achieve this type of learning, large sets of data must be collected to train the model on, in order to develop a high-quality algorithm: essentially a mathematical function that accurately recognizes an outcome or characteristic. That algorithm can then be applied to text, speech, objects, images, movement, and files. Doing this well takes vast amounts of time, skill, and resources.
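To make that train-then-apply workflow concrete, here is a minimal sketch in Python using scikit-learn. The file features, values, and labels below are invented purely for illustration; a real system, such as an ML-based antivirus, would train on millions of labeled samples.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row of features: [file size in KB, imported function count, entropy]
# (hypothetical features, chosen only to illustrate the idea)
train_features = [
    [120, 45, 5.1],
    [800, 310, 7.8],
    [64, 20, 4.9],
    [950, 400, 7.9],
]
train_labels = ["benign", "malicious", "benign", "malicious"]

# "Training": the model derives a decision function from labeled examples.
model = RandomForestClassifier(random_state=0)
model.fit(train_features, train_labels)

# "Applying": the learned function classifies a file it has never seen.
print(model.predict([[700, 280, 7.5]]))
```

The decision function the model derives here is the "math equation" described above: it maps measurable characteristics to an outcome.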

So what is it not? "AI" is really a marketing misnomer that sounds awesome and futuristic, which is why the phrase is currently slapped onto everything from cars to automatic juicers in order to boost sales. What it currently is not is a self-motivated, conscious technology, so there is no Matrix or Terminator scenario to fear (not at the moment, anyway).

If someone does create that in the future, we will have to revisit that statement. But for now, each AI product is simply a really useful and powerful tool built for a very narrow purpose. Like every tool, AI has the potential to be used for evil as well as good.

 

Examples of Evil AI

Image CAPTCHAs leverage humans to teach a machine what an image contains. When you click on CAPTCHA images and select the boxes that show letters or contain vehicles, you are actually helping a neural network learn to recognize a letter or a vehicle. Bad actors on the Dark Web can exploit the same idea on their own forums to develop AI algorithms that accurately recognize what letters and vehicles look like, and then offer their own CAPTCHA-breaking AI services.

In fact, researchers had already built a CAPTCHA-breaking bot with up to 90% accuracy two years ago. Such a bot is scalable and profitable because it can effectively deceive the CAPTCHA system into classifying it as human, easily bypassing this type of human-verification check. There are more difficult CAPTCHAs, such as sliding puzzle pieces and pivoting letters, but these are not yet as popular or widespread.
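Underneath, such a service is just an image classifier. The sketch below shows, in broad strokes, the kind of small convolutional network that could be trained on labeled character crops; the input shape, class count, and (commented-out) training step are assumptions for illustration, not details of any real CAPTCHA-breaking tool.

```python
import tensorflow as tf

NUM_CLASSES = 36  # assumed: digits 0-9 plus letters A-Z

# A small convolutional network for classifying single character images
# (assumed to be 28x28 grayscale crops extracted from CAPTCHAs).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would require a labeled dataset, exactly what the forum
# click-farming described above would produce:
# model.fit(train_images, train_labels, epochs=5)
```

The hard part for an attacker is not this architecture, which is textbook, but amassing the labeled data, which is precisely why the crowdsourced labeling described above matters.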

Another AI-driven attack could target vulnerability discovery. Publicly disclosed vulnerabilities are labeled with CVE identifiers, and each entry describes a weakness in a piece of software or hardware. As mentioned before, reading text like this falls squarely within the field of AI. A bad actor could train a model to read vulnerability details effectively and, from there, automate the exploitation of those vulnerabilities across organizations at scale.
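The building block here is ordinary text classification. Below is a hedged sketch, using scikit-learn with a tiny invented dataset, of how CVE descriptions could be sorted into categories automatically; defenders use the same technique for triage.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy descriptions and labels, for illustration only.
descriptions = [
    "Buffer overflow in the HTTP parser allows remote code execution.",
    "Improper input validation allows SQL injection via the login form.",
    "Cross-site scripting in the comment field allows script injection.",
    "Heap overflow in the image decoder allows remote code execution.",
]
labels = ["memory-corruption", "injection", "injection", "memory-corruption"]

# TF-IDF turns each description into a numeric vector; the classifier
# then learns which words signal which vulnerability class.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(descriptions, labels)

print(clf.predict(["Stack overflow in the PDF renderer allows code execution."]))
```

At scale, a model like this could flag which new CVE entries match the software an attacker's tooling already knows how to exploit.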

AI solutions can also be defrauded if you understand what a particular AI is looking for. For example, some AI solutions are very good at determining whether traffic to a site is legitimate human traffic, based on a variety of factors such as Internet browser type, geography, and time distribution. An AI tool built for evil purposes could collect that same information over time and use it, in conjunction with a batch of compromised company credentials, to make automated login attempts look like legitimate human activity.
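As a rough illustration, the sketch below trains a classifier on exactly the signal types listed above (browser, geography, hour of day); every feature value and label is invented. An attacker who learned which profile such a model treats as "human" could shape bot traffic to match it.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Invented session records; 1 = legitimate human session, 0 = bot.
sessions = pd.DataFrame({
    "browser":  ["chrome", "curl", "firefox", "chrome", "python-requests"],
    "country":  ["FR", "RU", "FR", "DE", "RU"],
    "hour":     [10, 3, 14, 9, 4],
    "is_human": [1, 0, 1, 1, 0],
})

# One-hot encode the categorical features; keep the hour as a number.
X = pd.get_dummies(sessions[["browser", "country", "hour"]],
                   columns=["browser", "country"])
clf = RandomForestClassifier(random_state=0).fit(X, sessions["is_human"])

# A bot that spoofs a common browser, a plausible geography, and
# business-hours timing lands on the "human" side of this model.
```

This is the sense in which a defensive AI can be "defrauded": its decision boundary becomes a recipe for the attacker.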

 

Why There is Hope

The good news is that, for once, the good guys are years ahead of the bad guys: their own AI solutions are already in place to meet these threats. This is due to the high barrier to entry in terms of resources and talent. However, these barriers are lower for certain groups, such as organized crime and nation-states.

The good guys need to keep creating and improving their AI tools. If we rest on our laurels, the bad guys will not only catch up to us, but they will come out ahead.

François BARAËR 

Sales Engineer - Southern Europe - BlackBerry Cylance

Why Does Deezer Rely on AI for Cybersecurity?

Discover how BlackBerry Cylance AI keeps Deezer safe in this case study. Watch the video.