How Artificial Intelligence Can Be Used Maliciously in Cybersecurity

Cybersecurity has become a central concern in technology. Unfortunately, the field struggles to keep pace with the continued development of malware and cyber-attack methods, and those who prefer to use their skills maliciously have found a powerful new tool: artificial intelligence (AI).

Last year, the cybersecurity company Darktrace reported a new form of security breach, one built on machine learning. The malicious system observed and learned the normal behavior of the network it had infiltrated. Once it had gathered enough information, it began mimicking that behavior, becoming almost impossible for even advanced security tools to identify.

AI has mostly been explored for its defensive uses, such as scanning networks and systems for intrusions. The Darktrace report makes it clear that the "other side" can use AI as a self-teaching attack tool, too.

As cybersecurity enters the new realm of advanced AI, it is vital to understand the malicious threats AI can pose and how they can be countered.

Hackers Weaponizing Artificial Intelligence

While cybersecurity firms have been studying AI as a tool for attack prevention, hackers have the opposite in mind. AI and its ability to learn can provide malicious actors with the opportunity to get around typical defense mechanisms.

One major threat to the cybersecurity of companies and individuals is user error. A familiar example of this vulnerability is phishing: all it takes is one well-crafted email to convince an unsuspecting user to click a malicious link. If that works today, imagine the surge in phishing success rates once an AI learns to mimic the style and context of legitimate email.

Many cybersecurity measures rely on programs that scan for irregularities in code; whatever they flag is removed, neutralizing the threat to a system. With AI, hackers could generate code that constantly reshapes itself, residing in a system without ever matching a known pattern. As a result, AI presents an opportunity for hackers to become far more efficient with their attacks.
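To see why that matters, here is a minimal sketch of signature-based scanning; the byte patterns below are invented placeholders, not real malware signatures. A scanner like this only catches patterns it has seen before, so code that re-encodes itself on every infection simply never matches the list.

```python
# A toy signature scanner: the byte patterns are invented
# placeholders, not real malware signatures.
SIGNATURES = {
    "demo-dropper": b"\xde\xad\xbe\xef",
    "demo-keylogger": b"EVIL_HOOK",
}

def scan(data: bytes) -> list[str]:
    """Return the names of any known signatures found in the payload."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

print(scan(b"header" + b"\xde\xad\xbe\xef" + b"payload"))  # ['demo-dropper']
# The same payload, trivially re-encoded by an attacker, matches nothing:
print(scan(bytes(b ^ 0x01 for b in b"\xde\xad\xbe\xef")))  # []
```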

Using AI Against Itself

The appeal of AI as a cybersecurity tool is its ability to learn patterns and tendencies. This machine learning allows a system to find inconsistencies in code, which is very useful in detecting malware, and once it becomes proficient at identifying these issues, it can do so far more efficiently than any human analyst.

A detection model learns by studying vast amounts of data about malware, and over time it becomes proficient at spotting new variants based on what it has learned. But what happens when a hacker finds a way to tamper with the model or with the data it learns from? Poisoned training data can quietly teach the system to go through the motions of looking for problems while waving the real ones through.
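Here is a toy sketch of that kind of training-data poisoning, using a simple scikit-learn classifier as a stand-in for a real detection model. The data is synthetic, and the 60% label flip is deliberately heavy-handed so the effect shows up clearly.

```python
# A toy data-poisoning demo: synthetic features, with benign samples
# clustered near 0 and malware samples clustered near 1.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X_benign = rng.normal(0.0, 0.3, size=(500, 8))
X_malware = rng.normal(1.0, 0.3, size=(500, 8))
X = np.vstack([X_benign, X_malware])
y = np.array([0] * 500 + [1] * 500)

clean_detector = LogisticRegression().fit(X, y)

# Poisoning: the attacker relabels 60% of the malware samples as
# benign before training (heavy-handed, to make the effect obvious).
y_poisoned = y.copy()
flipped = rng.choice(np.arange(500, 1000), size=300, replace=False)
y_poisoned[flipped] = 0
poisoned_detector = LogisticRegression().fit(X, y_poisoned)

# Fresh malware sails past the poisoned detector.
X_fresh = rng.normal(1.0, 0.3, size=(200, 8))
print("clean detector catch rate:   ", clean_detector.predict(X_fresh).mean())
print("poisoned detector catch rate:", poisoned_detector.predict(X_fresh).mean())
```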

AI could also be used to deny service by convincing security systems there is a problem. For instance, it could signal an overwhelming number of apparent attacks, encouraging defenders to shut a system down entirely.

Unfortunately, the development of machine learning techniques has outpaced our understanding of how to protect them from those with ill intent. As in most areas of cybersecurity, the battle to keep up with hackers is frequently lost, meaning AI can and will be turned against itself.

The Introduction of the Chatbot

The chatbot has revolutionized customer service for many companies. From Facebook to online banks, chatbots reply to inquiries with an efficiency that human agents could never attain. This is widely seen as a boon for the service industry and a way to ensure customer satisfaction.

However, as with any AI technology, the chatbot has also become a tool for malicious attacks. One example is a bot released on Facebook in 2016, which tricked thousands of users into unknowingly installing malware that gave attackers remote access to their Facebook accounts.

Conversations with a chatbot can be alarmingly revealing. Imagine a conversation with a customer service bot at a bank: those exchanges often include details like an address, a phone number, and financial information. A malicious, AI-driven bot could extract significant information from an unwitting victim.

The most advanced bots are conversational assistants such as Google Assistant and Amazon Alexa, which have become widely popular in homes. These devices are constantly listening, waiting for commands or inquiries. If their security is compromised, they could divulge significant information about their owner to an attacker.

The Two Sides of De-Anonymization

De-anonymization is the process of combining multiple data points to determine someone's identity: individually harmless pieces of information are linked together until they point to a single person.

De-anonymization can clearly be abused. In one real-life case, researchers showed that supposedly anonymous Netflix users could be re-identified, along with their viewing histories. The prospect is especially disturbing in areas like medical and social research, where participants are promised anonymity.

Alternatively, de-anonymization can play a role in uncovering bad actors. The ability to trace the author of malicious code could prove invaluable to those who aim to identify hackers.

AI accelerates both uses of de-anonymization by rapidly learning a data set and working out how its records relate to records elsewhere.
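A toy sketch of that kind of record linkage, loosely in the spirit of the Netflix case, appears below; every name and record in it is invented. Neither table alone identifies anyone, but joined on shared quasi-identifiers they do.

```python
# A toy record-linkage demo: all names and records are invented.
import pandas as pd

# An "anonymized" dataset: no names, just habits plus quasi-identifiers.
anonymous_ratings = pd.DataFrame({
    "user_id": ["u1", "u2", "u3"],
    "zip": ["60614", "10027", "94110"],
    "birth_year": [1985, 1992, 1978],
    "top_genre": ["horror", "documentary", "comedy"],
})

# A public dataset that does carry names.
public_profiles = pd.DataFrame({
    "name": ["A. Jones", "B. Smith", "C. Lee"],
    "zip": ["60614", "10027", "94110"],
    "birth_year": [1985, 1992, 1978],
})

# Joined on zip code and birth year, the "anonymous" habits get a name.
linked = anonymous_ratings.merge(public_profiles, on=["zip", "birth_year"])
print(linked[["name", "top_genre"]])
```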

Is Cybersecurity Possible in the World of AI?

The threat of AI-powered malicious attacks is becoming more problematic each day. While it poses a significant challenge, the realm of cybersecurity has always been fraught with the realization that hackers may be one step ahead. So how do the “good guys” move forward in such a threatening environment?

  • Practice Makes Perfect

One pillar of cybersecurity is running exercises to uncover vulnerabilities. Security companies and researchers have long staged intentional attacks on their own systems to learn how real ones work.

The same approach will help in the realm of artificial intelligence, providing insight into how algorithms and data sets interact to produce results.

  • Honesty About Issues

Software companies too often fail to disclose potential vulnerabilities, and the practice is harmful. With the increased presence of AI, it is more important than ever that vulnerabilities be revealed quickly so that appropriate action can be taken to protect users.

  • Prepare Accordingly

While our understanding of artificial intelligence is still developing, those building AI can be mindful of the systems' potential vulnerabilities from the start, ensuring AI is not deployed without some protections already in place.

  • Educate AI

One of the best defenses against AI-driven cyber-attacks is AI that can detect them. Giving a system practice with both "good" and "bad" data streams produces a detector that is far more proficient at identifying issues.

AI can even be trained against an opposing AI, learning to identify and counter its tactics. Developed carefully, machine learning systems become much harder to fool.
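Here is a minimal sketch of that hardening loop, assuming a toy scikit-learn detector; the features, the evasion step, and the retraining strategy are all illustrative choices, not a production recipe.

```python
# A toy adversarial-hardening demo: benign traffic clusters near 0,
# malicious traffic near 1, in a synthetic feature space.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X_benign = rng.normal(0.0, 0.1, size=(300, 4))
X_malicious = rng.normal(1.0, 0.1, size=(300, 4))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 300 + [1] * 300)

detector = LogisticRegression().fit(X, y)

# "Attacker AI": nudge malicious samples against the detector's
# weights so they score as benign (a crude, gradient-style evasion).
w = detector.coef_[0]
X_evasive = X_malicious - 0.7 * np.sign(w)
print("evasive samples caught before hardening:",
      detector.predict(X_evasive).mean())

# "Defender AI": fold the evasive samples back into training and refit.
X_hard = np.vstack([X, X_evasive])
y_hard = np.concatenate([y, np.ones(300, dtype=int)])
hardened = LogisticRegression().fit(X_hard, y_hard)
print("evasive samples caught after hardening: ",
      hardened.predict(X_evasive).mean())
```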

  • Individual Protections

AI may sound like a problem for big corporations or government agencies. However, as with all cybersecurity issues, it is vital for individuals to understand that malicious attacks can happen to anyone at any time. AI gives hackers a fearsome tool to use against unsuspecting, innocent people, but there are steps anyone can take to reduce their exposure to an AI-guided attack.

  • Protect Your Data – Use a VPN

People use public Wi-Fi and other shared networks constantly, and the first step in preventing an AI-driven attack is the same as for any other: if your laptop or mobile device is connected to a public network, you should be using a Virtual Private Network (VPN).

A VPN encrypts traffic between your device and the VPN server, so a hacker lurking on the same public network cannot read it or use the connection to infiltrate your device.

  • Be Aware of Your Surroundings

This advice applies in both the physical and digital realms. Using strong, unique passwords and practicing situational awareness are easy ways to protect yourself from AI-assisted intrusion. Do not click on suspicious links, and approach email from any sender with a wary eye.

Following the same sensible cybersecurity habits every user should practice goes a long way toward protecting individuals from AI-powered malicious attacks.

Conclusion

In the digital realm, progressive ideas are used for both positive and negative purposes. AI is not immune to use by those seeking to harm others and take advantage of vulnerabilities. Cybersecurity must turn its focus to the capabilities of AI and develop an understanding of how to protect systems from negative uses of the groundbreaking technology.

From large corporations to individuals sitting in a café, all are vulnerable to a digital attack. AI presents the opportunity for hackers to quickly obtain information and for cybersecurity agencies to protect systems from attack. As always, whoever gains the digital upper hand wins.

Author Bio: 

Harold is a cybersecurity consultant and a freelance blogger. His passion for virtual security extends back to his early teens, when he helped his local public library set up its anti-virus software. Currently, Harold is working on a cybersecurity campaign to raise awareness of the virtual threats that businesses face on a daily basis.