As cyber warfare ratchets up across the globe, how can we be sure our national security network is up to the challenge? Human beings cannot always detect anomalous patterns in security data, which is why more and more cyber security firms are looking to artificial intelligence for solutions to the ever-evolving cyber threats they face.

One of the most frequently recurring technology news topics of recent times is artificial intelligence, or AI. Apple, Facebook, Google, Microsoft and Uber are but a few of the big-name tech companies currently investing in AI research and development, and there are hundreds, if not thousands, of smaller start-ups fighting to get in on the action. The potential applications of artificial intelligence are mind-boggling, so it should come as no surprise that such big names in the technology industry are all gearing up for a race to crack machine learning and develop general artificial intelligence.

Recently, however, it has been cyber security firms that have taken the keenest interest in AI and machine learning. Advanced hackers now use increasingly sophisticated methods to bypass security systems and exploit loopholes in them, and it is becoming ever more difficult to detect these intrusions before they cause internal damage or exfiltrate data from the targeted system. Firewalls alone are no longer an effective defence against a determined hacker. As it stands, analysts at cyber security firms are trained to detect patterns in user behaviour and to predict the actions potential hackers may take before they take them. With the implementation of machine learning and AI, however, this could all change.

Man Vs. Machine

Machine learning is something humanity has been working on for a while now, and, with innovations in the field arriving ever more frequently, it seems only a matter of time before advanced systems are capable of running autonomously, constantly prepared for the next cyber attack. Machine learning works by identifying patterns in existing data and using those patterns, as well as the data as a whole, to improve its own performance. In doing so, a system learns to identify “normal” user behaviour and can detect deviations from those patterns with far greater accuracy, and far greater speed, than a human analyst.
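The article doesn’t name the algorithms these systems use, so purely as an illustration of the pattern-learning idea described above, here is a minimal sketch using an isolation forest, a common unsupervised anomaly detector. It learns a baseline of “normal” behaviour from historical session data and flags deviations; every feature name and value in the example is invented for illustration.

```python
# A minimal sketch of behaviour-based anomaly detection. All features and
# values are hypothetical; this illustrates the idea, not any vendor's system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: [logins per hour, MB transferred, failed logins]
normal_sessions = rng.normal(loc=[2.0, 50.0, 0.2],
                             scale=[0.5, 10.0, 0.3],
                             size=(1000, 3))

# Learn a baseline of "normal" behaviour from historical data
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

# Score new activity: the second session deviates sharply from the baseline
new_sessions = np.array([
    [2.1, 48.0, 0.0],     # looks like typical activity
    [40.0, 900.0, 12.0],  # login burst, large transfer, repeated failures
])
print(detector.predict(new_sessions))  # 1 = normal, -1 = flagged as anomalous
```

Notably, an unsupervised detector like this needs no labelled attack data: it learns what normal looks like and flags everything else, at volumes no human reviewer could keep pace with.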

Governments and corporations all over the world are now beginning to look into more stringent methods for detecting and preventing cyber attacks on their networks, and AI is fast becoming the go-to technology. At the 2016 Black Hat Security Conference, SparkCognition, a US security firm based in Texas, revealed what it describes as “the world’s first fully cognitive, anti-malware system”. The system uses machine learning to analyse files and detect malicious content, and can do so at speeds far greater than any human. And SparkCognition isn’t the only company looking to deploy AI-based systems in the near future.
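SparkCognition has not published the internals of its system, so the following is only a hedged sketch of how ML-based file analysis works in general: extract static features from a file and score them with a classifier trained on a labelled corpus. The two features here (byte entropy and size), the training files and the labels are all toy, hypothetical choices.

```python
# Toy sketch of ML-based file triage. Features, corpus and labels are all
# hypothetical; this is NOT SparkCognition's system or any real product.
import math
from collections import Counter

from sklearn.ensemble import RandomForestClassifier

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution; packed/encrypted payloads score high."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in Counter(data).values())

def extract_features(data: bytes) -> list[float]:
    # Two toy static features; real systems extract thousands (imports, strings, headers)
    return [byte_entropy(data), float(len(data))]

# Hypothetical labelled corpus: 0 = benign, 1 = malicious
X = [extract_features(b"plain readable text " * 200),
     extract_features(bytes(range(256)) * 64)]  # high-entropy blob stands in for packed malware
y = [0, 1]

classifier = RandomForestClassifier(random_state=0).fit(X, y)
print(classifier.predict([extract_features(b"contents of a newly seen file")]))
```

The speed advantage the article mentions comes from exactly this shape of pipeline: feature extraction and scoring take milliseconds per file, so a trained model can triage volumes of files no human analyst could.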

Implementing AI-based Systems

Darktrace, a UK-based security startup, believes that, in the near future, all cyber security systems will be based on fully automated AI. In an interview with Computer Weekly, Darktrace co-founder and director of technology Dave Palmer stated: “We believe we are the only ones at the moment who focus only on learning from the behaviours of people and systems within the business rather than on algorithms that look for known types of attacks.” Palmer went on to state that “an entirely AI security operations centre is not an unreasonable objective for us to have as researchers, and is certainly one of our goals, especially considering how quickly technology is moving in areas such as self-driving cars, which not long ago were considered to be pure fiction.”

And the move towards AI-based systems certainly does seem to be the common trend among organisations working in cyber security. Not everyone in the field of artificial intelligence, however, shares this confidence about AI and its potential. Nick Bostrom, director of the Strategic Artificial Intelligence Research Centre, isn’t convinced we should charge haphazardly into building AI systems without adequate control measures in place. In his book Superintelligence: Paths, Dangers, Strategies, Bostrom considers the possibility of an “intelligence explosion” catalysed by advances in machine learning and AI. Such an explosion in machine intelligence could prove detrimental to humanity: once machines reach the level of general intelligence, it may already be too late to stop further increases, and we would lose control of the super-intelligent machine we had built.

Weighing Up The Risks

The accidental catalysing of an intelligence explosion may sound like science fiction, but many experts in machine learning and AI now believe a functioning artificial general intelligence is mere decades away. With cyber security firms turning their focus to AI-based systems, and the need for such systems growing, it is not outlandish to think such scenarios could become reality sooner than we expect. Bostrom believes that, between now and then, our efforts should be focused on devising suitable control mechanisms in order to drastically decrease the likelihood of such an event. However, as we’ve seen throughout history, when a technology is sought after by a large enough group of people, the race to attain it begins.

Cyber security firms and other tech companies are currently engaged in one such race, and, while their plans for AI-based security systems may be benevolent, it doesn’t take much to see a wider problem that could arise from the accidental creation of a seed AI. A seed AI is an artificial general intelligence that continuously improves itself by rewriting its own source code without human intervention, much as today’s AI-based security systems continually improve themselves from the data available to them. The risk is that, in pushing to further improve machine learning and increase autonomy, developers and engineers may unwittingly create something they cannot control.

Cyber security is, without a doubt, a much-needed asset in our ever-evolving world. However, basing our future security on machines that can learn autonomously, and perhaps one day even think for themselves, may in fact be the very thing that becomes our undoing.
