Cyberspace is rapidly becoming a hostile environment in which to do business. The 2016 Global Threat Intelligence Report revealed that there were 6.2 billion cyber-attacks in 2015 alone, while the International Data Corporation predicts that by 2018, two-thirds of corporate networks will experience an Internet of Things security breach as malicious hackers continue to target business networks, systems and devices. Their purpose? To steal data that can be exploited. This puts enormous pressure on businesses to defend their digital assets and employee devices against criminals armed with the latest technology. As hacker tools become more advanced, businesses are having to find newer, smarter ways to fight off these attacks. One way of effectively addressing cybercrime is for organisations to apply Artificial Intelligence (AI)-led security technology.

By making use of AI in cybersecurity, organisations can shift the focus onto cyber-attack prevention and detection, finally making security proactive and predictive instead of reactive. Just as AI is a growing trend, so too is the Internet of Things (IoT) expanding, and as the proliferation of devices and sensors connected to the IoT continues, the sheer volume of data being created will skyrocket. While this data can give valuable insight into the operational reality of what is working well within an organisation and what is not, there are a few things organisations need to know about the benefits of AI, and about how to steer clear of its potential pitfalls, if they are to successfully prevent cyber-crime.

Defining AI

Realistically speaking, AI is a system or program with built-in algorithms designed to work out the probability that a risk will materialise, based on different types of mathematical calculation. Intelligence implies that such a system is capable of doing more than just logging and collecting information as it passes through the system: it can automatically analyse information as it is received, and then act on the results. Previously, a security system could only respond with an alert triggered by pre-defined thresholds, which then required human intervention to deal with the threat; with AI, the system can now respond without being prompted. Businesses are already using Advanced Threat Protection systems and, thanks to integration with other security and network entities, such a system can take action itself, communicating with the firewall to initiate a block in response without having to wait for a human to make the call.
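The shift from alert-only to automated response can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the thresholds, field names and `respond` function are all assumptions, but they show the idea of a system scoring an event and acting on the firewall itself when confidence is high enough.

```python
# Hypothetical sketch of threshold-driven response. Above a high-confidence
# threshold the system blocks the source itself; in a middle band it raises
# an alert for an analyst; otherwise it just logs. All names are illustrative.

BLOCK_THRESHOLD = 0.9   # assumed confidence above which the system acts alone
ALERT_THRESHOLD = 0.5   # assumed level below which the event is only logged

def respond(event, firewall_blocklist, alerts):
    """Route an event to block, alert, or log based on its threat score."""
    score = event["threat_score"]
    if score >= BLOCK_THRESHOLD:
        firewall_blocklist.add(event["source_ip"])   # act without human input
        return "blocked"
    if score >= ALERT_THRESHOLD:
        alerts.append(event)                         # escalate to a human
        return "alerted"
    return "logged"

blocklist, alerts = set(), []
print(respond({"source_ip": "203.0.113.7", "threat_score": 0.97}, blocklist, alerts))  # blocked
print(respond({"source_ip": "198.51.100.4", "threat_score": 0.60}, blocklist, alerts))  # alerted
```

In a real deployment the score would come from the detection model and the block would be pushed to the firewall's management interface, but the decision logic is the same shape.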

The benefits of leveraging AI to prevent cyber-attacks are numerous. The IoT is amassing, and will continue to amass, huge amounts of data; the challenge lies in finding ways to analyse and identify trends in the information produced by devices and collected by sensors connected to the IoT. Because it simply is not humanly possible to review and understand this volume of data, AI becomes a way to make sense of it all by improving the speed and accuracy of data analysis. With AI, an organisation can pre-program thresholds and other parameters within the system itself, enabling it to act without human input. While complete automation is not currently possible, a system intelligent enough to handle 90% of security incidents on its own will go a long way towards reducing the time taken to identify threats, and will enable a much faster response as well.
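A toy example of why machine analysis is needed at this scale: even a simple aggregation over a stream of events no human could read line by line will surface a misbehaving device instantly. The fleet, event counts and device names below are invented for illustration.

```python
# Illustrative sketch: summarising a large stream of IoT events so that a
# misbehaving device stands out. A hypothetical fleet of 500 sensors emits
# 100,000 routine events, plus one compromised sensor flooding the network.
from collections import Counter
import random

random.seed(1)
DEVICES = [f"sensor-{n}" for n in range(500)]       # assumed device fleet
events = [random.choice(DEVICES) for _ in range(100_000)]  # ~200 events each
events += ["sensor-13"] * 2_000                      # one device misbehaving

counts = Counter(events)
top_device, top_count = counts.most_common(1)[0]
print(top_device)   # sensor-13: far above the fleet's normal event rate
```

Real systems apply far richer analysis than counting, but the principle is the same: the machine condenses millions of records into the handful of outliers worth a human's attention.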

Getting to know AI

The intelligence we are talking about is artificial, which means it is created by humans, and this is where the weakness lies, or at least the potential to make a fatal mistake. If there is a problem with the algorithm, it could trigger the wrong action or alert within the system, and that is where matters are most likely to go awry. There is also the very real possibility that, because such systems are customised based on certain parameters, incorrect configuration will produce a flawed system that generates a lot of false positives. So it is important to stress that the customisation of the AI security system must fit the environment precisely. Instructing the system as to exactly what parameters it is operating within is a process known as 'profiling'.


The first step in the journey to adopting AI-led security is to profile the system, network and environment as a whole, in order to define the parameters of normalcy within which the intelligent system will operate. From there, if the profiling has been done correctly and the parameters are continuously tweaked and fine-tuned, it becomes possible for an organisation to automate about 95% of cyber security tasks. AI will take some time to reach full efficacy, since the system needs to get to know the environment through behaviour profiling, but the more data a learning algorithm receives, the smarter it becomes. Bear in mind that the early stages of implementation will involve false positives (making up the 5% of scenarios requiring human intervention), but these will be the exception and will at least draw attention to a problem within the system itself that needs to be addressed.
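In its simplest statistical form, profiling means learning what normal looks like from observed traffic and flagging deviations from it. The sketch below is a minimal illustration under assumed numbers (requests per minute as the metric, a three-standard-deviation cut-off); production systems profile many metrics with far more sophisticated models.

```python
# Minimal sketch of 'profiling': learn a baseline of normal behaviour from
# historical observations, then flag values that fall outside it. Metric and
# thresholds are illustrative assumptions, not a product's defaults.
import statistics

def build_profile(samples):
    """Learn a baseline (mean and spread) from historical observations."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, profile, k=3.0):
    """Flag values more than k standard deviations from the learned norm."""
    mean, stdev = profile
    return abs(value - mean) > k * stdev

history = [98, 102, 100, 97, 103, 99, 101, 100, 96, 104]  # requests/minute
profile = build_profile(history)
print(is_anomalous(100, profile))   # typical traffic -> False
print(is_anomalous(500, profile))   # a spike well outside the profile -> True
```

This also shows why early false positives are expected: until `history` covers enough of the environment's genuine variation, legitimate but rare behaviour will fall outside the learned bounds.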

Approach with caution

To minimise the room for human error, it is advisable to use a cybersecurity specialist who has the experience and qualifications to understand each environment and to follow best practice in deploying AI solutions. It would not be prudent to take a big-bang approach, simply plugging in the system and expecting it to work. It is critical to follow the right process, and this starts with profiling. Only once profiling is done is it possible to fully understand the traffic on the network and system, after which the parameters can be defined. Only when the parameters are clear should an organisation gradually implement the various controls and make the switch from passive monitoring to actively responding to threats.
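The staged switch from passive monitoring to active response can be expressed as the same detection logic running in two modes. This is a hedged sketch under assumed names ("monitor" and "enforce" modes, a `handle_threat` helper), not a description of any specific product's configuration.

```python
# Hypothetical staged rollout: identical detection logic runs first in a
# passive 'monitor' mode (record what *would* have been blocked), then in
# 'enforce' mode once the profile is tuned and false positives are rare.

def handle_threat(event, mode, log, blocklist):
    """In monitor mode only record the verdict; in enforce mode also block."""
    verdict = "would_block" if mode == "monitor" else "blocked"
    log.append((verdict, event["source_ip"]))
    if mode == "enforce":
        blocklist.add(event["source_ip"])   # the only side effect that changes

log, blocklist = [], set()
handle_threat({"source_ip": "203.0.113.7"}, "monitor", log, blocklist)  # passive
handle_threat({"source_ip": "203.0.113.7"}, "enforce", log, blocklist)  # active
print(log, blocklist)
```

Running in monitor mode first gives the organisation a log of everything the system would have blocked, so the parameters can be validated against reality before any control is allowed to act on live traffic.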