Knowing a cyberattack is coming before it actually hits is very useful, but tricky to achieve in practice. Now MIT has built an artificial intelligence system that can predict attacks 85 per cent of the time.
Cyber-attack spotters work in two main ways. Some are AI systems that simply look for anomalies in internet traffic, but these often throw up false positives – warnings about a threat when nothing is actually wrong. Other systems are built on rules developed by humans, but it’s hard to write rules that catch every attack.
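To make the contrast concrete, here is a toy sketch of both traditional approaches. Everything in it – the function names, the z-score threshold, the failed-login rule – is an illustrative assumption, not anything from AI² itself: an anomaly detector flags whatever looks statistically unusual (including legitimate spikes), while a hand-written rule only catches what its author anticipated.

```python
from collections import Counter

def zscore_anomalies(requests_per_min, threshold=3.0):
    """Approach 1 (anomaly detection): flag any minute whose request count
    sits more than `threshold` standard deviations from the mean.
    A legitimate traffic surge gets flagged just like a real attack."""
    n = len(requests_per_min)
    mean = sum(requests_per_min) / n
    var = sum((x - mean) ** 2 for x in requests_per_min) / n
    std = var ** 0.5 or 1.0  # avoid division by zero on flat traffic
    return [i for i, x in enumerate(requests_per_min)
            if abs(x - mean) / std > threshold]

def rule_based_alerts(log_lines):
    """Approach 2 (human-written rule): alert on 5+ failed logins from one
    source IP. Precise, but blind to any attack the rule doesn't describe."""
    failures = Counter(ip for ip, success in log_lines if not success)
    return [ip for ip, count in failures.items() if count >= 5]
```

The z-score detector cannot tell a flash crowd from a flood, and the login rule never fires on, say, data exfiltration – which is the gap a combined approach aims to close.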
Instead, researchers from MIT’s Computer Science and Artificial Intelligence Lab have built a new AI – called AI² – that combines the two approaches.
AI² uses three different machine learning algorithms to detect suspicious events. Like any AI system, though, it needs some feedback from a human to tell it whether those events are actually suspicious or not. But most of us wouldn’t be able to tell the difference between, say, a DDoS attack and a legitimate surge in traffic – so it shows its first set of results to an expert.
With that feedback, it takes on board whether or not it should have classified those events as attacks, then refines its internal models. Over time, it becomes better able to separate signal from noise, showing fewer incorrect results and in turn saving the expert’s time.
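The analyst-in-the-loop cycle just described can be sketched in a few lines. This is a hedged illustration only – the function names, the top-k ranking, and the toy threshold update are assumptions for the sake of example, not AI²’s actual internals: the model ranks events by suspicion, a human labels the most suspicious few, and those labels feed back into the model.

```python
def analyst_feedback_loop(events, score, ask_analyst, top_k=3):
    """One refinement round: rank events by the model's suspicion score,
    show only the top_k to a human analyst, and return their labels so
    the model can be retrained on them."""
    ranked = sorted(events, key=score, reverse=True)[:top_k]
    return [(event, ask_analyst(event)) for event in ranked]

def retrain(threshold, labelled_events, step=0.5):
    """Toy model update: lower the alert threshold when the analyst
    confirms real attacks (catch more next time), raise it when the
    analyst reports false positives (alert less next time)."""
    for event, is_attack in labelled_events:
        threshold += -step if is_attack else step
    return threshold
```

The key design point is that the analyst only ever sees the handful of top-ranked events per round, so each bit of expensive human attention corrects the part of the model that is most likely to be wrong.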
In tests carried out using 3.6 billion log lines of internet activity, AI² was able to identify 85 per cent of attacks ahead of time. It also produced five times fewer false positives than existing cyber-attack-spotting AIs. The work was presented last week at the IEEE International Conference on Big Data Security in New York City.
Over time, the researchers explain, it only gets more effective. “The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions,” explained Kalyan Veeramachaneni in a press release. “That human-machine interaction creates a beautiful, cascading effect.”