If companies don't adequately protect their data, cyber attacks can do a lot of damage. Fortunately, a solution to this problem may be found in new applications of artificial intelligence that predict hacker attacks.
From the security breach that left VTech toys vulnerable to the ransomware holding hospital records hostage, cyber attacks have been in the news a lot lately. Companies make efforts to better protect their data, but often they cannot detect that a system has been compromised until it is too late.
Automated detection systems have the drawback that they tend to generate too many false alarms, while human analysts working alone can miss the evidence. A better solution may lie in advanced applications of artificial intelligence (AI).
CBS News reports that machine-learning startup PatternEx, together with a research team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), has developed an AI platform called AI2. Working with input from human analysts, the AI2 platform can predict cyber attacks 85 percent of the time.
According to the same publication, the AI2 platform's performance is about three times better than the benchmarks set by past systems. The developers also say that AI2 can reduce the number of false positives by a factor of five.
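The article does not detail AI2's internals, but the workflow it describes, an unsupervised detector flags suspicious events, analysts label them, and those labels refine what gets flagged next, can be sketched in a toy form. Everything below (the scores, the threshold rule, the function names) is an illustrative assumption, not PatternEx's actual method:

```python
# Toy human-in-the-loop anomaly detection loop, loosely inspired by the
# AI2 workflow described above. Scores and data are hand-picked for
# illustration; they are not real network telemetry.

# Each event is (true nature, anomaly score from an unsupervised model).
events = [("benign", 0.10), ("benign", 0.20), ("benign", 0.55),
          ("benign", 0.58), ("benign", 0.30), ("attack", 0.62),
          ("attack", 0.75), ("benign", 0.15), ("attack", 0.90),
          ("benign", 0.40)]

def top_k_outliers(events, k):
    """Unsupervised step: surface the k highest-scoring events for review."""
    return sorted(events, key=lambda e: e[1], reverse=True)[:k]

# Analyst step: a human labels the flagged events (simulated here by
# reading the ground-truth tag that a real analyst would determine).
flagged = top_k_outliers(events, k=4)
attack_scores = [score for kind, score in flagged if kind == "attack"]

# Feedback step: learn a threshold from the analyst's labels -- here,
# simply the lowest score the analyst confirmed as an attack.
threshold = min(attack_scores) if attack_scores else 1.0

# Next pass: only events at or above the learned threshold are flagged,
# shrinking the analyst's queue and dropping the earlier false positive.
refined = [e for e in events if e[1] >= threshold]
```

In this toy run the unsupervised pass flags four events (one of them benign), while the refined pass flags only the three true attacks, which is the kind of false-positive reduction the AI2 team reports, albeit achieved here by a deliberately simplistic threshold rule.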
PatternEx was founded two years ago and began developing the AI2 cyber security system. CSAIL research scientist Kalyan Veeramachaneni developed AI2 together with Ignacio Arnaldo, a former CSAIL postdoc and chief data scientist at PatternEx.
Until very recently, AI systems were simply not advanced enough to work as cyber security platforms and lacked this kind of predictive accuracy, according to Veeramachaneni. Thanks to advances in processing infrastructure, storage capacity, and deep learning technology, systems such as AI2 are possible today.
However, the recent emergence of deep learning also has a downside, according to MIT Technology Review. Deep learning's need for big data may involve private consumer information or data held by organizations that are unwilling to share it.
Some consumers are already concerned about their data privacy. In the near future, industry and federal regulators will need to strike a balance between the need for cyber security and the protection of consumers' privacy.