Artificial Intelligence (AI) is a very popular buzzword at the moment. As with big data, the cloud, IoT, and every other “next big thing,” an increasing number of companies are looking for ways to jump on the AI bandwagon. But many of today’s AI offerings don’t actually meet the AI test. While they use technologies that analyze data and let results drive certain outcomes, that’s not AI; pure AI is about reproducing cognitive abilities to automate tasks.

Here’s the crucial difference:

  • AI systems are iterative and dynamic. They get smarter as they analyze more data, they “learn” from experience, and they become increasingly capable and autonomous as they go.
  • Data analytics (DA), on the other hand, is a static process that examines large data sets in order to draw conclusions about the information they contain with the aid of specialized systems and software. DA is neither iterative nor self-learning.
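The contrast above can be sketched in a few lines of code. This is an invented toy, not any real product: the “analytics” function makes one pass over a fixed data set and is done, while the “learning” detector keeps updating its internal state as each new observation arrives.

```python
def static_analytics(values):
    """One pass over a fixed data set; the result never changes afterward."""
    return sum(values) / len(values)

class IterativeDetector:
    """Updates its estimate with every new observation it sees."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def observe(self, value):
        # Incremental (running) mean: the model's picture of "normal"
        # improves as data arrives, rather than being computed once.
        self.count += 1
        self.mean += (value - self.mean) / self.count

    def is_anomaly(self, value, tolerance=3.0):
        # Flag values far from the learned baseline.
        return abs(value - self.mean) > tolerance

detector = IterativeDetector()
for reading in [10.0, 11.0, 9.0, 10.0]:
    detector.observe(reading)
```

The static function would have to be rerun from scratch on a new data set; the detector simply keeps observing.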

Understanding AI terms

AI refers to technologies that can understand, learn, and act based on acquired and derived information. Today, AI works in three ways:

  1. Assisted intelligence, widely available today, improves what people and organizations are already doing.
  2. Augmented intelligence, emerging today, enables people and organizations to do things they couldn’t otherwise do.
  3. Autonomous intelligence, being developed for the future, features machines that act on their own. Self-driving vehicles, when they come into widespread use, will be a prime example.

Although still in its infancy, AI can be said to possess some degree of human intelligence: a store of domain-specific knowledge; mechanisms to acquire new knowledge; and mechanisms to put that knowledge to use. Machine learning, expert systems, neural networks, and deep learning are all examples or subsets of AI technology today.

  • Machine learning uses statistical techniques to give computer systems the ability to “learn” (i.e., progressively improve performance) from data rather than being explicitly programmed. Machine learning works best when aimed at a specific task rather than a wide-ranging mission.
  • Expert systems are programs designed to solve problems within specialized domains. By mimicking the thinking of human experts, they solve problems and make decisions using fuzzy rules-based reasoning through carefully curated bodies of knowledge.
  • Neural networks use a biologically inspired programming paradigm that enables a computer to learn from observational data. In a neural network, each node assigns a weight to its input representing how correct or incorrect it is relative to the operation being performed. The final output is then determined by the sum of those weighted inputs.
  • Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Today, image recognition via deep learning is often better than humans, with a variety of applications such as autonomous vehicles, scan analyses, and medical diagnoses.
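The weighted-sum idea above can be made concrete with a single-node “neural network” (a classic perceptron). This is an illustrative sketch, not a production library: the node sums its weighted inputs, and training nudges the weights in response to errors in the data, so the network learns the AND function without the rule ever being coded.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias from (inputs, label) training pairs."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            # The node's output is determined by the weighted sum of inputs.
            output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
            error = label - output
            # "Learning": adjust each weight in proportion to the error.
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error

    def predict(x1, x2):
        return 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0

    return predict

# Four labeled examples of logical AND; no AND rule is ever programmed.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
predict = train_perceptron(data)
```

Deep learning stacks many layers of such nodes, which is what lets it learn its own representations of the data instead of relying on hand-built features.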

Applying AI to cybersecurity

AI is ideally suited to solve some of our most difficult problems, and cybersecurity certainly falls into that category. With today’s ever-evolving cyber-attacks and proliferation of devices, machine learning and AI can be used to “keep up with the bad guys,” automating threat detection and responding more efficiently than traditional software-driven approaches.

At the same time, cybersecurity presents some unique challenges:

  • A vast attack surface
  • Tens or hundreds of thousands of devices per organization
  • Hundreds of attack vectors
  • Big shortfalls in the number of skilled security professionals
  • Masses of data that have moved beyond a human-scale problem

By 2021, there will be an estimated 3.5 million unfilled cybersecurity positions worldwide.

Some early AI adopters

Google: Gmail has used machine learning techniques to filter emails since its launch 18 years ago. Today, there are applications of machine learning in almost all of its services, especially through deep learning, which allows algorithms to do more independent adjustments and self-regulation as they train and evolve.

“Before we were in a world where the more data you had, the more problems you had. Now with deep learning, the more data the better.” Elie Bursztein, head of the anti-abuse research team at Google

IBM/Watson: The team at IBM has increasingly leaned on its Watson cognitive learning platform for “knowledge consolidation” tasks and threat detection based on machine learning.

“A lot of work that’s happening in a security operation center today is routine or repetitive, so what if we can automate some of that using machine learning?” Koos Lodewijkx, vice president and chief technology officer of security operations and response at IBM Security.

Juniper Networks: The networking community hungers for disruptive ideas to address the unsustainable economics of present-day networks. Juniper sees the answer to this problem taking shape as a production-ready, economically feasible Self-Driving Network™.

“The world is ready for autonomous networks. Advances in artificial intelligence, machine learning, and intent-driven networking have brought us to the threshold at which automation gives way to autonomy.” Kevin Hutchins, Sr. VP of strategy and product management.

Balbix: The BreachControl platform uses AI-powered observations and analysis to deliver continuous, real-time risk predictions, risk-based vulnerability management, and proactive control of breaches.

“Enterprises need to build security infrastructure leveraging the power of AI, machine learning, and deep learning to handle the sheer scale of analysis.” Gaurav Banga, Founder and CEO.

Some cautionary notes

AI and machine learning (ML) can be used by IT security professionals to enforce good cybersecurity practices and shrink the attack surface instead of constantly chasing after malicious activity. At the same time, state-sponsored attackers, criminal cyber-gangs, and ideological hackers can employ those same AI techniques to defeat defenses and avoid detection. Herein lies the “AI/cybersecurity conundrum.”

As AI matures and moves increasingly into the cybersecurity space, companies will need to guard against the potential downsides of this exciting new technology:

  • Machine learning and artificial intelligence can help guard against cyber-attacks, but hackers can foil security algorithms by targeting the data they train on and the warning flags they look for
  • Hackers can also use AI to break through defenses and develop mutating malware that changes its structure to avoid detection
  • Without massive volumes of data and events, AI systems will deliver inaccurate results and false positives
  • If data manipulation goes undetected, organizations will struggle to recover the correct data that feeds their AI systems, with potentially disastrous consequences
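The training-data attack in the first bullet can be illustrated with a toy detector (all names and numbers here are invented for illustration): a defender learns an alert threshold from labeled event scores, and an attacker who can slip mislabeled malicious events into the benign training set raises that threshold until a real attack passes underneath it.

```python
def learn_threshold(benign_scores, malicious_scores):
    """Place the alert threshold midway between the two class averages."""
    benign_avg = sum(benign_scores) / len(benign_scores)
    malicious_avg = sum(malicious_scores) / len(malicious_scores)
    return (benign_avg + malicious_avg) / 2

def is_flagged(score, threshold):
    return score > threshold

# Clean training data: benign events score low, malicious events score high.
benign = [1.0, 2.0, 1.5, 2.5]
malicious = [8.0, 9.0, 8.5, 9.5]
clean_threshold = learn_threshold(benign, malicious)

# Poisoned training data: the attacker sneaks high-scoring malicious events
# into the "benign" set, dragging the benign average (and the threshold) up.
poisoned_benign = benign + [8.0, 9.0]
poisoned_threshold = learn_threshold(poisoned_benign, malicious)

attack_score = 6.0  # caught by the clean model, missed by the poisoned one
```

The lesson generalizes well beyond this toy: a model is only as trustworthy as the data it trains on, which is why the bullets above stress protecting both the training data and the model’s warning flags.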


In an ideal future world, AI will be an enabling technology that transforms our lives. Embedded in our homes, cars, and devices, it will make everything “smarter” and more efficient. In security, it will be able to instantly spot any malware on a network, guide incident response, and detect intrusions before they start. In short, it will allow us to form powerful human-machine partnerships that push the boundaries of our knowledge, enrich our lives, and drive cybersecurity in a way that seems greater than the sum of its parts.