We don't need more InfoSec analysts: We need analysts to train AI infrastructures to detect attacks

Addressing the skills shortage with virtual analysts

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Everyone says there is an information security talent gap. In fact, some sources say demand for security professionals exceeds supply by a million jobs. The argument goes like this: attacks are not being detected quickly or often enough, and the tools generate more alerts than can be investigated, so we need more people to investigate those alerts.

Makes sense, right?

Wrong.

We believe that, even if companies around the world miraculously hired a million qualified InfoSec professionals tomorrow, there would be no change in detection effectiveness, and we would still have a “talent gap.” The problem isn’t a people issue so much as an InfoSec infrastructure issue.

To explain why, we need to take a step back.

How do we classify a person as a criminal in the real world? By their actions. We observe their behavior and apply context and intuition to decide whether they are a criminal.

In cyberspace we try to do the same thing -- we look at the infrastructure logs to identify user behaviors, and then we apply context and intuition to decide whether a behavior is an attack.

But in cyberspace, this task is more difficult. There is simply too much data. Finding attack behaviors in an ocean of legitimate behaviors is impossible for a person -- or a team of people -- to accomplish. We turn to InfoSec technology to help detect attacks, but current InfoSec solutions are inherently flawed: they are rules-based. Spotting attackers often requires context and intuition, and those concepts are impossible to replicate with if-then rules.

Writing more sophisticated rules, or constantly tuning old ones, isn’t the answer. The rules themselves are the problem. Rules attempt to correlate events but instead end up spewing out more alerts than your team can handle -- the vast majority of them false positives.

The analysts then go back and write new rules. The conclusion: more analysts simply generate more rules, which generate more alerts and more false positives... requiring still more analysts. It’s a vicious cycle caused by a rules-based infrastructure.
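To see why, consider a minimal sketch (in Python, with hypothetical event fields and an invented threshold -- not any particular product’s rule syntax) of the kind of static if-then rule described above. The rule has no context, so a forgetful employee and a brute-force attack look identical to it:

```python
# Hypothetical sketch of a rules-based detector: a fixed if-then
# threshold over login events. Field names and the threshold are
# illustrative assumptions.

def rule_fires(event: dict) -> bool:
    """Alert when an entity exceeds a fixed failed-login threshold."""
    return event["failed_logins"] > 5 and event["window_minutes"] <= 10

events = [
    {"entity": "alice",  "failed_logins": 7, "window_minutes": 8},  # mistyped password
    {"entity": "mal-01", "failed_logins": 7, "window_minutes": 8},  # brute force
]

for event in events:
    if rule_fires(event):
        # Both entities fire the same alert; one is a false positive
        # an analyst must now triage by hand.
        print("ALERT:", event["entity"])
```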

Help is on the way

The good news is that we can do a much better job of approximating context and intuition using artificial intelligence (AI). Supervised learning models can be trained by humans to mimic human context and intuition by forming abstractions of behaviors.

A “behavior abstraction” is the totality of all the logged information about an entity over some time period: packets sent, packets received, length of connection, periodicity of connections, bytes sent, bytes received and so on. Hundreds of logged actions, taken together, describe the behavior of an entity over time. The supervised learning model calculates many different distributions and input-variable combinations that ultimately express the attack in the abstract.
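As a rough illustration, here is a minimal sketch of forming a behavior abstraction from raw connection logs. The log fields, the pandas-based aggregation and the handful of features are assumptions chosen for brevity; a production system would compute hundreds of features per entity:

```python
# Sketch: aggregate raw connection logs into one feature vector
# (a "behavior abstraction") per entity over a time window.
# All field names and values are invented for illustration.
import pandas as pd

logs = pd.DataFrame({
    "entity":     ["alice", "alice", "bob", "bob", "bob"],
    "ts":         pd.to_datetime(["2016-05-02 10:00", "2016-05-02 10:05",
                                  "2016-05-02 10:01", "2016-05-02 10:02",
                                  "2016-05-02 10:03"]),
    "bytes_sent": [1200, 800, 50_000, 52_000, 51_500],
    "bytes_recv": [3400, 900, 300, 280, 310],
    "duration_s": [12.0, 3.5, 600.0, 580.0, 610.0],
})

features = logs.groupby("entity").agg(
    n_connections=("ts", "count"),
    total_bytes_sent=("bytes_sent", "sum"),
    total_bytes_recv=("bytes_recv", "sum"),
    mean_duration_s=("duration_s", "mean"),
    # Crude periodicity proxy: spread of the gaps between connections.
    gap_std_s=("ts", lambda t: t.sort_values().diff().dt.total_seconds().std()),
)
print(features)  # one behavior abstraction per entity
```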

Once the model forms the behavioral abstraction, it must be classified as either “malicious” or “benign.” The AI model cannot assign meaning to a pattern; only a human, applying context and intuition, can review the behavioral pattern and classify it as an attack or not.
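A sketch of that labeling step, using hypothetical entities and features -- the label is the one piece of information only the human can supply:

```python
# Sketch: an analyst attaches a verdict to each behavior abstraction.
# Entities, feature values and verdicts are invented for illustration.

behaviors = [
    {"entity": "alice", "total_bytes_sent": 2_000,   "gap_std_s": 4.1},
    {"entity": "bob",   "total_bytes_sent": 153_500, "gap_std_s": 0.2},
]

# Only the human assigns meaning: bob's high-volume, highly periodic
# uploads look like exfiltration; alice's pattern looks like normal use.
verdicts = {"alice": "benign", "bob": "malicious"}

labeled = [dict(b, label=verdicts[b["entity"]]) for b in behaviors]
for row in labeled:
    print(row)
```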

That classification step is called “labeling,” and when that label is attached to a behavior, you have a potential game changer. Now you have a system that knows what to look for (the labeled behavior abstraction) and can process the massive volume of logs. The AI compares every behavior against the behavioral abstraction and sends an alert whenever it finds one that is the same or similar. Every alert is sent back to the human analyst for reinforcement or correction, continuously training the system to become more precise.
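Here is a minimal sketch of that train-alert-retrain loop, using scikit-learn as an assumed stand-in for the AI infrastructure described above; the feature values, the alert threshold and the analyst verdicts are all illustrative:

```python
# Sketch: train on labeled abstractions, alert on similar behaviors,
# then fold the analyst's verdicts back in and retrain.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Labeled behavior abstractions: [total_bytes_sent, gap_std_s]
X_train = np.array([[2_000, 4.1], [1_500, 5.0], [153_500, 0.2], [140_000, 0.3]])
y_train = np.array([0, 0, 1, 1])  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score new behaviors; alert on anything similar to a labeled attack.
X_new = np.array([[148_000, 0.25], [1_800, 6.0]])
scores = model.predict_proba(X_new)[:, 1]
alerts = X_new[scores > 0.5]
print("alerts:", alerts)

# The analyst confirms or corrects each alert; the verdicts become new
# labels and the model is retrained, growing more precise over time.
verdicts = np.ones(len(alerts), dtype=int)  # analyst confirms: malicious
X_train = np.vstack([X_train, alerts])
y_train = np.concatenate([y_train, verdicts])
model.fit(X_train, y_train)
```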

Man and machine together. Fighting crime!

Given the constant change involved in attack detection, humans will always be needed. For example, within your company, risk policies change overnight. M&A happens. Your infrastructure changes. Your company adds mobile as a distribution channel. Meanwhile, attackers change the type and volume of their attacks.

This reality is too dynamic for static rules to be effective. The one entity that can figure out which behaviors are malicious and which are benign -- given your current risk profile -- is the InfoSec analyst. However, the analyst needs an AI infrastructure that can not only capture his or her context, nuance and intuition, but also scale it across the entire enterprise. In real time.

To be clear, humans are still in high demand -- there simply aren’t enough of them to train the AI systems.

That’s the true gap.

Uday Veeramachaneni is co-founder and CEO of PatternEx. Prior to founding PatternEx, he led product management for Riverbed Stingray and created the first-ever L7 SDN controller, which enabled service providers and enterprises to offer elastic web application firewall and L7 services.
