The fundamental problem with cyber security today comes down to a simple fact: there isn’t enough time in the day to discover and remediate all the potential vulnerabilities that exist within any IT environment.
A recent Ponemon Institute survey of 600 cyber security professionals, conducted on behalf of Balbix, found that 67 percent of respondents admit they don’t have the time and resources required to mitigate all the vulnerabilities in their environment. Sixty percent admit they don’t even have visibility across all their IT assets.
As any cyber security professional knows, an organisation can’t patch what it doesn’t know it has. Compounding the problem, more than half of respondents say they run vulnerability scans only on an ad hoc basis or once a quarter.
More surprising still, the research revealed that over two-thirds of respondents acknowledge they are at least a month behind on fixing known software vulnerabilities. In fact, fewer than half consider patching to be a proactive approach to avoiding breaches.
It’s easy to blame cyber security professionals for this mess. But the truth of the matter is that they aren’t the ones who created it in the first place.
Most vulnerabilities in an application’s components are only discovered after the application has been deployed in a production environment. Sometimes a developer employs a component with a known vulnerability simply through carelessness.
Regardless of how they are introduced, there will always be vulnerabilities in software. The real challenge is finding a way to reduce the number of vulnerabilities to a point where they become manageable.
That’s why there is so much interest these days in adopting best DevSecOps practices. As a discipline, DevSecOps essentially incorporates cyber security issues into the quality assurance phase of any application development project.
Better still, developers are required to address vulnerabilities as part of a set of DevOps processes enabled by a continuous integration/continuous delivery (CI/CD) platform.
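To make that idea concrete, the sketch below shows the kind of gate a CI/CD pipeline can apply: the build fails when a pinned dependency matches a known advisory, so developers must address the vulnerability before the code ships. The advisory list, package names, and manifest here are all hypothetical, purely for illustration; a real pipeline would pull advisory data from a vulnerability database or a scanning tool.

```python
# Minimal sketch of a DevSecOps gate in a CI/CD pipeline. All package
# names, versions, and advisories below are hypothetical examples.

KNOWN_ADVISORIES = {
    # package name -> versions with known vulnerabilities (hypothetical)
    "examplelib": {"1.0.2", "1.0.3"},
    "othertool": {"2.1.0"},
}

def vulnerable_dependencies(manifest):
    """Return (name, version) pairs in the manifest that match an advisory."""
    return [
        (name, version)
        for name, version in manifest.items()
        if version in KNOWN_ADVISORIES.get(name, set())
    ]

def ci_gate(manifest):
    """Return True if the build may proceed, False if it must fail."""
    flagged = vulnerable_dependencies(manifest)
    for name, version in flagged:
        print(f"BLOCKED: {name} {version} has a known vulnerability")
    return not flagged

# Example: a build manifest pinning one vulnerable component.
build_manifest = {"examplelib": "1.0.2", "othertool": "2.2.0"}
print(ci_gate(build_manifest))  # prints False: the vulnerable pin fails the gate
```

The point of the design is that the pipeline, not the cyber security team, enforces remediation: a flagged dependency stops the build until a developer swaps in a patched version.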
In effect, responsibility for implementing patches shifts away from the cyber security team. The team still needs to discover vulnerabilities, but once it shares that information with developers, it can focus more of its efforts on identifying potential threats and hunting for exploits of known vulnerabilities.
Developers, in turn, now have a much greater vested interest in cyber security. Partly for that reason, many are adopting a microservices-based approach to building applications using Docker containers.
While the code inside those containers isn’t any more secure, the containers themselves are much simpler to replace. Developers can rip and replace individual components of an application without having to patch the entire application, which makes the process of patching applications much less disruptive than it is today.
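The rip-and-replace idea can be sketched in a few lines: fixing a vulnerable component means swapping one container image in the deployment, leaving every other service untouched. The service names, registry, and image tags below are invented for illustration only.

```python
# Illustrative sketch of container-based patching: replacing one microservice
# image rather than patching a monolithic application. All names are hypothetical.

services = {  # microservice -> container image currently deployed
    "auth":    "registry.example/auth:1.4.0",
    "billing": "registry.example/billing:2.2.1",
    "web":     "registry.example/web:3.0.5",
}

def replace_image(deployment, service, patched_image):
    """Return a new deployment map with a single service's image swapped."""
    updated = dict(deployment)
    updated[service] = patched_image
    return updated

# A vulnerability is found in the billing component: rip and replace that
# one image; the auth and web services are untouched.
patched = replace_image(services, "billing", "registry.example/billing:2.2.2")
untouched = [s for s in patched if patched[s] == services[s]]
print(untouched)  # prints ['auth', 'web']: only billing changed
```

The design choice this illustrates is immutability: rather than patching a running component in place, a new image is built and rolled out, so the blast radius of a fix is confined to one service.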
Of course, the challenge most cyber security teams face today when it comes to deploying containers is a lack of visibility into those containers. But at least developers are being held truly accountable for cyber security.
Continually cleaning up someone else’s mess doesn’t provide the right incentives to change behaviour. By shifting more responsibility for cyber security to the proverbial left, the number of vulnerabilities the cyber security team has to help clean up should begin to decline over time.
Legacy applications aren’t going away any time soon, so the need to patch them will remain for the foreseeable future. But whenever someone in the organisation starts talking about replacing one of those legacy applications with a modern application that is fundamentally more secure thanks to containers and DevSecOps, the cyber security team should do everything in its power to encourage that transition.
Andrew Huntley is the regional director for ANZ and the Pacific Islands for Barracuda Networks.