
The first gap we’re going to look at is the gap that exists between the output of your favourite vulnerability scanning tool, and whatever process, person or tooling your organisation is using for remediation.
Some organisations have literally millions of open vulnerabilities reported by the scanners. It’s simply not humanly possible to action complete remediation across a dataset like that, when the reality of the organisation’s IT estate is likely to be complex, fragmented, and sometimes simply unpatchable.
I’ve seen organisations ‘start at the top’ and try to work down through the list, or work in chronological order; neither approach makes much sense. I’ve seen organisations paralysed by the sheer volume, not knowing where to start. Or worse, I’ve seen organisations that know they have an issue, but simply don’t prioritise the remediation.
Sometimes this is simple delinquency. Sometimes it’s wanton risk-taking. Occasionally it’s just being ill-informed: “We have nothing connected to the internet, so we’re fine.” Mmm, so no concern about lateral movement by actors already inside? No interest in insider threat, phishing, lost credentials, and so on?
So how do you plug this gap?
Well, first, the dataset can be filtered by severity. Organisations can set their own thresholds here (a minimum CVSS score, for example) and begin to decide what their risk tolerance is.
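To make that concrete, here’s a minimal pandas sketch. It assumes the scanner output has been exported to CSV; the file and column names (such as `cvss_score`) are purely illustrative.

```python
import pandas as pd

# Hypothetical scanner export; the file name and column names are illustrative only.
vulns = pd.read_csv("scanner_export.csv")

# Agree the threshold with the business: here, only CVSS >= 7.0 (High/Critical)
# enters the remediation queue; everything below is consciously deferred.
RISK_TOLERANCE = 7.0
in_scope = vulns[vulns["cvss_score"] >= RISK_TOLERANCE]

print(f"{len(in_scope)} of {len(vulns)} findings exceed the agreed threshold")
```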
Second, the dataset can be enriched with business application and asset data (ideally from your CMDB) to add context. What service or application does this piece of infrastructure underpin? How critical is it to the business? What are its points of interconnection? Is it exposed to untrusted networks or the internet? Does it contain any legacy or unpatchable components? What data does it host, and how critical and valuable is that data? What regulatory regimes apply?
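Again as a hedged sketch: assuming both the scanner export and the CMDB export are CSVs that share a `hostname` key (all names here are illustrative), the enrichment is essentially a join.

```python
import pandas as pd

# Both exports and their columns are hypothetical; the point is the join itself.
vulns = pd.read_csv("scanner_export.csv")    # hostname, cve, cvss_score, ...
assets = pd.read_csv("cmdb_export.csv")      # hostname, business_service, criticality, internet_facing, ...

# Left-join so findings on hosts missing from the CMDB are kept (and can be flagged as gaps).
enriched = vulns.merge(assets, on="hostname", how="left")

# One possible prioritisation: critical, internet-facing services first.
priority = (
    enriched[(enriched["criticality"] == "critical") & (enriched["internet_facing"] == "yes")]
    .sort_values("cvss_score", ascending=False)
)
print(priority[["hostname", "business_service", "cve", "cvss_score"]].head(20))
```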
Third, the dataset can be sliced by platform, OS, OS version, kernel/patch level and so on, to identify quick wins. For example, if eradicating a certain version of Windows Server gets you the biggest bang for your buck, then a focused project can be undertaken to upgrade those specific servers. Or patching to a minimum level can be mandated for a particular OS.
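A similarly hypothetical slice, grouping the same illustrative scanner export by platform to find the biggest single win:

```python
import pandas as pd

# Hypothetical scanner export with platform fields: hostname, os, os_version, cvss_score.
vulns = pd.read_csv("scanner_export.csv")

# Count findings per platform to see where one tech-refresh or patching
# campaign would remove the most vulnerabilities in a single move.
by_platform = (
    vulns.groupby(["os", "os_version"])
         .agg(findings=("cvss_score", "size"), worst_score=("cvss_score", "max"))
         .sort_values("findings", ascending=False)
)
print(by_platform.head(10))
```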
Fourth, the organisation needs to be encouraged to leverage this data in some way. Gamification works, with directors of various departments competing to see who can get their portion of the estate ‘cleanest’ the fastest. You might argue that gamification is just a modern way to say ‘name and shame’, and you’d kind of be right, but it can be positioned in a positive way. Businesses also need leadership on the issue. What’s our risk tolerance? What’s our stance? What’s our policy? What does good look like? Is this important?
I guarantee that if organisations work on their governance and leadership, agree a severity-based risk tolerance, enrich their data with context and criticality, and slice the data to allow sensible project planning around tech refreshes and the like, they are far more likely to make significant headway into some really rather chunky vulnerability numbers quite quickly.
Taylor Harrow has led exactly these kinds of initiatives, delivering a >70% reduction in vulnerability risk within three months. Every organisation is unique, so it’s experience in making the right decisions that unlocks the ability to drive progress.
If this sounds like an area where your organisation could benefit from some help, leadership or direction setting, please get in touch via LinkedIn.