
The Hidden Danger Behind Double Reporting Of Vulnerabilities

By editor

This detailed guide explains why security teams are overwhelmed with vulnerabilities, the danger that hides behind the double reporting of vulnerabilities, and how to mitigate such vulnerabilities using live patching tools like KernelCare.

We know that the cybersecurity threat is growing, with matching growth in the efforts to mitigate the threat and in the associated costs. But the evidence suggests that mitigation is not progressing quickly enough.

According to a joint analysis by McAfee and the Center for Strategic and International Studies, 2020 saw the global cost of cybercrime rise past the $1tn mark for the first time - a massive increase of 50% over the 2018 total. That is a rate of change clearly outpacing any comparable metric, such as GDP growth or growth in IT expenditure.

Awareness is not the issue - after all, companies are spending vast sums defending against cyber threats.

Instead, in this article, we argue that cybersecurity players are essentially overwhelmed by the challenges that they face. We point to a recent occurrence of the double reporting of known vulnerabilities as clear evidence.

Keep reading to see why security teams are overwhelmed with vulnerabilities, how it impacts patching, and what teams can do to patch consistently in the face of a relentless onslaught of vulnerabilities and exploits.

Yet another vulnerability?

In the second half of last year, a Linux kernel vulnerability was discovered and reported. It was assigned a Common Vulnerabilities and Exposures (CVE) number, CVE-2020-29369, and the vulnerability was patched as expected. So far, nothing unusual.

There was nothing beyond the ordinary about the vulnerability itself either. In any OS, the kernel must carefully manage memory – assigning (mapping) memory space when an application needs it, and correctly removing allocations and re-assigning free memory when an application no longer needs it.

However, this process of managing memory space can be glitch-prone. When coded without the necessary care, kernel memory handling can give cybercriminals an opportunity. In the case of CVE-2020-29369, the problem occurred in the mmap function, which is used for memory mapping in Linux.

The nature of the vulnerability meant that two distinct applications could request access to the same memory space – leading to a kernel crash.
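
To make memory mapping a little more concrete, here is a minimal userspace sketch in Python (purely illustrative - the vulnerability itself lives in the kernel's C implementation behind the mmap system call, not in application code like this). It asks the kernel for an anonymous mapping, uses it, and releases it again:

    import mmap

    # Ask the kernel to map a 4 KiB anonymous region into this process.
    # Under the hood this is the mmap system call, handled by the kernel.
    region = mmap.mmap(-1, 4096)
    region.write(b"hello, memory mapping")

    # Releasing the mapping (munmap) makes the kernel tear down the
    # allocation and hand the pages back - the bookkeeping that, when
    # mishandled, is where bugs like CVE-2020-29369 can creep in.
    region.close()

Every mapping an application creates and destroys is another chance for the kernel's bookkeeping to go wrong, which is why memory management code is such a frequent source of CVEs.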

If an attacker played their cards correctly - in other words, engineered an exploit - the attacker would be able to grab data that would otherwise be protected by the kernel. It could be completely innocuous data - or something more valuable, such as personal data or passwords.

A tale of two vulnerability reports

So we can see that a typical vulnerability was reported, and accepted as per usual procedures. But something disconcerting happened next.

Just a few months later, exactly the same vulnerability was reported. Again, a CVE number was allocated, this time CVE-2020-20200. However, it soon turned out that the new alert was a duplicate of the existing vulnerability - CVE-2020-29369.

The researchers who “found” the vulnerability a second time failed, for one reason or another, to find the first report before requesting another CVE reservation for what they had discovered. One of the primary purposes of the CVE databases is to avoid exactly this kind of double reporting, but in this particular instance another CVE was requested nonetheless.

This case of what’s called “double reporting” is not the first or only instance of a vulnerability being reported twice. Worse, when investigations into a vulnerability get to the point where a CVE has been assigned, the vulnerability will already have been reviewed by numerous highly trained security experts.

Even security researchers can slip up

In this example of double reporting, it is clear that the researchers should either have been aware of the existing vulnerability, or should have found the existing CVE had they researched the “new” vulnerability sufficiently before requesting a new CVE number.

It is a worrying thought. This memory mapping vulnerability lies at the core of the Linux kernel, yet security researchers were apparently unaware of it, hence the double listing. Worse, it is not as if each listing was a decade or even years apart: the individual listings of the same vulnerability were made just months apart, one in August 2020 and one in November 2020.

Are security researchers negligent? No. Security researchers are simply overwhelmed by the sheer volume of cybersecurity challenges. That’s why, in this example, Linux kernel security experts missed an existing report of a potentially critical vulnerability.

The hidden danger behind double reporting of vulnerabilities

Clear evidence that the cybersecurity threat is growing, combined with examples of even security experts getting it wrong, suggests that double reporting has bigger implications than it appears to have at first glance.

It does not suggest that Linux security experts are prone to errors and oversights. It simply suggests that the job of managing security vulnerabilities has become so incredibly hard that even the experts struggle to keep up.

Consider for a moment a typical in-house technology team that has a comprehensive remit - yes, including security, but also covering maintenance, operations, and an endless number of other responsibilities.

Even where enterprise teams have dedicated security experts, chances are that expertise must be applied across a range of threats and technology tools. It would be extremely rare even for a large enterprise to employ a Linux kernel security expert. And even if they did, as we’ve seen, these security experts can get it wrong.

IT teams have tough times ahead

On-site teams will always manage security vulnerabilities to some degree - by responding to news of major exploits, for example, and applying patches accordingly. Warnings from vendors can also drive action, and most good IT departments will have some type of patching regime that ensures patches are applied at set intervals.

But how can IT teams realistically keep up with a growing pile of CVEs that affect Linux distributions across the board, coming in on a daily basis? Does, say, a quarterly patching regime really provide sufficient security? And yes, patching is important, but should it dominate activity at the cost of everything else - which can easily happen given the volume of patches?

It is easy to see that IT teams will find it difficult to keep ahead of the ever-growing list of vulnerabilities.

Get your patching regime right

Formalizing your patching regime is the first step in trying to cope with the mountain of CVEs. Ad-hoc patches based on alarming news reports are not the way to go - there are simply too many vulnerabilities, and relatively few become widely known – leaving countless hidden vulnerabilities and associated exploits that pose a danger.

Nonetheless, one of the key steps in creating a patching regime is prioritizing patches. Critical, widely known vulnerabilities must be patched at speed - with no delay, and sacrificing availability if necessary. Patches for medium and low risk vulnerabilities could be scheduled to fit around the workloads of tech teams, or to avoid issues with availability.
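
As a rough illustration of that kind of triage (a minimal sketch - the CVE identifiers and CVSS scores below are made up, and real policies will weigh more than a single score), prioritization can be as simple as mapping severity to a patching window:

    # Hypothetical backlog of pending patches; IDs and scores are illustrative only.
    pending = [
        {"cve": "CVE-0000-0001", "cvss": 9.8, "affects": "kernel"},
        {"cve": "CVE-0000-0002", "cvss": 6.5, "affects": "openssl"},
        {"cve": "CVE-0000-0003", "cvss": 3.1, "affects": "utility package"},
    ]

    def schedule(cvss):
        """Map a CVSS score to a patching window."""
        if cvss >= 9.0:
            return "patch immediately, even at the cost of availability"
        if cvss >= 7.0:
            return "next maintenance window"
        return "regular patching cycle"

    # Work through the backlog, most severe first.
    for item in sorted(pending, key=lambda p: p["cvss"], reverse=True):
        print(f"{item['cve']} ({item['affects']}): {schedule(item['cvss'])}")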

Another key step is to build a sufficiently complete inventory of the hardware and software that requires patching. Some targets for patching will be immediately obvious, but others can easily be missed.

In building your inventory you may also identify some scope for standardization - in other words, upgrading software to the same version or consolidating vendors to make managing patching easier.
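
Much of the software side of that inventory can be pulled straight from the package manager. A minimal sketch for an RPM-based distribution is shown below; on Debian-family systems the equivalent query would go through dpkg-query instead:

    import subprocess

    # Query the RPM database for every installed package and its version.
    result = subprocess.run(
        ["rpm", "-qa", "--queryformat", "%{NAME} %{VERSION}-%{RELEASE}\n"],
        capture_output=True, text=True, check=True,
    )

    inventory = sorted(set(result.stdout.splitlines()))
    print(f"{len(inventory)} installed packages")
    for line in inventory[:10]:
        print(line)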

Finally, it’s worth codifying your patching regime into a formal patching policy. Patching is hard to do consistently, and all it takes is a single failure to open the door to disaster. A codified patching regime can help your team stay on track with patching - year in, year out.

The trade-off with patching

With any patching regime there is usually a trade-off to be made between availability and security. Yes, you can patch in a highly secure manner – applying patches as fast as they are released. But patching inevitably has an impact on availability as patching often requires server reboots.
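
A common symptom of that trade-off is a machine quietly running an older kernel than the newest one installed, because the reboot that would activate the new kernel keeps being postponed. A rough check - a sketch that assumes kernels live under /boot and uses a naive version comparison - might look like this:

    import glob
    import subprocess

    # Kernel version currently running.
    running = subprocess.run(["uname", "-r"], capture_output=True, text=True).stdout.strip()

    # Kernel versions installed on disk (naive lexical sort; a production check
    # would use a proper version comparison).
    installed = sorted(p.split("vmlinuz-", 1)[1] for p in glob.glob("/boot/vmlinuz-*"))

    newest = installed[-1] if installed else running
    if running != newest:
        print(f"Reboot pending: running {running}, newest installed {newest}")
    else:
        print(f"No reboot pending: running {running}")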

In fact, some companies can have specific business requirements that prevent taking down services or servers to apply patches even if a critical CVE appears, which can leave services vulnerable to a new exploit.

Even where you can take servers offline for maintenance, services are degraded and, as a consequence, so is the end-user experience. Think of an online retailer with thousands of customers online suddenly taking half its servers offline for maintenance.

Then there’s also the drain on tech teams, who inevitably sacrifice time spent on other tasks to spend time on patching. Security teams can easily be overwhelmed by the burden of patching. There is an alternative, however.

Consider automated patching tools

We’ve identified two key issues behind standard patching regimes: the time and effort required by patching, and the disruption associated with patching. One solution that’s worth considering is automated patching - and even more so if it is rebootless automated patching that applies patches without requiring server restarts.

Automated patching tools continuously monitor patch releases and apply these patches automatically, without intervention. That removes the need to dedicate manpower to patching server fleets - patching simply happens seamlessly in the background. With automated patching, tech teams are never left overwhelmed by countless patching tasks, nor do they need to try and predict which patches are most important. Instead, all patches are applied seamlessly, evenly, and consistently.

Some automated patching tools, such as KernelCare, can apply patches on the fly - patching live, while a machine is running, and without requiring a reboot. Live patching limits disruption because machines are not taken offline to process a patch. When there is minimal risk of disruption, patching consistency is likely to improve.
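
Note that on a live-patched machine, uname -r keeps reporting the kernel that was booted, so verifying protection works differently. As an assumption-heavy sketch - KernelCare ships a kcarectl client, and the --info flag here is taken from its documentation rather than verified in this article - a quick status check might look like this:

    import subprocess

    # Kernel version as reported at boot time; live patches do not change this.
    booted = subprocess.run(["uname", "-r"], capture_output=True, text=True).stdout.strip()
    print(f"Kernel booted: {booted}")

    try:
        # Assumption: kcarectl is installed and supports an --info status flag.
        info = subprocess.run(["kcarectl", "--info"], capture_output=True, text=True, check=True)
        print("Live patching status:")
        print(info.stdout)
    except FileNotFoundError:
        print("kcarectl not found - the live patching client is not installed on this host.")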

In other words, the right automated patching tool can resolve two of the biggest problems companies face with patching: effort required, and disruption.

Patching is critical, no matter how you choose to do it

Whatever your company does to cover itself for patching, and however you arrange your patching regime, the one certainty is that patching is critical. Patches must be applied, but there is a decision to be made about how often you patch and how you patch.

Given the size of the cybersecurity threat, there is a strong argument for automated patching. Tech teams and security researchers alike are increasingly overwhelmed, and automated patching bridges the resource and availability gap.

It’s always an option to simply apply more manpower to the patching challenge, and for some workloads that may be the only option. Yet in most cases automated, rebootless patching can deliver a big win against today’s enormous cybersecurity challenges.
