The Difference Between Patch and Vulnerability Management
by Eric Kroeger and Jason Mikolanis
A deeper dive . . .
This is the technical companion to the recent Jolley | Hadaway article on how to explain patch management to nontechnical managers.
2018 sure has started off with a bang! Right out of the gates, we got Spectre and Meltdown, design flaws in the processor architecture of, oh let’s say, 90 percent of the systems in use today. Not a bad start to the I.T. year. This has led to a flood of firmware, operating system and application patches. And there will certainly be many more to come.
Even a “small” financial institution could have hundreds of devices (servers, desktop PCs, phones, thin clients, tablets) running various operating systems and numerous applications, many of which are likely to be affected in some way by these and other vulnerabilities. Vulnerability management is the process that we use to deal with this mess.
Last time, in the article titled “Understanding Patch Management,” Dan and Matt introduced a formula for vulnerability management: Vulnerability Management = Policy + Awareness + Prioritization + Patch Management + Testing + Tweaking. More formally, vulnerability management can be defined as the process of identifying, classifying, remediating and tracking vulnerabilities within a computing environment. It starts with knowing what we have in our network, determining what’s wrong with it, and prioritizing the remediation process based on risk to the organization. In a perfect world, we would let our systems automatically apply all patches as soon as they are released by the vendors. Yeah, right! Hundreds of systems x multiple applications x multiple patches = a recipe for disaster. And in the other perfect world, we would have the time (it would probably take years) to test every patch for performance degradation, harmful program interactions and loss of vendor support prior to rollout. We know that this cannot happen. Basically, it is a balancing act: an art, not a science.
It all starts when someone (sometimes a user, a software vendor or even a hacker) identifies a problem with a system or an application. The vulnerability becomes publicly known in some way, and hopefully the vendor issues a fix for the problem. The fix is not always made publicly available immediately. In fact, some vulnerabilities can take weeks or months to fix. With others, there may never be a way to technically fix the problem. Vulnerability assessment tools are designed to examine systems or devices, identify them, and then check them against a series of known issues for that type of system. Is the IOS code the latest? Is the firmware current? Is the system properly configured? If not, the tool builds a list of problems and (in the case of most tools) offers suggestions on what to do to address the known vulnerabilities or weaknesses.
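At its core, many of a scanner’s checks boil down to comparing a detected software version against versions known to be affected. Here is a minimal sketch of that idea; the product names and version data are invented for illustration and do not come from any real tool’s plugin database:

```python
# Minimal sketch of a version-based vulnerability check.
# The "known_vulnerable" data below is invented for illustration only.

known_vulnerable = {
    # product: highest version still affected (fixed in the next release)
    "exampled": (2, 4, 9),   # hypothetical daemon, fixed in 2.4.10
    "examplefw": (1, 1, 3),  # hypothetical firewall firmware, fixed in 1.1.4
}

def parse_version(text):
    """Turn '2.4.7' into a comparable tuple (2, 4, 7)."""
    return tuple(int(part) for part in text.split("."))

def check_host(detected):
    """Return a list of findings for a host's detected software versions."""
    findings = []
    for product, version_text in detected.items():
        ceiling = known_vulnerable.get(product)
        if ceiling and parse_version(version_text) <= ceiling:
            findings.append(
                f"{product} {version_text} is vulnerable; "
                f"upgrade past {'.'.join(map(str, ceiling))}"
            )
    return findings

print(check_host({"exampled": "2.4.7", "examplefw": "1.1.4"}))
```

Real scanners layer thousands of such checks (plus configuration and credentialed checks) on top of device identification, but the compare-against-known-bad pattern is the same.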
So, information technology groups must employ a process to 1) identify vulnerabilities across all systems, 2) assess the risks associated with applying (and not applying) fixes, 3) apply patches in as controlled an environment as possible, 4) track changes so that we know what has been fixed (and what could have caused problems), and 5) document the process so that we can analyze and report on the program. And if we are able to wrap this process up neatly in a solid vulnerability management policy that our senior management team understands and approves, we are more likely to get cut some slack when things don’t go exactly as planned.
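The five steps above amount to walking each vulnerability through a lifecycle. A simple sketch of what that record-keeping might look like; the stage names and fields here are our own invention, not any particular tool’s schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative lifecycle stages; the names are invented for this sketch.
STAGES = ["identified", "risk-assessed", "patched", "verified", "documented"]

@dataclass
class TrackedVulnerability:
    host: str
    issue: str
    severity: str
    status: str = "identified"
    history: list = field(default_factory=list)

    def advance(self, note=""):
        """Move to the next lifecycle stage and record when and why."""
        next_index = STAGES.index(self.status) + 1
        if next_index >= len(STAGES):
            raise ValueError("already fully documented")
        self.status = STAGES[next_index]
        self.history.append((date.today().isoformat(), self.status, note))

vuln = TrackedVulnerability("FS-01", "SMBv1 enabled", "high")
vuln.advance("risk accepted pending patch window")
vuln.advance("patch applied during maintenance window")
print(vuln.status)  # patched
```

The point of the history list is step 5: when something breaks two days after a patch run, you want a record of exactly what changed, where, and when.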
As we stated earlier, vulnerability management is much more of an art than a science. With a constant barrage of threats (including zero-days), exploits and patches, having a “perfect” process is basically impossible. Given this, vulnerability management has to be backed by a defense-in-depth (multi-layer) information security strategy that gives your organization the best chance of keeping systems and data secure. Vulnerability management is just one piece of the puzzle.
To bring this down to “ground level”, we highly recommend that our clients maintain a detailed inventory of all systems and applications in their environments. There are many tools that can help with this process, and they range from simple and reasonably priced for smaller networks (PDQ Inventory and SysAid) to comprehensive and somewhat expensive (SolarWinds and ConnectWise Automate) for larger environments. When the chips are down, it really helps to know as much as possible about what you have in your environment.
We also highly recommend the use of a vulnerability assessment tool like Nessus (Tenable), Qualys (Qualys.com) or Nexpose (Rapid7). In addition to helping identify the systems on the network (desktops, printers, servers, firewalls, routers, switches, etc.), these tools can tell you what the weaknesses are and (often) how to fix them. We prefer Nessus, and typically run automated Nessus scans on our networks as often as possible (at least monthly). We try to do both authenticated and unauthenticated scans to get as much information as possible. Nessus lets us maintain an inventory of systems and scan history, which helps us track and validate the successful application of patches to all of the devices. Nessus also helps rank the vulnerabilities by criticality. After the scan is complete, one of our IT specialists analyzes the results and starts working on the remediation plan. The scan results are used to determine which patches should be applied to which devices, and in what order. The remediation process is carefully documented to support troubleshooting in the days that follow.

As we mentioned earlier, not all vulnerabilities can be fixed: resources may be limited, there may not be a solution yet, or the current “solution” could be worse than the problem it solves. This was actually true in the cases of Meltdown and Spectre. For some systems, there are no patches yet. For others, the patches often cause significant performance degradation, and it is not clear whether there are any known exploits in the wild. Further, not all systems present the same risks. Does a vulnerability in a network printer present the same risk as a problem with the IOS code on a firewall? Probably not.
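One way to make the printer-versus-firewall judgment repeatable is to weight each finding’s severity by the criticality of the asset it sits on, then remediate in descending risk order. A hedged sketch: the weights and scores below are made up for illustration, though real programs typically start from CVSS scores and an asset classification of their own:

```python
# Rank findings by (severity score x asset criticality).
# All weights, hosts and scores here are invented for illustration.

ASSET_WEIGHT = {"firewall": 3.0, "server": 2.5, "desktop": 1.5, "printer": 1.0}

findings = [
    {"host": "FW-01", "type": "firewall", "issue": "outdated IOS", "cvss": 7.5},
    {"host": "PRN-2", "type": "printer",  "issue": "default creds", "cvss": 7.5},
    {"host": "SRV-9", "type": "server",   "issue": "SMB flaw",      "cvss": 9.8},
]

def risk(finding):
    """Combine raw severity with how much the asset matters."""
    return finding["cvss"] * ASSET_WEIGHT[finding["type"]]

remediation_order = sorted(findings, key=risk, reverse=True)
for f in remediation_order:
    print(f"{f['host']:6} {f['issue']:14} risk={risk(f):5.2f}")
```

Note how the firewall outranks the printer even though both findings carry the same severity score; that is the risk-based ordering doing its job.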
Either way, it is important to go about the process using a risk-based methodology and documenting the process carefully for tracking purposes. Once the remediation process is complete, we recommend repeating the Nessus scan to validate the intended results and to look for new vulnerabilities (as they come out way too often).
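Validating the follow-up scan amounts to diffing the two result sets: anything in the first scan but not the second was (presumably) remediated, anything new needs to enter the queue, and anything in both needs follow-up. A minimal sketch, with finding IDs invented for illustration:

```python
# Compare two scans' findings, keyed by (host, check id).
# The hosts and check ids below are invented for illustration.

before = {("FS-01", "smb-flaw"), ("FW-01", "weak-snmp"), ("PC-12", "old-java")}
after  = {("PC-12", "old-java"), ("FS-01", "new-openssl-flaw")}

fixed = before - after        # remediated (or at least no longer detected)
new = after - before          # appeared since the last scan
still_open = before & after   # carried over; needs follow-up

print(f"fixed: {sorted(fixed)}")
print(f"new: {sorted(new)}")
print(f"still open: {sorted(still_open)}")
```

The commercial tools do this comparison for you across scan history, but the set logic is worth understanding: a finding that disappears may have been fixed, or the scan may simply have failed to reach the host, which is one reason authenticated scans matter.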
Finally, we use a tracking process to keep tabs on the issues and remediation steps. The resulting report can be a spreadsheet that we manually update after each scan, or it can be a more sophisticated tracking mechanism that comes with the more expensive vulnerability assessment tools. We highly recommend that reports be presented to an information security group or a tech steering committee on a monthly basis. This helps the organization monitor the value and effectiveness of the vulnerability management process. With so much information in the news about data breaches, ransomware and corporate espionage, keeping your management team informed can go a long way toward building their confidence and having them on your side when something goes wrong. And unfortunately, the odds are not in our favor.
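For shops that track remediation in a spreadsheet, even producing that spreadsheet can be scripted from scan results. A sketch that writes a monthly status report as CSV; the columns and data are invented for illustration:

```python
import csv
from io import StringIO

# Rows a monthly committee report might carry; all data is illustrative.
rows = [
    {"host": "FS-01", "issue": "SMB flaw", "severity": "critical",
     "status": "patched", "verified": "2018-03-12"},
    {"host": "PRN-2", "issue": "default creds", "severity": "medium",
     "status": "open", "verified": ""},
]

def write_report(rows, stream):
    """Write the tracking rows as CSV to any writable text stream."""
    writer = csv.DictWriter(
        stream, fieldnames=["host", "issue", "severity", "status", "verified"]
    )
    writer.writeheader()
    writer.writerows(rows)

buffer = StringIO()
write_report(rows, buffer)
print(buffer.getvalue())
```

In practice you would pass an open file instead of a `StringIO` buffer; the point is that a consistent, machine-generated report is easier to keep current month over month than a hand-edited one.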
Eric Kroeger and Jason Mikolanis are senior consultants with Virtual Innovation, Inc. (www.vi-mw.com). Virtual Innovation serves its clients by helping make their systems more secure, available and recoverable. For more information, contact Eric Kroeger at 219-405-6533.