
The Four Basic Truths of System Security

By Dan Hadaway | Sunday, January 1, 2023 - Leave a Comment

System Security and Cybersecurity are not the same thing. . .

Another one of those Dan’s New Leaf Posts, meant to inspire thought about IT Governance . . . .

[Graphic: an eight-pointed asterisk, one point for each of the programs described in this article, with a circle labeled "System Security" at the center.]

Regarding “information security,” the last thirty years have seen an evolution of frameworks, laws, and assessment approaches that intimidate management teams with their complexity.  Even the name has changed, to cybersecurity, though if you listen hard, you can hear people like me arguing there is a difference.  Some say that information security protects information, while cybersecurity protects the technology using that information.  I ask, “What about the people using the technology to process the information?”

In fact, if you notice the title of this article, you’ll see I used the term “system security.”  For the sake of this article, “system security” means the confidentiality, integrity, and availability of information, devices that process and store that information, the users of that information and devices, and the networks which the information travels upon.  To me, system security covers data at rest, data in motion, and data in use.

Before these frameworks, ranging from ffiec.gov to ISO to COBIT to SSAE 16 to ITIL to PCI, became essential to those in the security process, I had a simple method of helping people understand what needed to happen to keep information, technology, and people safe.  I call it the four basic truths.

There are four basic truths when it comes to system security:

  1. Technology, and the use of technology, creates risk.
  2. There is a common cause to this risk.
  3. We can control this risk.
  4. To control the risk in a balanced, cost-effective manner, implement eight control systems.

One:  Technology, and the use of technology, creates risk.  In 2000, when I founded infotex to help high-quality Clients manage technology risk, I had to convince people that security represented a problem worth their attention.  In 2013, the apathy management held toward system security began to disappear.  (Thank you, Target, Neiman Marcus, Anthem, the United States Federal Government, and all the other huge organizations that created, through a lack of security, what I called “the parade of breach news.”)

And by 2017, you could hear a large popping sound, as those with their heads still in the sand became aware of ransomware.  Those who legitimately felt residual risk was low because they did not process confidential information changed their minds, as we all learned that availability was the A in CIA (confidentiality, integrity, and availability, the original goals of information security).  In the words of Jim Morrison, “and I’ll say it again.”  Technology, and the use of technology, creates risk.

Two:  There is a common cause to this risk: a threat exploiting a vulnerability.  We can measure that risk, relative to other risks, by applying metrics to likelihood and impact.  The three most likely families of threats are people, the environment, and the technology itself.  People create risk either maliciously (hackers) or accidentally (user mistakes).  The environment creates risks related to tornadoes, floods, or even where we use the technology.  And finally, the technology itself can create risk, as we all learn over and over again when new technology embarrasses us.  Technology failures can cost money (financial risk), cause us to be sued (legal risk), break laws and regulations (compliance risk), prevent us from doing our jobs effectively (availability or continuity risk), and embarrass us (reputational risk).

Three:  We can control this risk.  “Doing something about it” is why the frameworks referred to above are so complex: there are thousands of things we CAN do about risk.  But we should focus our efforts where they will be most effective.  There are three primary methods of responding to risk: transfer, mitigate, or accept.  The tactics available to us, or controls, all fall into one of these three categories.  Technology risk comes in three forms: inherent risk, the risk prior to the implementation of controls; residual risk, the risk remaining after our existing controls; and anticipated residual risk, what we think the risk will be after implementing planned controls.

We should prioritize our efforts by measuring the three forms.  Measurements can be as simple as a relative scale (like 1-8 for likelihood and 1-5 for impact), or as complicated as quantifying tangible and intangible probabilities and costs, including ease of remediation.  Ease of remediation of course includes cost, but total cost of ownership should be considered.  (Passwords are free, but teaching people to use strong passwords takes a lot of time.)  For example’s sake, a firewall mitigates substantial likelihood and is relatively inexpensive.  Burying our servers in the ground, and not letting anybody have access to them, would mitigate almost all likelihood and impact, but the total cost of ownership (availability) precludes us from taking this approach.
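The relative-scale approach above can be sketched in a few lines of Python.  This is a minimal illustration, not a prescription: the scale ranges come from the example in the text, while the control names and the scores assigned to them are hypothetical.

```python
# Minimal relative risk scoring: likelihood on a 1-8 scale, impact on 1-5,
# so scores range from 1 (negligible) to 40 (maximum relative risk).

def risk_score(likelihood: int, impact: int) -> int:
    """Relative score; only meaningful for comparing items on the same scales."""
    assert 1 <= likelihood <= 8 and 1 <= impact <= 5
    return likelihood * impact

# Hypothetical (likelihood, impact) pairs, inherent vs. residual.
controls = {
    "firewall":       {"inherent": (7, 4), "residual": (3, 4)},  # cheap, big likelihood cut
    "buried_servers": {"inherent": (7, 4), "residual": (1, 1)},  # "secure" but unusable
}

for name, c in controls.items():
    inherent = risk_score(*c["inherent"])
    residual = risk_score(*c["residual"])
    print(f"{name}: inherent={inherent}, residual={residual}, reduction={inherent - residual}")
```

The point of the sketch is that the numbers only need to be consistent, not precise: ranking the firewall against the buried servers is what drives prioritization, and total cost of ownership still has to be weighed separately.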

Four:  To control risk in a balanced, cost-effective manner, implement eight control systems.  These systems, or business processes, work interactively and should be documented via policy and procedure.

  1. Awareness. I had to cancel a workshop scheduled for September 2001, because Al Qaeda attacked America.  While watching the aftermath, I noticed we were far more secure on 09/12/2001 than we were on 09/10/2001.  That was when I coined the phrase, “Awareness is 9/11’s of the battle.”  Awareness is the reason we all became more secure after the 2013 Parade of Breach News.  It should be “applied in all directions,” which usually includes the Board, Management, the Technical Team, Users, Vendors, and Customers.  We achieve awareness through education, motivation, and activation.  We lower risk by educating people and systems in all directions about the threats, vulnerabilities, and controls in our systems, as well as the risk we have accepted and our plans to lower this risk.  We motivate users by helping them understand why controls are in place.  We activate awareness by putting our users on guard, usually by testing user-related controls.  Our policies educate, our training motivates if we explain why, and activation is achieved by phishing tests, pretext calling tests, red team exercises, incident response tests, blue team tests, etc.
  2. Risk Management: Once we have the right people and systems aware, we need to inventory threats and vulnerabilities, as well as assets, and apply relative measurements to the entire inventory.  The inventory should include ALL information assets: any technology, information, place, device, or vendor that contains information our customers, partners, and employees trust us to protect.  Nowadays software makes this easy.  Back in 2001 we used spreadsheets.  Spreadsheets still work; the important thing is to start thinking in terms of inherent, residual, and anticipated residual risk as vendors pitch new ways to control risk, or as we respond to our framework assessments.  Essential deliverables from the risk management system include an asset-based risk assessment, updated annually and presented to the board or ownership; an audit risk assessment and audit plan; and a threat-based risk assessment.  Annual board and management awareness training should include summaries of the above deliverables.  Finally, a risk response document, which establishes mitigation plans using one of the three methods for responding to risk (transfer, mitigate, accept), should be developed annually and can be included in one of the above deliverables.  Transfer and mitigation tactics must be part of the technology plan.  Risk acceptance decisions should be monitored by the incident response team.  The primary goal of risk management is prioritization of awareness and controls, ensuring appropriate awareness of risk, risk response (especially accepted risk), controls, and plans to introduce new controls.  All controls should be documented, for each of the eight systems, in an IT Governance Program.
  3. Vendor Management: One of the most used risk transference methods is outsourcing.  We use cloud providers for everything from our core applications to email.  Each third party that receives information or has access to information should be assigned a vendor owner, who works with the Information Security Officer to measure and control vendor risk, using tactics that address five precepts:  contract review, assurance review, insurance review, financial review, and business continuity plan review.  A vendor risk report should be presented to the board annually.
  4. Asset Management: Not only do we need to brainstorm all information assets, but each one should be assigned a data classification (does it store critical, confidential, internal-use, or public information?), an asset owner, and metrics related to inherent risk (likelihood, impact, and, if we want to get granular, volume).  Nowadays, asset management programs make this easier, and can even update automatically as new assets are added or old assets are retired.
  5. Access Management: Speaking of retiring old assets, access to information assets should be closely controlled from the time an asset is introduced into our system until after retirement, when it is destroyed or, in the case of information, archived.  This system includes management of identities as well as access.  Data ownership should be assigned during a data classification process.  In the early days, we would use software to track data ownership, classification, and access.  Nowadays, the Zero Trust architecture published by NIST establishes a paradigm we can use to manage access.  Access management includes not only how we approve and prevent access to information, but also data classification tactics, as well as authentication into the asset.
  6. Technical Security Standards: Speaking of authentication, there are many technical controls which must be in place and tested on a regular basis.  Documentation should be audited regularly against actual practices to ensure technical controls are current, applicable, and enforced.  Examples of such documentation include the access management procedure, change control procedure, threat monitoring procedure, server hardening procedure, playbooks and runbooks, vulnerability management procedure, etc.
  7. Incident Response: When, not if, the above systems fail, leading to a potential breach of confidentiality, integrity, or availability, we must have a proactive, well-tested method of response.  The board or ownership should define a multi-disciplinary incident response team, including membership from the board, senior management, marketing or public relations, human resources, information security, information technology, and operations.  A plan should be developed and tested so that, when, not if, a negative event occurs, escalation is appropriate and management recognizes the response process as it unfolds.  Incident response tests should exercise data-leakage scenarios and insurance considerations, and thus include senior management and the board; technical response tactics, from detection through escalation, should be tested in “blue team tests,” which would not include nontechnical personnel.
  8. Business Continuity: As stated above, availability turns out to be an important goal of system security.  Natural or man-made disasters, as well as ransomware, will require our team to implement contingencies appropriate to the disaster and to the business needs of all information assets.  A business impact analysis should drive the priorities of how the technical team responds during a disaster, and the plans should be tested so that escalation is appropriate and management recognizes the response process as it unfolds.
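The inventory-and-classification ideas running through systems two, four, and five above can be sketched as a small data structure.  Everything here beyond the classification tiers and the three risk forms named in the article (the field names, the sample assets, and their scores) is a hypothetical illustration:

```python
from dataclasses import dataclass

# Classification tiers, from most to least sensitive, as described above.
CLASSIFICATIONS = ["critical", "confidential", "internal use", "public"]

@dataclass
class InformationAsset:
    name: str
    owner: str                 # assigned asset owner
    classification: str        # one of CLASSIFICATIONS
    inherent_risk: int         # likelihood x impact, before controls
    residual_risk: int         # after existing controls
    anticipated_residual: int  # after planned controls

# A hypothetical two-asset inventory; a real one covers every technology,
# information, place, device, and vendor holding protected information.
inventory = [
    InformationAsset("core banking application", "CFO", "critical", 32, 12, 8),
    InformationAsset("marketing website", "CMO", "public", 10, 6, 6),
]

# Prioritize awareness and controls by residual risk, highest first.
for asset in sorted(inventory, key=lambda a: a.residual_risk, reverse=True):
    print(asset.name, asset.classification, asset.residual_risk)
```

Whether this lives in a spreadsheet or an asset management program, the structure is the same; the gap between residual and anticipated residual risk is what feeds the annual risk response document.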

Every framework that we may or may not choose to comply with . . . from the various NIST publications to ISO to ITIL to PCI to the FFIEC . . . and every law, from FISMA to GLBA to HIPAA to GDPR . . . all rest on the four basic truths of system security.  If management understands these basics, as well as the eight systems, it will be easier for the technical team to navigate the frameworks.  Most importantly, our customers . . . and the information they trust us with . . . will be safer.

Original article by Dan Hadaway CRISC CISA CISM. Founder and Managing Partner, infotex

“Dan’s New Leaf” is a fun blog to inspire thought in the area of IT Governance.

Speaking of safety, visit offerings.infotex.com to reach out to us and see how infotex can make your financial institution safer!


