Yes, the CISO of the Starship Enterprise
On AI replacing the business of cybersecurity.
Another one of those Dan’s New Leaf Posts, meant to inspire thought about IT Governance . . . .
From time to time, my friends from high school, and even some from college, who have a minimal understanding of the cybersecurity business, will hear some news that causes them to be concerned about the viability and health of our company. People reached out like this in droves back in 2008. In 2005 we had announced our commitment to the banking industry, and by 2008 a lot of people – including my partners at the time – were questioning the wisdom of that. Was the Great Recession going to destroy our business?
But bad actors didn’t let the banking crisis of 2008 stop them from attacking banks, and our business not only survived, it thrived.
Recently, I’ve been contacted by more than a few people concerned about artificial intelligence, and how it is someday going to “put us all out of a job.”
“Disruption results from any technology,” I will answer, careful about the tone of my voice. “We’ll get through it.” But when they turn the conversation to risk of extinction, I say I am confident that the hype over AI risk, and that hype alone, will prevent the extinction of our species. That’s because I have always maintained that “awareness is 9/11’s of the battle.”
Then they strike.
Some of them with genuine concern, some seemingly from a sadistic place. The former are usually asking, “Will your company survive this?” The sadistic ones – just kidding me – say something like, “I read that AI will completely replace the need for security.”
But either way, that’s when I usually revert to my Worf metaphor. You see, in the 24th century, one of the most important roles on any starship will be the role held by Worf: Security Officer. For those of you who don’t follow “Star Trek: The Next Generation,” Worf is the Klingon security officer aboard the starship that hosted the 1980s sci-fi television show. Star Trek fans grew to love him and his job description. His character was filled with honor and suspicion. His role included everything from physical security to third party management. And . . . he filled a very necessary role on the starship, by serving on a team (most notably with Geordi and Data) which often addressed what we, in the 21st century, would consider to be cybersecurity incidents.
Three centuries from now . . . heck, three decades from now . . . there will still be information. There will still be technology. There will still be threats exploiting vulnerabilities in those technologies. Sure, there are new and unexpected risks with AI, including the pace of change. But the “CIA” risk of AI is no different than any other technology. We will still need to maintain the confidentiality, integrity, and availability of information.
And we will continue to manage that risk. We will continue to measure it; we will continue to respond to it (by either mitigating, transferring, or accepting it). And we will continue to monitor both our response and potential threats exploiting potential vulnerabilities.
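For readers who like to see that cycle concretely, here is a minimal sketch of the measure/respond/monitor loop in Python. The 1–5 scoring scale, the appetite threshold, and the example risks are all hypothetical illustrations, not infotex’s actual methodology.

```python
# Illustrative sketch of a risk-management loop: measure risk, respond to it
# (mitigate, transfer, or accept), and monitor it over time.
# All values below are hypothetical examples, not a real risk register.

RISK_APPETITE = 9  # hypothetical threshold on a 1-25 (likelihood x impact) scale

def measure(likelihood: int, impact: int) -> int:
    """Measure risk as likelihood times impact (each scored 1-5)."""
    return likelihood * impact

def respond(score: int) -> str:
    """Accept risk within appetite; otherwise mitigate (or transfer it,
    e.g. via insurance)."""
    if score <= RISK_APPETITE:
        return "accept"
    return "mitigate"

# Hypothetical risk register entries: name -> (likelihood, impact)
risks = {
    "ai-enabled phishing": (4, 4),
    "vendor outage": (2, 3),
}

for name, (likelihood, impact) in risks.items():
    score = measure(likelihood, impact)
    # Monitoring means re-running this measurement on a schedule and
    # checking whether the chosen response keeps risk within appetite.
    print(f"{name}: score={score}, response={respond(score)}")
```

Running the sketch prints a response for each risk; the point is only that the same loop applies to AI risk as to any other technology risk.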
Not only will demand for cybersecurity continue to grow, but we believe it will turn the burned-out into the excited again. Technology risk comes in waves, as do the skills and tactics of the cybersecurity superheroes. Because we measure, respond to, and monitor risk. We manage risk.
We at infotex will continue to empower our Clients to manage technology risk by showing them how to maintain the eight systems. And those eight systems – awareness, risk management, vendor management, asset management, access management, technical security standards, incident response and business continuity – will all arise from AI risk, organically but fertilized with a lot of awareness, just like they arise from any other technology.
We have already had AI in our systems – at least immature, narrow forms of AI. We have been addressing the risk of neural networks since the launch of the iPhone. When people say “we’ll never let AI on our network,” they forget that the biometric features we use are neural networks. They forget that when we use Siri on bank-provided iPhones, we’re using an immature form of AI.
Thus, what they’re really saying is “we need a policy that defines AI and our risk appetite for its various forms.”
Machine learning will help us more than hurt us, as we monitor systems and networks and AIs. Sure, “context-aware” malicious code will benefit from these sorts of tools, but that will only strengthen the need for stronger SIEMs. So, I’m definitely not worried about neural networks or machine learning somehow causing our company to stop growing in leaps and bounds.
Language models will enhance, but not replace, our need to respond to the ever-expanding attack surface. And yes, they may be able to replace bad writing and even worse researching, but I just don’t think they will replace the humans doing the important writing and research. They’ll make phishers more difficult to spot, and social engineers better armed with OSINT. But that only exacerbates the need for risk measurement, response, and monitoring, so we are okay there.
That leaves all other forms of AI, like AGI (Artificial General Intelligence) and the forms of AI that we have not yet thought up. It also leaves fictional AI like positronic brains (Isaac Asimov’s robots) and Skynet (the “revolutionary artificial intelligence system” built by Cyberdyne Systems for SAC-NORAD in the movie Terminator). I’m calling this stuff Skynet Risk, though the Center for AI Safety just called it “risk of extinction.” Wow.
Should we be worried about this?
I’m not losing sleep over what we don’t know. We don’t control that. And I personally don’t believe the hype about it. Might be my age.
Of course, I still encourage people to manage the potential for it as a risk. But banks have more imminent, more urgent, more applicable fish to fry. And I believe so firmly in awareness that I can rest, knowing the intense awareness of “Skynet Risk” will cure Skynet Risk.
Worst case scenario for us cybersecurity professionals? It’s Terminator versus Worf. Even if we humans do end up in a global war against our tools, those of us who are in cybersecurity will be fighting that war, won’t we?
It’s the unintended consequences of AI that we must prepare for. Ironically, we humans are like the language models we created. We don’t know what we don’t know. (AIs don’t know, period.)
Unlike AIs, which don’t need sleep anyway, we like to worry when it comes to the risk of not knowing what we don’t know. We toss and turn at night. So I encourage my friends to figure out what they can actually control, and worry about that.
But for us cybersecurity people? For the health of infotex? Skynet risk, even disruption, helps a company in the business of awareness, like infotex. It only creates demand for a company who maintains, as its slogan, “our Clients sleep at night.”
And all forms of AI . . . except maybe the fictional ones . . . will turn bad actors into better bad actors. (Or badder bad actors?). Thus, the good actors need to become safer good actors. And awareness is 9/11’s of that battle.
We have always maintained that a SIEM is not an application. The application is the tool. A SIEM, to us, is three teams working together as one team. The application and other tools we use to manage the security information and event management process may be part of what we provide as a Managed Security Service Provider, but it’s the three teams, working together as one team, that secure a system.
If a SIEM is three teams working together as one, then as the bad actors stand up their AI tools, the SIEM must rely on better AI tools to monitor the risk that we face. Because our customers trust us with the information that we collect about them, we must demand that the residual “CIA” risk from AI stay where it is now. It cannot rise.
So, if you are in cybersecurity, or if you work at infotex, do not lose sleep over whether you will be replaced by a Job Terminator. Instead, focus on what we can control. Follow the risk, and learn to measure, respond, and monitor that risk.
Original article by Dan Hadaway, CRISC, CISA, CISM, Founder and Information Architect, infotex.
“Dan’s New Leaf” – a fun blog to inspire thought in IT Governance.