
The Risk of AI Risk

By Dan Hadaway | Monday, May 24, 2021

Or, the risk of email hypnosis . . .


And the other implications of complacency!
Another one of those Dan’s New Leaf Posts, meant to inspire thought about IT Governance . . . .


Now that the pandemic is coming to an end, most of us are returning to our daily commutes. Are you finding yourself in your garage at the end of the day, wondering how you got there again? It might take us a little while before we slip back into what we call autopilot. I use this autopilot concept as a metaphor in user awareness training to help people understand what I call “email hypnosis.” You see, when we do the same thing over and over and over again, we end up on autopilot.

There are risks associated with autopilot, aren’t there?  (If you need a reminder, check out my article, The American Monkey Trap!)

Whenever I study Artificial Intelligence, which is more often than you’d think these days, I am always struck by the similarities between neural networks and machine learning and what we have always called “autopilot.”  To me, the clear and present danger of Artificial Intelligence is very similar to the risk we face on our commutes.  Something different . . . something unforeseen or out of the ordinary . . . could happen and, instead of recognizing it and responding, we die or kill somebody because we are on autopilot.  Somebody could pull out in front of us while our mind is on the podcast we’re listening to instead of the road right in front of us.

Facebook learned about AI risk after the January 6th riots.  They should have seen it coming; they took a lot of flak about their algorithms as early as 2014, during the Ferguson riots.  Not only did their algorithms advertise weapons to people discussing the riots, but Facebook struggled for days trying to turn off the ads.

Unintended consequences: thus the dangers of autopilot; thus the dangers of neural networks.

Yes, like any risk, these risks can be managed (think self-driving cars).  But there are many different deployments of AI in use at your bank right now, and you probably do not realize it.  We use AI in fraud detection, on our websites, to analyze decisions (like lending), and in many of our IoT devices.  We use a neural network to log into our phones, and machine learning to customize our cloud-based applications, without even one thought about the risk of unintended consequences.
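
To make “unintended consequences” concrete, here is a minimal, purely hypothetical sketch in Python (the data, the model, and the “neighborhood” feature are all invented for illustration, not taken from any real bank): a lending model is trained only on income and a neighborhood flag, yet because that flag happens to correlate with a protected group, the model reproduces the historical bias without ever being shown the protected attribute.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical applicants: the protected attribute is never given to the model,
# but a "neighborhood" flag correlates strongly with it (an unintentional proxy).
protected = rng.integers(0, 2, n)
neighborhood = (protected + (rng.random(n) < 0.15)) % 2   # agrees with protected ~85% of the time
income = rng.normal(50.0, 10.0, n)

# Historical approvals were biased: otherwise-qualified members of the protected
# group were denied 40% of the time.
historical_approval = ((income > 45) & ~((protected == 1) & (rng.random(n) < 0.4))).astype(int)

# The model only ever sees income and the neighborhood proxy.
X = np.column_stack([income, neighborhood])
model = LogisticRegression().fit(X, historical_approval)

pred = model.predict(X)
for group in (0, 1):
    print(f"predicted approval rate, group {group}: {pred[protected == group].mean():.2f}")
# The predicted approval rates differ by group even though 'protected' was never
# a feature: the model inferred it from the neighborhood proxy.

Nobody in that sketch set out to discriminate; the proxy did the work, and that is exactly the kind of unintended consequence a risk assessment should be hunting for.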

Why?

To me, the real risk of Artificial Intelligence is the nontechnical response to the notion of AI risk.  “That’ll never happen” is what I often hear back, an immediate knee-jerk response to artificial intelligence risk.  And then I hear about robots taking over the world.  I actually agree with that; robots will probably never take over the world.  But Terminator was a movie, not a risk assessment.  The real risk with AI risk is that management thinks it’s about robots taking over the world, and not about the unintended consequences of machine learning used in chatbots, fraud detection, and other applications.  Banks are being caught unintentionally discriminating due to AI risk.  Chatbots are irritating the heck out of our customers.


I worry about this because it reminds me of the turn of the century, when bank management said again and again, “that will never happen here.”  The risk of AI risk . . . the real risk of artificial intelligence . . . is the notion that the risk is not real: the notion that AI risk is about robots conquering the world, instead of chatbots leaving our customers exasperated.  And we need management on board much more quickly than with cyber-risk, because the solution to AI risk is currently vendor management.

On May 17th, Vigilize (currently Matt Jolley) published an article review on AI risk.  It leads to guidance from Microsoft on measuring machine learning risk, and I strongly urge us all to read it!


Original article by Dan Hadaway, CRISC, CISA, CISM, Founder and Managing Partner, infotex

“Dan’s New Leaf” is a “fun blog to inspire thought in the area of IT Governance.”

 




 
