
The Risk of AI Risk

By Dan Hadaway | Monday, May 24, 2021 - Leave a Comment

Or, the risk of email hypnosis . . .


And the other implications of complacency!
Another one of those Dan’s New Leaf Posts, meant to inspire thought about IT Governance . . . .


Now that the pandemic is coming to an end, most of us are returning to our daily commutes.  Are you finding yourself in your garage at the end of the day, wondering how you got there again?  It may take us a little while before we find ourselves back on what we call autopilot.  I use the autopilot concept as a metaphor in user awareness training to help people understand what I call “email hypnosis.”  You see, when we do the same thing over and over and over again, we slip into autopilot.

There are risks associated with autopilot, aren’t there?  (If you need a reminder, check out my article, The American Monkey Trap!)

Whenever I study Artificial Intelligence, which is more often than you’d think these days, I am always struck by how similar neural networks and machine learning are to what we have always called “autopilot.”  To me, the clear and present danger with Artificial Intelligence is very similar to the risk we face on our commutes.  Something different . . . something unforeseen, something out of the ordinary . . . could happen and, instead of recognizing it and responding, we die or kill somebody because we are on autopilot.  Somebody could pull out in front of us while our mind is on the podcast we’re listening to instead of the road right in front of us.

Facebook learned about AI risk after the January 6th riots.  They should have seen it coming; they took a lot of flak about their algorithms as early as 2014, during the Ferguson riots.  Not only did their algorithms advertise weapons to people discussing the riot, but Facebook struggled for days trying to turn the ads off.

Unintended consequences:  Thus the dangers of autopilot; thus the dangers of neural networks.

Yes, like any risk, AI risk can be managed (think self-driving cars).  But there are many different deployments of AI in use at your bank right now, and you probably do not realize it.  We use AI in fraud detection, on our websites, to analyze decisions (like lending), and in many of our IoT devices.  We use a neural network to log into our phones and machine learning to customize our cloud-based applications, without even one thought about the risk of unintended consequences.
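To make that “unintended consequences” point concrete, here is a minimal, purely illustrative sketch.  The synthetic transaction history, the anomaly model, and the “vacation purchase” scenario are all assumptions for this example, not anything taken from a real bank’s fraud system; it simply shows how a model left on autopilot can quietly flag a legitimate customer.

```python
# Illustrative only: a tiny anomaly-based "fraud" model left on autopilot.
# Data, features, and thresholds are assumptions made for this sketch.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Train on a customer's "normal" history: small, local purchases.
history = np.column_stack([
    rng.normal(40, 10, 500),   # typical dollar amount
    rng.normal(5, 2, 500),     # typical distance from home (miles)
])
model = IsolationForest(random_state=0).fit(history)

# Something out of the ordinary, but perfectly legitimate: a vacation purchase.
vacation_purchase = np.array([[120.0, 900.0]])  # bigger amount, far from home
print(model.predict(vacation_purchase))  # -1 means "anomalous" -- the card gets flagged

# If nobody is watching the flags, the customer gets declined while traveling:
# an unintended consequence nobody bothered to risk assess.
```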

Why?

To me, the real risk of Artificial Intelligence is the nontechnical response to the notion of AI risk.  “That’ll never happen” is what I often hear back, an immediate knee-jerk response to artificial intelligence risk.  And then I hear about robots taking over the world.  I actually agree with that; I agree that robots will probably never take over the world.  But Terminator was a movie, not a risk assessment.  The real risk with AI risk is that management thinks it’s about robots taking over the world, and not about the unintended consequences of machine learning used in chatbots, fraud detection, and other applications.  Banks are being caught unintentionally discriminating because of AI risk.  Chatbots are irritating the heck out of our customers.
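As a hedged illustration of how that unintentional discrimination can happen (the data, the “neighborhood” proxy, and the model choice below are assumptions for the sake of the example, not a description of any real lending system): a model can have the protected attribute removed entirely and still learn it through a correlated proxy feature.

```python
# Illustrative only: a proxy feature lets a lending model discriminate
# even though the protected attribute is never given to it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, n)                  # protected attribute (never used as a feature)
neighborhood = group + rng.normal(0, 0.3, n)   # proxy strongly correlated with the group
income = rng.normal(50, 10, n)                 # legitimate feature

# Historical approvals already reflect the proxy, so the model learns the pattern.
approved = (0.05 * income - 1.5 * neighborhood + rng.normal(0, 1, n)) > 0

X = np.column_stack([income, neighborhood])    # note: 'group' is excluded from the features
model = LogisticRegression().fit(X, approved)

preds = model.predict(X)
for g in (0, 1):
    print(f"group {g} approval rate: {preds[group == g].mean():.2f}")
# The approval rates differ even though the protected attribute was "removed."
```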


I worry about this, because it reminds me of the turn of the century, when bank management said again and again, “that will never happen here.”  The risk of AI risk . . . the real risk of artificial intelligence . . . is the notion that the risk is not real: the notion that AI risk is about robots conquering the world, instead of chatbots leaving our customers exasperated.  And we need management on board much more quickly than with cyber-risk, because the solution to AI risk is currently vendor management.

On May 17th, Vigilize (currently Matt Jolley) published an article review on AI Risk.  It links to guidance from Microsoft on measuring Machine Learning risk, and I strongly urge us all to read it!


Original article by Dan Hadaway, CRISC, CISA, CISM, Founder and Managing Partner, infotex

“Dan’s New Leaf” is a “fun blog to inspire thought in the area of IT Governance.”
