The Consequence of Unintended Consequences


Artificial intelligence carries risk, but so does organic ignorance …


Another one of those Dan’s New Leaf Posts, meant to inspire thought about IT Governance . . . .



At a recent conference, I noticed two camps emerging in the debate over artificial intelligence. Some people embrace AI as a tool, while others support Elon Musk’s call for a pause in its development. As a risk manager, I fall into the former camp. I believe we need to use AI to manage AI risk.

Pausing AI development would give bad actors a head start, an unintended consequence itself. And unintended consequences are what worry me the most about AI. In fact, here's a new Danism of mine: "I'm not as much worried about artificial intelligence as I am organic ignorance."

The opposite of awareness.

But it's easy to say that. Let's face it, everything that could go wrong with technology, including the good old-fashioned hack attack, falls into the category of unintended consequences. But I saw a report that just over 50% of AI developers believe we have a 10% chance of extermination because of AI. Doesn't that mean that just under 50% of AI developers believe we have a 90% chance of successfully managing AI risk? How many in each camp are risk managers? And I'm also compelled to ask: what was our chance of extermination when we were looking at the risk of . . . I don't know . . . nuclear weapons?


As a "cyberpoet," I'm learning to use ChatGPT to brainstorm ideas and help generate different approaches to my talks and articles. Given you can't believe half of what it returns, I'm also investigating data validation methods. Not that I am the person assigned to AI as a subject-matter expert. That person is William, whom I interview in my podcasts. (We're probably going to start looking at the available APIs in the next podcast.)
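
Since we'll be looking at the APIs on the podcast, here's a minimal sketch of what a chat-completion call looks like. The endpoint is OpenAI's publicly documented one; the model name and the brainstorming prompt are illustrative assumptions, not anything William or I have put into production.

```python
# A minimal sketch of a chat-completion API call for brainstorming.
# The model name and prompts are illustrative assumptions.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # assumes the key is already set in the environment

payload = {
    "model": "gpt-3.5-turbo",  # assumption: any available chat model would do
    "messages": [
        {"role": "system",
         "content": "You brainstorm talk ideas for community bank information security officers."},
        {"role": "user",
         "content": "List five unintended consequences of adopting AI at a community bank."},
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Whatever comes back still has to be validated, of course. The API makes the brainstorming faster, but it doesn't change the integrity problem.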


Yes, AI will make hackers better hackers, APTs will become MAPTs (more advanced persistent threats), and social media will continue to be the worst place to gather believable information.

But AI will also make artists better artists, not replace them.  Writers will be even better at writing.

And bankers will be even better at banking.


But I also argue with ChatGPT, quite a bit. Despite not believing everything it says, I use a Socratic dialogue to generate interesting concepts, including two I'm sharing in this article. When I first started reading and writing poetry, back in the 70s, I had a wonderful little green book called "Webster's Rhyming Dictionary." But poems came to me when I was not around that little guy, so I'd write a few poems a month. When the internet allowed me to go to rhymezone.com, well, those who know me best know what happened. They have to listen to my poems – some of them good, some even funny – all the time. The tool I used improved. I now write more and better poems.

We cyberpoets have been writing about unintended consequences since the spammers and lurkers of the early 1990s, the Love Bug and subsequent harmful malware of the early 2000s, and the proliferation of attack vectors ever since. Along the way we've maintained elaborate lists – then spreadsheets, then databases, soon LLMs – of the risks associated with information technology in general. Most of the risks with AI are already covered in those lists, and we've been writing about them incessantly since the '90s.

So, AI and technology in general share many risks, including unintended consequences, ethical considerations, security (confidentiality, integrity, availability), complexity, reliability, etc.

But AI risk is unique in several ways.  With the assistance of ChatGPT, I was able to develop this list of seven unique issues arising from AI:

  1. Scale: AI has the potential to operate on a much larger scale than many other technologies, with the ability to process vast amounts of data and make decisions that can impact many people’s lives.
  2. Disruption: The general consensus is that while normal technological advances can be disrupters, the disruption from AI will be much larger.  (More job roles will be replaced, more new roles will be created).
  3. Autonomy: AI has the potential to operate autonomously, without human intervention or oversight, which can make it difficult to predict or control its actions.
  4. Complexity: AI systems can be highly complex and opaque, making it difficult to fully understand how they work and to identify potential risks or unintended consequences.
  5. Adaptability: AI systems have the potential to learn and adapt over time, which can make them difficult to predict or control as they evolve.
  6. Bias: AI systems could be trained with data that leans one way or another towards a political or other bias.
  7. Purpose: AI is being developed by nation-states and other potentially malicious actors who may combine it with other technologies, like cryptography and supercomputing, to learn unpredictable, successful attack methods.

I "humanized" the list, meaning that the first two or three lists ChatGPT generated didn't include everything I knew to be an issue. So what you read above is a human-aided artificial intelligence list about issues facing artificial intelligence.

This is how we are going to address AI risk.


The above factors, all of them unintended consequences, make AI risk unique, and require specific attention and strategies to manage the potential negative consequences of AI.

But ultimately the way we control AI risk will be no different than the way we control any risk: measure, respond, monitor. We need to conduct a "drill-down risk assessment" on artificial intelligence, neural networks, machine learning, and other gee-whiz-it's-here technologies.

That assessment should identify the unique AI risks described above: scale, disruption, autonomy, complexity, adaptability, bias, and purpose. We also need to develop inherent risk, controls, and residual risk metrics.
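
To make "inherent risk, controls, and residual risk" concrete, here is a minimal sketch of one common scoring convention: residual risk equals inherent risk times (1 minus control effectiveness). The seven categories come from the list above, but the scores, effectiveness values, and the example asset are illustrative assumptions, not a prescribed methodology.

```python
# A minimal sketch of one common scoring convention:
# residual risk = inherent risk x (1 - control effectiveness).
# Scores (1-5), effectiveness values, and the example asset are illustrative only.

AI_RISK_CATEGORIES = [
    "scale", "disruption", "autonomy", "complexity",
    "adaptability", "bias", "purpose",
]

def residual_risk(inherent: float, control_effectiveness: float) -> float:
    """Inherent score reduced by the fraction of risk the controls mitigate."""
    return round(inherent * (1.0 - control_effectiveness), 2)

# Hypothetical drill-down for one asset (say, a ChatGPT pilot at the bank).
assessment = {
    "bias":       {"inherent": 4, "controls": 0.50},  # human review of outputs
    "complexity": {"inherent": 3, "controls": 0.25},  # limited model transparency
    "autonomy":   {"inherent": 2, "controls": 0.75},  # human-in-the-loop required
}
assert set(assessment) <= set(AI_RISK_CATEGORIES)

for category, scores in assessment.items():
    print(category, residual_risk(scores["inherent"], scores["controls"]))
```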

However, we must start with an asset inventory to determine where we are already using AI. We should also review the normal threats, including human, environmental, data, legal, and so on, as well as unintended consequences. And we need to humanize AI, arming users with data validation tools so they can prioritize which answers to validate.
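
Here is a minimal sketch of what an entry in that asset inventory might look like. The field names and example systems are illustrative assumptions, not a standard schema.

```python
# A minimal sketch of an AI asset inventory record; field names and
# example entries are illustrative assumptions, not a standard schema.
from dataclasses import dataclass

@dataclass
class AIAssetRecord:
    asset: str                 # system, department, or vendor product
    ai_component: str          # where AI/ML is actually in use
    data_classification: str   # e.g., public, internal, confidential
    human_oversight: bool      # is a person reviewing the output?

inventory = [
    AIAssetRecord("loan origination system", "vendor credit-scoring model", "confidential", True),
    AIAssetRecord("website chat widget", "LLM-backed chatbot", "public", False),
    AIAssetRecord("marketing department", "ChatGPT drafting copy", "internal", True),
]

# Flag the entries that need priority review: confidential data with no human oversight.
needs_review = [r.asset for r in inventory
                if r.data_classification == "confidential" and not r.human_oversight]
print(needs_review)  # empty in this made-up inventory, but you get the idea
```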

The real work will be when we brainstorm controls. I guarantee "awareness" will be an answer to every threat, every vulnerability. Indeed, awareness is 9/11ths of the battle. I expect we all will be adding a paragraph or two to our Acceptable Use Policy about blending the use of ChatGPT accounts for personal and bank business, among other things.

To me, the unique control, for which we already have best practices in place, addresses the new risk area: integrity. You can't believe half of what ChatGPT returns. (It's almost as bad as social media!) But it's not just about data validation. How we interact with AI will be as important a control as how we fact-check it. As a cyberpoet, I know that the wording and phrasing of questions or requests can greatly affect the responses generated by AI systems.

In fact, my podcast, CyberViews, will be interviewing a person whose company has “found the prompt” that is producing about 90% accuracy.  His comment: “we figure that humans won’t get anywhere near this accuracy and be able to produce this much work in such a small time.”  In other words, a certain amount of “data corruption” is tolerable.

Will our risk assessments have MTC (Maximum Tolerable Corruption)?
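
If MTC does find its way into our risk assessments, the test itself would be simple: sample the system's answers against a validated answer key and compare the corruption rate to the threshold. A minimal sketch, assuming we have such a sample; the answers and the 10% threshold are illustrative.

```python
# A minimal sketch of a Maximum Tolerable Corruption (MTC) check:
# measure accuracy on a validated sample, then compare the corruption
# rate (1 - accuracy) to the threshold. Sample data and the 10% MTC
# are illustrative assumptions.

def measured_accuracy(model_answers: list, validated_answers: list) -> float:
    matches = sum(m == v for m, v in zip(model_answers, validated_answers))
    return matches / len(validated_answers)

def within_mtc(accuracy: float, mtc: float) -> bool:
    """True if the corruption rate stays at or below what the assessment tolerates."""
    return (1.0 - accuracy) <= mtc

accuracy = measured_accuracy(
    ["yes", "no", "yes", "yes"],   # what the AI said
    ["yes", "no", "no", "yes"],    # what a human validated
)
print(accuracy, within_mtc(accuracy, mtc=0.10))  # 0.75 False -- exceeds a 10% MTC
```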


We argue all the time. ChatGPT and me. And while I didn't use the name DAN (Do Anything Now – it would be Dan arguing with DAN), I too experimented with ways to trick it into doing things it started off declining to do.

But we argue.  All the time.

About truth. About accuracy. About integrity, in almost every conversation. And out of that back-and-forth, we agreed on these factors that can influence the importance of validating information provided by an AI system:

  1. Nature and importance of the information in question
  2. Classification of the information in question
  3. Intended use and audience of the information in question
  4. Need for accuracy; potential consequences of errors or inaccuracies in the information in question
  5. Complexity or nuance of the information in question
  6. Subject matter or domain expertise of the AI system
  7. Training method, and the quality, volume, and relevance of the training data used to develop the AI system
  8. Specific wording and phrasing of the question or request

And number nine . . . does it make sense? (ChatGPT did not suggest this.)

So often I know immediately that ChatGPT’s reply is incorrect because . . . it simply does not make sense!

I humanized the above list as well.  I added points 2 and 3, and I added “Need for accuracy” in front of point 4.

But I’m learning that a quicker way to prioritize data validation is by asking three questions:

  1. Do I know this material? (If so, I can tell the accuracy by reading it; if not, I need a search engine tab open.)
  2. Who will see or hear the results of this query? Since I speak at conferences, I had to add the word "hear." In a bank, the answer to this could be "auditors, management, customers, or just nobody." The last one lowers the priority of validation.
  3. Is the information procedural or factual? I find the former to be more accurate than the latter, and the latter is easier to check. (See the sketch after this list.)
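
Here is that three-question triage as a minimal sketch. The weights are arbitrary illustrations of the idea, not a scoring standard.

```python
# A minimal sketch of the three-question validation triage above.
# The weights are arbitrary illustrations, not a scoring standard.

def validation_priority(i_know_this: bool, audience: str, is_factual: bool) -> int:
    """Higher score = validate before using the answer."""
    score = 0
    if not i_know_this:
        score += 2   # can't spot errors on sight; keep a search engine tab open
    if audience in ("auditors", "management", "customers"):
        score += 2   # someone other than me will see or hear it
    elif audience == "nobody":
        score -= 1   # "nobody" lowers the priority of validation
    if is_factual:
        score += 1   # factual claims need checking more often than procedural ones
    return score

# Example: conference material I don't already know, full of factual claims.
print(validation_priority(i_know_this=False, audience="customers", is_factual=True))  # 5
```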

Let's keep in mind, again, that human accuracy is far lower than what we'd tolerate from an artificial intelligence. Think about it: our nation has been losing forty to forty-five thousand citizens per year to auto accidents caused by human error. Other than the local news, not much is made of this. But when a self-driving car causes a death, it's front-page news in all the media.


Vendor management is crucial in managing the risks associated with AI. Contracts with vendors must address the purpose of AI usage, involvement in the fine-tuning process, and confirmation of the intended use of data. Assurance processes must be confirmed, and a new framework for the assurance of service organizations is needed.

The AICPA must improve its SSAE-18 methodology to address the risks associated with AI. SDLC controls reviews are critical, and organizations must demand assurance tools that test the code.

Other control systems we will need to tweak:  our BIA, incident response and BCP plans, firewall and SIEM settings, content filters and, of course, our access management program, which will drive in part the update to the Acceptable Use Policy.

We can manage the risk of AI. The biggest difference between AI and every single other technology is "the unintended consequence of Arnold Schwarzenegger." Thanks to the Terminator series, we do not have to convince people of the problem. We need to use AI to manage AI risk, embracing it as a tool. As cyberpoets, we must continue to explore the unintended consequences of AI and develop strategies to mitigate them.

It has always been about the way we USE technology.  I hope nobody is offended when I say organic ignorance.  But the ignorance is not only on the development side.  The consequences of AI only occur when we USE the system.  And the GREATEST consequence – of the unintended consequences of AI – directly impacts me, and you.  You see, the consequence of unintended consequences, in this case, is that I get to write about something a bit more frightening, if not enlightening!


Original article by Dan Hadaway, CRISC, CISA, CISM, Founder and Managing Partner, infotex.

"Dan's New Leaf" is a "fun blog to inspire thought in the area of IT Governance."

