The Greatest AI Risk

A New Risk?

Call it meta-risk

Another one of those Dan’s New Leaf Posts, meant to inspire thought about IT Governance . . . .

In a recent conversation, a thinking friend of mine posed the question: “what is the greatest risk with artificial intelligence?” It actually started with a question, “are there any new risks that AI exposes us to?” The flippant answer was, “of course, every new technology exposes us to new risk.” Then I added, with a kind of irritated, have-you-been-in-a-coma tone, “the word technology is synonymous with the word risk.”


But that was when my friend, who knows my love of cyber philosophy, triggered me. “No,” she said. “I mean risk outside of the normal CIA.”

You see, cyberpoets like me are in search of a risk category outside the bounds of confidentiality, integrity, and availability – the CIA triad – the original 1970s era goals of information security. We applauded when Dr. Spafford of Purdue University CERIAS proposed that control should be added to this triad. My cyberpoet self always marveled that this would make it a table, not a tripod.

But Spafford’s reasoning was sound. If we’re putting all of our data into the cloud, risks to the control of that data should be measured. I first heard this in a talk he gave at a technology conference I moderated in 2009. Back then, cloud computing was just starting to take off in the banking space. He so convinced me that I now run around thinking CIAC is the goal of information security. But then, I am an early innovator of vendor management practices. And that’s where the controls related to the goal of control kick in.

We cyberpoets get a kick out of weird phrases. A play on words can make our day. So “control controls” was a fun way of categorizing the questions that needed to be asked to meet control objectives. Questions like, “what format can you export our data in?” Or, “where does it say we own the data?” That sort of thing. Other control controls could include testing the data export, or backing the system up outside of the cloud provider’s backup methodology.
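To make that concrete, here is a minimal sketch of one control control in code: an automated test of a vendor’s data export. Everything in it is hypothetical – the file name, the expected columns, and the record floor are stand-ins for whatever your contract and data dictionary actually specify.

```python
import csv
from pathlib import Path

# Hypothetical expectations -- in practice these come from your vendor
# contract and data dictionary, not from this sketch.
EXPECTED_COLUMNS = {"account_id", "customer_name", "balance", "last_updated"}
MINIMUM_RECORDS = 1000  # sanity floor based on your known production counts

def test_vendor_export(export_path: str) -> list:
    """Run basic 'control controls' checks against a vendor data export."""
    findings = []
    path = Path(export_path)

    # Control control #1: can we actually get our data out?
    if not path.exists():
        return ["Export file not found: " + export_path]

    with path.open(newline="") as f:
        reader = csv.DictReader(f)
        # Control control #2: is the export in the agreed-upon format?
        missing = EXPECTED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            findings.append(f"Export is missing expected columns: {sorted(missing)}")
        # Control control #3: is the export complete enough to restore from?
        rows = sum(1 for _ in reader)
        if rows < MINIMUM_RECORDS:
            findings.append(f"Only {rows} records exported; expected at least {MINIMUM_RECORDS}")

    return findings

if __name__ == "__main__":
    for finding in test_vendor_export("vendor_export.csv") or ["All control controls passed."]:
        print(finding)
```

Run against each periodic export, a check like this turns “where does it say we own the data?” from a contract question into an operational one.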

Yeah, I liked talking about control controls. But the OCC rolled out OCC Bulletin 2013-29, its guidance on vendor management, which called my control controls a “termination plan,” and the rest is history!


What my friend was asking is this: “do you believe your table will now be five-legged?”


Is there a new risk, outside of CIAC, that AI exposes? My friend suggested “Ethical Risk” became apparent with AI. This would cover the risk that we are training our AI systems with bias.

Again, she knows of my love for cyber philosophy.  She was actually looking forward to quizzing me about AI, because she thought it would be cool to have the acronym CI-ACE.

Though she wondered if ethics risk fell under data integrity.  “Only if it doesn’t fall under one of the other two categories,” I quipped.

She reminded me how she found out about the triad – when I explained to her why I thought social media companies were purposefully allowing misinformation on their sites. (It was profitable).

She pointed out that I am a person who thought it unethical how social media companies offered a “free application,” not telling users that they were the product.

After a past election, I had discussed how there were inexpensive data validation controls that could be used to ensure better data integrity. I guess I was complaining that these sites were obviously not using those controls. We had mused about their purposeful exclusion of these controls from their social media Information Security Programs.

You see, the risk to social media companies was not misinformation. It was people not clicking on ads.

So, instead of focusing their algorithms on the prevention of data corruption, their algorithms were focused on other things – like getting us all angry.

These algorithms have been trained to rewire us . . . they literally mess with the chemicals in our bodies.  Physical reactions . . . is physical security a new risk?  What is behind this risk?

And the most concerning risk:  once we were aware, we all accepted the risk of being a product.

Or, I should say, those of us who did not delete our social media accounts. Sure, we may be mitigating it by being more careful, or by taking the app off our phones.

But we all accepted risk.

Just now, I stopped working on this article to chase a notification from a particular financial information aggregator.  I hadn’t logged into this app for more than a week, and a notification popped up on my phone:  “Did you just make a large purchase?”

No, I hadn’t. In fact, I hadn’t made any new purchases at all. So . . . being a cautious security person, I didn’t just click the link; I took the time to find the app and launch it, and guess what . . . there were no new purchases at all. They just wanted me to log into the app.

Now, being the cyberpoet I am, the algorithm does more than manipulate my dopamine or cortisol or whatever levels, as it does for most people; it has the added impact of irritating me with the notion that technology is being used to enslave, not free, the common person. The distraction lasts longer for me, as I ponder the risk implications.

Which brings me back to the risk causing this most recent distraction.  

The risk being managed wasn’t the risk to my bank account.  It was the risk to the aggregator’s bank account. 

And thus, my answer to the notion of a five-legged table – ethics is not a new risk. And it is present – or unfortunately not present – in all decision making, not just Information Security decision making.

But what about the fact I did not remove that app from my phone?


It’s more like when scientists realized Pluto was not a star. (But before they realized it was not a planet).  Pluto had always been there.  Science just didn’t realize what it was.


While I did not agree that ethical risk should be added to the table, my friend and I did agree that the greatest risk with AI is indeed data integrity.  Misinformation will be so much easier to generate and deploy. But this too isn’t new. The social media world has been using forms of AI to manipulate its users for at least a decade now.

The risk is not ethics. The risk is that there are different perceptions of risk. The risk of how we look at risk. The risk of which risks are being managed and which are not. And who the risks are being managed for. The risk of risk.

You could call it meta-risk. The risk that those managing risk do not prioritize it the way those using the technology would expect.

We run into meta-risk when we review SOC-2 reports. Those paying for the reports may not test the controls we would expect to be tested. That’s what we need to look for in a SOC-2 review. Did they exploit meta-risk to hide deficiencies?
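A minimal sketch of that review, as code: compare the controls you expected the report to test against the controls it actually tested, and treat the gap as a meta-risk finding. The control names below are illustrative placeholders, not taken from any real SOC-2 report.

```python
# Hypothetical scope comparison for a SOC-2 review. A real review would pull
# these sets from your own control expectations and from the report's
# "tests of controls" section.

EXPECTED_CONTROLS = {
    "data export tested",
    "termination plan",
    "offsite backup restore",
    "encryption at rest",
    "access reviews",
}

TESTED_CONTROLS = {
    "encryption at rest",
    "access reviews",
    "password complexity",
}

def scope_gaps(expected, tested):
    """Controls we care about that the report's scope quietly left out."""
    return expected - tested

if __name__ == "__main__":
    gaps = scope_gaps(EXPECTED_CONTROLS, TESTED_CONTROLS)
    if gaps:
        print("Possible meta-risk -- expected controls never tested:")
        for control in sorted(gaps):
            print("  - " + control)
    else:
        print("Report scope covers all expected controls.")
```

The point of the sketch is the set difference: whoever pays for the report decides its scope, and what falls outside that scope is exactly where their risk priorities and ours diverge.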


So, is the new risk with AI ethical risk? Or is it another manifestation of meta-risk – the risk of risk itself? 

Or is it risk acceptance? The fact that we are all accepting risk? Is this risk acceptance just another manifestation of what I have been calling “The American Monkey Trap”?

To me, AI helps us realize there is a risk in the way people accept risk. If we want no misinformation, but Facebook wants us to click on ads, it’s Facebook that controls the data integrity controls.  We become enlightened, maybe a bit frightened, but then we accept that risk.

OpenAI can build all the guardrails into GPT it wants, but that doesn’t stop a threat actor from removing those guardrails. Remember that, every time you see a GPT hallucination. They produced the technology – “took it to market” – knowing it was 51% accurate, and that the guardrails could be removed.

And we all accepted that risk.

Original article by Dan Hadaway, CRISC, CISA, CISM. Founder and Information Architect, infotex


Dan’s New Leaf – a fun blog to inspire thought in IT Governance.
