AI to Fight AI

Busting through meta-risk

From one poet to another

We all recently read the news about the executive order on artificial intelligence.

I actually have read a few articles by Joy Buolamwini, who apparently helped the administration write this executive order.  I found her because she refers to herself as the “Poet of Code.”

Her main take is that current AI technology was trained with all the same “isms” that our society has: racism, sexism, ageism, etc.  One point she regularly makes really stuck with me: we can’t rely on a technology to protect us against that same technology, especially when that technology has the potential to become conscious.

I have already written that using AI to protect us from AI reminds me of the days when we started writing scripts to check our scripts.  Not safe.  We needed a human to audit both the original scripts and the scripts checking them.

It’s Russell’s paradox all over again.  In any basic segregation-of-duties or independence practice, you need an external, independent third party to monitor a system in order to trust the monitoring process.  It’s basic risk management.  It’s how we avoid the fox watching the henhouse, yes, but it’s about more than that.  It’s about continuous process improvement, quality control, and the speed of mitigation.

Just like you wouldn’t want the person changing a system to be the same person who watches the system, we can’t rely solely on AI to monitor AI.  We will need red teaming, vulnerability assessments, risk assessments, and third-party audits.

But there’s more we can do.  Especially with a system that is “trained” rather than “written.”

The process of adjusting the weights in a neural network is called weight updating, and it is usually done through backpropagation and gradient descent.  During the training phase, the neural network adjusts its weights and biases to find the values that minimize the error between the predicted output and the actual output.  The weights are numeric values that are multiplied by inputs.
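To make that concrete, here is a minimal sketch of gradient-descent weight updates for a single neuron.  It’s my own illustration in plain Python, with made-up numbers, not anything from a real training framework:

```python
# A minimal sketch of gradient-descent training for a single neuron
# with a squared-error loss.  All names and values are illustrative.

def predict(weights, bias, inputs):
    # The weights are numeric values multiplied by inputs, then summed.
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def update(weights, bias, inputs, target, lr=0.01):
    # Error between the predicted output and the actual output.
    error = predict(weights, bias, inputs) - target
    # Nudge each weight and the bias against the gradient of error**2.
    weights = [w - lr * 2 * error * x for w, x in zip(weights, inputs)]
    bias = bias - lr * 2 * error
    return weights, bias

weights, bias = [0.5, -0.3], 0.0      # starting values
inputs, target = [1.0, 2.0], 1.0      # one training example
for _ in range(100):                  # the "training phase"
    weights, bias = update(weights, bias, inputs, target)
print(predict(weights, bias, inputs))  # now very close to the target, 1.0
```

Even in a toy like this, notice who holds the power: whoever chooses the inputs and the targets chooses what “optimal” means.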

Who controls the inputs?  What strategies do they prioritize?  Do we retrain, and how often?  This relates to the meta-risk I was writing about a few weeks ago.  If AI developers have priorities different from our community’s, we may not be able to fix the “ism” issues with AI.

Joy’s remedy is what she calls “socio-technical controls.”  Among other things, Joy advocates for a feature, built into all AI, that establishes a feedback process.  Real human beings should be able to provide validated feedback that would be considered in the weight updates of continuous retraining.
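Joy doesn’t prescribe an implementation, so treat the following as a hypothetical sketch of what such a feedback feature could look like: feedback from real human beings only reaches retraining after an independent human reviewer validates it.  Every class and function name here is illustrative, not part of any real AI framework.

```python
# A hypothetical sketch of a socio-technical feedback loop: feedback
# from real human beings is validated by an independent reviewer
# before it is allowed to influence the next retraining cycle.
# All classes and functions are illustrative, not a real AI API.

from dataclasses import dataclass, field

@dataclass
class Feedback:
    reporter: str           # a real human being, not another model
    example: str            # the input that produced a bad output
    correction: str         # what the output should have been
    validated: bool = False

@dataclass
class FeedbackQueue:
    pending: list = field(default_factory=list)
    approved: list = field(default_factory=list)

    def submit(self, feedback: Feedback):
        self.pending.append(feedback)

    def review(self, reviewer_approves):
        # Independent human review -- the fox doesn't watch the henhouse.
        for feedback in self.pending:
            feedback.validated = reviewer_approves(feedback)
        self.approved += [f for f in self.pending if f.validated]
        self.pending.clear()

def retrain(model, queue: FeedbackQueue):
    # Only validated corrections feed the next round of weight updates.
    for feedback in queue.approved:
        model.fit(feedback.example, feedback.correction)  # hypothetical API
    queue.approved.clear()
```

The design point is the independence: the reviewer behind review() is a human third party, separate from both the model and the people retraining it.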

I like reading what Joy writes.  If you’ve made it this far in the blog, I hope you check out the articles on her Google Scholar page.

She is indeed a poet, and her remedy rhymes with what we’ve been saying for years: safety will always require gray matter.  And awareness is 9/11ths of the battle.

Original article by Dan Hadaway, CRISC, CISA, CISM, Founder and Information Architect, infotex

Dan’s New Leaf – a fun blog to inspire thought in IT Governance.
