The Trolley Problem

Another Meta Risk

Bolted on, not built in.

It was a tough morning. The latest update to my phone was problematic, making me five minutes late for an important meeting. But I had to apply the update for security reasons. The people in the meeting all forgave me – who has NOT been through such an experience?

Let’s not forget what causes the need for an update. The software has quality problems. The publisher uses us to fix those problems. While we used to be the users, we are now the used.

At least with Microsoft, a process has evolved, centered around the second Tuesday of every month (Patch Tuesday). These Google and Apple updates just slap you whenever it is convenient for them, not you – which today happened to be the moment I picked up my phone.

In the early days of information security, we used to complain about Microsoft’s security approach.   This was even before they came up with Patch Tuesday.

Before Patch Tuesday, we called it “bolting security on, instead of building security in.”  But that was a long time ago.

Long before 2018, when Microsoft finally hired 2,000 security professionals, something had to be done about Microsoft vulnerabilities, which were being leveraged by more and more sophisticated viruses. “Drive-by attack sites” took advantage of Microsoft’s rush-to-market competitive practices, and the legal risk to Microsoft was unclear, even though OS vulnerabilities were involved in most security incidents.

Microsoft eventually escaped all liability for the TRILLIONS of dollars lost in our economy, because Microsoft did not build security into their systems.

Instead of building security in, bolt it on.  Yet Microsoft didn’t bolt it on.

We did.

It’s why WSUS was offered for free. In our ignorance, we all went along with Patch Tuesday – impressed by how WSUS streamlined our security process. But by embracing Patch Tuesday, we strengthened – rewarded – the notion that it’s better to rush products to market than to make them secure before they go to market.

Fast forward 20 years, and the notion is so baked-in, I am complaining that there is no Patch Tuesday for Google or Apple operating systems.

But suffice it to say, the software industry is not held to the same standard as other industries. Can you imagine taking your new car in for a recall once a month? No auto manufacturer would get away with bolting security on instead of building security in.

Until Tesla, who merged information technology with the automobile.


Philippa Foot, a British philosopher, first proposed the Trolley Problem in 1967. Introduced as a genre of decision problems, it has since been analyzed extensively by other philosophers such as Judith Jarvis Thomson, Frances Kamm, and Peter Unger.

The classic version of the Trolley Problem involves a runaway trolley heading towards five workers who will all be killed if the trolley proceeds on its present course.  A decision needs to be made.  The person making the decision stands next to a large switch that can divert the trolley onto a different track.

The only way to save the lives of the five workers is to divert the trolley onto another track – but that track does have one worker on it. If the person diverts the trolley onto the other track, this one worker will die, but the other five workers will be saved.

What should the person at the switch do?

There are many variations of the Trolley Problem, each with its own unique set of circumstances and ethical dilemmas. One variation lowers the number of workers on the main track to two. Is it ethical to purposely kill one person to save two from an accidental death?

I would like to introduce another one of those dilemmas. Say you’re riding in a self-driving Tesla. Suddenly, a Trolley Problem situation arises ahead of you.

Your Tesla COULD swerve to the other side of the highway, knowing that the resulting head-on collision with a solo drunk driver will sacrifice you and your three passengers – but it would prevent a predictable crash with a carload of five people. (Thus, only five people die instead of six.)

What should the car do?

Will my family be able to sue Tesla for its decision?   How about the solo drunk driver?   And what was the probability threshold used in the drunk driver crash?
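
To make that last question concrete, here is a minimal, purely hypothetical sketch of the kind of expected-fatalities arithmetic such a decision implies. Nothing below reflects Tesla’s actual software – the probabilities, counts, and function names are invented for illustration only.

```python
# Purely illustrative sketch – NOT Tesla's actual decision logic, which is not public.
# A crude utilitarian comparison of expected fatalities for the two maneuvers,
# showing where a "probability threshold" would live in such a calculation.

def expected_fatalities(p_crash: float, occupants_at_risk: int) -> float:
    """Expected deaths = probability the crash occurs x people who would die in it."""
    return p_crash * occupants_at_risk

# Option A: stay the course – the predictable crash described above (six deaths).
stay = expected_fatalities(p_crash=0.95, occupants_at_risk=6)

# Option B: swerve – you, your three passengers, and the solo drunk driver (five deaths).
swerve = expected_fatalities(p_crash=0.80, occupants_at_risk=5)

choice = "swerve" if swerve < stay else "stay"
print(f"stay: {stay:.2f} expected deaths, swerve: {swerve:.2f} expected deaths -> {choice}")
```

The point of the sketch is only this: somewhere in the software, a number like p_crash has to cross a threshold before the car chooses to sacrifice its own occupants.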

Are we ready to let AI make life and death decisions for us?

We better be, for it’s already happening.


I remember back in 2007-2008, when we joked, “this phone is not a phone, it’s a computer that can make calls.”   The Tesla is not a car. It’s a computer that can drive.


One of the most frightening things I have read since I started my career in cybersecurity is this memo, signed by most of the leading AI developers, warning us of the risk of AI.

And yet, every one of these companies is continuing with their development. We’re rewarding them for bolting security on instead of building it in every time we marvel at the features of ChatGPT.

When you log into it, it warns you of its risks.  Legal risk mitigation.  The reason they published the memo?  Legal risk mitigation.

The way I read this whiny billionaire memo: these companies are merely protecting their future legal liabilities.   They exploit the new risk I have written about.  I call it “meta-risk.”  The risk to users is not the same as the risk to developers.

Where awareness should be nine-tenths of the battle, the “AI Scares Me” memo actually attempts to use awareness against us. They are literally telling us that they are not going to build security in. And that it scares them.

Some of them subsequently admitted they don’t even know how it works. Worse – neither do their smartest developers. They’re hoping they can bolt something on down the road.

Or better:  use us to bolt it on.

Yes, the CEOs of AI are worried.  Worried that AI will cause huge disruptions.  That we will need to bolt security on somewhere down the road, and we might not be able to find the right size bolts.  Somewhere down the road, these billionaires – or actually, their AI-armed legal processes – will be pointing back to that memo, saying, “we tried to warn you.”


Elon Musk has said that the best way to protect ourselves from AI is to merge with it.

Really?

We rarely had security incidents back when we were all using Novell NetWare as our network operating system. Those who chose to use UNIX servers instead of Microsoft servers still have no UNIX Patch Tuesday.

But Microsoft established that rushing your product to market would be tolerated.   They didn’t release a memo saying they were worried about the trillions of dollars they were going to cost us.   They just rushed their product to market and developed a Patch Tuesday process that we now all follow without even complaining that security was bolted on, not built in.   They rely on us to fix their quality issues.


Do we have a “meta trolley problem?” I’ve referred to AI metaphorically as a high-speed train. On one track we have 300 million American workers, and on the other track, the treasury of a few billionaires.


I shudder, imagining being merged with AI the morning after Tesla’s version of Patch Tuesday. Not only will I be five minutes late for my meeting, but it might not even be the same me showing up to it.

Original article by Dan Hadaway CRISC CISA CISM. Founder and Information Architect, infotex


Dan’s New Leaf – a fun blog to inspire thought in IT Governance.


