Sharpen Your Saw

Embrace and Manage Risk

Your Own AI Routine

Another one of those Dan’s New Leaf Posts, meant to inspire thought about IT Governance . . . .

Those of us who have found the time to integrate large language models (LLMs, like ChatGPT) into our day-to-day practices now recognize the irony in the phrase, “found the time.” We know that the use of AI is a huge timesaver.

When people claim they don’t have the time to learn ChatGPT, I am reminded of the story in Stephen Covey’s book about the man too busy sawing down a tree to stop and sharpen his saw. (Covey had to help him realize that sharpening the saw was worth the time.)

I recently bumped into a friend who was making the same declaration, and after I showed him the LLM app Perplexity, he admitted it was fear more than time that had kept him from learning AI. And that is a good thing, and totally understandable. There is risk in any technology.

Those of us who remember the days when management adopted technologies without even telling their information security people should be quite satisfied that we have everybody sufficiently frightened, at least of AI. But as I have always said about awareness: we must be enlightened, not frightened.

And please know we are working on an AI policy (I just talked with Adam, and we will have it ready by the end of September) because we don’t want our users going out and learning to use AI willy-nilly. We must educate our users, and if you’re not including AI use in your next awareness training session, please reach out to me so I can help you understand the awareness opportunity you are missing.

The risks surrounding LLMs are plentiful, and some are black swans. Ask the lawyer who stupidly relied on GPT hallucinations for his court filings.

I get that many of us learned way back in college to stay away from things that hallucinate. But the benefits of AI are going to far outweigh the risks, especially now that we have the awareness we have always wished for. It is good that our users are procrastinating, but not all of them are, and we need to get out in front of AI ASAP.

I remember when I begrudgingly joined social media, only because I knew I needed to understand it in order to teach people how to be safe from it. We as cybersecurity professionals need to start learning to use generative AI tools like ChatGPT, Perplexity, Canva, and Social Pilot, not only to learn how to teach people safe methods of use, but also for one other really amazing purpose: they will make us better professionals. They will save us tons of time.

Where social media made us less productive, angry, and, frankly, a bit dumber, LLMs can make us more productive and enlightened. People are afraid to use GPT because it’s 51% accurate, yet they still get their news from social media, which is how accurate?

If you follow my podcast, you might have heard the episode where Bill Arnold and I muse over the fact that GPT is actually more accurate than human beings IF you prompt it correctly. And that is the crux: you must become your own prompt artist.

One of your 2024 goals – maybe even your 2024 New Year’s resolution – should be to develop your own personal prompt routine.  I am starting to develop one that looks something like this:

Recognize the Question
Choose the Tool
Develop the Prompt
Provide Context
Ask for Clarifying Questions
Summarize the Context
Condense the Response
Oppugn the Response
Humanize
This is an ever-evolving process, and my responses get better and better. As for safety controls, I’ve already written and talked about this, but they can be reduced to two thoughts: input and output. Never put information about customers or the company into an LLM. Never copy and paste from ChatGPT without knowing 100% that everything you are reading is accurate.
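The input-side control can even be partially automated. Below is a minimal sketch of a pre-submission check; the patterns and the institution name are hypothetical placeholders, not anything from Dan’s actual routine, and a real deployment would need your own identifier list:

```python
import re

# Hypothetical patterns -- substitute identifiers your institution must protect.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # SSN-style numbers
    re.compile(r"\b\d{9,17}\b"),                # account-number-length digit runs
    re.compile(r"acme\s+bank", re.IGNORECASE),  # your institution's name (placeholder)
]

def safe_for_llm(text: str) -> bool:
    """Return True only if no sensitive pattern appears in the prompt text."""
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)
```

A check like this only catches what you teach it to catch, so it supplements, rather than replaces, the human rule of never pasting customer or company information into an LLM.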

But let me dive into each of the headings:


Recognize the Question

I am more curious than most, but I believe that in the course of any day, most people in management are asked questions they don’t completely know the answer to at least half a dozen times.


Choose the Tool

I’m noticing that many times when I would normally go to Google, I now stop to decide: is this one for Google, ChatGPT, or Perplexity? Perplexity is an app I found that uses GPT but can also search the current internet and provide links backing up the information it returns. Which tool I choose is based on whether I will need to define a role.


Develop the Prompt

I like Perplexity because it’s built on a large language model that relies on GPT, but it considers current issues and provides links to webpages backing up the response. Perplexity is not good at taking on roles, though. So sometimes I will use ChatGPT to develop the prompt and then put the actual prompt into Perplexity. This helps me sharpen my prompt, and it simplifies the oppugn stage of my process, which is when I try to prove the response wrong.


Provide Context

Then, before I actually ask the question I’m going to ask, I define as much of the context as I can without providing information about customers or my company. I ALWAYS keep in mind that you can sometimes provide information that isn’t explicitly about your company, but from which any human being could figure out, “you’re talking about ABC Company.”


Ask for Clarifying Questions

At the end of my context, if what I’m asking about is complex, I will add this prompt: “Please ask clarifying questions.” I will not do this in Perplexity, because ChatGPT, having taken on a role, is better at asking clarifying questions.

Responding to the clarifying questions is often a back-and-forth, and you often end up typing your responses somewhere else because you have to refer to each of the questions being asked. Sometimes I will say, “Can you reduce that list of questions to the top three?”
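If you script your routine, this step amounts to assembling the role, the context, and the clarifying-questions instruction into one request. Here is a minimal sketch using the OpenAI-style chat message format; the function name and exact wording are my own illustration, not Dan’s:

```python
def build_clarifying_prompt(role: str, context: str, question: str) -> list:
    """Assemble an OpenAI-style message list that defines a role, supplies
    sanitized context, and asks the model for clarifying questions first."""
    return [
        {"role": "system", "content": role},
        {"role": "user", "content": (
            f"{context}\n\n{question}\n\n"
            "Before answering, please ask clarifying questions. "
            "If there are many, reduce the list to the top three."
        )},
    ]
```

The returned list can be passed to any chat-completion endpoint that accepts the `messages` format; the point is that the role, context, and clarifying-questions instruction travel together every time.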


Summarize the Context

Once all of the clarifying questions have been worked out, I ask ChatGPT to summarize the context. I then take that summary, along with the question, to Perplexity.


Condense the Response

Both ChatGPT and Perplexity can be more verbose than Dan Hadaway himself. I will often say, “Can you summarize what you just sent back in 10 sentences or fewer?”


Oppugn the Response

I will then carefully try to prove anything in those 10 sentences wrong. Know that just because Perplexity provides a link, it doesn’t mean the underlying article was written accurately, or by an author who knew the truth.

Update the time-proven adage: “Being on the Internet doesn’t make it true, even if AI found it.” 
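The oppugn step can also be handed to a second model, or to a fresh session with no memory of the first answer. A minimal sketch of a falsification prompt; the wording is my own illustration of the idea, not Dan’s exact phrasing:

```python
def build_oppugn_prompt(summary: str) -> str:
    """Build a prompt asking a fresh model session to attack a summary:
    identify each claim and what evidence would disprove it."""
    return (
        "Here is a summary I received from another AI:\n\n"
        f"{summary}\n\n"
        "Try to prove it wrong. For each claim, state what would have to be "
        "true for the claim to be false, and cite sources I can check."
    )
```

Asking for the conditions under which each claim would be false, plus checkable sources, gives you concrete things to verify yourself rather than a second opinion to take on faith.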


Humanize

If I am planning to use the information I’ve brought back in any of my writing, I will then humanize it.

This article is merely meant to expose you to the fact that you need to develop your own personal routine and stop procrastinating out of fear. Don’t use my routine; use your own. For one, mine is still in development, and for two, I rarely stick to it exactly.

Mark Twain once said, “I’ve had a lot of worries in my life, most of which never happened.” If you still haven’t found the time to save yourself a whole bunch of time, your fear comes from a good place. But, as usual with procrastination, you will find that fear is merely an acronym for False Evidence Appearing Real.

Just be sure you have your Risk Manager hat on when you start exploring.

Original article by Dan Hadaway CRISC CISA CISM. Founder and Information Architect, infotex

Dan’s New Leaf – a fun blog to inspire thought in IT Governance.

