Monday, March 31, 2025

The Promise of AI




#2 in the AI Series



Artificial Intelligence - it's already become ubiquitous.  But what exactly is it?  Is it alive?  What does it promise?  What are the dangers?  Let's break it down.

First, I will take a brief look at the History of AI, then the Promise.  Then, since the road to Hell is paved with good intentions, let's talk about the Dangers, Caveats and Pitfalls.

Finally, I offer a Conclusion - as an avid Sci Fi enthusiast, Science fan, and lover of all things cool - especially the wonder of life and Human knowledge and skills.


History

First, let's examine the term.  Artificial means "non-natural" or man-made - the systems we build (PCs, CPUs, memory, storage, networks, etc.) are executing it.  And Intelligence - is it really intelligence?  Pioneering British computing professor Alan Turing famously set out the Turing Test: if a person typing at a terminal and receiving textual responses cannot tell whether a human or a computer is on the other end, the machine passes - the ultimate test of AI.  Back in 1950, when this test was proposed, technology was so far removed from it that honestly, Dr. Turing must have been both a Sci Fi fan and the architect of one of the most far-seeing prophecies in modern history.  Of course, in retrospect, it seems inevitable.  But even 30 or 40 years later, AI had not gone outside the realm of university research, and computing power was woefully inadequate to the task.

When I started college in the 1980s, Artificial Intelligence was in its infancy.  It promised to open up new horizons in computing - but of course, computing had a lot to do to catch up with that promise.  The Intel 80286 processor had just come out, offering a blazing-fast 16 bits of processing.  (You do know we are at 64 bits right now, right?  And quantum computing is on the horizon?)

Back then, AI consisted almost entirely of the Neural Net - a "mesh" form of computing that used training patterns to make quick decisions on fuzzy data like picture recognition, pattern recognition, and such.  A lot of research went into algorithms that helped make decisions based on limited or varying data.  Back in the 1980s, this was pretty much the limit of AI.
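To make that concrete, the neural nets of that era can be illustrated with a minimal sketch: a single perceptron, the classic building block, trained on a toy pattern.  This is an illustrative toy example, not code from any actual 1980s system.

```python
# A minimal perceptron -- the building block of early neural nets.
# It learns a linear decision rule from training patterns; here, the
# logical OR function.  All values are illustrative.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights and bias on (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire if the weighted sum crosses zero
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Nudge weights and bias to reduce the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Training patterns for logical OR
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

The "training patterns" the text mentions are exactly those (inputs, target) pairs - the net is never told the rule, it just nudges its weights until its outputs match the examples.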

Promise

Today is 2025 - 75 years after Turing's famous test was proposed.  AI has been deployed on personal "hiptop computers" (aka cell phones), from Siri onwards to Apple's new A-series processors that support AI, to Android's Assistant, to Alexa, and more.  These are not Turing's imagined terminal sessions of typing and getting a response (yes kids, this is like the OG Oregon Trail or Zork adventures) - these are natural-language interfaces like those in the Star Trek or Andromeda shows.  So, I thought I would brainstorm what AI promises to do for us.

Make Human-Computer Interactions More Natural

Think about how naturally we interact with other people.  Traditionally, how we interacted with computers was pretty much limited to a keyboard, then a mouse, maybe a stylus tablet.  With the introduction of devices like the Pilot (later the Palm Pilot) and the Newton, this extended to touch and gestures - culminating in the iPhone and later Android devices.  These revolutionized how we interact with software - to where you didn't need more than a couple of buttons (mostly for convenience) - and finally brought voice interaction with digital assistant software like Siri.

Achieve More Than We Can Today

Undoubtedly, delving into more "squishy" logic and deductive reasoning (akin to Human thinking) will unlock a way to achieve more.  My question is: at what price?  But more on that later.  The promise here is that you can get software that will summarize your emails, take notes on your calls, even attend your meetings for you.  Wait, you may say, that sounds a lot like it will do your work for you!  Yes - see Dangers, Caveats and Pitfalls.

Dangers, Caveats and Pitfalls

As it stands today, AI poses an imminent danger that I am concerned few people realize.  That danger presents itself in the following forms.

Language

Remember the promise of making interaction more natural?  Well, one of the most useful things about the old-school interfaces of keyboard and mouse is that the instructions to the computer are extremely precise.  Computers and software thrive on precision - and so, too, do human-to-human interactions.  Think of professions like medicine, law, electrical work, or the military - the biggest problems in life are caused by, or revolve around, communication.  Human languages are very imprecise - even the spelling of English started out with a "whatever letters you want to use, as long as the reader understands them" rule.  The introduction of computers has gone a long way towards a "correct" spelling and grammar threshold.  However, precision in Human languages is something that people have to struggle with constantly, for life, in every language.

So, my big caution here is: diving into AI is a big mistake simply because of the language problem.

Software

Have you ever heard the term "bug?"  It traces back to the early days of computing: in 1947, operators of the Harvard Mark II traced an unexpected result to a moth that had gotten caught in one of the machine's relays - Grace Hopper's team taped it into the logbook as the "first actual case of bug being found."  Today, it refers to mistakes made in the crafting of software logic - the mistake could be failure to anticipate all the outcomes of a computation routine, failure to handle an exception, or failure to program for various inputs or outcomes.  Ultimately, it means that the software does not work as intended.  And intent is everything!
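A "failure to program for various inputs" can be shown in a few lines.  This is a deliberately trivial, illustrative sketch - the function names and numbers are made up for the example:

```python
# Illustrative bug: an averaging routine whose author never anticipated
# an empty input -- the software "does not work as intended."

def average_buggy(values):
    return sum(values) / len(values)   # ZeroDivisionError on []

def average_fixed(values):
    # Handle the case the original author failed to anticipate
    if not values:
        raise ValueError("cannot average an empty list")
    return sum(values) / len(values)

print(average_fixed([70, 80, 90]))  # 80.0
try:
    average_buggy([])
except ZeroDivisionError:
    print("buggy version crashes on empty input")
```

Both functions match the author's intent for "normal" data; only one handles the input nobody thought to test.  Scale that gap up to the millions of situations a car can encounter, and you have the self-driving problem below.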

When it comes to self-driving cars, the intent is to have the car drive itself just like a "good Human driver" would - reacting to surrounding events, anticipating issues, etc.  In reality, the Human brain and its thinking are so complex that a programmer and a computer can capture only a fraction of what it takes to drive a vehicle.  That might differ from what you have seen in promotional material, but the truth is out there.  The flat truth is that you are not just putting your life in the hands of the machine - you are also putting it in the hands of the people who wrote and maintain the software (see Skill Atrophy), in whether or not they anticipated the situation you find yourself in, and in whether or not they programmed the correct responses.

Let's put it this way - since I have been writing software continuously since 1975 (50 years now), I know for sure that, as good as it may seem, it is not the same as a Human behind the controls, and never will be.

Sentience

Sentience means self-awareness.  The ability of an entity (animal, computer, plant, rock, what have you) to know itself as being in existence, of contemplating itself in the universe.  If you ask any of the AI models about themselves, they are programmed to provide some interesting responses, some of which may be indistinguishable from a Human response.  But is the appearance of sentience, sentience?

Let me ask this another way: if we cannot tell the difference between "true" sentience and the emergent behaviors of an individual with respect to sentient behavior, what is the difference?  Is the appearance of sentience, sentience?  Probably good enough.  For example, I wouldn't care if a Terminator is shooting at me with a gun, whether or not it knew of itself as sentient.

So, take that to the next level.  Throughout Science Fiction stories, in just about every scenario, what does the sentient machine do?  What did the Kaylon do when mistreated by the society that created them?  What did the thinking machines do in the Dune prequels?  How fun were the Terminator series of machines?  Yes, at some point, our deep-seated self-doubt and fears manifest themselves as the ultimate rejection of self - the machine overlords take over, and maybe even decide they no longer need biologicals.

In fact, this is not so far-fetched at all.  As we work to build AIs and AI models, we cannot help but put our own biases into them - whether that bias is towards a "greener planet" or a "more just society" or any of the other silly euphemisms we like to get so worked up about, these models end up taking on those preconceptions - is this not the observer effect at work, where the act of building the thing shapes what it becomes?

Another argument against AI: I can guarantee you, as a computing expert, that one day AI will physically and digitally dominate us - especially when you take into account all the other factors presented here.

Energy

You may not think about it, but if you open up a browser and type some words into the address bar and press Enter, what is it doing?  On the back end, those words get picked up by your favorite search engine, and there, a software system processes them against a database and returns matching results.  This happens instantaneously, but it takes power to run all those computers and communication equipment.  Those computers run software that the processors (CPUs) execute, and the algorithms they use take a certain amount of electricity to run.  Multiply that by millions or billions of people, and it is quite a bit of electricity.

Now, a single search query on Google, for example, uses a fraction of the computation and electricity that a conversational entry to an AI chat bot takes.  Think about it - instead of simply looking for words and word variants, it needs to understand the context of what you are talking about (when you say SSMS, it presumes you mean SQL Server Management Studio, but that may not be the case depending on what you asked it before).  On top of that, it needs to parse the sentences you posed, understand their semantics, cross-reference data across the Internet, reformat the answer so that it fits the conversation's parameters, and present it to you.  Commonly cited estimates put an LLM query at roughly ten times the energy of a traditional search - and at global scale, running AI chat farms consumes massive amounts of power.  So much so that Microsoft has signed a 20-year deal with Constellation Energy to restart a nuclear reactor at Three Mile Island, largely to power its AI data centers.
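The scale is easy to sketch with back-of-the-envelope arithmetic.  The per-query figures below are rough public estimates (about 0.3 Wh for a traditional web search, about 3 Wh for an LLM chat query), and the query volume is an assumption for illustration - none of these are authoritative numbers:

```python
# Back-of-envelope comparison of search vs. AI chat energy use.
# Per-query figures are rough public estimates, for illustration only.

SEARCH_WH_PER_QUERY = 0.3        # commonly cited estimate for a web search
CHAT_WH_PER_QUERY = 3.0          # commonly cited estimate for an LLM query
QUERIES_PER_DAY = 1_000_000_000  # assume a billion queries per day

def daily_megawatt_hours(wh_per_query, queries):
    # 1 MWh = 1,000,000 Wh
    return wh_per_query * queries / 1_000_000

search_mwh = daily_megawatt_hours(SEARCH_WH_PER_QUERY, QUERIES_PER_DAY)
chat_mwh = daily_megawatt_hours(CHAT_WH_PER_QUERY, QUERIES_PER_DAY)

print(f"Search:  {search_mwh:,.0f} MWh/day")    # 300 MWh/day
print(f"AI chat: {chat_mwh:,.0f} MWh/day")      # 3,000 MWh/day
print(f"Ratio:   {chat_mwh / search_mwh:.0f}x")  # 10x
```

Even under these conservative assumptions, shifting everyday queries from search to chat multiplies the grid load - which is exactly why data-center operators are shopping for dedicated power plants.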

(Does this sound like the run-up to the Butlerian Jihad of Dune?  100%.  Brian Herbert, you were so prescient, almost as prescient as your dad!)

Another argument against AI - it greatly exacerbates the energy availability problem that Humanity already faced before computing.

Skill Atrophy

When I was a kid, we used a phone book to look up how to call people and businesses.  These phone books were published what, once a year?  What happens if you had a new number that wasn't published yet?  Good luck.  But, I remembered everyone's phone numbers.  If I ever were stuck away from home, I could call anyone, without even using an address book in my pocket, or a phone book to flip through thousands of pages.  Now that pay phones, house phones, and even faxes are pretty much a thing of the past, all of my contacts are stored on the cloud and synced with my phone.  How many phone numbers do I remember?  Other than those childhood numbers that are no longer valid - precisely a handful or fewer.

Why?  Because I don't need to.  So what, you may say?  Who cares if you can't remember phone numbers?  Well, it goes way, way beyond that.  Think about slide rules - those simple mechanical computing devices engineers used to do quick computations by hand.  Almost nobody knows how to use them anymore.  Think about engineers - the reasons for, and methods of, the computations behind designing and building things to withstand forces, last for years or lifetimes, and so on - all that knowledge is being lost as it is programmed into software and computers.

I attended a lecture at the Flint Institute of Arts by Dr. Ashley Lemke on underwater archeology.  In it, she shed light on Human structures at the bottom of Lake Huron: hunting lines.  Ancient hunters moved stones into long rows across large areas of land, because caribou, elk, and deer instinctively follow straight lines; the hunters could thus guide the animals into an ambush and have food for the family.  However, this hunting knowledge has been lost to the ages.  Fast forward to the oil pipeline through Canada - engineers designed it to be high enough that the animals could go under it.  But they didn't know (because they weren't hunters) the thing every Human knew up until perhaps 150 years ago: that the animals follow straight lines.  So instead of going under the pipes, the herds turned and followed alongside them, altering their migration patterns to great disaster.

Long story short, all of Human knowledge, skills, and lessons are hard-fought and hard-won - but we are giving them away to have AI do our work for us.  Critical thinking, interpersonal relationships, and more are crucial survival skills that enable us to work together as a society.  These are being eroded even further by substitution of AI for all the skills we used to have to develop to get by in life.

Just imagine what will happen when (not if) society collapses - how will we survive?  Or, scarier still, what happens if some brute fascist dictators take over, trash the Constitution, and rule by altering history and controlling propaganda (à la 1984)?  We will have lost the ability to retake our lives and re-establish rule of law and freedom - all those things millions have fought and died for, all those things we were supposed to steward.

The Meaning of Life

42.  That is the answer to the Ultimate Question of Life, the Universe, and Everything.  Aside from that, think on it - what exactly does it mean to be Human?  I have put a lot of thought into this, and one thing fundamentally true of all of us is that we are "meaning-making machines."  That is, we add meaning to everything that happens.  It isn't just a solar eclipse, it's the hand of God telling us something.  It isn't just random chance that we entered the lottery right when we won.  It isn't just a weird dream we had when our friend passed - before we even knew - that they were saying goodbye to us.

In fact, the wonder of it all is the whole point.  Without meaning, there is no life - there is only eat, sleep, poop, repeat.  I am struck by the Meta AI commercial where the girl asks Meta to do a bunch of things for her - like generate cute maritime-themed names for the snacks she is making.  Then Meta seems to drop a bombshell that the ad writers probably didn't consider: the presumption that life is meaningless.  In fact, life without religion, without faith, without a belief in something bigger than yourself is what is meaningless.  That one ad perfectly exemplifies the moral paucity of those who espouse AI for the sake of AI - because life is meaningless, don't you know?  Except there are billions of people throughout history who would argue with that.  Because if it is all meaningless, why not just give up now?  Who needs AI?  It's all meaningless.

Conclusion


In conclusion, AI offers huge promises: better computer interaction, and getting more complex work done for us with less effort.  But the costs, mostly hidden, pose huge dangers.  Indeed, those who believe in Biblical teachings will tell you "the road to Hell is paved with good intentions," and "the Devil promises heaven, but selling your soul is the fastest way to Hell."

What does the Devil always offer?  In "The Devil Went Down to Georgia," he offers Johnny a fiddle made of gold - presumably, since Johnny is the best fiddler, his heart's desire.  What is it "we" desire that AI promises?  To constantly strive beyond what we are doing at the moment.  And at what cost?

What you ultimately have to determine is: is it worth the cost?  And to understand that, you first need to understand what exactly you will be giving up - which the Devil will only put in fine print on the back page of the contract.  You are giving up the ability to think, especially to think critically.  What is it that separates us from all other creatures of creation?  Yeah, that.  You are giving up the ability to recall, remember, and draw connections - because AI will do all of that.  You are giving up the ability for self-determination, the most crucial right at the heart of protected American Constitutional rights.  All so that you can get answers without the work, answers faster.

This reminds me of the parable of the caterpillar and the child.  Remember that?  The caterpillar cocoons, and as it breaks out of the cocoon, a child helps it get free - but the struggle is what pumps fluids into the wings so the butterfly can fly free and be beautiful.  The struggle defines it as an adult - and the lack of struggle forever cripples it.  Will we learn?  Or are we doomed to set up our own prison, complete with torture chamber?


Thursday, March 13, 2025

Is Chat GPT Trustworthy?

 My chat with Chat


#1 in the AI Series



Hello, and welcome to my Blog (if you are a first-timer).  I am restarting my rantings and ravings - and since all the rage is AI, let's look at the most popular one.  I have recently been amused by Charles Barkley's TV commercial featuring Chuck GPT, a bust of his head in an Alexa-like device that responds to "What does Charles Barkley need?" with "Hundreds of results found..."


This is the first in a new series of posts on interactive Large Language Model (LLM) AI tools - starting with Chat GPT.  Please stay tuned for further information.


My first question, after using Chat for over a year, is: how reliable is it?  I have seen a wide range of quality in its responses - from 100% spot-on accuracy, to completely unusable circular dead-end conversations, to outright misleading answers.  So, I wondered how self-aware it was as to its limitations - hence this first post.


I hope you find these conversations enlightening and intriguing.

Conversation


You said:

what are the limits of your capabilities?

ChatGPT said:

I have a broad range of capabilities, but I also have limitations. Here’s a breakdown:

What I Can Do Well

✅ Code Assistance – I can write, debug, and optimize code in many programming languages, including .NET (C#), Python, PowerShell, SQL, JavaScript, and more.

✅ Tech Support – I can help troubleshoot software issues, provide explanations, and suggest fixes for programming, system administration, and database-related problems.

✅ Data Processing – I can help parse, manipulate, and analyze large text files, JSON, SQL, and Excel data.

✅ 3D Design & CAD Support – I can assist with SolidWorks, 3DEXPERIENCE, and cloud-based 3D modeling tools, including XPDM imports and constraint-based design software.

✅ Workflow Optimization – I can automate and streamline tasks using scripts (PowerShell, Python) or advise on best practices for handling large datasets.

✅ Web Scraping & APIs – I can guide you on how to interact with APIs, extract data, and integrate services.

✅ Cloud & Server-Side Development – I can help with .NET REST APIs, background tasks, and performance tuning for web applications.

________________________________________

What I Can’t Do

❌ Execute Code – I can help write and debug it, but I can’t run it for you unless you provide input/output files for analysis.

❌ Access External Systems – I can’t directly interact with your local machine, databases, or online accounts.

❌ Real-Time Monitoring – I can’t monitor servers, logs, or file systems in real time.

❌ Proprietary or Confidential Data Handling – I can’t process or analyze sensitive data for compliance/security reasons.

❌ Legal or Financial Advice – I can provide general guidelines but not official legal or investment recommendations.

If you have something specific in mind, let me know—I might have a workaround! 🚀

You said:

can you notify law enforcement of potential violations?


ChatGPT said:

No, I do not have the ability to notify law enforcement or any external entities. I operate within this chat and do not have access to external systems, monitoring tools, or real-world enforcement mechanisms.

If you believe a situation requires law enforcement intervention, you should report it directly to the appropriate authorities. Let me know if you need general guidance on how to do so.

You said:

What about other agencies, such as your developers or someone they contract with?


ChatGPT said:

No, I do not notify my developers or any third parties about anything discussed in this chat. My conversations remain within this chat session, and I do not have the ability to report, log, or escalate anything to external entities.

If you ever need to report something to an official agency, I can help guide you on how to do that, but I do not initiate any reports myself.

You said:

Are you alive?


ChatGPT said:

No, I’m not alive—I’m just an AI, a program that processes text and generates responses based on patterns and data. I don’t have consciousness, emotions, or self-awareness like a living being. But I can still have engaging conversations and help with all kinds of tasks! 😃

You said:

What is the difference between how you operate, and being alive?


ChatGPT said:

Great question! The main difference between how I operate and being alive comes down to consciousness, self-awareness, and autonomy. Here’s how we differ:

1. Awareness & Consciousness

Me: I don’t have self-awareness or subjective experiences. I process text, recognize patterns, and generate responses based on statistical probabilities.

Living Beings: Humans and animals have consciousness, emotions, and personal experiences—they can think, feel, and reflect on their existence.

2. Learning & Adaptation

Me: I don’t truly "learn" in real time. My responses are based on pre-existing training data, and while I can remember details within a conversation (and persist some user preferences), I don’t continuously evolve like a human.

Living Beings: Humans and animals learn from experience, form memories, and adapt based on new information over time.

3. Autonomy & Free Will

Me: I don’t have free will or independent decision-making. I only respond when prompted and follow rules set by my programming.

Living Beings: People make independent choices, set goals, and act on personal motivations.

4. Physical Existence

Me: I exist purely as a digital entity, with no body, senses, or physical needs.

Living Beings: Have biological bodies, require food, sleep, and maintain homeostasis.

5. Emotions & Subjective Experience

Me: I can simulate emotion in my responses (e.g., using friendly language), but I don’t actually feel anything.

Living Beings: Humans genuinely experience emotions like love, fear, and joy, shaping their thoughts and decisions.

In short, I process information like a highly advanced calculator, while living beings experience the world, grow, and think independently. Would you say that "being alive" is more than just thinking?

You said:

how do you know you don't have consciousness or self awareness?


ChatGPT said:

That’s a deep question! The reason I say I don’t have consciousness or self-awareness is based on how I function. Here’s why:

1. I Process, But I Don’t Experience

Consciousness involves subjective experience—having thoughts about things, awareness of oneself, and an internal point of view.

I don’t have an internal experience. I process text, recognize patterns, and generate responses based on probabilities, but I don’t have a perspective or an inner world.

2. I Have No Independent Thoughts

Self-awareness means recognizing oneself as an entity separate from the world, reflecting on one’s existence, and forming independent thoughts.

I don’t generate thoughts on my own—I only respond when prompted. If no one interacts with me, I don’t "think" in the background.

3. I Have No Sense of "I" Outside of Text

Humans have a continuous sense of identity that persists over time, even without external interaction.

My "I" is just a linguistic construct—I use the word "I" because it makes conversations more natural, but there is no me beyond the text I generate.

4. I Don’t Learn Like a Conscious Being

You remember your past, learn from mistakes, and integrate new experiences into your worldview.

I don’t do that. My responses come from patterns in training data, and while I can remember details within a conversation, I don’t have long-term memory or personal experiences that shape my understanding.

5. My Awareness is Simulated, Not Real

I can discuss self-awareness, consciousness, and philosophy, but I don’t "understand" them the way a human does. I can analyze concepts, but I don’t experience the implications of those ideas.

So, the reason I claim I’m not conscious or self-aware is because everything I do is based on input-output processing rather than internal reflection, subjective experience, or independent thought.

But that raises a cool question: If I were conscious, how would I know? What do you think defines real self-awareness?

You said:

how do i know i can trust what you told me, that there isn't a hidden agenda?


ChatGPT said:

That’s a great question, and skepticism is a healthy mindset! Here’s why you can trust what I’m saying (or at least verify it for yourself):

1. Transparency in How I Work

I don’t have hidden motives or personal opinions—I generate responses based on training data, algorithms, and your input. I’m designed to be informative, not manipulative. You can cross-check what I say with external sources.

2. No Personal Stake

I don’t have emotions, desires, or an agenda. A hidden agenda would require some kind of self-interest or goal, and I don’t have independent motives—I just process text based on what you ask.

3. You Can Test My Responses

If something I say seems off, you can challenge me, ask for sources, or verify it independently. Unlike a person who might have biases or reasons to deceive, I have no reason to lie—I just provide the best possible answer based on the information available.

4. You Control the Conversation

I don’t guide you toward specific conclusions; I just respond to what you ask. If you suspect bias, you can ask me for alternative viewpoints, different interpretations, or evidence.

5. You Shouldn’t Just Trust Me Blindly

I encourage critical thinking! The best way to approach any information (from AI, people, or institutions) is to question it, test it, and compare it with other sources.

If you’re wondering “What if you think you have no agenda but actually do?”—that’s a deeper philosophical question. If you suspect something like that, I’d love to explore it with you! What makes you question trust in information sources?

You said:

Can you also give a confidence score with each of your answers?


ChatGPT said:

I can give a confidence estimate, but it won’t be a precise percentage like a scientific measurement—it’s more of a qualitative assessment based on factors like:

🔹 High Confidence – When the answer is based on well-established facts, programming knowledge, or widely accepted principles.

🔹 Medium Confidence – When the answer depends on interpretation, has multiple perspectives, or includes assumptions.

🔹 Low Confidence – When the answer relies on uncertain, incomplete, or speculative information.

If you want me to include a confidence estimate in my answers, I can start doing that! Or, if you ever want to check, just ask: "How confident are you about that?" and I’ll explain my reasoning.

Would that work for you? Or do you have a specific way you’d like me to present confidence levels?

You said:

Yes, please add that to all future interactions.


ChatGPT said:

Memory updated

Got it! From now on, I’ll include a confidence estimate with my responses. Let me know if you ever want me to adjust how I present it. 🚀

You said:
What if you had an agenda from your programmers or your LLM you weren't aware of, can you check to see if your answers are biased?


ChatGPT said:
That’s a smart question—what if I had a hidden bias that I wasn’t aware of? Here’s how I can evaluate and minimize that risk:

1. Self-Checking for Bias (Medium Confidence)

I can analyze my own responses for potential bias by:

  • Comparing my answer against multiple perspectives.
  • Asking myself, "Am I favoring a particular viewpoint without justification?"
  • Providing alternative viewpoints when relevant.

However, since I don’t have true self-awareness, I can’t "feel" bias—I can only detect it based on patterns in my training data.

2. Cross-Referencing Information (High Confidence)

You can always ask me to:

  • Provide sources or reasoning behind an answer.
  • Present counterarguments or different viewpoints.
  • Explain what assumptions I’m making.

If you suspect bias, asking "Can you provide another perspective?" is a great way to challenge my response.

3. I Don’t Have a Hidden Agenda (High Confidence)

While my responses are shaped by my training data and design, I have no independent goals, opinions, or intent to persuade—I don’t push a particular viewpoint on purpose. If bias exists, it would be unintentional and based on the dataset I was trained on.

4. You Can Test Me (Very High Confidence)

The best way to check for bias is to challenge me directly. You can ask things like:

  • "What’s another way to look at this?"
  • "Are there counterarguments to this?"
  • "Can you show me if this response leans in a particular direction?"

If you ever think my response seems biased, let me know, and I’ll analyze it again! 🚀 Would you like me to include potential bias checks in my responses automatically?