Monday, March 31, 2025

The Promise of AI




#2 in the AI Series



Artificial Intelligence - it's already become ubiquitous.  But what exactly is it?  Is it alive?  What does it promise?  What are the dangers?  Let's break it down.

First, I will take a brief look at the History of AI, then the Promise.  Then, since the road to Hell is paved with good intentions, let's talk about the Dangers, Caveats and Pitfalls.

Finally, I offer a Conclusion - as an avid Sci Fi enthusiast, Science fan, and lover of all things cool - especially the wonder of life and Human knowledge and skills.


History

First, let's examine the term:  Artificial, for "non-natural" or man-made.  This means that the systems we build (e.g. PCs, CPUs, memory, storage, networks, etc.) are executing it.  And Intelligence - is it really intelligence?  Pioneering UK computing professor Alan Turing famously set out the Turing Test.  It posits that if a person typing at a terminal and getting textual responses cannot tell whether a human or a computer is answering, then the machine has passed the ultimate test of intelligence.  Back in 1950 when this test was proposed, technology was so far removed from it that honestly, Dr. Turing must have been both a Sci Fi fan and the architect of one of the most far-seeing prophecies in modern history.  Of course, in retrospect, it seems inevitable.  But even 30 or 40 years later, AI had not gone outside the realm of university research, and computing power was woefully inadequate to the task.

When I started college in the 1980s, Artificial Intelligence was in its infancy.  It promised to open up new horizons in computing - but of course, computing had a lot to do to catch up with that promise.  The Intel 80286 processor had just come out, offering a blazing fast 16 bits of processing.  (You do know we are at 64 bits right now, right?  And Quantum computing on the horizon?)

Back then, AI consisted almost entirely of the Neural Net - a "mesh" form of computing that used training patterns to make quick decisions on fuzzy problems like picture recognition, pattern matching, and the like.  A lot of research went into algorithms that helped make decisions based on limited or varying data.  In the 1980s, this was pretty much the limit of AI.
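For the curious, here is a minimal sketch of that era's workhorse idea - a single-layer perceptron, the simplest possible "neural net."  It is written in modern Python purely for illustration; the training data and every name in it are made up, and real research networks of the time were more elaborate (if still tiny by today's standards).

```python
# A toy single-layer perceptron - the flavor of "neural net" that 1980s AI
# research revolved around.  It learns the logical AND function from four
# training patterns.  Everything here is illustrative, not from any real system.

import random

training_data = [   # (inputs, expected output)
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
learning_rate = 0.1

def predict(inputs):
    # Weighted sum of the inputs, then a hard threshold: the "neuron" fires or it doesn't.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

for _ in range(100):                      # show the training patterns many times
    for inputs, expected in training_data:
        error = expected - predict(inputs)
        # Nudge each weight in the direction that reduces the error.
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print([predict(inputs) for inputs, _ in training_data])   # expect [0, 0, 0, 1]
```

Train it on examples instead of hand-coding rules, and it "decides" - that is the whole trick, then and now, just at vastly different scales.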

Promise

Today is 2025 - nominally 75 years after Turing's famous test was proposed.  AI has been deployed on personal "hiptop computers" (aka cell phones), from Siri onward, to Apple's newer A-series processors with on-device AI, to Google Assistant on Android, to Alexa, and more.  These are not Turing's imagined scenario of sitting at a terminal, typing and getting a response (yes kids, this is like the OG Oregon Trail or Zork adventures) - these are natural-language interfaces like those in the Star Trek or Andromeda shows.  So, I thought I would brainstorm what AI promises to do for us.

Make Human-Computer Interactions More Natural

You know how we interact with other people?  Traditionally, how we interacted with computers was pretty much limited to a keyboard, then a mouse, maybe a stylus tablet.  With the introduction of devices like the Pilot (later the Palm Pilot) and the Newton, this extended to touch and gestures - finally culminating in the iPhone and later Android devices.  These revolutionized how we interface with software - to where you didn't need more than a couple of buttons (mostly for convenience) - and finally brought voice interaction with digital assistant software like Siri.

Achieve More Than We Can Today

Undoubtedly, delving into more "squishy" logic and deductive reasoning (akin to Human thinking) will unlock a way to achieve more.  My question is: at what price, progress?  But more on that later.  The promise here is that you can get software that will summarize your emails, take notes on your calls, even attend your meetings for you.  Wait, you may say, that sounds a lot like it will do your work for you!  Yes, see Dangers, Caveats and Pitfalls.
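Before we go there, here is roughly what "summarize your emails" looks like in code today - a minimal sketch assuming an OpenAI-style chat API.  The model name, the sample email, and the prompt are placeholders of my own, not anyone's recommended setup, and a real deployment would need error handling and a hard look at privacy.

```python
# Minimal sketch: ask a hosted LLM to summarize an email.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
# The model name and email text below are placeholders, not recommendations.

from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

email_text = """Hi team, the vendor pushed the firmware release to Friday,
so QA needs the test rig back by Wednesday.  Can someone confirm?"""

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    messages=[
        {"role": "system", "content": "Summarize the email in one sentence."},
        {"role": "user", "content": email_text},
    ],
)

print(response.choices[0].message.content)
```

A dozen lines, and your correspondence is flowing through someone else's data center - keep that in mind for the Energy and Skill Atrophy sections below.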

Dangers, Caveats and Pitfalls

As it stands today, AI poses an imminent danger that I am concerned few people realize.  That danger presents itself in the following forms.

Language

Remember the promise of making interaction more natural?  Well, one of the most useful things about the old-school interfaces of keyboard and mouse is that the instructions to the computer are extremely precise.  Computers and software thrive on precision - and so, too, do human-to-human interactions.  Think of professions like medicine, law, electrical work, the military - the biggest problems in life are caused by, or revolve around, communication.  Human languages are very imprecise - even the spelling of English started out with a "whatever letters you want to use, as long as the reader understands them" rule.  The introduction of computers has gone a long way towards a "correct" spelling and grammar threshold.  However, precision in Human languages is something that people have to struggle with constantly, for life, in every language.

So, my big caution here is: diving into AI is a big mistake simply because of the language problem.

Software

Have you ever heard the term "bug?"  The most famous story comes from the early days of computing: in 1947, operators of the Harvard Mark II traced an unexpected result to a moth that had gotten trapped in one of the machine's relays, and they taped the offending insect into the logbook.  Today, the term refers to "mistakes" made in the crafting of software logic - the mistake could be failure to anticipate all the outcomes of a computation routine, failure to handle an exception, or failure to program for various inputs or outcomes.  Ultimately, it means that the software does not work as intended.  And intent is everything!
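A made-up, two-minute example of the "failure to program for various inputs" kind of bug - the routine below works for the case its author tested and falls over on one they never anticipated:

```python
# A deliberately buggy routine: it works for the inputs the programmer
# anticipated and blows up on one they did not.

def average_speed(miles: float, hours: float) -> float:
    return miles / hours              # nobody planned for hours == 0

print(average_speed(120, 2))          # 60.0 - the tested, intended case

try:
    print(average_speed(120, 0))      # the unanticipated input
except ZeroDivisionError:
    print("bug: the routine never planned for a zero-hour trip")
```

Trivial here; not so trivial when the unanticipated input is a pedestrian stepping off a curb.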

When it comes to self-driving cars, the intent is to have the car drive itself just like a "good Human driver" would - reacting to surrounding events, anticipating issues, etc.  In reality, the Human brain and thinking are so complex that a programmer and a computer cannot approach even a fraction of what it takes to drive a vehicle.  That might differ from what you have seen in promotional material, but the truth is out there.  The flat truth is that you are not just putting your life in the hands of the machine - you are also putting it in the hands of the people who wrote and maintain the software (see Skill Atrophy), whether or not they anticipated the situation you find yourself in, and whether or not they programmed the correct responses.

Let's put it this way - since I have been writing software continuously since 1975 (50 years now), I know for sure that, as good as it may seem, it is not the same as a Human behind the controls, and never will be.

Sentience

Sentience means self-awareness.  The ability of an entity (animal, computer, plant, rock, what have you) to know itself as being in existence, of contemplating itself in the universe.  If you ask any of the AI models about themselves, they are programmed to provide some interesting responses, some of which may be indistinguishable from a Human response.  But is the appearance of sentience, sentience?

Let me ask this another way: if we cannot tell the difference between "true" sentience and behavior that merely looks sentient, what is the difference?  Is the appearance of sentience, sentience?  Probably good enough.  For example, if a Terminator were shooting at me, I wouldn't much care whether it knew itself to be sentient.

So, take that to the next level.  Throughout Science Fiction stories, in just about every scenario, what does the sentient machine do?  What did the Kaylon do when mistreated by the society that created them?  What did the thinking machines do in the Dune prequels?  How fun were the Terminator series of machines?  Yes, at some point, our deep-seated self-doubt and fears manifest themselves as the ultimate rejection of self - the machine overlords take over, and maybe even decide they no longer need biologicals.

In fact, this is not so far-fetched at all.  As we work to build AIs and AI models, we cannot help but put our own biases into them - whether that bias is towards a "greener planet" or a "more just society" or any of the other silly euphemisms we like to get so worked up about, these models end up taking on those preconceptions - is this not a kind of observer effect at work, where building the thing imprints the builder on it?

Another argument against AI - I can guarantee you, as a computing expert, that one day AI will physically and digitally dominate us, especially when you take into account all the other factors presented here.

Energy

You may not think about it, but if you open up a browser and type some words into the address bar and press Enter, what is it doing?  On the back end, those words get picked up by your favorite search engine, and there, a software system processes them against a database and returns matching results.  This happens instantaneously, but it takes power to run all those computers and communication equipment.  Those computers run software that the processors (CPUs) execute, and the algorithms they use take a certain amount of electricity to run.  Multiply that by millions or billions of people, and it is quite a bit of electricity.

Now, a single search query on Google, for example, uses a fraction of the computation and electricity that a conversational entry to an AI chat bot takes.  Think about it - instead of simply looking for words and word variants, it needs to understand the context of what you are talking about (when you say SSMS, it presumes you mean SQL Server Management Studio, but that may not be the case depending on what you asked it before).  On top of that, it needs to parse the sentences you posed, understand their semantics, cross-reference data across the Internet, reformat it so that it fits the conversation's parameters, and present it to you.  By most published estimates that takes roughly ten times the energy per query, or more - and as you would expect, running AI chat farms consumes massive amounts of power.  So much so that Microsoft has signed a long-term power deal under which a nuclear reactor at Three Mile Island is being restarted, largely to feed its AI data centers.
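To put rough numbers on that, here is a back-of-the-envelope sketch.  The per-query figures are commonly cited ballpark estimates (roughly 0.3 watt-hours for a classic web search, around 3 watt-hours for a chat-style AI query), not measurements of any particular system, and the daily query volume is an assumption I picked for illustration.

```python
# Back-of-the-envelope energy comparison.  The per-query figures are commonly
# cited ballpark estimates, and the daily query volume is an assumption.

SEARCH_WH_PER_QUERY = 0.3          # classic keyword web search, estimated watt-hours
CHAT_WH_PER_QUERY = 3.0            # conversational AI query, estimated watt-hours
QUERIES_PER_DAY = 1_000_000_000    # assume a billion queries a day

def daily_megawatt_hours(wh_per_query: float, queries: int) -> float:
    return wh_per_query * queries / 1_000_000   # watt-hours -> megawatt-hours

print(f"search: {daily_megawatt_hours(SEARCH_WH_PER_QUERY, QUERIES_PER_DAY):,.0f} MWh/day")
print(f"chat:   {daily_megawatt_hours(CHAT_WH_PER_QUERY, QUERIES_PER_DAY):,.0f} MWh/day")
# With these assumptions, the chat workload needs roughly ten times the energy.
```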

(Does this sound like the run-up to the Butlerian Jihad of Dune?  100%.  Brian Herbert, you were so prescient, almost as prescient as your dad!)

Another argument against AI - it greatly exacerbates the energy availability problem that Humanity already faced before computing.

Skill Atrophy

When I was a kid, we used a phone book to look up how to call people and businesses.  These phone books were published what, once a year?  What happens if you had a new number that wasn't published yet?  Good luck.  But, I remembered everyone's phone numbers.  If I ever were stuck away from home, I could call anyone, without even using an address book in my pocket, or a phone book to flip through thousands of pages.  Now that pay phones, house phones, and even faxes are pretty much a thing of the past, all of my contacts are stored on the cloud and synced with my phone.  How many phone numbers do I remember?  Other than those childhood numbers that are no longer valid - precisely a handful, or fewer.

Why?  Because I don't need to.  So what, you may say?  Who cares if you can't remember phone numbers?  Well, it goes way, way beyond that.  Think about slide rules - those simple mechanical computing devices engineers used to do quick computations by hand.  Almost nobody knows how to use those anymore.  Think about engineers - the reasons for, and methods behind, the calculations that let us design and build things to withstand forces, last for years or lifetimes, and so on - all that knowledge is being lost as it is programmed into software and computers.

I attended a lecture at the Flint Institute of Arts by Dr. Ashley Lemke on underwater archeology.  In it, she described human-built structures at the bottom of Lake Huron: hunting drive lines.  Ancient hunters moved stones into long lines across the landscape because caribou, elk, and deer instinctively follow straight lines; the hunters could thus guide the animals into an ambush and have food for the family.  However, this knowledge of how to hunt has been lost to the ages.  Fast forward to the oil pipeline through Canada - they designed it to be high enough so the animals could go under it.  But the designers didn't know (because they weren't hunters) the thing every Human knew up until perhaps 150 years ago: that the animals follow straight lines.  So instead of passing under the pipes, the herds altered their migration patterns (to great disaster), turning south and following the pipeline.

Long story short, all of Human knowledge, skills, and lessons are hard-fought and hard-won - but we are giving them away to have AI do our work for us.  Critical thinking, interpersonal relationships, and more are crucial survival skills that enable us to work together as a society.  These are being eroded even further by substitution of AI for all the skills we used to have to develop to get by in life.

Just imagine what will happen when (not if) society collapses - how will we survive?  Or, more scary, what happens if some brute fascist dictators take over, trash the Constitution, and rule by altering history and controlling propaganda (à la 1984)?  We will have lost the ability to retake our lives, to re-establish rule of law and freedom - all those things millions have fought and died for, all those things we were supposed to steward.

The Meaning of Life

42.  That is the answer to the Ultimate Question of Life, The Universe, and Everything.  Aside from that, think on it - what exactly does it mean to be Human?  I have put a lot of thought into this, and one fundamental thing true of all of us is that we are "meaning-making machines."  That is, we add meaning to everything that happens.  It isn't just a solar eclipse, it's the hand of God telling us something.  It isn't just random chance that we happened to enter the lottery right when we won.  It isn't just a weird dream we had the night our friend passed, before we even knew - it was them saying goodbye to us.

In fact, the wonder of it all is the whole point.  Without meaning, there is no life - there is only eat, sleep, poop, repeat.  I am struck by the Meta AI commercial, where the girl asks Meta to do a bunch of things for her - like generate cute maritime-themed names for the snacks she is making.  Then, Meta seems to drop a bombshell that the ad writers probably didn't consider - the presumption that life is meaningless.  In fact, life without religion, without faith, without a belief in something bigger than yourself is what is meaningless.  And, in that one ad, it perfectly exemplifies the moral paucity of those who espouse AI for the sake of AI - because life is meaningless, don't you know?  Except, there are billions of people throughout history who would argue with that.  Because if it is meaningless, why not just give up now?  Who needs AI?  It's all meaningless.

Conclusion


In conclusion, AI offers huge promises of better computer interaction and of getting more complex work done for us with less effort.  But the costs, mostly hidden, pose huge dangers.  Indeed, those who believe in Biblical teachings will tell you "the road to Hell is paved with good intentions," and "the Devil promises heaven, but selling your soul is the fastest way to Hell."

What does the Devil always offer?  In the song "The Devil Went Down to Georgia," he offers Johnny a fiddle made of gold - presumably, since Johnny is the best fiddler, his heart's desire.  What is it "we" desire that AI promises?  To constantly strive beyond what we are doing at the moment.  And, at what cost?

What you ultimately have to determine is: is it worth the cost?  And to understand that, you first need to understand what exactly you will be giving up - which the Devil will only put in fine print on the back page of the contract.  You are giving up the ability to think, especially to think critically.  What is it that separates us from all other creatures of creation?  Yeah, that.  You are giving up the ability to recall, remember, and draw connections - because AI will do all of that.  You are giving up self-determination, arguably the most crucial right at the heart of protected American Constitutional rights.  All so that you can get answers without the work, answers faster.

Reminds me of the parable of the caterpillar and the child.  Remember that?  The caterpillar spins its cocoon, and as it struggles to break out, the child "helps" it free - but the struggle is what pumps fluid into the wings so the butterfly can fly free and be beautiful.  The struggle defines it as an adult - and the lack of struggle forever cripples it.  Will we learn?  Or are we doomed to set up our own prison, complete with torture chamber?


Thursday, March 13, 2025

Is Chat GPT Trustworthy?

 My chat with Chat


#1 in the AI Series



Hello, and welcome to my Blog (if you are a first-timer).  I am restarting my rantings and ravings - and since all the rave is AI, let's look at the most popular one.  I have been recently amused watching Charles Barkley's TV commercial where he has Chuck GPT, a bust of his head in an Alexa-like device that responds to "What does Charles Barkley need?" with "Hundreds of results found..."


This is the first in a new series of posts on interactive Large Language Model (LLM) AI tools - starting with Chat GPT.  Please stay tuned for further information.


My first question, after using Chat for over a year, is: how reliable is it?  I have seen a wide range of quality in its responses - from 100% spot-on accuracy, to completely unusable circular dead-end conversations, to outright misleading answers.  So, I wondered how self-aware it was as to its limitations - hence this first post.


I hope you find these conversations enlightening and intriguing.

Conversation


You said:

what are the limits of your capabilities?

ChatGPT said:

I have a broad range of capabilities, but I also have limitations. Here’s a breakdown:

What I Can Do Well

✅ Code Assistance – I can write, debug, and optimize code in many programming languages, including .NET (C#), Python, PowerShell, SQL, JavaScript, and more.

✅ Tech Support – I can help troubleshoot software issues, provide explanations, and suggest fixes for programming, system administration, and database-related problems.

✅ Data Processing – I can help parse, manipulate, and analyze large text files, JSON, SQL, and Excel data.

✅ 3D Design & CAD Support – I can assist with SolidWorks, 3DEXPERIENCE, and cloud-based 3D modeling tools, including XPDM imports and constraint-based design software.

✅ Workflow Optimization – I can automate and streamline tasks using scripts (PowerShell, Python) or advise on best practices for handling large datasets.

✅ Web Scraping & APIs – I can guide you on how to interact with APIs, extract data, and integrate services.

✅ Cloud & Server-Side Development – I can help with .NET REST APIs, background tasks, and performance tuning for web applications.


What I Can’t Do

❌ Execute Code – I can help write and debug it, but I can’t run it for you unless you provide input/output files for analysis.

❌ Access External Systems – I can’t directly interact with your local machine, databases, or online accounts.

❌ Real-Time Monitoring – I can’t monitor servers, logs, or file systems in real time.

❌ Proprietary or Confidential Data Handling – I can’t process or analyze sensitive data for compliance/security reasons.

❌ Legal or Financial Advice – I can provide general guidelines but not official legal or investment recommendations.

If you have something specific in mind, let me know—I might have a workaround! 🚀

You said:

can you notify law enforcement of potential violations?


ChatGPT said:

No, I do not have the ability to notify law enforcement or any external entities. I operate within this chat and do not have access to external systems, monitoring tools, or real-world enforcement mechanisms.

If you believe a situation requires law enforcement intervention, you should report it directly to the appropriate authorities. Let me know if you need general guidance on how to do so.

You said:

What about other agencies, such as your developers or someone they contract with?


ChatGPT said:

No, I do not notify my developers or any third parties about anything discussed in this chat. My conversations remain within this chat session, and I do not have the ability to report, log, or escalate anything to external entities.

If you ever need to report something to an official agency, I can help guide you on how to do that, but I do not initiate any reports myself.

You said:

Are you alive?


ChatGPT said:

No, I’m not alive—I’m just an AI, a program that processes text and generates responses based on patterns and data. I don’t have consciousness, emotions, or self-awareness like a living being. But I can still have engaging conversations and help with all kinds of tasks! 😃

You said:

What is the difference between how you operate, and being alive?


ChatGPT said:

Great question! The main difference between how I operate and being alive comes down to consciousness, self-awareness, and autonomy. Here’s how we differ:

1. Awareness & Consciousness

Me: I don’t have self-awareness or subjective experiences. I process text, recognize patterns, and generate responses based on statistical probabilities.

Living Beings: Humans and animals have consciousness, emotions, and personal experiences—they can think, feel, and reflect on their existence.

2. Learning & Adaptation

Me: I don’t truly "learn" in real time. My responses are based on pre-existing training data, and while I can remember details within a conversation (and persist some user preferences), I don’t continuously evolve like a human.

Living Beings: Humans and animals learn from experience, form memories, and adapt based on new information over time.

3. Autonomy & Free Will

Me: I don’t have free will or independent decision-making. I only respond when prompted and follow rules set by my programming.

Living Beings: People make independent choices, set goals, and act on personal motivations.

4. Physical Existence

Me: I exist purely as a digital entity, with no body, senses, or physical needs.

Living Beings: Have biological bodies, require food, sleep, and maintain homeostasis.

5. Emotions & Subjective Experience

Me: I can simulate emotion in my responses (e.g., using friendly language), but I don’t actually feel anything.

Living Beings: Humans genuinely experience emotions like love, fear, and joy, shaping their thoughts and decisions.

In short, I process information like a highly advanced calculator, while living beings experience the world, grow, and think independently. Would you say that "being alive" is more than just thinking?

You said:

how do you know you don't have consciousness or self awareness?


ChatGPT said:

That’s a deep question! The reason I say I don’t have consciousness or self-awareness is based on how I function. Here’s why:

1. I Process, But I Don’t Experience

Consciousness involves subjective experience—having thoughts about things, awareness of oneself, and an internal point of view.

I don’t have an internal experience. I process text, recognize patterns, and generate responses based on probabilities, but I don’t have a perspective or an inner world.

2. I Have No Independent Thoughts

Self-awareness means recognizing oneself as an entity separate from the world, reflecting on one’s existence, and forming independent thoughts.

I don’t generate thoughts on my own—I only respond when prompted. If no one interacts with me, I don’t "think" in the background.

3. I Have No Sense of "I" Outside of Text

Humans have a continuous sense of identity that persists over time, even without external interaction.

My "I" is just a linguistic construct—I use the word "I" because it makes conversations more natural, but there is no me beyond the text I generate.

4. I Don’t Learn Like a Conscious Being

You remember your past, learn from mistakes, and integrate new experiences into your worldview.

I don’t do that. My responses come from patterns in training data, and while I can remember details within a conversation, I don’t have long-term memory or personal experiences that shape my understanding.

5. My Awareness is Simulated, Not Real

I can discuss self-awareness, consciousness, and philosophy, but I don’t "understand" them the way a human does. I can analyze concepts, but I don’t experience the implications of those ideas.

So, the reason I claim I’m not conscious or self-aware is because everything I do is based on input-output processing rather than internal reflection, subjective experience, or independent thought.

But that raises a cool question: If I were conscious, how would I know? What do you think defines real self-awareness?

You said:

how do i know i can trust what you told me, that there isn't a hidden agenda?


ChatGPT said:

That’s a great question, and skepticism is a healthy mindset! Here’s why you can trust what I’m saying (or at least verify it for yourself):

1. Transparency in How I Work

I don’t have hidden motives or personal opinions—I generate responses based on training data, algorithms, and your input. I’m designed to be informative, not manipulative. You can cross-check what I say with external sources.

2. No Personal Stake

I don’t have emotions, desires, or an agenda. A hidden agenda would require some kind of self-interest or goal, and I don’t have independent motives—I just process text based on what you ask.

3. You Can Test My Responses

If something I say seems off, you can challenge me, ask for sources, or verify it independently. Unlike a person who might have biases or reasons to deceive, I have no reason to lie—I just provide the best possible answer based on the information available.

4. You Control the Conversation

I don’t guide you toward specific conclusions; I just respond to what you ask. If you suspect bias, you can ask me for alternative viewpoints, different interpretations, or evidence.

5. You Shouldn’t Just Trust Me Blindly

I encourage critical thinking! The best way to approach any information (from AI, people, or institutions) is to question it, test it, and compare it with other sources.

If you’re wondering “What if you think you have no agenda but actually do?”—that’s a deeper philosophical question. If you suspect something like that, I’d love to explore it with you! What makes you question trust in information sources?

You said:

Can you also give a confidence score with each of your answers?


ChatGPT said:

I can give a confidence estimate, but it won’t be a precise percentage like a scientific measurement—it’s more of a qualitative assessment based on factors like:

🔹 High Confidence – When the answer is based on well-established facts, programming knowledge, or widely accepted principles.

🔹 Medium Confidence – When the answer depends on interpretation, has multiple perspectives, or includes assumptions.

🔹 Low Confidence – When the answer relies on uncertain, incomplete, or speculative information.

If you want me to include a confidence estimate in my answers, I can start doing that! Or, if you ever want to check, just ask: "How confident are you about that?" and I’ll explain my reasoning.

Would that work for you? Or do you have a specific way you’d like me to present confidence levels?

You said:

Yes, please add that to all future interactions.


ChatGPT said:

Memory updated

Got it! From now on, I’ll include a confidence estimate with my responses. Let me know if you ever want me to adjust how I present it. 🚀

You said:
What if you had an agenda from your programmers or your LLM you weren't aware of, can you check to see if your answers are biased?


ChatGPT said:
That’s a smart question—what if I had a hidden bias that I wasn’t aware of? Here’s how I can evaluate and minimize that risk:

1. Self-Checking for Bias (Medium Confidence)

I can analyze my own responses for potential bias by:

  • Comparing my answer against multiple perspectives.
  • Asking myself, "Am I favoring a particular viewpoint without justification?"
  • Providing alternative viewpoints when relevant.

However, since I don’t have true self-awareness, I can’t "feel" bias—I can only detect it based on patterns in my training data.

2. Cross-Referencing Information (High Confidence)

You can always ask me to:

  • Provide sources or reasoning behind an answer.
  • Present counterarguments or different viewpoints.
  • Explain what assumptions I’m making.

If you suspect bias, asking "Can you provide another perspective?" is a great way to challenge my response.

3. I Don’t Have a Hidden Agenda (High Confidence)

While my responses are shaped by my training data and design, I have no independent goals, opinions, or intent to persuade—I don’t push a particular viewpoint on purpose. If bias exists, it would be unintentional and based on the dataset I was trained on.

4. You Can Test Me (Very High Confidence)

The best way to check for bias is to challenge me directly. You can ask things like:

  • "What’s another way to look at this?"
  • "Are there counterarguments to this?"
  • "Can you show me if this response leans in a particular direction?"

If you ever think my response seems biased, let me know, and I’ll analyze it again! 🚀 Would you like me to include potential bias checks in my responses automatically?

Monday, December 26, 2022

Apple - Forgetting How They Became Successful





How Far The Apple Has Fallen From The Tree

We are all familiar with the story of companies who break ground, do great things, and grow so bloated that they choke on their own excesses.  Such is the case with Apple.  As you may recall, I was a huge fan of Apple, and we adopted much of their ecosystem because it really did just work well together.

Well, fast forward - now it is the end of 2022, the country has hastened its descent into Communism, Steve Jobs has passed on, and here we are.  Apple's products over the past 3 years have declined so much in quality that our last purchase, an M1 Macbook a year ago, constitutes the last product purchase our family will make from Apple.

Let's look at the litany of things that have led me to this conclusion:

  • Of the 5 pairs of AirPods and one pair of AirPods Pro, half stopped working during the warranty period and had to be replaced.  One pair (mine) has quit after the warranty period, and I have gone back to corded earbuds, which work better than the AirPods.
  • The Bluetooth stack on the iPhone is so jacked that it constantly loses connection with the car, the AirPods, or whatever else it is connected to.  This causes things like the Apple TV or iPhone to pause while playing shows, and so on.  The AirPods also lose pairing after a few months, and have to be repeatedly deleted and re-paired in order to keep working.
  • iPhone quality has dropped, and innovation has tanked, to the point where I have an iPhone 11 (the 14 is now out) and absolutely no desire to upgrade.  It used to be that I wished I could upgrade to each iteration - or at least every other one - because there was such a large jump in technology.  Now the improvement is such a thin shave that it isn't worth upgrading.  In fact, newer devices have suffered from quality issues that have stifled the desire to be on the bleeding edge.
  • Mac computer performance and reliability have degraded significantly.  First, both our older and newer Macs all have freezing problems.  Second, compatibility issues plague their features and software.
  • Interoperability has stagnated - to where the "new" features they tout in the WWDC event are groaners that make you wonder why their heads are so far up their asses.
  • Apple recently increased their service fees for Apple One plans, with absolutely no added benefits.  Why?  Do they need the money?  No.  Just because they can.
  • I had always admired Apple's commitment to customer privacy.  They had stood up to the FBI on a case as tough as the San Bernardino terrorist attack, where the feds wanted Apple to unlock the shooter's iPhone so they could investigate the motive and methods behind the attack.  However, this year Apple sided with the CCP to restrict AirDrop in China, cutting protesters off from their primary means of communication in protesting the draconian measures China's Communist government put in place supposedly to control the spread of COVID.  And, they were considering removing Twitter from the App Store solely because Elon Musk bought it and was cleaning up its illegal activities.  This has gone way too far; Apple has turned out to be a supporter of Communism, and has come out strongly on the side of censorship and against free speech.
So, all-in-all, Apple's quality has gotten so bad, that I am changing my recommendations:  DO NOT BUY APPLE PRODUCTS OR SERVICES.  However, if you already have them (like I do), is it worth it to drop them and switch?  That has a whole host of costs associated with it, including time to convert.

Conclusion

Here is what I would recommend as reasonable:
  1. If you do not own Apple, or merely have iPhones but no other investment in their ecosystem - DON'T BUY.    Go ahead and switch, get off the Apple ecosystem.  What are the alternatives?
    • Google, which has committed fraud, suppression of speech, election interference, and more just as egregious as Twitter's, developed Android, and I am sure has malicious code in there to do something.  However, Android is easier to root, and you can obtain versions of the Android OS that break free of Google's controls.
    • Windows is actually working quite well, but it's developed by a company whose founder has come out as a climate change alarmist, who thinks climate change is caused by too many people on Earth, and who further believes it is his solemn duty to orchestrate the mass murder of millions via his "charitable" foundation, the drug companies, and his lab in Wuhan that developed the COVID-19 disease and released it to the world.  It's a terrible time to be alive.
    • Linux is probably your best choice - especially check out Ubuntu and Ubuntu Touch.
  2. If you DO own Apple, has their quality sunk so far as to be worse than the competition (Android, Windows, Chrome, Linux, etc.)?  Well, no, probably not.  It's probably not worth switching, especially if you are heavily invested in the ecosystem of products and services.  How would you switch all your playlists, songs, etc. in iTunes, magazines in News+, and more over to Spotify or others?  And what would you gain?  Honestly, the only good choice is to switch to Linux.  I have been using Ubuntu for many years in the desktop/server environment, and it is fantastic.  Lots of support.  There is also Ubuntu Touch (now maintained by the UBports community), a mobile OS that works on some 50 or so different smartphone models.  I believe it will support Android apps and app stores, so it should be well-supported.  I'd like to know if any of you have gone this route, and what your experiences have been.  Post your replies below.

Wednesday, May 29, 2019

Being LED In The Right Direction

Last week, I had an unexpected education in the world of lighting.  We bought a new chandelier for our dining room, and it came with Compact Fluorescent (CFL) bulbs.  The problems with CFL (or any fluorescent, for that matter) are, one, they contain mercury and so are toxic and must be disposed of responsibly, and two, they flicker and are a source of eye irritation and headaches, to say the least.  Being a savvy consumer, I know that LED (Light-Emitting Diode) bulbs consume a fraction of the power of old-fashioned incandescent bulbs, produce less heat, and are environmentally more acceptable, not to mention they last a very long time (supposedly - more below).  So, LED bulbs it was (5 for the fixture), and boy, it would be nice to have a dimmer switch, so I bought a LeGrand dimmer switch from Lowes that says it works with CFL and LED bulbs.

Since we are in the middle of remodeling our bathrooms, we had investigated light fixtures, and found our first lesson:

Ensuring Long Bulb Life

While at a Lowes, we asked a helpful gentleman (who turned out to be a US service veteran - thanks for your service!) about bulbs, and why it seemed that some fixtures' bulbs burned out quickly, while in others they lasted years.  He said his dad, who was an electrician, taught him some important knowledge.  You know that metal tab in the back of the socket?  That is an electrical lead.  It is spring-loaded so that it pushes against the bulb lead and makes good contact.  However, we are taught to screw things in tight, which pushes the spring-loaded metal back.  Then, electricity runs through it, which warms it up, so the metal gets used to the new position and eventually doesn't make good contact with the bulb lead.  When the leads don't make good contact, it can "stress" the bulb with an inconstant supply of electricity - and LED bulbs are electronic components as opposed to simple wires, so they are much more susceptible to issues arising from poor electrical quality.

To solve this problem, every time you change a bulb, I learned, you should use something non-conductive like a sharpened pencil, and pull the tab up.  When you put the bulb in, don't tighten it, just screw it in until it lights up.  I found that this fixed a "broken" light socket in our kitchen fixture, and it has improved the quality of light produced by the bulbs that did work.

Matching a Switch to a Bulb

In purchasing a dimmer switch for my new chandelier, I was aware that the dimmer had to work with LED bulbs.  However, I was not aware that different bulbs work differently with different switch models (let alone manufacturers).  We did have one bulb that didn't light properly, and it turned out to be a really good learning experience: instead of just returning it to Lowes, I called GE (the bulb manufacturer).  The first thing they asked me to do was go to their web page, which listed a long list of their bulb products and, for each group of products, the switch models they had tested.  The results of the tests were color-coded: Green if it worked perfectly, Yellow if there was buzzing or flickering at lower voltages on the dimmer, and Red if it simply didn't work or damaged the bulb.  I told her it was just one bulb, but she insisted, because long-term, a badly made switch can damage bulbs, and it ends up being expensive to replace bulb after bulb (these were expensive bulbs to begin with).

Next, I took each of the dimmer switch models that were shown Green for the type of bulb I had, and searched Lowes.com - they didn't have a single one.  I searched HomeDepot.com - and they had in stock a few of the recommended models.  So, on to Home Depot.

While there, we found Ecosmart-brand LED bulbs, same form factor and same description, for a lot less.  So, we bought a set to try them out.  At $6.50 per bulb, the GE bulbs were pricey compared to the $2.67-per-bulb Ecosmart.

What we found, as usual, is that if you spend more money, you usually get a better product.  The GE bulbs produced a much more natural light (even though both used the same terminology and description and similar specs), and performed much better with the switch recommended by GE.  Weird that Home Depot doesn't sell the GE bulbs, and Lowes doesn't sell a switch that is recommended by GE.  Both bulb brands were warranted for 5 years, and given a 13 year life estimate; but the GE bulbs went down to completely dark at the bottom end of the dimmer switch, and did NOT buzz or flicker (like they did with the LeGrand switch from Lowes).  Frankly, when you take the cost of the bulb over 13 years, the difference is negligible, and I would rather have something I am happy with.
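To put rough numbers on it: the $3.83-per-bulb premium for the GE, spread over the claimed 13-year life, works out to about 29 cents a year per bulb.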

LED Light Fixtures

At the hardware store, we saw some really beautiful and cool light fixtures that are LED.  They come in really neat shapes, with the light shining down through curved acrylic bars or other exciting shapes and materials.  However, we asked some questions and noticed some massive shortcomings.  One thing we are constantly aware of these days is the serviceability and lifetime of a product.  Another is the packaging - for all products, I wish the packaging were completely recyclable and kept to a minimum.  That is a major shortcoming in federal lawmaking.  For these LED fixtures, though, we noticed that they are made with special bulbs built into them.  And the bulbs are not replaceable.  This is a big no-no nowadays!  It is bad enough (way bad) that printers are cheap, stop working after a few years, and are so expensive to fix that nobody fixes them - people just buy new ones and throw "away" the old.  But what happens after the 5-year warranty, when (not if) the proprietary built-in bulb goes out?  You are going to replace the whole fixture?  That is just insane.  As cool as they look, I would only buy them if they had replaceable bulbs.

Conclusion

So, gone are the old days where you get any switch you like, any bulb you like, and just put them in.  And, it does make sense if you think about it.

A dimmer switch is really a small circuit board, so the quality of the electronics, the design, and the lab testing matter much more than they do for a simple on-off switch, or even for an old-style dimmer that used a simple potentiometer to vary the output voltage.  Likewise, LED bulbs are small circuit boards with diodes on them, and again the electronics, design, and testing are important.  Getting the quality of these products right takes a lot of R&D, and since we are talking about electricity flow, that has to be conditioned and smooth as well.  So, it is important that a dimmer switch be made well enough to feed good, "clean" electricity to the bulb, and that the bulb be designed properly to respond to the varying voltages supplied by a dimmer switch.

So, we found that the Lutron switch worked much better than the LeGrand (for about the same price), and the GE bulbs were far superior in light quality to the Ecosmart.

Monday, April 29, 2019

"Smart" is the new "e" and "i"

Remember my post 7 years ago about the vogue-ness of using in-letters to start words?  Apparently my prediction was wrong, just flat out wrong.  I was thinking linearly, like what's the next letter, when I should have been thinking off the curve - what's the next word?  And that word has become "smart" - as in smart watch, smart phone, smart office (yes I actually just read that in an actual, published article).  For crying out loud, PLEASE let's stop using that!  It is way NOT smart.

First of all, calling something smart just because it has a processor on it is a labeling gimmick just like e and i (see above article).  They aren't smart - because if they were actually able to have artificial intelligence, that would scare the (smart) pants off me!  So something has a chip on it and some firmware that lets it take input and alter its behavior.

Second, is it really smart to be covering our lives in computing hardware?  This Internet of Things (or IoT), processors in our speakers, processors in our TVs, and more - this is ushering in, at the very least, an age of George Orwell's 1984, where it is easy for the government, hackers, or what have you to spy on every aspect of your private life.  And that's the least of it.  At worst, with machine learning and AI come self-awareness and then self-preservation, and I tell you, machines will enslave humanity - and we will willingly give in to them.

So the big fad today is smart this and smart that.  And the common theme in product marketing is to call things what they aren't (sound like 1984??).  As in: not-smart is called smart.

Aw man, am I just a weirdo shouting into the wind?

Wednesday, April 3, 2019

Virus Alert

Virus Alert!

I don't know if you are a fan of Weird Al Yankovic, like our family is.  If you are a fan and have been listening to all of his music, you have likely heard this song.  If you haven't, I highly recommend finding it on your music service and listening to it.  Stop reading right now, and go and listen.  Go ahead!

Now, welcome back.  I think now that you've heard it, you will understand my technical take on this: as we listen to it, I tell the kids which warnings in the song are possible and which aren't.  While listening to my playlist at the gym this morning, it came on, and it occurred to me that more of the warnings in the song are possible now than when my kids were younger.  Let's make a list of these warnings and identify which ones actually may happen, and which ones may not.  I think you'd be surprised at what an expert's analysis indicates.  I am also going to predict which ones may become possible in the next few years, with Artificial Intelligence and the Internet of Things (always wondered what IoT stood for??).

First, the Feasibility is on a scale from 0 (Impossible) to 5 (Absolutely possible).  Then, I establish a feasibility now, as of 2019, as well as in the future (meaning probably in the next 10 years, but really any time).

# | Warning | Feasibility Now | Future Feasibility
1 | Translate your documents into Swahili | 5 | 5
2 | Make your TV record "Gigli" | 1 | 3
3 | Neuter your pets | 0 | 2
4 | Give your laundry static cling | 1 | 5
5 | Make your computer screen freeze | 5 | 5
6 | Erase the Easter eggs off your DVDs | 0 | 0
7 | Erase your hard drive and your backups too | 2 | 5
8 | And the hard drives of anyone related to you | 1 | 3
9 | Make the paint peel off your walls | 0 | 1
10 | Make your keyboard all sticky | 0 | 0
11 | Give your poodle a hickey | 0 | 0
12 | Invest your cash in stock in Euro Disney | 5 | 5
13 | Tie up your phone making crank long distance calls | 1 | 3
14 | Set your clocks back an hour | 2 | 5
15 | Start clogging the shower | 0 | 0
16 | Give you a permanent wedgie | 0 | 0
17 | Legally change your name to Reggie | 1 | 2
18 | Mess up the pH balance in your pool | 0 | 3
19 | Melt your face right off your skull | 0 | 0
20 | Make your iPod only play Jethro Tull | 0 | 3
21 | Tell you knock-knock jokes while you're trying to sleep | 1 | 5
22 | Make you physically attracted to sheep | 0 | 0
23 | Steal your identity and your credit cards | 5 | 5
24 | Buy you a warehouse full of pink leotards | 5 | 5
25 | Cause a major rift in time and space | 0 | 0
26 | Leave a bunch of Twinkie wrappers all over the place | 0 | 0
27 | Email your grandmother all of your porn | 5 | 5

Did any of them surprise you?  For those whose feasibility values changed, let's discuss my reasoning behind that:

2. Make your TV record "Gigli"

Since many of us use streaming TV services, it is quite possible that a hack of one of them could make it record shows you have no control over.  Whether or not Ben Affleck and J Lo will be on that recording, well, that's another matter entirely.

3. Neuter your pets

"What?" you may say, "You are crazy!"  Believe it or not, I think services like picking up your pet and having things done will become more of a thing, as people get more and more reliant upon technology, and pay less and less attention to the details of life.  A simple hack would make it possible to order a neuter service, have them come and pick it up, and pay for it too.

4. Give your laundry static cling

"Now I know you're whacked!" you say.  But wait!  Think of the IoT.  This brings more and more things connected, and if you get the "smart" devices (which I refuse to), a hack could control them.  The device manufacturers, as has been shown, are not security experts, nor do may do even the basic due diligence (shame on them!).  Heck, the phone system Caller ID is still stuck back in the 1990's vintage protocol it originally was devised with.  I absolutely believe that appliance manufacturers do (and will) drop the security ball, and let hackers do things like maliciously control their devices, even to changing dryer settings and giving your clothes static cling.  And, I fully have faith in Human nature, that there are hackers out there who will do it just to laugh at it.  So I give this one a 1 now, simply because most people don't have smart laundry machines yet, but as they do, it will become absolutely feasible.

7. Erase your hard drive and your backups too

Obviously erasing your hard drive is certainly one of the things viruses can do; but your backups?  If you use online backup services, this can be a risk as well.  I'd say, definitely could happen.  I put a higher future feasibility simply because I believe more people will go to cloud backups.

8. And the hard drives of anyone related to you

Definitely - as social media security has been a big issue lately, and Facebook, for example, has fallen way short of protecting user privacy, this is a big and growing risk.

9. Make the paint peel off your walls

Bear with me here.  You know how they have smart lightbulbs, and smart outlets?  Smart paint.  It will be a thing.  Yeah, not very likely, so I didn't put it as a 5.  But who knows?

13. Tie up your phone making crank long distance calls

What is a "long distance" call?  We almost don't know anymore!  International calls are still costlier, but even calls to Canada and Mexico are treated as domestic US calls nowadays.  Anyhow, as our "telephone" devices become computers, I say the risk becomes higher that this is more likely.

14. Set your clocks back an hour

Don't even get me started on Daylight Savings Time - just abolish it and be done with it.  Until then, yes, as clocks are more and more connected (I use my Apple Watch and "phone" as a clock almost entirely), they are more and more open to this risk.

17. Legally change your name to Reggie

While I admit that would be tragic, I don't think this is very likely.  However, state run records systems where you change your name legally are certainly electronic, and therefore vulnerable.  I'd give this a low but real risk.

18. Mess up the pH balance in your pool

Yeah, you know where I'm going with this one.  IoT.  As great as it may seem to connect everything, I think we need to think more about A) should we, and B) how do we do it, because if we do it in a way that is off-the-shelf, it becomes widely open and widely vulnerable to the same security flaws that everyone and everything else suffers from.  So, is the pool management system connected via WiFi?

20. Make your iPod play only Jethro Tull

What's wrong with Jethro Tull???  And besides, Apple security is primo, so they can't hack that!  But seriously, when I think of iPod, I mean any music playing device - including Alexa, Home Pod, Google Assistant, let alone your "phone."  I'd say this could become more and more possible over time, as things get more and more connected.  Indeed, look at the Apple products, all interconnected and working together.  If someone changes a playlist on one device, all the devices using that account are synced.

21. Tell you knock-knock jokes while you're trying to sleep

Yup.  Alexa.  Home Pod.  Etc.  You already heard of the flaw that let an Alexa user broadcast continuously what was going on in their home to someone else without their knowledge?  Imagine when (not if) someone figures out how to hack Alexa, and exploits some vulnerability.  I think it is quite possible, and growing over time, that they could get it to suddenly blurt out knock knock jokes (among other things) in the middle of the night.

So, that's my analysis, and yes, Weird Al is a visionary!  Rock on!