Friday, May 12, 2017

Myth and Reality - The Truth and Dangers of Artificial Intelligence


The inspiration to write this post came when I logged onto our Salesforce account and was presented with the marketing banner, "Meet Salesforce Einstein, AI-powered CRM for everyone."  It occurred to me that AI is the new Cloud, the new IoT.  As such, its use has become commonplace in the news, in articles, in advertising - everywhere - and, most assuredly, its meaning has become vague, evoking a yearning for the "latest and greatest" even as dire warnings from science, science fiction, and business become more urgent and public.

This publicizing of a technical term creates a great opportunity: to raise awareness, both of what AI is and of what it can become, and hopefully to inspire some action.


In Computer Science, Artificial Intelligence (or AI) is a term that has been thrown around since at least my college days back in 1990 (I hold a Bachelor's in Computer Science from the University of Michigan).  I have found that, until the last year or so, it was a pretty technical term that those of us in the industry seemed to intuitively understand.  However, now that its use has become ubiquitous in the day of "smart phones" and massively-capable mobile devices, I feel that its meaning has become watered down and obfuscated, much as the term Cloud was before it.  I'd like to clear that up.

First of all, we have Hardware (the physical parts of a computer that you can touch) and Software (the instruction code that is bundled up into a thing that performs tasks and makes the hardware useful).  AI is software, for sure, but not all software is AI - in much the same way that putting a hard drive on the Internet doesn't make it a Cloud, just Cloud-accessible storage.

So what are the basic elements that make it AI?  I would say that it requires a few things:

  • Capability to "learn"
    • What do we mean by learning?  This is so intrinsic to what it is to be Human that we almost don't have a precise grasp on it.  We try something - and we fail, or succeed, and that outcome gives us a level of understanding we didn't have before, which we use to inform our future decisions.  So if we turn left while driving and hit a wall, do we refrain from turning left thereafter?  No, but we do look and examine the conditions first (is there a wall?) before attempting a turn, either left or right.
    • The ability of animals to learn ranges from very simple (think insects) to extremely complex (think primates, dolphins, whales, and elephants, let alone us Humans).  Software learning could consist of building a database of "facts" and using those to make decisions, but learning on anything beyond the level of an ant is much more complicated than that.  It is comparable to reprogramming ourselves based on input, not simply cataloging the results and using them.  We can learn new skills, for example, that we didn't have before (these range from mental to physical skills, and combinations thereof).
  • Recognition via senses
    • We have our various senses:  touch, taste, smell, sight, hearing, kinesthesis, time, and more.  With sight, for example, we can recognize a color (say, red) that under varying light is actually a different color, yet we still recognize it as red because of the context of the ambient light.  We are far, far more nuanced than that simple example, recognizing faces, even similarities between faces (oh, you look like his daughter), and more.  AI commonly consists of pattern recognition, sometimes using what is called a neural net as a computing construct to build history and analyze input based on that history.  You've seen the sci-fi shows that do facial recognition, where they pull up all the cameras in a city and search for someone (reality is really close to that now), and perhaps you have set up a sign-on with the PS4 camera so that it logs you on when you show your face.
  • Language
    • If you have tried to learn a different language, or even delve into the depths of your mother tongue, you will know what a crazily complex thing Human languages are!  However, AI will be able to understand, at least at a rudimentary level, spoken and written language.  What do we mean by "understand?"  Yes, understand the explicit meaning of words, if not the implicit meanings ("what do you mean, yes honey I love you?  Are you mocking me?").  Intonation, inflection, even body language are things that are probably way off, but they undoubtedly will be coming - if we continue on the path we are on now in AI research.
  • Decision-Making
    • As I said earlier, cataloging the outcomes of actions and making decisions based on those outcomes and current input is only a small part of decision-making.  If you don't think that's true, then you aren't a Project Manager - a highly skilled profession that takes years of training and experience to pull off successfully, even for the highly skilled learners that Humans are.  But some form of adaptive decision-making is part of any true AI solution.
  • Morals
    • A set of limiting and/or guiding principles, violations of which are thought of as horrific and punishable, must also be included in a true AI.  Think Isaac Asimov's Three Laws of Robotics.
    • What if these moral limiters could be bypassed (e.g. the robot's eyes turn from blue to red)?  What if some unscrupulous developer excluded them altogether?  What if someone hacked them?
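The "learning" and pattern-recognition elements above can be sketched in a few lines of code.  What follows is a minimal, illustrative toy of my own (not any product's implementation): a single artificial neuron - the simplest building block of the neural nets mentioned earlier - that adjusts its internal weights based on the outcome of each guess, rather than being explicitly programmed with rules:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a single neuron from (inputs, expected) pairs."""
    w = [0.0, 0.0]   # connection weights, adjusted as the neuron "learns"
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), expected in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = expected - out     # outcome feedback ("did we hit the wall?")
            w[0] += lr * err * x1    # nudge behavior toward better outcomes
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    """Apply the learned weights to new input."""
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Teach the neuron the logical AND pattern purely from examples.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
# The trained neuron now reproduces AND without ever being told the rule.
```

Of course, this learns one trivially simple pattern; real neural nets stack thousands or millions of such units.  But the principle is the same - behavior shaped by outcomes, not spelled out by a programmer - and that is exactly what makes the resulting behavior hard to predict or audit.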


So, it would seem we are pretty far off from C-3PO, Human-Cyborg Relations, right?  Yes and no: in some areas we have very far to go (hey, they are still struggling to keep a two-legged robot balanced), while in others they have made huge advancements.  Now, however, before things get too far, is the time to step back and ask important social questions, like why - or should we?  Elon Musk, the entrepreneur behind PayPal, Tesla Motors, and SpaceX (among others), thinks that achieving some of these key AI milestones is merely a couple of decades away (or less).  And he should know - he is involved in a lot of technology development, including self-driving cars.

Am I predicting all doom and gloom?  You know the old saying: the road to Hell is paved with good intentions.  It starts out with cool things like mobile phone assistants, then progresses to clerks who can take your order and deal with dissatisfied customers (in a mostly dissatisfying way), to robotic assembly lines that can fix their own production issues - and then, suddenly, something happens.  Skynet becomes self-aware and determines that Humans are a scourge to be wiped out.  Frankly, I cannot see any progression of AI that doesn't end in some catastrophic, cataclysmic struggle for existence.  That's the stupidity of being Human - we push and push and push, knowing that we are pushing ourselves off the cliff, but it's someone else's responsibility, someone else's problem.  That's right, the aliens are projecting an SEP field on us!  I have endless faith that Humans are self-destructive to a small or large degree, depending on how successful they are.

Does AI have to progress to the point where machines become a danger to us?  Let's look at it this way.  Computers are already so complex that there is probably not a single person on this planet who can comprehend the full scope of what is going on inside them.  That means the product is beyond the understanding of any individual.  The complexities that arise mean that we don't fully understand the consequences.  We don't fully grasp the potential - and all we can see is what the Sales and Marketing people want us to see.

As a self-confessed tech geek and tech aficionado, I am warning us all that this is a bad path.  There is no good ending for this.

In the Media

AI is in the media on an almost daily basis.  From self-driving cars to smart homes, electronic assistants (Alexa, Siri, Cortana, and "Hey Google"), and even things we don't think of, like traffic lights and telephone call routers - AI at some basic level has been here for decades.  However, we have reached a point where technology, research, and capabilities have expanded tremendously - and that brings up danger signs.

What, am I saying that the latest top-secret military project will become Skynet (think Terminator)?  Or that the people in charge will bypass authentication and set things to run automatically without safeguards, enabling the digital world to enslave humanity (think Brian Herbert's prequels to Dune)?  Yes, as a matter of fact.  Even though that may sound farfetched and outlandish, Elon Musk and Stephen Hawking, two of the world's foremost minds in business, technology, and science, are warning of just such a thing.  I have agreed with that assessment wholeheartedly for about 30 years, as a computer expert who has been developing and implementing automation solutions that help businesses run better and faster.

What's the Answer?

Get rid of computers!  Yeah, right.  The only way that will happen is through circumstances outside of our (Humanity's) control.  So if that's not an option, do we stop research on AI?  I know two high-profile, brilliant people who would say so, and I strongly urge a complete and total ban on AI.  Machines should not do the things listed above, and that prohibition should be enshrined in our laws and our religions - or, if not, we risk extinction.  Maybe we even need governmental or non-governmental review boards to examine software for signs of AI, and enforcement to punish perpetrators of a newly defined crime: a crime against Humanity's future.

I have read a lot of books, fiction and non-fiction.  Of all the books I have read, my hands-down favorite is the Dune series by Frank Herbert and his son Brian Herbert.  Especially piercingly prescient, the father's series takes place thousands of years after the Human race barely avoided extinction at the hands of its self-created machine overlords (the subject of the son's prequels).  Across a myriad of planetary systems, thousands of distinct cultures and civilizations descended from Earth - and each and every one of them has a basic law at the core of its religions, government, and society: Thou Shalt Not Create a Thinking Machine.  In the books, Humans learned their lessons about the boundaries of technology - and achieved faster-than-light interstellar travel, vast technological marvels, and more, all without a machine more intelligent than a calculator.  True, that's a novel - and true, computers are fast and capable, and people spend money on faster and more capable.  But at some point, all of these warnings will come to fruition (not may, WILL).  And we will have to repeat the lessons of the universe of Dune - if we survive.