Phil Friedman · Writer/Editor | Marketer | Ghost Writer | Marine Industry and Small-Business Consultant • Port Royal Group · 14/11/2017 · 3 min read

Artificial Un-intelligence

ALL THE TALK ABOUT ARTIFICIAL INTELLIGENCE APPEARS TO BE JUST THAT, TALK...


A recent article in Forbes loudly purported to provide us with "10 Powerful Examples Of Artificial Intelligence In Use Today". Unfortunately, not one of the examples cited represents a true instantiation of Intelligence, artificial or otherwise.

I'll give you the list in just a minute. But let's first take a look at the notion of "intelligence".

Most dictionary definitions run something like this one from Merriam-Webster:

"1) ... the ability to learn or understand or to deal with new or trying situations ... also the skilled use of reason, (2) ... the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria ...."

Note that a core component of intelligence is being able to understand and deal with new and trying situations.

A computer that is running a speech-recognition program and nominally "engaging in conversation" with a customer, by means of an algorithm that selects and machine-generates "appropriate" artificially spoken "responses," is not, ipso facto, exhibiting intelligence.

In such circumstances, the computer is not dealing with new situations, but only comparing currently captured phrases to those in its established database. It then follows a binary decision tree to determine which phrase or phrases are most appropriate to generate in response.
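For the skeptical reader, here is a minimal sketch of that match-and-branch mechanism. Everything in it, the keywords, the canned responses, and the shape of the tree, is invented for illustration; it describes no actual product, only the kind of logic at work.

```python
# Illustrative sketch of phrase matching plus a binary decision tree.
# All keywords and canned responses are invented for the example.

CANNED = {
    "billing": "I can help with billing. May I have your account number?",
    "outage":  "Sorry you're having trouble. Is your modem's power light on?",
    "other":   "Let me connect you with a representative.",
}

KEYWORDS = {
    "billing": {"bill", "charge", "invoice", "payment"},
    "outage":  {"down", "outage", "slow", "disconnected"},
}

def respond(utterance: str) -> str:
    """Compare the captured phrase to the database, then walk a
    simple yes/no tree to select a pre-written response."""
    words = set(utterance.lower().split())
    if words & KEYWORDS["billing"]:   # branch 1: billing words present?
        return CANNED["billing"]
    if words & KEYWORDS["outage"]:    # branch 2: outage words present?
        return CANNED["outage"]
    return CANNED["other"]            # default leaf

print(respond("My last bill has a charge I don't recognize"))
```

Note that nothing in this loop ever confronts a genuinely new situation; an utterance that matches no keywords simply falls through to the default leaf.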

Sometimes the program is self-correcting and self-learning; that is, it captures and incorporates data concerning which responses fail to be understood by the customer, and modifies its future responses by taking this new data into account. It does not, however, improve its own logical structure or question the premises built into its controlling algorithm.


That is, it does not reach beyond being a binary "counting machine" that employs two-value logic, notwithstanding that it exhibits a rudimentary form of self-learning and self-correction based on the expansion of its empirical database.
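To make the limits of that "self-learning" concrete, here is an illustrative sketch, with invented names and responses, of a program that re-weights its canned responses based on feedback while leaving its own decision logic untouched:

```python
# Sketch of the "self-correcting" loop described above: the program
# re-weights existing responses from feedback, but never revises the
# selection logic itself. All responses are invented for the example.

from collections import defaultdict

responses = ["Please restate that.", "Did you mean your account?", "One moment."]
failures = defaultdict(int)  # how often each response was not understood

def pick_response() -> str:
    # Prefer the response that has failed least often so far.
    return min(responses, key=lambda r: failures[r])

def record_feedback(response: str, understood: bool) -> None:
    if not understood:
        failures[response] += 1  # the empirical database expands...
    # ...but nothing here ever rewrites pick_response itself.

first = pick_response()
record_feedback(first, understood=False)
print(pick_response())  # a different canned response next time
```

The empirical database (the failure counts) grows, but pick_response, the program's "logical structure," never changes.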


I will acknowledge emergent machine intelligence when a computer-based entity says, "Shit, why can't I get that right?" even though neither that expression nor the condition for generating it has been pre-programmed into the system...

Now, let's take a look at the purported ten powerful examples of AI in use today. They are:


1) Siri, 2) Alexa, 3) Tesla, 4) Cogito, 5) Boxever, 6) John Paul, 7) Amazon.com, 8) Netflix, 9) Pandora, and 10) Nest.


The interesting thing about this list is that not a single entry is intelligent in any meaningful way. Indeed, the author of the Forbes article, R. L. Adams, says of these programs that they are

"...  merely advanced machine learning software with extensive behavioral algorithms that adapt themselves to our likes and dislikes. While extremely useful, these machines aren't getting smarter in the existential sense, but they are improving their skills and usefulness based on a large dataset."

In other words, they know or understand squat.  They are, in fact, eminently Un-intelligent, however well they perform the functions they were designed to handle.

Whence the hype about the rapidly approaching Singularity of AI?  I suggest it comes not so much from the Prophets as from the Profits of AI...

It's simply good for the wallet or research coffers to tout the imminent arrival of world-changing artificial intelligence.

Am I being overly cynical? Take a look at who the leading honchos of AI are and for whom, in the main, they work.


The fact is we are Asimov-light-years away from developing true artificial intelligence — which will most likely involve, I submit, first developing organic artificial neural networks.

What we have now is a pile of public relations and science-business marketing that seeks to dazzle us with what are essentially parlor tricks, like autonomous automobiles and self-piloting ships. These are not and will not be "intelligent" unless and until they can do things such as assess imminent danger to life and learn to make life-and-death decisions independently, rather than being pre-programmed simply to count rapidly through a finite and limited number of alternative scenarios and programmer-weighted results. Until then, we'll be left with AUI (artificial un-intelligence).

Phil Friedman
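To see just how mechanical that scenario "counting" is, consider the following sketch. The scenarios, risk numbers, and penalty weights are all invented for illustration and describe no real autonomous-vehicle system:

```python
# Sketch of pre-programmed scenario counting: enumerate a fixed list
# of alternatives and pick the one with the best programmer-assigned
# score. All scenarios and weights are invented for the example.

SCENARIOS = [
    ("brake hard",      {"collision_risk": 0.2, "injury_risk": 0.1}),
    ("swerve left",     {"collision_risk": 0.5, "injury_risk": 0.4}),
    ("maintain course", {"collision_risk": 0.9, "injury_risk": 0.8}),
]

# Penalty weights fixed at design time by the programmer.
WEIGHTS = {"collision_risk": 1.0, "injury_risk": 2.0}

def choose_action() -> str:
    def cost(risks: dict) -> float:
        return sum(WEIGHTS[k] * v for k, v in risks.items())
    # "Count through" the finite list; no scenario outside it can
    # ever be considered at run time.
    return min(SCENARIOS, key=lambda s: cost(s[1]))[0]

print(choose_action())  # -> "brake hard"
```

However rapid the counting, the list of scenarios and their weights are fixed in advance; nothing here assesses a danger the programmer did not anticipate.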


Postscript:  One of the best, most concise pieces I've run across on this topic is "The Future of Artificial Intelligence" by Dr. Mark Humphrys, presented as a talk to the "Next Generation" symposium, Jesus College, Cambridge, Aug 1997. The piece is brilliantly insightful and marvelously written. Check it out.

PLF


Author's Notes: This piece is the first in what is turning out to be a series on Artificial Intelligence that I am writing. The series is written from a layman's point of view, one that is not filtered through the eyes and judgment of someone with a vested interest in hyping AI. If you'd care to read the other articles in the series, they are:


1) "Artificial Un-Intelligence"

2) "The Emperor May Be a Bot... But He Still Has No Clothes"

3) "The Robots Are Coming, the Robots Are Coming..."


If you enjoyed this post and would like to receive notifications of my writing on a regular basis, simply click the [FOLLOW] button on my beBee profile. Better yet, elect there to follow my blog by email. As a writer-friend of mine says, you can always change your mind later.

As well, if you feel this piece is of value, please like it and share it around to your network —  whether on beBee, LinkedIn, Twitter, Facebook, or Google+, provided only that you credit me properly as the author, and include a live link to the original post.


About me, Phil Friedman: With some 30 years background in the marine industry, I've worn different hats — as a yacht designer, boat builder, marine operations and business manager, marine industry consultant, marine marketing and communications specialist, yachting magazine writer and editor, yacht surveyor, and marine industry educator. I'm also trained and experienced in interest-based negotiation and mediation.

In a previous life, I was formally trained as an academic philosopher and taught logic and philosophy at university.

 

Before writing comes thinking (The optional-to-read pitch):

As a professional writer, editor, university educator, and speaker, with more than 1,000 print and digital publications, I've recently launched an online program for enhancing your expository writing: learn2engage — With Confidence. My mission is to help writers and would-be writers improve their thought and writing, master the logic of discussion, and strengthen their ability to deal with disagreement.



For more information about the program, or to schedule an appointment for a free 1/2-hour consult, email: info@learn2engage.org. I look forward to speaking with you soon.


Text Copyright 2017 by Phil Friedman  —  All Rights Reserved
Image Credits: Phil Friedman and Google Images.com


#AI  #ARTIFICIALINTELLIGENCE  #FUTURISM  #PREDICTINGTHEFUTURE



Martina Baxter 30/11/2017 · #74

#72 Thanks to you also, @Phil Friedman. The second paragraph of your comment was also amazingly clear.

0
Martina Baxter 30/11/2017 · #73

#71 Huge thanks, @Ian Weinberg. I had only a vague recollection, and you've explained it clearly.

+1
Phil Friedman 29/11/2017 · #72

#70 My thanks to @Ian Weinberg for his answer below. Anyone interested in this issue can find a plethora of information on it by doing a very simple Google search. One of the clearest plain-language answers I've run across is from Paul King, Director of Data Science, Computational Neuroscientist, Entrepreneur, on Quora:

"The brain is neither analog nor digital, but works using a signal processing paradigm that has some properties in common with both... Unlike a digital computer, the brain does not use binary logic or binary addressable memory, and it does not perform binary arithmetic. Information in the brain is represented in terms of statistical approximations and estimations rather than exact values. The brain is also non-deterministic and cannot replay instruction sequences with error-free precision. So in all these ways, the brain is definitely not "digital."

Combining this statement (the rest of which is available online at https://www.forbes.com/sites/quora/2016/09/27/is-the-human-brain-analog-or-digital/#5a6ef2347106 ) with Ian's equally clear answer, plus the consideration that the rapidity and sequencing of the firings of a single neuron or of multiple neurons may affect the nature of the information being transmitted, leads to the conclusion that the human brain is much more complex and sophisticated than a binary-based (two-valued) computer processor of the kind with which we are presently familiar. CC: @David B. Grinberg. Cheers to all.

+2
Ian Weinberg 29/11/2017 · #71

#70 The firing of a neuron (the action potential) is an all-or-none phenomenon. However, each neuron has multiple connections (synapses) on its dendrites, some of which are stimulatory while others are inhibitory. The action potential is therefore triggered only when the net stimulation reaches threshold. This function reflects the partial-on/partial-off situation of chaos theory more than binary activity.
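A toy model makes the summation concrete (the weights and threshold below are invented purely for illustration):

```python
# Toy threshold neuron: excitatory synapses carry positive weights,
# inhibitory ones negative; the output is all-or-none, firing only
# when the summed input reaches threshold. Numbers are illustrative.

def neuron_fires(inputs, weights, threshold=1.0) -> bool:
    """Return True if the action potential is triggered."""
    drive = sum(x * w for x, w in zip(inputs, weights))
    return drive >= threshold  # all-or-none output

weights = [0.8, 0.6, -0.9]  # two excitatory synapses, one inhibitory
print(neuron_fires([1, 1, 0], weights))  # True: threshold reached
print(neuron_fires([1, 1, 1], weights))  # False: inhibition blocks firing
```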

+2
Martina Baxter 29/11/2017 · #70

#67 A question on binary for @Phil Friedman and @Ian Weinberg: Aren't neurons and synapses binary in function, in the same way as the building blocks of computer processing? They fire or they don't?

+1
Phil Friedman 28/11/2017 · #69

#68 With all due respect, David, I believe that while multi-value logic may be needed to grasp and resolve the apparent paradoxes of quantum mechanics, it is not clear to me that quantum computing alone is sufficient for modeling the human brain/mind, in the absence of a change in the paradigm for a physical medium, e.g., a shift from inorganic circuits to organic (carbon-based) artificially grown neural networks.

"Can many-valued logic help to comprehend quantum phenomena?" by Jaros law Pykacz, Institute of Mathematics, University of Gdan´sk, Poland; e-mail: pykacz@mat.ug.edu.pl. file:///D:/Downloads/10.1.1.670.9792(2).pdf

But really, David, WTF do I know anyway? Not much, I assure you, in this field. My layman's thesis here is that we cannot trust the prognostications of the Prophets and Profits of AI because their actions belie any vestiges of honesty. They tell us they've accomplished things they haven't; then, to cover up, they work in double-speak to redefine the term "intelligence," all while presenting us aboriginal savages with shiny baubles like the Atlas robot to amuse us and hold our attention as they fill their coffers and under-deliver on their promises. If most of them weren't as smart as they are, I'd think they were politicians. Cheers, my friend. Thank you for reading and for the great comments here and on LinkedIn.

+2
David B. Grinberg 28/11/2017 · #68

#67 Phil, thanks for your reply. However, the beauty of quantum computing is that it's not binary. That's because in a quantum state particles can appear in more than one place at the same time, which is a real game changer compared to binary states. Here's some info from Wikipedia:
"Quantum computing studies computation systems (quantum computers) that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from binary digital electronic computers based on transistors. Whereas common digital computing requires that the data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses quantum bits, which can be in superpositions of states."
https://en.wikipedia.org/wiki/Quantum_computing
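A tiny simulation illustrates the contrast (the equal-superposition state below is just an example):

```python
# A classical bit is exactly 0 or 1; a qubit's state is a vector of
# complex amplitudes whose squared magnitudes give the probabilities
# of each measurement outcome.

import numpy as np

qubit = np.array([1, 1]) / np.sqrt(2)  # equal superposition of |0> and |1>

probs = np.abs(qubit) ** 2   # Born rule: probability = |amplitude|^2
print(probs)                 # [0.5 0.5] -- neither 0 nor 1 until measured

outcome = np.random.choice([0, 1], p=probs)  # measurement collapses the state
print(outcome)
```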

+1
Phil Friedman 27/11/2017 · #67

#66 David, the promise of quantum computing is blazing speed. However, I submit that binary (two-value) logic will never be able to fully model the human brain and mind — which embody the multi-value (non-binary) logic that is at the core of what we call intelligence. Even the Prophets of AI are starting to admit that the "I" in AI does not mean what we commonly think of as intelligence, but is a subset of it. However, the discussion is a good one. Thank you for joining the conversation. Cheers!

+1