Harvey Lloyd in Humans, Nature and Creativity · President, The LEAD Center, Ltd · Oct 24, 2018 · 3 min read

AI and the Social/Moral Dilemma

Image Credit: https://redmondmag.com/articles/2017/12/01/can-ai-protect-it.aspx

AI is coming of age.  Although true AI (Artificial Intelligence) is still in its infancy, the social structure is already beginning to do its thing.

That thing people do when they take their own perspective and run the computations out to infinity.  Depending on your starting point, we each arrive at a different belief about the future outcome.  Movies, commentaries and news all supply that starting point, depending on where we are at the moment of reading or listening.

Having given some thought to this moral dilemma of AI, I came to the conclusion that we are wrestling with the idea, in a future context, through a Carl Jung ego-level argument.  What I don't believe we have done yet is look at the future of AI from Jung's Shadow perspective.  Certainly we have experienced the fears that emanate from the shadow, but not in the context of what motivates the fear.

Under postmodern theory much of our value is derived from material things (science).  Careers, philanthropy, status and many other material concepts make up the matrix of a human's meaning.  Personal branding is a bellwether of this theory.  Our shadow concept of self is built around these very dynamic actions and reflections of experience each day.

Although AI is being offered as a way to reduce costs, increase free time and make life easier, it has a shadow outcome whose surface we are only now scratching.  AI is slowly replacing everything we use to develop our postmodern self.

We talk to bots that grow smarter than us each day, we see AI replacing routine, repetitive tasks, and we now see big data making decisions and choices at the highest levels.  Our meaning is being encircled and is under attack.  If only on the periphery, we are starting to sense the change and its possible impacts.  It is somewhat like the frog in the slowly heating water: it's comfortable now, and we may even add some bubbles to the bath, but in the deepest parts of our subconscious we sense a due date.

If AI exists that I cannot strive above, economic value is produced for me, and no material things I produced are available for me to share with my family and community, then why do I exist?


Many find meaning in the expansion of knowledge, with an endgame of financial success and independence.  We all have a view of helping others; the subjectivity enters when we place our own stabilized basic needs first.  This first priority, which sounds reasonable, is what AI may be threatening within our shadow, and what motivates the fear.

Much of life's meaning comes from sacrifice.  We sacrifice for each other daily.  I use my being, wisdom, education or physical capabilities in sacrifice for others.  This sacrifice is the territory AI is venturing into, and it makes us think about a future without that sacrifice.

Philosophers were wrestling with these issues long before AI was even a sci-fi concept.  Man's search for meaning has transcended the mystical for the scientific.  This experience has led to new and exciting meanings as we socialize each generation.  Now we find science replacing human endeavour (independence) at the very levels where we find meaning.

What are we doing if AI is taking care of our aging parents?

The social extension of AI into the future is not only an ethical issue; it also interferes more deeply, at the shadow level, with man's meaning.  Conceptually, in a post-AI world we will have to find meaning within each other, which is somewhat contrary to postmodern thinking.  We may find ourselves back at the mystical.

Why would I need a PhD in anything?

Google is already changing the way we educate at the deepest levels.  Information is no longer stored inside our minds but on a server farm somewhere, waiting for a need to arise.  What Henry Ford did for manufacturing, Google has done for information.  I need not memorize the books for later use in concept formation; I have Google.

Google's recent adventure in China demonstrates how it can fine-tune the engine for the moral concepts held by China.  Intelligent design can now control the flow of information according to morals and ideals created by others.

Information in conjunction with action is the road where we find meaning.  Whether it is a picture of a dinosaur brought home by our first grader or the grand revelations of the latest astronomer, meaning is found in these two categories of existence.  If we give Google, the information highway organizer, hands, feet and a mind within a body…

I found this thought through eclectic reading and listening across several articles and videos.  It intrigued me as we look at a future where our status or meaning is generally wrapped up in what we “do/own”.  If AI is replacing what we “do/own”, then where does mankind go to find meaning in this new world we are creating?

The marketed outcome of AI is certainly glorious, but it removes the postmodern meaning of the knowledge and actions of humanity.

With AI, although it is a scientific endeavor, we find ourselves deep inside our own heads wondering why we love it or hate it.

I would enjoy any thoughts on this topic.


Harvey Lloyd Oct 26, 2018 · #15

#14 I agree. The door, just as with the atomic bomb, is open.

Avoiding conflict of biblical proportions will be the goal. Phil, in your post you touched on one of my fears, as Silicon Valley and its worldview seem to be at the helm of AI. I am not encouraged.

There is a utopian view of the outcome along with one that is quite catastrophic. Each is, at this point, merely a straw man of future outcomes from an individual's perspective.

Again, I agree with Phil that we can't preach from this futuristic straw-man position. Rather, we need to talk about the low-resolution values by which all humans should be guided. This would enable us to examine future tech through humanity's common vision.

Right now it seems that each group has its own low-res value set (me included). AI in this atmosphere could develop disastrously.

Great discussion and i appreciate your engagement in this topic.

Zacharias 🐝 Voulgaris Oct 26, 2018 · #14

#13 Evolutionary successor implies that AI stems from us organically. This is not the case, no matter how clearly the futurists paint this picture. If advanced AIs succeed us, it won't be through a natural process but through a bloody conflict.

The possibility of an intelligent species serving another for millennia sounds quite idealistic and perhaps a bit twisted. How long will it be before these advanced AIs will become self-aware? What would you call this then? Definitely not symbiosis.

Speaking of symbiosis, perhaps that's a more likely scenario. For this to happen, however, people need to evolve too, instead of outsourcing everything to AIs. I somehow doubt that this scenario will be as popular, though, since it requires some effort on our part...

Harvey Lloyd Oct 25, 2018 · #13

#11 That is the discussion. Has humanity created its evolutionary successor, or brought to life its helper for the next millennia?

I don't think either path will be easy. There will be those left behind in this race of technology and those who are propelled forward.

Harvey Lloyd Oct 25, 2018 · #12

#10 The beginning of the end is already here. The question is what direction it will take, and whether it is sustainable across social strata.

We have traded the narrow funnel of information for the wide-open space of information. So much so that AI is starting to look possible. We can account for all forms of emotional action across social media. It can be analyzed and parsed for the best possible response.

Information is so vast now that folks like Google are beginning to understand they can manipulate it ever so slightly and focus their earnings. They need not lobby Washington; they can merely shift a few algorithms and many people get a snootful of biased information, challenging Washington's authority.

So the end has happened; the question is what the next step is.

I am on board with you: either this will develop into the best thing since canned beer, or we will be writing on cave walls again.

Mohammed Abdul Jawad Oct 25, 2018 · #11

Everywhere in the world, so much is said about artificial intelligence, advanced robotics, disruptive technology and digital transformation. One wonders what the fate of human beings could be. Will they become aliens in the coming years?

Pascal Derrien Oct 25, 2018 · #10

A postmodern society with no sense of purpose is a risk, but we have always managed to push the boundaries, so maybe this is just an adjustment in our evolution. With hindsight, who knows: will it be seen only as a small development in an industrial-revolution cycle, or as the beginning of the end? :-)
