Geoff Hudson-Searle · Publishers & Bloggers, IT - Information Technology, beBee in English · CEO, HS Business Management Limited · 30/5/2018 · 5 min read

Robots are surely not going to destroy the planet, or are they?

Elon Musk, the mastermind behind SpaceX and Tesla, believes that artificial intelligence is “potentially more dangerous than nukes,” imploring all of humankind “to be super careful with AI,” unless we want the ultimate fate of humanity to closely resemble Judgment Day from Terminator. Personally, I think Musk is being a little futuristic in his thinking; after all, we have survived more than 60 years of the threat of thermonuclear mutually assured destruction. Still, it is worth considering Musk’s words in greater detail, and clearly he has a point.

Musk made his comments on Twitter back in 2014, after reading Superintelligence by Nick Bostrom. The book deals with the eventual creation of a machine intelligence (artificial general intelligence, AGI) that can rival the human brain, and our fate thereafter. While most experts agree that a human-level AGI is by this point all but inevitable (it’s just a matter of when), Bostrom contends that humanity still has a big advantage up its sleeve: we get to make the first move. This is what Musk is referring to when he says we need to be careful with AI: we’re rapidly moving towards a Terminator-like scenario, but the actual implementation of these human-level AIs is down to us. We are the ones who will program how the AI actually works. We are the ones who can imbue the AI with a sense of ethics and morality. We are the ones who can implement safeguards, such as Asimov’s three laws of robotics, to prevent an eventual robot holocaust.

In short, if we end up building a race of super-intelligent robots, we have no one but ourselves to blame, and Musk, sadly, is not too optimistic about humanity putting the right safeguards in place. In a second tweet, Musk says: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Here he’s referring to humanity’s role as the precursor to a human-level artificial intelligence: after the AI is up and running, we’ll be deemed superfluous to AI society and quickly erased.

Stephen Hawking warned that technology needs to be controlled in order to prevent it from destroying the human race.
The world-renowned physicist, who has spoken out about the dangers of artificial intelligence in the past, believes we all need to establish a way of identifying threats quickly, before they have a chance to escalate.

“Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages,” he told The Times.

“It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason.”

In a Reddit AMA back in 2015, Professor Hawking said that AI could grow so powerful it would be capable of killing us entirely unintentionally.

“The real risk with AI isn’t malice but competence,” Professor Hawking said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.


“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”


The theoretical physicist Stephen Hawking, who died recently aged 76, said last year that he wanted to “inspire people around the world to look up at the stars and not down at their feet”. Hawking, who until 2009 held a chair at Cambridge University once occupied by Isaac Newton, was uniquely placed to encourage an upwards gaze.

Enfeebled by amyotrophic lateral sclerosis, a form of motor neurone disease, he displayed extraordinary clarity of mind. His ambition was to truly understand the workings of the universe and then to share the wonder.

Importantly, he warned of the perils of artificial intelligence and feared that the rise of the machines would be accompanied by the downfall of humanity. Not that he felt that human civilisation had particularly distinguished itself: our past, he once said, was a “history of stupidity”.



Stephen Hawking had much to say on the future of tech; after all, he was an expert: Hawking was one of the first people to become connected to the internet.

“So, we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.

“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation.”


While he saw many benefits to artificial intelligence – notably, the Intel-developed computer system ACAT that allowed him to communicate more effectively than ever – he echoed entrepreneurial icons like Elon Musk in warning that the full realisation of AI’s potential could also “spell the end of the human race.”

Stephen Hawking co-authored an ominous editorial in the Independent warning of the dangers of AI.

The theories for oblivion generally fall into the following categories (and they miss the true danger):
– Military AIs run amok: AIs decide that humans are a threat and set out to exterminate them.
– The AI optimization apocalypse: AIs decide that the best way to optimize some process – their own survival, spam reduction, whatever – is to eliminate the human race.
– The resource race: AIs decide that they want more and more computing power, and the needs of meager Earthlings are getting in the way. The AI destroys humanity and converts all the resources, biomass – all the mass of the Earth, actually – into computing substrate.
– Unknowable motivations: AIs develop some unknown motivation that only supremely intelligent beings can understand, and humans are in the way of their objective, so they eliminate us.
I don’t want to discount these theories. They’re all relevant and vaguely scary. But I don’t believe any of them describe the actual reason why AIs will facilitate the end of humanity.

As machines take on more jobs, many find themselves out of work or with raises indefinitely postponed. Is this the end of growth? No, says Erik Brynjolfsson:  

https://www.ted.com/talks/erik_brynjolfsson_the_key_to_growth_race_em_with_em_the_machines?utm_campaign=tedspread&utm_medium=referral&utm_source=tedcomshare


Final thought: Artificial Intelligence will facilitate the creation of artificial realities – custom virtual universes – that are so indistinguishable from reality that most human beings will choose to spend their lives in these virtual worlds rather than in the real world. People will not breed. Humanity will die off.

It’s easy to imagine. All you have to do is look at a bus, subway, city street or even restaurant to see human beings unplugging from reality (and their fellow physical humans) for virtual lives online.

AIs are going to create compelling virtual environments in which humans will voluntarily immerse themselves. At first these environments will be for part-time entertainment and work. The first applications of AI will be for human augmentation. We’re already seeing this with Siri, Indigo, EVA, Echo and the proliferation of AI assistants.

AI will gradually become more integrated into human beings, and virtual platforms like Oculus and Vive will become smaller, much higher in quality and integrated directly into our brains.

AIs are going to facilitate tremendous advances in brain science. Direct human-computer interfaces will become the norm, probably not with the penetrative violation of the Matrix’s I/O ports, but more with the elegance of a neural lace. It’s not that far off.

In a world with true general AI, machines are going to get orders of magnitude smarter very quickly as they learn how to optimize their own intelligence. Human and AI civilization will quickly progress to a post-scarcity environment.

And as the fully integrated virtual universes become indistinguishable from reality, people will spend more and more time plugged in.
Humans will not have to work; there will be no work for humans. Stripped of the main motivation most people have for doing anything, people will be left to do whatever they want.

Want to play games all day? Insert yourself into a Matrix-quality representation of Game of Thrones where you control one of the great houses. Go ahead. Play for years with hundreds of friends.

Want to spend all day trolling through the knowledge of the world in a virtual, fully interactive learning universe? Please do. Every piece of human knowledge can be available, and you can experience recreations of historical events first-hand.

Want to explore space? Check out this fully immersive experience from an unmanned Mars space-probe. Or just live in the Star Wars or Star Trek universe.

Want to have a month-long orgasm with the virtual sex hydra of omnisport? Enjoy; we’ll see you in thirty days. Online, of course. No one dates anymore.

Well, some people will date. They will date AIs. Scientists are already working on AI sex robots. What happens when you combine the intelligence, creativity and sensitivity embodied by Samantha in the film Her with an android that is anatomically indistinguishable from a perfect human (Ex Machina, Humans, etc.)?

Deep learning algorithms will find out your likes, dislikes and how to charm your pants off. The AIs will be perfect matches for your personality. They can choose your most desirable face and body type, or design their own face and attire for maximum allure.


Predicting the future is always a difficult matter. We can rely only on the predictions of experts and on observations of the technology that already exists; even so, it’s impossible to rule anything out.

We do not yet know whether AI will usher in a golden age of human existence, or if it will all end in the destruction of everything humans cherish. What is clear, though, is that thanks to AI, the world of the future could bear little resemblance to the one we inhabit today.

An AI takeover is a hypothetical scenario, but a robot uprising could be closer than ever predicted: AI becomes the dominant form of intelligence on Earth, with computers or robots effectively taking control of the planet away from the human species. The Astronomer Royal, Sir Martin Rees, believes machines will replace humanity within a few centuries.

Possible scenarios include replacement of the entire human workforce, takeover by a super-intelligent AI, and the popular notion of a robot uprising. Some public figures that we have discussed in this blog, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future super-intelligent machines remain under human control.

We need to watch this space…

As Masayoshi Son once said:

“I believe this artificial intelligence is going to be our partner. If we misuse it, it will be a risk. If we use it right, it can be our partner.”





#7 I do not ignore them; my respects to these figures (Stephen Hawking and Elon Musk). I am only giving my opinion, one among the thousands or millions that could be given, and I think mine is not far from what is written in the article you suggested, which I value a lot.
As long as it is not expressed lightly, a difference of criteria will also be very positive in a community.

+1
Geoff Hudson-Searle 3/6/2018 · #9

#5 @Roberto De la Cruz Utria you may find this article and insights interesting; https://www.bbc.co.uk/news/technology-44040008

+1
Geoff Hudson-Searle 2/6/2018 · #8

#4 @Jerry Fletcher as always, just loved your comments. If you look around you will see that AI, Machine Learning and Robotics are simply part of our way of life these days; the question for all of us is whether we allow AI to take over our lives. I believe even our local doctors will deploy the technology this year. "I believe 2018 is the year that this will start to become mainstream, to begin to impact many aspects of our lives in a truly ubiquitous and meaningful way," says Ralph Haupter, the president of Microsoft Asia. The idea that computers have some amount of "intelligence" is not new, says Haupter, pointing as far back as 1950, when computer pioneer Alan Turing asked whether machines can think. "So it has taken nearly 70 years for the right combination of factors to come together to move AI from concept to an increasingly ubiquitous reality," says Haupter. Those factors are the mass distribution and use of Internet-connected devices, which generate massive quantities of data, and cloud computing and software algorithms that can recognize patterns within data, he says.

0
Geoff Hudson-Searle 2/6/2018 · #7

#3 Thank you @Roberto De la Cruz Utria for your insights and opinions. I do feel that AI, Machine Learning and Robotics have come a long way; the problem is that all of the commentary from Stephen Hawking and Elon Musk cannot really be ignored. The fact is we are now exploring human and AI capability so that we can keep up with the speed of life. As I have stated recently, AI is only as good as the information (good or bad) that you feed the machine, which ultimately means that it cannot really decipher right from wrong; robots, like autonomous and flying cars, will need regulating.

+1
Geoff Hudson-Searle 2/6/2018 · #6

#2 Thank you @Debasish Majumder for your wonderful comments

+1

In the face of possible malevolent robots and insufficient control mechanisms, these can always be fought, just as we fight powerful, well-armed terrorists, traffickers or hackers today, who win battles occasionally but in the end lose or have to hide.
Science fiction is very beautiful, but sometimes we do not see the border with fantasy. For example, in Terminator, its sequels and other such films, even physical laws and common sense are arguably violated, as in a cartoon.

+1
Jerry Fletcher 31/5/2018 · #4

Geoff, The fact that I probably won't be around to see AI come even close to replacing humans doesn't matter. This is one of those questions which can haunt or motivate us. Trouble is, there is no singular control mechanism in place, and I, for one, hope there never is. I grew up on Science Fiction, and it is there I would turn to get a glimpse of the future. The variations are many, but I believe that the changes will occur a little at a time and will tend to sort out how folks deal with it. My guess is that Pareto will once again be in the driver's seat: 80% will subsume themselves to the allure of emotional fulfillment via machine, and 20% will be left to continue the design and engineering of robotic production. And so it goes.

+2

Interesting buzz. In my humble opinion, AGI would take a long time to arrive, and it would not arrive massively and suddenly, so we would have time to react and take the necessary measures to avoid undesirable damage.
Precisely because AGI could surpass the human brain, it could perhaps ensure that negative attitudes are minimized, such as being impulsive, aggressive, lacking empathy, being unpredictable, etc., something impossible to achieve in humans.
Our brain cannot be redesigned, unlike AI, which can always be enriched for good, as has been done for years.
Machines run on energy, and energy can always be manipulated and monitored, even remotely.
And of course there will always be the possibility that powerful malicious AGIs will be developed, so we will have the task of combating them, establishing minimum standards of security and functioning for developers that would allow certifying the use of future AGI.

+1