Zacharias 🐝 Voulgaris in Artificial Intelligence, Writers, Technology · CTO • Data Science Partnership · 27/2/2018 · 1 min read

The Hidden Dangers of A.I. (excerpt)


    Overview

    Recently I had a couple of very insightful conversations with some people, over drinks or coffee. We talked about A.I. systems and how they can pose a threat to society. The funny thing is that none of these people were A.I. experts, yet they had a very mature perspective on the topic. This led me to believe that if non-experts have such concerns about A.I., then perhaps it’s not as niche a topic as it seemed. By the way, the dangers they pinpointed had nothing to do with robots taking over the world in some Hollywood-like scenario; they were far more subtle, just like A.I. itself. Also, they are not about how A.I. might hurt us sometime in the future but about how its dangers have already started to manifest. So, I thought about this topic some more, going beyond the generic and rather vague warnings that some individuals have shared with the world in interviews. The main dangers I’ve identified through this quest are the following:

    * Over-reliance on A.I.

    * Degradation of soft skills

    * Inevitable bugs in automated processes

    * Lack of direct experience of the world

    * Blind faith in tech infused with A.I.


    Interestingly, all of these have more to do with us, as people, than with the adaptive code that powers these artificial mental processes we call A.I.


    Over-reliance on A.I.

    Let’s start with the most obvious pitfall: over-reliance on this new tech. In a way, this is already happening to some extent, since many of us use A.I. without even realizing it and have come to depend on it. Pretty much every system running on a smartphone that makes the device “smart” is something to watch out for. From virtual assistants to adaptive home screens to social chatbots, these are A.I. systems we may become so used to that we won’t be able to do without them. Personally, I don’t use any of these, but as the various operating systems evolve, they may not leave users a choice when it comes to the use of A.I. in them.


    Degradation of Soft Skills

    Soft skills are something many people talk about, and even more have come to value, especially in the workplace. However, with A.I. becoming more and more of a smooth interface for us (e.g. with customer service bots), we may not be as motivated to cultivate these skills. This inevitably leads to their degradation, along with the atrophy of related mental faculties, such as creativity and intuition. After all, if an A.I. can provide us with viable solutions to problems, why would we feel the need to think outside the box in order to find them? And if an A.I. can make connecting with others online very easy, why would someone opt for face-to-face connections instead (unless their job dictates it)?


    Unfortunately, this article is a bit longer than is appropriate for a social media post. If you enjoyed it, feel free to check out the full version at my data science / A.I. blog. In any case, I'd very much like to hear your views on this topic, as I find the discussions that follow articles here very insightful. Thank you for reading!



    #1 Thank you Ali for your feedback! Perhaps if we were to see it as yet another technology, making sure we keep fail-safes in place, instead of cultivating it without restraint or giving it access to the Internet, we could create a situation of synergy. That's what I mean by keeping it as "an auxiliary technology."


    We can't get anything for nothing. We pay for the threats of A.I. The option of excluding it is not viable. So, the question is how to minimize the risks of A.I. while enhancing its benefits. The threats you outline are valid. Thank you @Zacharias 🐝 Voulgaris for this enlightenment.
