So, How Does the Novel Intelligence Paradigm Apply to Artificial Intelligence?
Over the past month, I introduced a new framework I devised for modeling intelligence. This three-fold-turned-five-fold taxonomy looks at intelligence from a practical perspective: how it tackles problems, as well as how it learns. I didn't say much about the learning part, but it's something you can infer, as learning is inherently linked to asking questions like "why?", "how?", and "what?" These questions constitute the backbone of the paradigm, as they correspond to three of the intelligences that comprise it: the philosophical, the mechanical, and the mathematical intelligences, respectively (corresponding article).
Augmenting these three are the communicative and the collaborative intelligences, which enable the harmonious coordination of the other three and the ability to join forces with other individuals in a mutually beneficial way, usually toward a common goal. I also talked about the role of Ethics and Morality in all this (corresponding article). In this article, we'll look at how the framework applies to Artificial Intelligence (AI) and how it can mitigate the inherent risks this technology carries.
This article is not my first attempt to write about AI (nor will it be my last), as I'm very passionate about this topic. I even co-authored a book on AI and how it applies to data science, my field of expertise. My co-author currently works as an AI researcher at a research institute in Germany, and recently we were invited to a panel on the business aspects of AI, part of the Customer Technology World conference, which took place online. Anyway, enough about all that; all the traction in the world can only scratch the surface of a subject like this. The field of AI is more like an ocean, and we have merely expanded the coastline a bit. To proceed further, we may need a bigger boat!
Enter a different paradigm of intelligence, one that can help us first and foremost understand intelligence in ourselves, before we start training machines to develop and use it. Otherwise, AI can be quite dangerous, as thought leaders in this field have warned us repeatedly (e.g., Elon Musk). Whether these people are right or wrong, I don't know, but it doesn't take a superintelligence to see that we are skating on thin ice here. I recently had a very insightful chat with a Canadian engineer/scientist who works in the field of Quantum Computing and is one of the few people with access to state-of-the-art AI. When I asked him why this technology couldn't be made accessible to other people via an API, for example, he responded that it would be too dangerous. So, when I talk about AI (and Artificial General Intelligence, the logical next step beyond the state of the art), I have all this in mind.
AI currently excels at mechanical and mathematical intelligence. It has little to do with philosophical intelligence, although it is bound to get there too. Communicative intelligence is also improving, though it's doubtful it will reach the high-level communication we see in sci-fi films anytime soon. There isn't enough business value for this yet, plus the performance trade-off may not be justified in many use cases (e.g., AI systems that handle back-end tasks, where no one asks how the results come about). Collaborative intelligence is fairly developed, at least when it comes to other machines. Collaboration with humans is possible but not sufficiently developed, due to the lack of a common framework and the prowess gap between man and machine. However, in particular niche applications it is already a reality (e.g., chess teams comprising a human and a chess program), and there is potential for other applications too.
So, how can AI be developed and guided to embrace the other aspects of the five-fold framework of intelligence and bridge the gap of understanding between intelligent machines and us? Well, first and foremost, we could educate everyone on the topic of intelligence and the limitations of AI so that we are all on the same page. At the very least, we can have reasonable expectations of AI and not be swayed by the marketing of the futurist movement, which is overly optimistic about technology in general, IMHO.
What's more, we can design AI systems that are well-rounded in terms of intelligence, rather than super-experts in one particular kind of intelligence and hopeless idiots in all the others. This potential solution becomes significant when we start looking at the objective functions (aka fitness functions) that such systems try to optimize. To ensure a sustainable and safe evolution of an AI's functionality, we need to take baby steps instead of allowing it to accelerate uncontrollably, as it would if left untethered to reality.
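To make this a bit more concrete, here is a minimal sketch of what a balance-aware fitness function might look like. Everything in it is hypothetical: the five scores, the 0-to-1 scale, and the `balance_weight` parameter are illustrative assumptions on my part, not an established method.

```python
import statistics

def balanced_fitness(scores: dict[str, float], balance_weight: float = 0.5) -> float:
    """Reward overall competence while penalizing a lopsided profile."""
    values = list(scores.values())
    mean_score = statistics.mean(values)        # overall competence
    spread = statistics.pstdev(values)          # how uneven the profile is
    return mean_score - balance_weight * spread # imbalance acts as a penalty

# Hypothetical scores (0 to 1) on each of the five intelligences;
# in practice these would come from some battery of benchmarks.
specialist = {"philosophical": 0.10, "mechanical": 0.95, "mathematical": 0.90,
              "communicative": 0.20, "collaborative": 0.15}
generalist = {"philosophical": 0.60, "mechanical": 0.65, "mathematical": 0.70,
              "communicative": 0.60, "collaborative": 0.60}

print(balanced_fitness(specialist))  # ~0.27: two stellar scores don't save it
print(balanced_fitness(generalist))  # ~0.61: the balanced profile wins
```

The design choice here is simply that the penalty term makes a well-rounded system outscore a savant, even though the savant's best scores are far higher.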
Finally, we can detach AI development from organizations with vested interests that don't represent the whole. An advanced AI is inherently dangerous, but in the hands of someone who doesn't care about the rest of the world, it's more dangerous still. We may not be able to control how an AI system thinks, but we can control which objective functions it is exposed to, and whether those reflect objectives of questionable morality.
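As a rough illustration of that last point, here is a small sketch of one way a pipeline could enforce such control: an optimizer wrapper that refuses to run any objective function that hasn't passed a human ethics review. The `APPROVED_OBJECTIVES` registry and the `optimize` wrapper are hypothetical constructs for this example; the review itself remains a human process that code can enforce but never perform.

```python
from typing import Callable

# Hypothetical registry of objectives that have passed a human ethics review.
APPROVED_OBJECTIVES: set[str] = {"balanced_fitness"}

def optimize(objective: Callable[..., float], *args, **kwargs) -> float:
    """Evaluate an objective only if it has been vetted by a human reviewer."""
    if objective.__name__ not in APPROVED_OBJECTIVES:
        raise PermissionError(
            f"Objective '{objective.__name__}' has not passed ethics review."
        )
    # An actual optimization step would go here; for the sketch,
    # we simply evaluate the vetted objective.
    return objective(*args, **kwargs)
```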
Perhaps keeping a human in the loop of the whole AI development and maintenance process is a great rule-of-thumb solution, at least for the time being. However, we need to think ahead, intelligently and as holistically as possible, as we work on this technology. Nature may forgive our mistakes, but it's doubtful that AI would do the same if we fail to instill the right values in it…
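For what that rule of thumb can mean in practice, here is a minimal sketch of a develop-review-deploy loop in which no model update ships without explicit human sign-off. The `train_step`, `evaluate`, and `deploy` callables are placeholders I've assumed for the example; any real pipeline would substitute its own equivalents.

```python
from typing import Any, Callable

def human_approves(change_summary: str) -> bool:
    """Ask a human reviewer to sign off on a proposed model update."""
    answer = input(f"Deploy this update? {change_summary} [y/N] ")
    return answer.strip().lower() == "y"

def maintenance_cycle(train_step: Callable[[], Any],
                      evaluate: Callable[[Any], str],
                      deploy: Callable[[Any], None]) -> None:
    """One iteration of a develop-review-deploy loop with a human gate."""
    candidate = train_step()       # produce an updated model
    report = evaluate(candidate)   # summarize how its behavior changed
    if human_approves(report):
        deploy(candidate)          # only human-vetted updates ship
    else:
        print("Update rejected; candidate discarded.")
```

The point of the gate is structural: the deployment path simply does not exist without a human decision in the middle of it.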
If you enjoy articles like this but have a penchant for the technical side of things, you are probably going to like my technical blog, where I write about artificial intelligence and other data-related topics. Cheers!