Technology helps us with many things, and we expect Artificial Intelligence (AI) to give us much more in the future. However, there are certain risks involved. In science fiction, AI has been depicted as something apocalyptic: an Artificial General Intelligence or Superintelligence takes over the world using its thinking power. Humans become slaves, laboratory animals, zoo or reservation inhabitants, or are simply exterminated. That has been fiction. Not anymore. Recent technological developments, especially in Machine Learning, and AI achievements in complex games, for example, have created worries about the imminence of such an apocalypse.
The discussions focus on issues such as the probability of AI acquiring an independent existence of its own, transforming us into something we do not want to be, or affecting or even directing evolution in a radically different direction. Not everyone agrees on whether any of these things will happen, or when they might.
AI is seen as a technology providing answers, products, and services in order to satisfy our needs, solve our problems, and make our world balanced and perfect. Accordingly, the discussion about its benevolence or cruelty is about whether what it delivers will be good or bad for humans, animals, or the whole universe. This is a significant issue, and we have to handle it somehow.
We suggest a different approach. It would be possible to handle the issue of AI's impact if we changed focus from the product to the process: AI designed to help us use the “right” process of thinking, instead of delivering answers to make our world perfect.
In order to design such an AI we need to know what we want. Answering this question demands knowledge about what we are. Are we recipients of services and products that we need according to our nature? Only that? Partly that? Are we recipients, but through ourselves, through our thinking and through our choices? Or are we only thinking and choices, a kind of Socratic psyche?
If we think we are only recipients, and design AI to succeed in making our world perfect, we may soon go to ruin like the old despots who could have all their wishes satisfied. Our thinking, our making of choices, and our feeling of anxiety will unavoidably languish and fade away. It also seems that this would rapidly lead to the emergence of an independent AI with goals and an existence of its own: not only because no one will be there to stop it, but also because, from the very beginning, there will be a well-defined goal for AI to work toward as best it can.
If we design AI to make us think exclusively in the “right” way, it will never leave us in peace. It will soon perplex our minds to the point of dissolution, meaning we will not exist anymore. On the other hand, AI would have a very clear goal to achieve and, undisturbed because of our non-existence, should very quickly make itself independent.
If we base the design of AI on the idea that we are both processors and recipients, it could be just right. This approach would accord with the idea that thinking and knowledge are interdependent, and that we think in order to solve our problems and satisfy our needs. Moreover, the goal would not be well-defined: Delivery or choice? Both delivery and choice? Who chooses? Who delivers? Who thinks?
This can be seen as an independent seminar, but also as a follow-up to the seminar on AI we recently had, when we watched movies and discussed the risks of AI.