
Will the future still need man?

 

“Weak” and “Strong” Artificial Intelligence

Some scientists, such as Stephen Hawking and Stuart Russell, believe that if advanced AI someday gains the ability to redesign itself at an ever-increasing rate, an unstoppable “intelligence explosion” could lead to human extinction (the Technological Singularity). There is no doubt that studies and research on Artificial Intelligence, if directed solely toward commercial profit, military power and technological speculation, represent today one of the greatest existential threats to humanity.

Many laboratories around the world are developing AI-related technologies, but only some of them make their progress known. Yet even from the fragmented signals we do receive, it is easy to recognize an exposure to significant risks, however masked by the opaque veil of “modernity”.

To clarify and avoid generalizations: simple, beneficial implementations of AI have long been part of our everyday lives. Every time we switch on a latest-generation washing machine, or browse Google or Facebook, complex adaptive algorithms guide us and facilitate the task, learning our behavior and our needs.

I recently installed a new “smart” thermostat for my home’s heating. After a short learning period, during which the system learns my temperature requirements and the thermal characteristics of the house and radiators, the program proposes its “intelligent” heating schedule. Through its algorithms, the application adjusts the on/off switching of the boiler, taking into account the radiators’ thermal hysteresis, the cooling behavior of the house and the thermal efficiency of the boiler, keeping the temperature at the desired level and saving about 20% of energy.
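As a very rough illustration of the idea – a minimal sketch with invented parameters, not the vendor’s actual algorithm – the heart of such a controller can be reduced to an on/off rule with hysteresis that anticipates the radiators’ residual heat:

```python
# Minimal sketch of an on/off boiler controller with hysteresis and a simple
# "anticipation" term for the radiators' residual heat. All parameters
# (hysteresis band, expected overshoot) are invented for illustration; a
# "learning" thermostat would estimate them from the observed heating and
# cooling curves of the house instead of using fixed values.

def boiler_command(current_temp: float,
                   target_temp: float,
                   boiler_is_on: bool,
                   hysteresis: float = 0.5,   # deg C below target before switching on
                   overshoot: float = 0.2) -> bool:  # deg C the room keeps rising after switch-off
    """Return True to fire the boiler, False to switch it off.

    Assumes overshoot < hysteresis; otherwise the controller would cycle rapidly.
    """
    if boiler_is_on:
        # Switch off a little early: the hot radiators keep warming the room
        # for a while after the boiler stops.
        return current_temp < target_temp - overshoot
    # Switch on only once the room has clearly dropped below the target.
    return current_temp < target_temp - hysteresis
```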

However, these are cases of “Weak-AI”: a non-sentient Artificial Intelligence focused on a single narrow task. In contrast, the long-term goal of many international researchers is “Strong-AI”, defined critically by the philosopher John Searle as the claim that “the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds”. While Weak-AI can surpass human beings at its specific task, such as playing chess or solving equations, Strong-AI is intended to surpass humans in most of their cognitive functions.

 

Androids in the collective imagination

The android is an artificial being, a robot with human form, found mostly in fiction and the collective imagination. There are many mythological “artificial persons”: the Golem of Jewish legend, Galatea in Greek mythology, or the “homunculus” of the alchemical treatises (Paracelsus).

In its earliest literary or film representations (for example, the woman-robot in Metropolis – 1927), the boundary between the android and the human being was always clear and well delimited: the former still had a mechanical or easily recognizable appearance, so as to avoid any confusion of roles. But in 1982, Ridley Scott’s cult movie “Blade Runner” introduced an anthropomorphic android, perfectly indistinguishable from humans. Since then, the anxiety generated by this overlapping of roles has multiplied through many works of fiction imagining a future in which humanity must necessarily cope with other beings identical to humans (Real Humans, AI – Artificial Intelligence, Alien, Ex Machina).

It is interesting – however – to note that in George Lucas’ colossal saga “Star Wars”, which began in 1977 and reached its seventh episode in 2015, although populated by countless alien worlds and living beings, the android robots are always easy to distinguish from other life forms. In other words, Lucas suggests a future in which the intelligence boundary between biological and mechanical structures is never crossed. Indeed, the evil character Darth Vader, although kept alive by a complex bio-technological system, retains the use of his human mind.

 

Why an anthropomorphic robot?

Hanson Robotics Inc. is a company in Richardson (Texas) founded by David Hanson in 2003. Their motto is: “We bring robots to life” and this is their mission statement: “Our long-term mission is to dramatically improve people’s everyday lives with affordable, highly intelligent robots that teach, serve, entertain, and are capable of developing a deep relationship with people. In time, we hope our intelligent robots will come to truly understand and care about people and evolve greater-than-human wisdom, to the point that they will one day be able to address and solve some of the most challenging problems we face“.

This statement, although clearly formulated to send a positive message and to catch the market’s eye, instead creates a deep sense of bewilderment.

Here is a short video of Sophia, a humanoid robot created by Hanson Robotics, conversing with a reporter. David Hanson developed the project in detail, including its aesthetics, because he is convinced that people will be more willing to interact with artificial intelligence if it has a humanlike form and expressions.

Another video shows facial-expression experiments on a robot perfectly modeled on Albert Einstein’s likeness: <link to video>

At this point a fundamental question must be raised: if robots are designed to assist with and facilitate common human functions, why do they need to have an anthropomorphic appearance? In the case of the Einstein robot, it seems clear that a human interlocutor will more readily place his confidence in the words and gestures of Dr. Einstein than in what comes out of a machine’s loudspeaker! In this way, however, trust also becomes a dangerous reduction of critical and evaluative skills.

In every known culture, humans experience joy, sadness, disgust, anger, fear, and surprise, and indicate these emotions using the same facial expressions. We all run the same engine under our hoods, though we may be painted different colors; a principle which evolutionary psychologists call the psychic unity of humankind. When something is universal enough in our everyday lives, we take it for granted to the point of forgetting it exists.

Not surprisingly, human beings often “anthropomorphize”, projecting humanlike properties onto that which is not human. In The Matrix (1999), the supposed “artificial intelligence” Agent Smith initially appears utterly cool and collected, his face passive and unemotional. But later, while interrogating the human Morpheus, Agent Smith gives vent to his disgust with humanity, and his face shows the human-universal facial expression for disgust. In a relationship between humans, it is natural to rely on these adaptive instincts to better understand the feelings and emotions of our interlocutor. However – when the interaction involves a non-human counterpart – anthropomorphism is a seductive trap that often leads to error.

This anthropomorphic bias is insidious: it takes place with no deliberate intent, without conscious realization, and in the face of apparent knowledge. Building a robot with an anthropomorphic form is thus a technique that further amplifies the human tendency to project a “consciousness” onto the other, capturing in this way (with greater efficiency) a complete and blind reliance of man on the technological medium.

 

Conclusions

While we now know that Alan Turing was too optimistic about the timeline, AI’s inexorable progress over the past 50 years suggests that the Nobel laureate Herbert Simon was right when he wrote in 1965 that “machines will be capable … of doing any work a man can do”. If this really is the future of humanity, one question cannot be ignored: will the future still need us? What will be the task of the human being?

There is no doubt that AI (especially Weak-AI) is helping us improve some of our activities, freeing us from burdensome tasks while optimizing resources. However, Strong-AI-oriented research, aimed at creating an intelligence capable of simulating man, does not seem to take into account certain fundamental factors, impossible to reproduce artificially, which belong only to the human ways of thinking and existing.

A few months ago Luciano Floridi, professor of Philosophy and Ethics of Information at Oxford University, stated that “getting driverless cars to make ethical decisions is impossible”; yet Google and Tesla Motors, ignoring this and dozens of other similar opinions, are proceeding with the development of this means of transport.

Imagine this: your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.
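To make the point concrete, here is a deliberately simplified sketch – hypothetical names and invented risk figures, not any real vehicle’s logic – of how such a dilemma collapses, inside the software, into a numeric cost comparison that must be resolved in milliseconds:

```python
# Hypothetical illustration only: the "ethical decision" is reduced to the
# minimum of a cost function whose weights were chosen in advance by
# engineers, not by the person in the car. All figures are invented.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    occupant_risk: float   # estimated probability of harming the car's occupant
    bystander_risk: float  # estimated probability of harming each bystander
    bystanders: int        # number of people exposed to that risk

def expected_harm(option: Option,
                  occupant_weight: float = 1.0,
                  bystander_weight: float = 1.0) -> float:
    """Expected number of people harmed, weighted by whoever set the weights."""
    return (occupant_weight * option.occupant_risk
            + bystander_weight * option.bystander_risk * option.bystanders)

def choose(options: list[Option]) -> Option:
    # The decision is simply the option with the lowest expected cost.
    return min(options, key=expected_harm)

if __name__ == "__main__":
    swerve = Option("swerve off the bridge", occupant_risk=0.8,
                    bystander_risk=0.0, bystanders=0)
    keep_going = Option("keep going", occupant_risk=0.05,
                        bystander_risk=0.3, bystanders=40)
    print(choose([swerve, keep_going]).name)  # which life counts for how much?
```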

Yet this example is perhaps the simplest and most obvious. Along the path toward creating “humanoid” robots, thousands of ethical questions will need to be asked and – most likely – they will remain unresolved or even ignored. Let us nevertheless assume for a moment that research and technology will one day be able to achieve a perfect android. Well, if these machines are truly “perfect”, they will be able to reproduce themselves, leading inexorably to dramatic conclusions.

But before reaching a conclusion, I would like to pose a question: what is a man? Do his intelligence, his intuitive ability, his desire to know, to love, to suffer, to experience beauty, to understand his own role in the world make sense?

Is modern man, conscious of his “humanity”, directing his research toward creating an “amplifier” of human intelligence, one that supports and develops his irreplaceable mental, psychological and spiritual abilities, or does he consider these obsolete and superfluous? Might we then fail to notice that man himself soon becomes superfluous, so imperfect compared to an Artificial Intelligence that seems to replace him perfectly? An AI that, having no imagination, emotions or feelings, can indeed be much more efficient, easier to manage and to achieve, according to the well-known Weberian formula: maximum performance with minimum effort?

 

 
