SHOULD ROBOTS BE MORE HUMAN?
We’re entering an era in which science will make robots ubiquitous in our lives. Self-driving cars are fast becoming a reality. In Japan, robots already care for the elderly. From home companions to chore helpers, robots appear poised to help us in ways that science fiction writers of old had imagined.
Now, of course, the inevitable question is being asked: do we want our robots to be more like ourselves? For instance, picture yourself in a driverless car. Would you prefer that the computer driving the vehicle stayed silent, or would you rather it engaged you in conversation?
On the face of it, talking to a computer may not sound all that different from talking to oneself, and we might claim to prefer silence. But in a study conducted in the United States, two groups of people were tested in two different cars, one silent and the other chatty, and the chatty car earned far more brownie points than the silent one. And although both cars ended the study with a minor crash, the silent car drew many more brickbats than the one that talked to its occupants.
This leads us to believe that we want our robots to be more like ourselves: imperfect, chatty, social and unpredictable. However, we must take care that this unpredictability appears only in trivialities. After all, of what use is a driving robot if it makes driving errors just as a human would?
A new breed of robots designed to entertain people by telling them jokes is now being developed at MIT and Carnegie Mellon. The latest example is Ginger, who recently debuted as a comedian at the World-Changing Ideas Summit and brought the house down with her wisecracks. This splash of charisma and humanity, scientists say, may be the bridge most people need in order to trust robots in the future. If they act more like us, and if they can pretend to be empathetic towards us, we will trust them more easily.
That raises other questions, though. The first is one of attachment: the more robots act like humans, the more liable we are to get attached to them and think of them as human companions rather than lumps of lifeless metal. The second question is more dangerous. If robots act as though they have free will, might human beings shift moral responsibility onto them? Could an army general of the future blame his robot for bombing a city of civilians?
All said, while the debate rages on, human-like robots are coming. And it is very likely that you will love yours and cuddle it to sleep every night.