At the beginning of this month the NYT published a highly interesting article on the use of robots in daily practices and activities, including a doctor using a robot to help her check a patient's condition and a manager attending an office meeting via a robot avatar, complete with screen, camera, microphone and speakers.
In reading this article, the first consideration that arose concerned how ICTs are changing our habits and how augmented reality is no longer a scenario confined to science fiction. The emergence of real-world augmented reality is an immensely exciting and suggestive development, but there is also a second and perhaps more interesting consideration.
Reading about these ‘tele-operated’ robots led me to think about other forms of robot, the companion devices, which have been developed over the last decade and which seem to be becoming ever more popular and widely diffused. Consider for example the Wi-Fi-enabled Nabaztag (http://en.wikipedia.org/wiki/Nabaztag), the PARO baby-seal therapeutic robot (http://www.parorobots.com/), or KASPAR (http://kaspar.feis.herts.ac.uk/); not to mention the famous AIBO dog (http://support.sony-europe.com/aibo/index.asp).
These robots represent a new form of artificial agent: they are able to behave both reactively and proactively while interacting with the environment and with human beings. Moreover, they are, or will be, deployed to perform socially relevant tasks, such as caring for children and elderly people (http://www.sciencedaily.com/releases/2010/03/100324184558.htm), or simply to provide companionship to human agents.
It is not difficult to imagine that in forty years, once today's young generation has grown old, robots such as AIBO or KASPAR will be considered commodities and may even hold fundamental social roles. They will not be a medium allowing a remote user to be tele-present somewhere else; they will be autonomous agents, able to act reactively and proactively and to share our environment.
These robots are interesting from both a technological and a social perspective. For one thing, they show that we have the technology to design and build devices of this level of sophistication, and that we are moving toward a new kind of social interaction. The most interesting aspect, however, is revealed when the robots are considered from an ethical perspective.
Here is the second consideration. The dissemination of ICTs has brought about a revolution that concerns more than simply the development of new technology; this is the Fourth Revolution (http://www.philosophyofinformation.net/publications/pdf/tisip.pdf). Such a revolution affects the place of human beings in the universe: in its wake, it will be understood that there are other agents able to interact with us and with our environment.
As philosophers and ethicists we need to investigate these changes, to address the issues related to the moral status of these new agents, and to determine a suitable ethical framework providing guidelines for the ethical use of, and interaction with, such agents.
I agree that we need to investigate these changes, both from an ethical/philosophical and from a practical perspective. One way to do this, which I think is largely lacking, is to develop positive visions of the future.
This is one thing I like about transhumanism and the work of Ray Kurzweil. In particular, Kurzweil’s “The Singularity Is Near” opened my eyes to the potential of emerging technologies in a way that nothing had done before, and while one can very well be (i) sceptical about the realism of his vision, (ii) terrified of the associated risks, and/or (iii) completely repelled by it (as most of my friends with whom I’ve discussed it seem to be), I found it immensely valuable to read a synthesis that clearly revealed the potential of such technologies while presenting a vision that was essentially positive. Not everyone needs to like the transhumanist perspective, but alternative visions often seem to be banal, unrealistic, incoherent or simply lacking. As a species we tend to focus more on what we want to get away from than on where we want to go.
Another issue that currently fascinates me is the boundary between secular and religious thought. Concepts in positive psychology such as transcendence seem to have the potential to help close this gap, potentially pointing the way towards a synthesis between the benefits of secularism and those that, somehow, we seem to lose along the way when we abandon any kind of religious faith. This of course has deep implications for our attitudes towards artificial intelligence, ageing, death, and enhancement.
At the same time, I am convinced that we live in extremely dangerous times for the global civilisation on which most of us depend. This makes it all the more urgent to develop, and hopefully agree on, concrete, coherent visions of the future. Agreeing is the difficult part, of course, but I think it can be done if the visions are made sufficiently explicit, and if people increasingly let go of the idea that “their”, or indeed any, vision of the future is the “right” one. Such visions would reflect values, which can then be used to guide decisions in the here and now that steer us towards the kinds of futures we want.
Robots are moving slowly into human-occupied territory. Read Asimov’s “The Bicentennial Man” and his short story “Darwinian Pool Room”. Can robots that move into the practice of nursing, and then medicine and surgery, really make judgments about others’ good as well as others’ mechanical (medical) needs, including how to keep them alive?
If robots take on roles similar to humans’, and execute them well, then we have a real problem of regulating robots, unless they can be made like humans in their ability to think in abstractions and adapt their behavior. Can it be done? How does one socialize a robot so that it learns the cultural norms from which ethics derive? And how can one sustain a robot’s social awareness so that it can engage in the kind of internal and external discourse that develops its ability to act ethically in unforeseen circumstances?
But then, again, there is the example of R. Daneel Olivaw.
Dennis, I don’t see why any of this should be a problem in principle, although I won’t dare to make predictions on whether or when it will happen in practice. It’s basically an issue of complexity and reverse-engineering. If the robots are designed intelligently (unlike humans?!) then they should actually be easier to socialise than people. In fact, the socialisation could be pretty much hardwired in.
Peter: You are right about the possibilities. But intelligence is not the only limitation on what persons/robots are willing to do with respect to others, even given adequate social norms and well-taught practical ethics. Intelligence can be turned toward an assessment of personal interests. Highly intelligent persons can be dangerous. For example, what if robots develop a love of order and come to see humans as an impediment? Or what if humans in general, getting used to being cared for by intelligent robots and relying on robots’ judgments about the way things should be, simply fade away?
Thanks Dennis. I didn’t mean to downplay the risks: there are indeed some genuinely catastrophic possibilities. In fact, these are in many ways easier to imagine and believe in than the positive visions. But that’s precisely why I think it’s important to develop the latter, while keeping the risks in mind.
Errors aside, though, the behaviour of robots will presumably depend largely on how they are designed. Unlike Scientologists and some other religious believers, I do not believe that we were born or created good. We evolved from apes, and have inherited from our ancestors a complex mix of self-interest and (Machiavellian) altruism. Robots, by contrast, can be and are designed to do what we want them to do, in other words to be good (from the point of view of the designer). So perhaps the ones we should fear are not the robots themselves, but those who design them. As for regulation, yes, I agree this becomes an increasingly complex challenge as the complexity of the robots themselves increases. But it’s not clear to me that it is insurmountable. Not many regulators have a clue how a car works, but this does not stop us understanding the parameters (safety, emissions etc.) that we want to regulate. As for how it works, we don’t really care. Could the same not hold true for superintelligent robots?
Regarding the prospects of humans fading away, a more realistic scenario in my view is that we merge with our technology to the extent that we are no longer recognisable as humans. But this is not destiny: again, what we need to do is develop coherent (and preferably realistic) ideas about what we actually *want*. Then we can see how to go about getting it.