
Robotics and Public Issues: A Philosophical Agenda in Progress

ANTONIO CARNEVALE and ALBERTO PIRNI
Article published in the section Robotics and Public Issues.

1. A New Area of Public Questioning

Several decades after the publication of I, Robot (1950), surely one of the milestones of contemporary science-fiction literature, and after a number of extensive interdisciplinary debates, the question of how we can technically define a robot still remains completely open. Perhaps the most immediate definition among ethicists and roboticists is that a robot is a machine with sensors, processors and actuators, designed to execute one or more tasks repeatedly, with speed and precision, along a scale of increasing complexity. In strict accordance with this definition, any computer connected to a printer could qualify as a robot. For this reason (and many others), many scholars prefer not to answer the question of what a robot is (Lin et al. 2011). If the technological definition is ambiguous, an understanding of the social and ethical relevance of robots is even less clear.
However, despite the lack of a sharp definition, there is a constant wave of new applications of robotic technologies. This trend includes the proliferation of context-aware personal assistants, smart advisers and advanced global industrial systems, as well as early examples of autonomous vehicles. Robots are colonizing our life as humans, as well as our imagined future, to the extent that they can no longer be treated as mere “neutral” instruments or as the visions of a few visionary scientists. Robots are tools but, at the same time, they are also the manifestation of forms of rationality through which the reciprocal connection of humans and machines takes shape (Pirni 2013). They are tools that human beings use in order to experience and, at the same time, to interact with the surrounding world. Like other emerging technologies, robotic artefacts will soon influence the way human beings perceive and interact with the surrounding world (Verbeek 2006). More and more, robots are inhabiting the space of mediation between subjectivity and objectivity. This is a moral and public space. Robots not only have the function of adapting the world of things to human needs, but also of affecting human needs by adapting them to the representations of the world that humans produce, taking into account the framework of values and beliefs of a society. In other words, the overall field of robotics is giving birth to a new form of public sphere and to a new and (at least for the moment) unfinished list of problematic contents. On the one hand, such contents open up the way towards a renewed quality of life for humans in each period of their life. On the other, they deeply challenge the entire ethical and legal framework that we are used to dealing with within the democratic decision-making process.
This line of thought constitutes the main premise of this session on “Robotics and Public Issues”. When we argue that robots involve “public issues”, we do not regard such issues as exclusively related to the fact that robots are new technical objects which need to be legally regulated. This is just one side of the debate. On the other side, robots represent objects with an intrinsic content of normative “criticality”, i.e. objects involving public issues that challenge the order of justice and the democratic quality of the decision-making process. The case of healthcare robots is worth mentioning. Research in healthcare robotics is focused on the development of diagnostic robots, which explore hard-to-access parts of the human body, thus reducing the invasiveness of diagnostic interventions and alleviating some of the physical and psychological discomfort of patients. At the same time, however, the high costs of tele-operation technologies for surgery, rehabilitation, personal assistance and diagnosis may lead to unfair access to healthcare technologies, thus exacerbating the gap between the haves and have-nots (Datteri - Tamburrini 2009).
As a consequence, public decision-making consists not only in identifying the different uses of the technologies involved, but also in defining publicly and politically correct priorities within the identified ethical scenario, so that techno-regulations can be drawn up while respecting the principles of justice (see the interview with van den Berg in this issue). As a result, ethical “criticality” is not only related to the excessive use (abuse) of robots, or to the fact that their existence calls established rights into question. Fundamental issues are at stake when existing robotic technologies, which have already been deemed good for humans, remain in laboratories or are distributed as a privilege for a few persons or groups, perhaps because democratic and fair criteria for such a distribution simply do not exist (Pirni - Lucivero 2013).
Thus, to return to this session of Cosmopolis, a “philosophical agenda” for robotics can advance primarily if we maintain a broad concept of criticality, covering both perspectives: robots generate ethical criticality because they can be abused (a surgical robot is expensive, so a hospital may decide to use it even when it is not indispensable), but also because their non-use can be a matter of injustice. Thus, they constitute objects of public reflection not only in terms of what they are for the legal system, but also in terms of what they represent for people and for their expectations (Lucivero et al. 2011).
There are two specific tendencies that clearly reveal the intrinsically normative and public nature of robotics: the evolution of the use of robots, and the question of design.
The first machines used as robots were “industrial robots”. They operated in closed and controlled environments, were programmed with specific functions and served as means to achieve various productive and industrial ends. In this first form of relationship between humans and robots, the public issues arising from robotics concerned the safety and physical well-being of people, particularly workers who, since the 1970s, have worked in close contact with machines on large industrial assembly lines. Following the deaths of workers performing activities in close contact with robots, a number of new standards were introduced, aimed at saving human lives (barriers, as well as training for workers and management in order to create a greater sense of responsibility).
Two further public issues, still related to the ethical consequences of using robots as industrial products, are the fear that humans could be replaced by machines and the risk of worker alienation due to the development of increasingly automated manufacturing systems.
After this first wave of industrial robots, machines turned towards everyday life, interacting with humans in sensitive areas such as healthcare, surgical medicine, rehabilitation and companionship. New generations of robots are being produced which, despite being products, could have the capacity to act as if they were moral agents (Allen et al. 2000; Coeckelbergh 2009). According to some scholars, this new generation of robots will soon populate real life (not just our imaginaries) with hybrid forms integrating sophisticated software with artificial intelligence and autonomous activity. This integration of mechanics, electronics and the natural sciences (biology, medicine, neuroscience) opens up unforeseen scenarios, since we now have enough knowledge to make a robot an autonomous artificial agent (Wallach - Allen 2008).
This continuous increase in autonomous systems could force engineers and developers to bring moral issues into a robot’s decision making (Anderson - Anderson 2011). Innovative technologies, the development of artificial intelligence, pervasive computing, the introduction of service robots in private areas of human life, autonomous military robots: all these applications open up new frontiers of sensitivity to ethical concerns.
So, beyond the simple question of the limits of robotic production, ethics must ask whether and how machines should be programmed to deal with moral issues; consider, for example, the importance of a possible moral program to control military robots (Altmann et al. 2013; see also the article by Asaro in this issue).
The changing use of robots also constitutes an innovation in terms of what robots mean for us. The move from factories to the public and private spheres of human life shows how robotic technologies become part of the interconnectedness between subjectivity and objectivity. This can be better understood by referring to the design issue. The design process is commonly represented as a series of steps by which the needs or wishes of the customer are translated into a list of functional requirements, which defines the design task. The centrality of the technical artefact and its design is justified by well-established approaches in ethics and the philosophy of technology, which point out how values, roles and responsibilities, politics and morality are inscribed in technological artefacts in general and in robotics in particular (Latour - Venn 2002; Verbeek 2005).
To sum up, the evolution of human-robot interaction and the issues surrounding design show that there is an intrinsic normativity in the use and production of robots. This means that a connective and regulatory process links the social practice of robots with the public issues robots raise as emerging technologies. In our view, we should start from this process – hermeneutical and characterized by continuous adjustments – in order to achieve a different way of regulating emerging technologies (Pirni - Carnevale 2013).


2. The Intrinsic Normativity of Robots: An Attempt to Frame the Question

What characteristics should what we have called the “intrinsic normativity” of robots have, specifically when robots are considered a topic that automatically raises public issues? In the ways we use and design robots there is sufficient moral content to provide some answers to the three key questions that could – and hopefully should – arise regarding the present and future relationships between human beings within social contexts that appear to be increasingly inhabited by robots.
Following the argument outlined above, we need to address three fundamental fields along with the related phenomenological, ethical and pragmatic questions. The first question is related to the phenomenological understanding of robots as “social objects”: from a socio-philosophical point of view, what does it mean to speak about robots, or to act with or through robots? The second question addresses ethical issues and fundamental values: Are robots – understood as relevant outcomes of new technologies – and their possible usages always and unproblematically good? Finally, the third question concerns the pragmatic context of their use: even assuming a “principle of charity”, or a positive prejudice (“robots are always good”), we should ask ourselves what kind of goodness they represent. In other words, we should ask who benefits from what robots do and what factors impact on how effective they are.
Let us try first to outline the framework of the phenomenological understanding of robots: What do we mean when we talk of robots? In order to analyse the normative contents of using and designing robots, it is essential to understand that there is no unique “ontological” definition of robots. Speaking about robots entails paying attention to a fundamental list of variables: the nature of the robot (external machines, hybrid bionic systems, internal neuro-implants); the tasks required or performed (navigation, locomotion, manipulation, etc.); the environment wherein the robot works, also including the field of action (physical or non-physical); the type of human control (autonomous, automatic, tele-operated robots); the human-robot interaction (physical or not physical, permanent, continuous or occasional, invasive or not); the human role (whether or not the human subject remains in a mere “user relation”); the proximity of the robot (humans and robots can be separated, be in contact or the robot can be attached to the human) (Bisol et al. 2013a).
It is also crucial for future frameworks of interactions to understand in which context and for which aims a robot can be used. It is necessary to identify the various sectors of application and contexts of use, which for the moment include healthcare, industry, search and rescue, exploration, and education (see, in this issue, the articles by Ruggiu, de Cózar-Escalante and Laschi). But this is just one part of an undoubtedly complex and wide area of questioning related to the intrinsic normativity of robots.
Following the suggestions evoked by the second question, it is clear that the increasing and massive introduction of robots into human life is changing our ethical paradigms and the idea we have of “the good life”. Robots will transform human notions and the conceptual vocabulary of many spheres of existence, from common life to surveillance, from education to sexuality, also by permitting humans to explore themselves better and to enhance their capabilities (Koops - Pirni 2013; see also the article by Grion in this issue).
According to some scholars, the normative significance of the transformation of habits and social practices by robots cannot be measured in terms of their negative impact on society. Nor is it possible to measure the moral relationship between humans and robots on the basis of the kinds of rights that are infringed. Essentially, there are aspects of human life that cannot be scientifically observed or measured, because they are subjectively variable and interpretable from a wide range of points of view; namely, they are part of our subconscious world (Calo 2011).
This is why – turning briefly to the third fundamental question – we also need to focus on a pragmatic area of questioning specifically devoted to the contexts of technologies, paying attention to particular aspects and cases of application (Swierstra - Rip 2007). Attention to context is not a matter of philosophical relativism regarding the ethical consequences of new technologies. The proliferation of technologies forces us to reflect on the sources of morality, taking into account the possibility of sources that go beyond the realm of human affairs, possibilities and agency – namely, beyond the sense of morality outlined by the Enlightenment, which still shapes our present life. Just like human beings, material objects (Verbeek 2006) as well as information (Floridi - Sanders 2004) appear able not only to evoke moral questions, but also to provide innovative – and, at the same time, disruptive – answers to them.
Thus, an assessment of the ethical relevance of robotic applications can be produced if the future public agenda starts from a fundamental consideration: the idea of ethical goodness can no longer be founded on prior, universally valid normative principles; on the contrary, to justify our actions we are forced to choose certain values that can be considered “right” because they work in a particular moral and technologically informed context. What we could call the “moralization of things and information” also implies a philosophical reconsideration of the sense of using robots. Thus, it is no longer possible to define a fixed ethical idea of goodness. The “right” principle of the good should always be treated in relation to the moral and informed contexts where it arises. This implies an overall rethinking of the role of robots in our societies. We are used to believing that robots constitute objects of philosophical reflection because they are created to replace human beings. In accordance with this anthropological presupposition, the artificial nature of robots contrasts with the imperfect and fragile nature of human beings; therefore, an excessive “robotization” of human life involves not only the replacement of certain human functions, but also the risk of a general de-humanization.
However, the new generations of robots lie much less on the side of the artificial-natural opposition and much more on the side of technologies as tools of therapy. This implies a very significant philosophical outcome: from the replacement of humans we are moving towards the universal challenge of human enhancement (Carnevale - Battaglia 2013). This involves abandoning the anthropocentric idea of “human nature” – which considers technology as something artificial and inauthentic – and embracing the idea of the human condition, in which technology (if well regulated and democratized) can become a factor that reduces the negative effects of human vulnerability (see the article by Bonino and Carnevale in this issue).
The phenomenological framework within which it is possible to define robots; the morality of things and information in the sense clarified above; the possibility of improving the human condition without replacing human nature: all these components make up the intrinsic normativity that characterizes robots as one of the most relevant public issues for the present day and for future generations.


3. Proceeding Along a Complex Path

But this framework is not enough to define what kind of normativity robots represent. A clear question remains: if robots contain constitutive elements of normativity, is it sufficient to introduce these elements into the public agenda without doing anything else? No. In order to keep these elements within the field of democratic governance, such elements need to continue to be public, namely accessible in terms of their technical characteristics and tasks, and debatable, from a politically democratic point of view.
Supporting the intrinsic normativity of robots, but without the guarantee that in certain circumstances the free and unregulated course of technological proliferation will be politically modified, means leaving the normative potential of robotics in the hands of a few people. In so doing, the risk lies in using the issue of robotics either to justify ideological visions of technology or to foment conservative fears of the future.
On the contrary, we believe that, in understanding and publicly regulating the future development of robotic technologies, we should try to avoid basing our normative assessments on ideological and ethical preconceptions regarding all possible interactions between technologies and humans (Palmerini - Stradella 2013). We need to combine concern for the effects of robotics over time with the capacity to understand the present. A political analysis of the public issues raised by robotics should thus not simply or exclusively focus on what robots “ought to be” in the future, but on the relationship between humans and robots here and now.
However, the “here and now” cannot simply be understood as the representation of all that exists. The present is only one of the infinite possible realizations of a state of things. Since the majority of critical robotic applications concern laboratory research or visionary projects, there is a common conviction among ethicists and experts that robots are normal electrical appliances and should be treated as such. This implies that we need not create alarmism when speaking about the ethical issues of robotics: for the foreseeable future cars will continue to be driven by humans (and not by robots), and humanoids and cyborgs are only prototypes: they will not be on the market any time soon.
However, this reassurance about the future should not prevent us from considering the ethical issues. If it is true that at present it is not technically possible to produce a machine with the full autonomy required to be considered a moral agent, it is nevertheless possible to speak about morality with reference to the way we produce and interact with new technologies, including robots. We cannot neglect these aspects, also in terms of an epistemological consideration of what is “normative”. From an epistemological point of view, “normative” statements make claims about how things should or ought to be, and how to value them. Thus, we cannot limit our reflection on the social normativity contained in robot-human interactions to the existing situation. If we operated in this way, we would be eliminating a large part of reality.
“Real” is not only what is “feasible”, i.e. what can be made, produced or sold; “real” is also what is “possible”. Consequently, relegating ethical concerns to the realism of facts constitutes a serious restriction on understanding the future public agenda of robotics. If ethics remained stuck to the facts and impassive with respect to the unknowable, then all the legal and bioethical issues related to the experience of death would have no reason to exist. But this is exactly the opposite of what is being experienced in our societies. Therefore, the normativity of robots – their level of ethical criticality – lies in the forms in which robots are produced, as well as in the forms robots assume as offshoots of human desires.
However, it is evident that intrinsic normativity is not sufficient per se. When a democratic system intervenes to regulate as well as possible the pragmatic and normative connections contained in the relationships between technologies and citizens, it benefits as a whole from such regulation. Understanding why emerging technologies affect people’s desires and values so deeply forces the political order to accept democracy’s fundamental premise: a fair and non-coercive moral discourse creates the best conditions for participants to account for their positions, to defend them, to respond to critics and to learn from the positions of others (Habermas 1990).
Robots involve many interests and expectations. Consequently, the pluralistic ideal that surrounds the concept of democracy is challenged to find a solution starting from the point where the various parties discuss their interests and normative positions. There are thus theoretical approaches that try to analyse the implicit normativity of these positions. Such normativity lies in improving the governance of robotics by creating better conditions for democratic deliberation to happen, and also by enhancing the possibilities of informed public critique. This process is improved by including a broad range of stakeholders in the discussion concerning new technologies and by exploring their normative assumptions and positions in order to discuss the values and ideas of the good implied in them (Bisol et al. 2013b).
To conclude, we believe that a future public agenda for robotics is much more than a useful task to pursue. Rather, it is a necessity for any political entity that wants to keep pace with the contemporary age. At the same time, we do not believe this agenda should aim only at producing a moratorium on robotics, or at developing regulations based on specific philosophical or even political conceptualizations of ethics. On the contrary, we think that public decisions should aim at a kind of regulation that carries out a full interdisciplinary impact assessment of robotics, combining different approaches to ethical analysis so as to give shape to both a critical stance and a conscious openness to robotic developments. We need to regulate the progressive proliferation and introduction of robotics into human life. This means supporting robotics when it adds value to social and individual lives, but also providing institutions with restrictive tools for assessing the effectiveness and risks of the robotic technologies in question. This is the background from which the European project RoboLaw arose in 2012.

The research leading to this essay and to the section published in this issue of “Cosmopolis” received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 289092, the RoboLaw project. While Antonio Carnevale is the author of sections 1 and 3, and Alberto Pirni of section 2, both collaborated in constructing and concluding the essay in its entirety.


Bibliographical References

Allen C., Varner G., Zinser J. (2000), Prolegomena to any future artificial moral agent, in «Journal of Experimental & Theoretical Artificial Intelligence», vol. 12, No. 3, 2000, pp. 251-261.
Altmann J., Asaro P., Sharkey N., Sparrow R. (eds.) (2013), Special Issue on Armed Military Robots, in «Ethics and Information Technology», vol. 15, No. 2, June 2013.
Anderson M., Anderson S.L. (eds.) (2011), Machine Ethics, Cambridge University Press, New York 2011.
Bisol B., Carnevale A., Lucivero F. (2013a, unpublished), D5.1 Methodology for Identifying and Analysing Ethical Issues in Robotics Research and Applications.
Bisol B., Carnevale A., Lucivero F. (2013b), Diritti umani, valori e nuove tecnologie. Il caso dell’etica della robotica in Europa, in «Metodo. International Studies in Phenomenology and Philosophy», vol. 1, No. 4, 2013, forthcoming.
Calo R. (2011), Robots and Privacy, in Lin - Abney - Bekey (2011), pp. 187-202.
Capurro R., Nagenborg M. (2009), Ethics and Robotics, IOS Press, Amsterdam 2009.
Carnevale A., Battaglia F. (2013), A “Reflexive” Approach to Human Enhancement. Some Philosophical Considerations, in Vedder A., Lucivero F. (eds.), Therapy v. Enhancement? Multidisciplinary Analyses of a Heated Debate, Pisa University Press, Pisa 2013, pp. 95-116.
Cerqui D. (2002), The Future of Humankind in the Era of Human and Computer Hybridisation. An Anthropological Analysis, in «Ethics and Information Technology», vol. 4, No. 1, 2002, pp. 101-108.
Coeckelbergh M. (2009), Virtual Moral Agency, Virtual Moral Responsibility: On the Significance of the Appearance, Perception and Performance of Artificial Agents, in «AI and Society», vol. 24, No. 2, September 2009, pp. 181-189.
Datteri E., Tamburrini G. (2009), Ethical Reflections on Health Care Robots, in Capurro - Nagenborg (2009), pp. 35-48.
Floridi L., Sanders J.W. (2004), On the Morality of Artificial Agents, in «Minds and Machines», vol. 14, No. 3, August 2004, pp. 349-379.
Habermas J. (1990), Moral Consciousness and Communicative Action, MIT Press, Cambridge (Mass.) 1990.
Koops B.J., Pirni A. (eds.) (2013), Ethical and Legal Aspects of Enhancing Human Capabilities Through Robotics, in «Law, Innovation and Technology», vol. 5, No. 2, December 2013, Special Issue.
Latour B., Venn C. (2002), Morality and Technology. The End of the Means, in «Theory, Culture & Society», vol. 19, No. 5-6, December 2002, pp. 247-260.
Lin P., Abney, K., Bekey G. A. (eds.) (2011), Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, Cambridge (Mass.) 2011.
Lucivero F., Swierstra T., Boenink M. (2011), Assessing Expectations: Towards a Toolbox for an Ethics of Emerging Technologies, in «NanoEthics», vol. 5, No. 2, August 2011, pp. 129-141.
Palmerini E., Stradella E. (eds.) (2013), Law and Technology. The Challenge of Regulating Technological Development, Pisa University Press, Pisa 2013.
Pirni A. (2013), Immaginari sociali in trasformazione. Per una fenomenologia della corporeità nell’epoca dello human enhancement, in Pezzano G., Sisto D. (eds.), Immagini, immaginari e politica. Orizzonti simbolici del legame sociale, ETS, Pisa 2013, pp. 133-153.
Pirni A., Carnevale A. (2013), The Challenge of Regulating Emerging Technologies. A Philosophical Framework, in Palmerini - Stradella (2013), pp. 59-75.
Pirni A., Lucivero F. (2013), The “Robotic Divide” and the Framework of Recognition: Re-articulating the Question of Fair Access to Robotic Technologies, in «Law, Innovation and Technology», vol. 5, No. 2, December 2013, pp. 147-171.
Siciliano B., Khatib O. (eds.) (2008), Handbook of Robotics, Springer, Berlin 2008.
Verbeek P. (2005), What Things Do: Philosophical Reflections on Technology, Agency, and Design, Pennsylvania State University Press, University Park (Pa.) 2005.
Verbeek P. (2006), Acting Artifacts. The Technological Mediation of Action, in Verbeek P., Slob A. (eds.), User Behavior and Technology Development. Shaping Sustainable Relations Between Consumers and Technologies, Springer, Dordrecht 2006, pp. 53-60.
Veruggio G. (2006), The EURON Roboethics Roadmap.
Wallach W., Allen C. (2008), Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, Oxford 2008.


