Cosmopolis – a journal of philosophy and political theory

New Technologies and Personal Identity

BIBI VAN DEN BERG
Interviewed by Antonio Carnevale.
Article published in the section Robotics and Public Issues.

Dr. van den Berg, one of your main branches of research is in Law and Technology, focusing not so much on the regulation of various technologies as on the ways in which technologies regulate the behavior of individuals. Along these lines, you have recently investigated questions of techno-regulation in the online world. In particular, a large majority of youths in Western society is online every day. In their online social interactions, they continuously switch between online and offline modes of communication. As a result, their online and offline worlds have become fully merged. Online youth socialization produces both positive and negative experiences. On the one hand, it allows them to make friends, develop their personality, and generate (creative) content. On the downside, youths may be confronted with risks such as bullying, abuse of personal data, fraud, and loss of reputation. Can you explain to us how the Internet impacts the lives of youths? And is it possible to regulate this phenomenon? If so, in what ways?


It is difficult to overestimate the impact of the internet on the lives of children, teenagers and adolescents. They are the first generation to grow up in a world in which the internet was present from birth – they don’t remember a world without it. As you say, their everyday lives are a constant back and forth between online and offline modes of communication and interaction. So much so that, especially in the case of children and teenagers, we can no longer speak of a distinction between the virtual world and the offline world. The boundaries between the two are completely blurred.
Since children tend to start using the internet at an early age in many Western countries, there are concerns over the risks they may run in, for example, online games and social network sites. These risks range from cyberbullying and encountering harmful content, to abuse of personal data or becoming victims of fraud, and from reputation damage to online sexual solicitation. Moreover, aside from encountering risky situations, the internet also provides children, and especially teenagers, with a wealth of opportunities to actively engage in risky behavior, seeking – and sometimes crossing – boundaries. A good example is texting sexually explicit messages to lovers (‘sexting’), which is part of exploring sexuality but can also lead to utterly embarrassing situations, and even fierce bullying, when these messages are forwarded to third parties.
While most online risks and risk-taking issues also apply to adults, they warrant greater concern in children and teenagers for several reasons. First, children are often more easily misled and less proficient in seeing through (potentially) awkward and perilous situations. Second, it is more difficult for children and teenagers to grasp the long-term consequences of their activities. Actions in the present, such as sharing embarrassing images or spreading copyright-protected content, may come to haunt them years down the line, and it is difficult for children and teenagers to keep in mind such an extended horizon. Finally, it is unclear for youngsters (and adults alike) who the audience is that they are sharing information or content with; this audience is potentially global, since content can easily be searched and copied, and sometimes may spread like wildfire, leading to a loss of control over its dissemination.
These are the main reasons why many people feel that regulating the internet for, or with regard to, children and teenagers is legitimate and warranted, at least in specific areas or for specific situations. The next question then becomes how to accomplish this. One area that has received little attention so far is the use of technical means to regulate children’s behavior online, or what you’ve labeled “techno-regulation” in your question. It is surprising that so little research has been done in this area, since in practice it is widely used as a method. Businesses use technical restrictions: they clearly delimit the action space that children and teenagers have, and they make it impossible for them to engage in behaviors that are «not allowed», to ensure that they will – as I’ve called it in a recent book chapter – «color inside the lines». Examples of techno-regulation used to regulate the behavior of children on the internet include (1) the use of special browsers for children, which allow them to visit specific channels and websites only and do not enable access to just any website on the internet, and (2) the use of parental filtering and blocking controls, which prevent children from finding potentially harmful or dangerous content by simply making this content unavailable on the computer.
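The logic of such filtering tools can be illustrated with a minimal sketch: an allowlist-based check in which requests are permitted only when the destination appears on a predefined list, so that «coloring outside the lines» is simply impossible. The domain names below are invented for illustration; real children's browsers and parental controls are of course far more elaborate.

```python
# Minimal sketch of allowlist-based techno-regulation: only hosts on a
# predefined list are reachable; everything else is blocked by design.
from urllib.parse import urlparse

# Hypothetical allowlist of child-safe sites.
ALLOWED_HOSTS = {"kids.example.org", "school.example.edu"}

def is_allowed(url: str) -> bool:
    """Return True only if the URL's host is explicitly allowlisted."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

print(is_allowed("https://kids.example.org/games"))      # an allowlisted site
print(is_allowed("https://anywhere-else.example.com/"))  # any other site
```

The key design point is that the default is denial: the child is never asked to judge a site, because the judgment is hard-coded into the tool.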
In a way, one could argue that using such techno-regulatory tools for children and teenagers, in which normative or legal codes are hard-coded into technologies to make certain behaviors impossible and prompt others, is an efficient, safe and effective way of improving child safety on the internet, and hence a wise choice. However, one can also argue that what makes these techno-regulatory solutions successful makes them problematic at the same time: by hard-coding rules into the technology, and by making it impossible to «color outside the lines», children have no room for maneuvering, and do not learn about online risks or about the choices and opportunities they themselves have in learning to protect themselves from such risks. These tools clearly do not lead to awareness of, or resilience against, potential risks. To my mind, this may not be problematic in the case of young children, but it is worrisome when it also applies to older children and teenagers. In the chapter I mentioned earlier, therefore, I propose that technology developers and regulators ought to focus more on creating and disseminating technical tools that nudge, invite and persuade rather than force through code, so that children can gradually grow into more robust, risk-aware adolescent internet users.


But the use of the Internet is not limited to youths. In general, many citizens increasingly use the Internet to buy products or engage in interactions with others, both individuals and businesses. In doing so they invariably share (personal) data. In recent articles you have rightly stressed that while extensive data protection legislation exists in many countries around the world, citizens are not always (sufficiently) aware of their rights and obligations with respect to sharing (personal) data. You have also argued that the tools developed to remedy this gap fall short because they are ‘too legalistic’. What do you mean by ‘too legalistic’?


Websites have an obligation to share their policies with respect to the collection and processing of personal data with end users, so that when a user visits the site (s)he has a level of control over her data (sharing), and can engage in interactions with the website owner with a (higher) degree of consent to such sharing. Many websites, therefore, have a so-called ‘privacy policy’, usually with a link at the bottom of the page.
Many empirical studies have been conducted over the years to find out whether this system, in which the website has an obligation to inform the end user through a privacy policy, and the end user has the ability to access this information, works in practice, and whether it is efficient. The actual numbers that such studies have come up with vary somewhat, but the overall conclusion is always exactly the same: in practice, only a very small number of end users who visit websites look at privacy policies on a regular basis, and the vast majority of end users never do – or don’t even know such documents exist.
What’s more, even when people make the effort of looking at privacy policies on occasion, the majority of these people tends to stop reading after only a few sentences. The reason why this is so is that most privacy policies are quite difficult to read. They are documents that contain a lot of legal information, and a lot of legal jargon. That is why we’ve called such documents ‘too legalistic’ in the past.
It is not surprising that privacy policies often are too legalistic. In order to comply with their obligations, businesses want to ensure that they’ve acted in line with data protection legislation, and hence they refer back to such legislation and the things that are deemed important in it. But what is considered important in data protection legislation doesn’t always align with an end user’s need for information, nor with the form in which (s)he might best receive such information. This is why we’ve proposed alternative solutions for informing users of the data collection and processing practices of the websites they visit. For example, we’ve suggested that websites should present such information in language that is much closer to the original OECD “Fair Information Principles”, which were created in 1980 and form the basis of much of the data protection legislation that exists around the globe today. These Fair Information Principles are anything but “legalistic”. They are very short, easy-to-understand statements, such as “you must ask for consent” (“you” being the data processor in this case!). Any layperson can understand what that is (generally) about, and hence we’ve argued that these Principles could be a good starting point in making privacy policies more accessible and relevant to end users.
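The proposal above can be sketched as a simple substitution: instead of a long legal text, a site assembles its notice from short plain-language statements in the spirit of the OECD principles. The principle keys and wordings below are invented for illustration, not a legal text or the authors' actual tool.

```python
# Hypothetical sketch: render a short, readable privacy notice from a set of
# plain-language statements inspired by the OECD Fair Information Principles.
PLAIN_STATEMENTS = {
    "collection_limitation": "We only collect data we actually need, and we ask for your consent.",
    "purpose_specification": "We tell you what we use your data for before we collect it.",
    "use_limitation": "We do not use your data for anything else without asking you again.",
    "openness": "You can always ask us what data we hold about you.",
}

def render_policy(principles):
    """Build a notice from the listed principle keys, skipping unknown ones."""
    return "\n".join(PLAIN_STATEMENTS[p] for p in principles if p in PLAIN_STATEMENTS)

print(render_policy(["collection_limitation", "openness"]))
```

The point is not the code but the contrast: each line is a claim a layperson can check, rather than a clause written to satisfy a regulator.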


However, ICTs are only one of the so-called emerging technologies. The emergence of new technologies opens up vast and varied applications, bringing with it both promise and peril. Technology has both good and bad implications, but its value does not depend merely on the uses to which it is put: technology is never neutral. Moreover, emerging technologies are not only contextual and non-neutral; they increasingly extend beyond cultures and social contexts, taking possession of our private worlds and touching on the areas of privacy and identity. In your view, how will emerging technologies affect political and personal identities in the future?


When we look at large trends in the development of information and communication technologies, several things stand out. For one, we see the rise of personalization: of ever more personalized services, personalized information provision, and personalized entertainment. Technology developers and businesses are increasingly attempting to adjust the information and the services they offer us to our individual needs and wishes, building on large profiles that contain information about what we’ve liked and disliked, and what we’ve wanted and not wanted in the past. This trend is everywhere already, from search engine results that are ranked in a particular (and different!) order for each and every one of us, depending on our past search behaviors, to targeted advertisements that align with our past shopping choices (and those of “people like us”), to personalized news services that provide us only with the type of news that we find interesting.
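The personalization trend described here can be reduced to a toy sketch: the same list of items is re-ranked for each user according to a profile of past behavior, so no two users see the same ordering. The profile, topics and headlines below are invented for illustration; real recommender systems are vastly more complex.

```python
# Toy sketch of personalization: re-rank items so that topics the user has
# engaged with most in the past come first.
def personalized_ranking(items, profile):
    """Order items by descending past-interest score for their topic."""
    return sorted(items, key=lambda item: -profile.get(item["topic"], 0))

# Hypothetical profile: counts of past clicks per topic.
profile = {"sports": 0, "politics": 3, "technology": 5}

items = [
    {"title": "World Cup recap", "topic": "sports"},
    {"title": "New data-protection bill", "topic": "politics"},
    {"title": "Review of a new browser", "topic": "technology"},
]

for item in personalized_ranking(items, profile):
    print(item["title"])
```

Even this crude version shows the mechanism at issue: the user's past choices, not an editorial judgment, determine what (s)he is shown first.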
A second, and related, trend is that technologies are becoming ever more proactive in their information and service provision. Technologies become ever more capable of ‘guessing’ what kind of information a user might find interesting or relevant in the particular context in which this person finds herself, and of providing the user with that information even before (s)he has asked for it. Think of navigation software in your car that tells you to reroute because of an upcoming traffic jam, even before you know there is a traffic jam ahead. Or a news service that filters out all of the news on the World Cup in soccer, because it knows you’re not interested in that.
I’ve argued in the past that both of these trends have an impact on the ways in which we construct, experience and express our identities. Because they become more proactive, and take an increasing role in the choice of what we read, see, hear, etc., technologies gain a sense of ‘agency’ that they didn’t have before. They become (more) serious interaction partners, steering and guiding our ideas, our perceptions of the world, our opinions and sentiments, precisely because they play such a vital role in our personalized information provision. To phrase it in the jargon of symbolic interactionism (one of the most important schools of thought on identity in 20th century sociology), they may even become ‘significant others’, third parties that influence who we are and how we see ourselves.


For the last question I will refer to a particular technology: robots. As you may know, we work for the European project RoboLaw. The most important outcome of the project will be a set of "Guidelines on Regulating Robotics", containing regulatory suggestions for the European Commission in order to establish a solid framework of “robolaw” in Europe. In your research you have investigated the possibility of “techno-elicitation”, i.e. regulating behavior through the design of robots. Can you please give us more details about your thesis?


The idea here is that the ways in which we design robots, in terms of their form and their behaviors and responses, have a clear effect on the ways in which human beings will respond to them. That sounds very obvious, but I mean it in quite a literal sense. A vast body of empirical research in, for example, HCI has consistently revealed that human beings tend to respond to technological artifacts in a strong way, both socially and emotionally. While technology designers already use that principle to create products that are easy to use and understand, so far very little work has been done on the fact that you can also use this principle to regulate people, to steer or guide or influence what they do or don’t do, to nudge or persuade them, or sometimes even to subject them to hard-coded do’s and don’ts. Robotics could be an ideal technology to use for such a purpose, precisely because it combines embodiment (form) and behavior (actions), and because it facilitates more complex interactions than many of our average technological artifacts.


13th December 2013
