Sunday, April 23, 2017

Episode #21 - Mark Coeckelbergh on Robots and the Tragedy of Automation



In this episode I talk to Mark Coeckelbergh. Mark is a Professor of Philosophy of Media and Technology at the Department of Philosophy of the University of Vienna and President of the Society for Philosophy and Technology. He also has an affiliation as Professor of Technology and Social Responsibility at the Centre for Computing and Social Responsibility, De Montfort University, UK. We talk about robots and philosophy (robophilosophy), focusing on two topics in particular. First, the rise of the carebots and the mechanisation of society, and second, Hegel's master-slave dialectic and its application to our relationship with technology.


You can download the episode here. You can also listen below or subscribe on Stitcher and iTunes (via RSS) or here.


Show Notes

  • 0:00 - Introduction
  • 2:00 - What is a robot?
  • 3:30 - What is robophilosophy? Why is it important?
  • 4:45 - The phenomenological approach to roboethics
  • 6:48 - What are carebots? Why do people advocate their use?
  • 8:40 - Ethical objections to the use of carebots
  • 11:20 - Could a robot ever care for us?
  • 13:25 - Carebots and the Problem of Emotional Deception
  • 18:16 - Robots, modernity and the mechanisation of society
  • 21:50 - The Master-Slave Dialectic in Human-Robot Relationships
  • 25:17 - Robots and our increasing alienation from reality
  • 30:40 - Technology and the automation of human beings
 

Relevant Links

Tuesday, April 18, 2017

Heersmink's Taxonomy of Cognitive Artifacts


Polynesian Sailing Map


Polynesian sailors developed elaborate techniques for long-distance sea travel long before their European counterparts. They mapped out the elevation of the stars; they followed the paths of migrating birds; they observed sea swells and tidal patterns. The techniques were often passed down from generation to generation through the medium of song. They are still taught to this day (in some locations). In 1976, there was a famous proof of their effectiveness when Mau Piailug, a practitioner of the techniques, steered a traditional sailing canoe nearly 3,000 miles from Hawaii to Tahiti without relying on more modern methods of navigation.

These Polynesian sailing techniques provide a perfect real-world illustration of distributed cognition theory. According to this theory, cognition is not something that takes place purely in the head. When humans want to perform cognitive tasks, they don’t simply represent and manipulate the cognition-relevant information in their brains, they also co-opt features of their environment to assist them in the performance of cognitive tasks. In the case of the Polynesian sailors, it was the migrational patterns of birds, the movements of the sea and the elevation of the stars that assisted the performance. It was also the created objects and cultural products (e.g. songs) that they used to help to offload the cognitive burden and transmit the relevant knowledge down through the generations. In this manner, the performance of the cognitive task of navigation became distributed between the individual sailor and the wider environment.

Generally speaking, there are three features of the external environment that can assist in the performance of a cognitive task:

Cognitive Artifacts: Intentionally designed objects that are used in the performance of the task, e.g. a map, a calendar, an abacus, or a textbook.

Naturefacts: Natural objects, events or states of affairs that get co-opted into the performance of a cognitive task, e.g. the paths of migrating birds and the elevation of the stars.

Other Cognitive Agents: Other humans (or, possibly, robots and AI) that can perform cognitive tasks in collaboration/cooperation with one another.

I think it is important to understand how all three of these cognitive assisters function and to appreciate some of the qualitative differences between them. One thing that distributed cognition theory enables you to do is to appreciate the complex ecology of cognition. Because cognition is spread out across the agent and its environment, the agent becomes structurally coupled to that environment. If you tamper with or alter one part of the external cognitive ecology, it can have knock-on effects elsewhere within the system, changing the kinds of cognitive task that need to be performed, and altering the costs/benefits associated with different styles of cognition (I discussed this, to some extent, in a previous post). Understanding how the different cognitive assisters function provides insight into these effects.

In the remainder of this post, I want to take a first step towards understanding the complexity of our cognitive ecology by taking a look at Richard Heersmink’s proposed taxonomy of cognitive artifacts. This taxonomy gives us some insight into one of the three relevant features of our cognitive ecology (cognitive artifacts) and enables us to appreciate how this feature works and the different possible forms it can take.

The taxonomy itself is fairly simple to represent in graphical form. It divides all cognitive artifacts into two major families: (i) representational and (ii) ecological. It then breaks these major families down into a number of sub-types. These sub-types are labelled using a somewhat esoteric conceptual vocabulary. The labels make sense once you have mastered the vocabulary. The remainder of this post is dedicated to explaining how it all works.





1. Representational Cognitive Artifacts
Cognition is an informational activity. We perform cognitive tasks by acquiring, manipulating, organising and communicating information. Consequently, cognitive artifacts are able to assist in the performance of cognitive tasks precisely because they have certain informational properties. As Heersmink puts it, the functional properties of these artifacts supervene on their informational properties. One of the most obvious things a cognitive artifact can do is represent information in different forms.

‘Representation’ is a somewhat subtle concept. Heersmink adopts C.S. Peirce’s classic analysis. This holds that representation is a triadic relation between an object, a sign and an interpreter. The object is the worldly thing that the sign is taken to represent, the sign is that which represents the object, and the interpreter is the one who determines the relation between the sign and the object. To use a simple example, suppose there is a portrait of you hanging on the wall. The portrait is the sign; it represents the object (in this case, you); and you are the interpreter. The key thing about the sign is that it stands in for something else, namely the represented object. Signs can represent objects in different ways. Some forms of representation are straightforward: the sign simply looks like the object. Other forms of representation are more abstract.

Heersmink argues that there are three main forms of representation and, as a result, three main types of representational cognitive artifact. The first form of representation is iconic. An iconic representation is one that is isomorphic with or highly similar to the object it is representing. The classic example of an iconic cognitive artifact is a map. The map provides a scaled-down picture of the world. The visual imagery on the map is supposed to stand in a direct, one-to-one relation with the features in the real world. A lake is depicted as a blue blob; a forest is depicted as a mass of small green trees; a mountain range is depicted as a series of humps, coloured in different ways to represent their different heights.

The second form of representation is indexical. An indexical representation is one that is causally related to the object it is representing. The classic example of an indexical cognitive artifact would be a thermometer. The liquid within the thermometer expands when it is heated and contracts when it is cooled, changing the reading on the temperature gauge. There is thus a direct causal relationship between the information represented on the gauge and the actual temperature in the real world.

The third form of representation is symbolic. A symbolic representation is one that is neither iconic nor indexical. There is no discernible relationship between the sign and the object. The form that the sign takes is arbitrary and people simply agree (by social convention) that it represents a particular object or set of objects. Written language is the classic example of a symbolic cognitive artifact. The shapes of letters and the order in which they are presented bear no direct causal or isomorphic relationship to the objects they describe or name (pictographic or ideographic languages are different). The word ‘cat’, for example, bears no physical similarity to an actual cat. There is nothing about those letters that would tell you that they represented a cat. You simply have to learn the conventions to understand the representations.

The different forms of representation may be combined in any one cognitive artifact. For example, although maps are primarily iconic in nature, they often include symbolic elements such as place-names or numbers representing elevation or distance.


2. Ecological Cognitive Artifacts

The other family of cognitive artifacts is ecological in nature. This is a more difficult concept to explain. The gist of the idea is that some artifacts don’t merely provide representations of cognition-relevant information; rather, they provide actual forums in which information can be stored and manipulated. The favourite example of this — one originally posed by the distributed cognition pioneer David Kirsh — is the game of Tetris. For those who are not familiar, Tetris is a game in which you must manipulate differently shaped ‘bricks’ (technically known as ‘zoids’) into sockets or slots at the bottom of the game screen so that they form a continuous line of zoids. Although you could, in theory, play the game by mentally rotating the zoids in your head, and then deciding how to move them on the game screen, this is not the most effective way to play. The most effective way is simply to rotate the shapes on the screen and see how they will best fit into the wall forming at the bottom of the screen. In this way, the game creates an environment in which the cognition-relevant manipulation of information is performed directly. The artifact is thus its own cognitive ecology.
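
To make the idea of manipulating information ‘in the artifact’ a little more concrete, here is a toy sketch in Python. This is my own illustration, not anything from Kirsh’s work: a zoid is represented as a grid of filled (1) and empty (0) cells, and rotation is an operation performed on the artifact itself, so the player can simply perceive the result rather than compute it mentally.

    # Toy illustration (my own, not from Kirsh's Tetris study): a zoid is a grid
    # of filled (1) and empty (0) cells.
    def rotate_clockwise(zoid):
        """Rotate a zoid, represented as a list of rows, by 90 degrees."""
        return [list(row) for row in zip(*zoid[::-1])]

    s_zoid = [[0, 1, 1],
              [1, 1, 0]]

    # The rotation is performed externally and its result perceived directly,
    # rather than being simulated in the player's head.
    print(rotate_clockwise(s_zoid))  # [[1, 0], [1, 1], [0, 1]]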

Heersmink argues that there are two main types of ecological cognitive artifact. The first is the spatial ecological artifact. This is any artifact that stores information in its spatial structure. The idea behind it is that we encode cognition-relevant information into our social spaces, thereby obviating the need to store that information in our heads. A simple example would be the way in which we organise clothes into piles in order to keep track of which clothes have been washed, which need to be washed, which have been dried, and which need to be ironed. The piles, and their distribution across physical space, store the cognition-relevant information. Heersmink points out that the spaces in which we encode information need not be physical/real-world spaces. They can also be virtual, e.g. the virtual ‘desktop’ on your computer or phone screen.

The other kind of ecological cognitive artifact is the structural artifact. I don’t know if this is the best name for it, but the idea is that some artifacts don’t simply encode information into physical or virtual space; they also provide forums in which that information can be manipulated, reorganised and computed. The Tetris game screen is an example: it provides a virtual space in which zoids can be rearranged and rotated. Another example would be Scrabble tiles: constantly reorganising the tiles into different pairs or triplets makes it easier to spot words. The humble pen and paper can also, arguably, be used to create structures in which information can be manipulated and reorganised (e.g. writing out the available letters and spaces when trying to solve a crossword clue).
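
For what it’s worth, the overall shape of the taxonomy can be captured in a few lines of code. The following is a minimal sketch in Python; the class names are just my own shorthand for the categories discussed above, not Heersmink’s notation, and the final class illustrates the point (made in the conclusion below) that a single artifact can fall under several categories at once.

    # A minimal sketch (my own rendering) of the taxonomy as a class hierarchy.
    class CognitiveArtifact: ...

    class Representational(CognitiveArtifact): ...
    class Iconic(Representational): ...       # e.g. a map
    class Indexical(Representational): ...    # e.g. a thermometer
    class Symbolic(Representational): ...     # e.g. written language

    class Ecological(CognitiveArtifact): ...
    class Spatial(Ecological): ...            # e.g. laundry piles, a virtual desktop
    class Structural(Ecological): ...         # e.g. the Tetris screen, Scrabble tiles

    # The categories are not mutually exclusive, so a concrete artifact can
    # inherit from more than one of them:
    class AnnotatedMap(Iconic, Symbolic):
        """A map with place-names combines iconic and symbolic representation."""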


3. Conclusion
This, then, is Heersmink’s taxonomy of cognitive artifacts. One thing that is noticeable about it (and this is a feature, not a bug) is that it focuses on the properties of the artifacts themselves, not on how humans use them. It is, thus, an artifact-centred taxonomy, not an anthropomorphic one. Also, the taxonomy does not divide the world of cognitive artifacts into a set of jointly exhaustive and mutually exclusive categories. As is clear from the descriptions, particular artifacts can sit within several of the categories at the same time.

Nevertheless, I think the taxonomy is a useful one. It sheds light on the different ways in which artifacts can figure in our cognitive tasks, it makes us more sensitive to the rich panoply of cognitive artifacts we encounter in our everyday lives, and it can shed light on the propensity of these artifacts to enhance our cognitive performance. For example, symbolic cognitive artifacts clearly have a higher cognitive burden associated with them. The user must learn the conventions that determine the meaning of the representations before they can effectively use the artifact. At the same time, the symbolic representations probably allow for more complex and abstract cognitive operations to be performed. If we relied purely on iconic forms of representation we would probably never have generated the rich set of concepts and theories that litter our cognitive landscapes.

Saturday, April 15, 2017

The Art of Lecturing: Four Tips




The lecture is much maligned. An ancient art form, practiced for centuries by university lecturers, writers and public figures, it is now widely regarded as an inferior mode of education. Lectures are one-sided information dumps. They are more about the ego of the lecturer than the experience of the audience. They are often dull, boring, lacking in dynamism. They need to be replaced by ‘flipped’ classrooms, small-group activities, and student-led peer instruction.

And yet lectures are persistent. In an era of mass higher education, there is little other choice. An academic with teaching duties simply must learn to lecture to large groups of (apathetic) students. The much-celebrated paradigm of the Oxbridge-style tutorial, whatever its virtues may be, is simply too costly to realise on a mass scale. So how can we do a better job of lecturing? How can we turn the lecture into a useful educational tool?

I claim no special insight. I have been lecturing for years and I’m not sure I am any good at it. There are times when I think it goes well. I feel as if I got across the point I wanted to get across. I feel as if the students understood and engaged with what I was trying to say. Many times the evaluations I receive from them are encouraging. But these evaluations are global, not local, in nature: they assess the course as a whole, not particular lectures. Furthermore, I’m not sure that one-time, snapshot evaluations of this nature are all that useful. Not only is there a significant non-response rate, there is also the fact that the value of the particular lecture may take time to materialise. When I think back to my own college days, I remember few, if any, of the lectures I attended. It’s the odd one or two that have stuck in my mind and proven useful. It would have been impossible for me to know this at the time.

So the sad reality is that most of the time we lecture in the dark. We try our best (or not) and never know for sure whether we are doing an effective job. The only measures we have are transient and immediate: how did I (qua lecturer) act in the moment? Was I fluent in my exposition? Did the class engage with what I was saying? Did they ask questions? Was their curiosity piqued? Did any of the students come up to me afterwards to ask more questions about the topic? Did I create a positive atmosphere in the class?

Despite this somewhat pessimistic perspective, I think there are things that a lecturer can do to improve the lecturing experience, both for themselves and for their students. To this end, I created a poster with four main tips on how to lecture more effectively. I created this some time ago, after reading James Lang’s useful book On Course: A Week-by-Week Guide to Your First Semester of College Teaching, and by reflecting on my own classroom experiences. You can view the poster below; I elaborate on its contents in what follows.




1. Cultivate the Right Attitude
The first thing to do in order to improve the lecturing experience is simply to improve one’s own attitude towards it. If you read books on pedagogy or attend classes on teaching in higher education, you’ll come across a lot of anti-lecture writings. And if you do enough lectures yourself, you can end up feeling pretty jaded and cynical. The main critique of the lecture as a pedagogical tool is that it is antiquated. It may have had value at a time when students didn’t have easy access to the information being presented by the lecturer, but in today’s information rich society it makes no sense. Students can acquire all the information that is presented to them in the lecture through their own efforts — all the more so if you are providing them with class notes and lecture slides. So why bother?

The answer is that the lecture is still valuable and it’s important to appreciate its value before you start lecturing. For starters, I would argue that in today’s information-rich society, the lecture possibly has more value than ever before. The lecture is not just an information-dump; it is a lived experience. Just because students have easy access to the information contained within your lecture doesn’t mean they will actually access it. Most probably won’t, not unless they are cramming for their final exams. Not only is today’s society information-rich; it is also distraction-rich. When students leave the classroom they will have to exert exceptional willpower in order to avoid those distractions and engage with the relevant information. Thus, there is some value to the lecture as a ‘special’ lived experience when students are forced to confront the information and ideas relevant to their educational programme. They can, of course, supplement this with their own reading and learning, but students who don’t avail of the ‘special time’ of the lecture face an additional hurdle.

On top of this, there are things that a lecture can do that cannot be easily replicated by textbooks, lecture notes and the like. First, it can effectively summarise the most up-to-date research and synthesise complex bodies of information. This is particularly true if you are lecturing on your research interests and you keep abreast of the latest research in a way that textbooks and other materials do not. Lectures can also translate complex ideas for particular audiences. If you are lecturing to a group in person, you can get a good sense of whether they ‘grok’ the material being presented by constantly checking in. This allows you to adjust the pace of presentation or the style of explanation to suit the group. Another value of lectures is that they allow the lecturer to present themselves as an intellectual model to their students — to inspire them to engage with the world of ideas.

Finally, if all else fails, lectures have value for the lecturer, who learns more about their field of study through the process of preparing them. It is an oft-repeated truism that you don’t really know something until you have to explain it to someone else. Lectures give you the opportunity to do that several times a week.


2. Organise the Material
The second thing to do is to organise the material effectively. It’s an obvious point, but if the lecture consists largely in you presenting information to students, it is important that the information is presented in some comprehensible and compelling format. There are many ways to do this effectively, but three general principles are worth keeping in mind:


  • (i) Less is more: Lecturers have a tendency to overstuff their lectures with material, often because they have done a lot of reading on the topic and don’t want it to go to waste. What seems manageable to the lecturer is often too much for the students. I tend to think 3-5 main ideas per fifty-minute lecture is a good target.

  • (ii) Coherency: The lecture should have some coherent structure. It should not be just one idea after another. Organising the lecture around one key argument, story, or research study is often an effective way to achieve coherency. I lecture in law or legal theory so I tend to organise lectures around legal rules and the exceptions to them, or policy arguments and the critiques of them. I’m not sure this is always effective. I think it might be better to organise lectures around stories. Fortunately, law is an abundant source of stories: every case that comes before the court is a story about someone’s life and how it was affected by a legal rule. I’m starting to experiment with structuring my lectures around the more compelling of these stories.

  • (iii) Variation: It’s always worth remembering that attention spans are short, so you should build some variation into the lecture. Occasionally pausing for question breaks or group activities is a good way to break up the monotony.



3. Manage the Performance
The third thing to do is to manage the physical performance of lecturing. This might be the most difficult part of lecturing when you are starting out. When I first started, I never thought of lecturing as a performance art. But over time I have come to learn that it is. Being an effective lecturer is just as much about mastering the physical space of the lecture theatre as it is about knowing the material. I tended to focus on the latter when I was a beginner; now I tend to focus more on the former.

The general things to keep in mind here are (i) your lecturing persona and (ii) the way in which you land your energy within the classroom.

When you are lecturing you are, to at least some extent, playing a character. Who you are in the lecture theatre is different from who you are in the rest of your life. I know some lecturers craft an intimidating persona, eager to impress their students with their learning and dismissive of what they perceive to be silly questions. Such personas tend to stem from insecurity. At the same time, I know other lecturers who try to be incredibly friendly and open in their classroom personas, while oftentimes being more insular and closed in the rest of their work life. I try to land somewhere in between these extremes with my lecturing persona. I don’t like being overly friendly, but I don’t like being intimidating either.

‘Landing your energy’ refers to the way in which you direct your attention and gaze within the classroom. I remember one lecturer I had who used to land his energy on a clock at the back of the lecture theatre. At the start of every lecture he would open up his PowerPoint presentation, gaze at the clock on the back wall of the lecture theatre, tilt his head to one side, and then start talking. Never once did he look at the expressions on his students’ faces. Suffice it to say, this was not a very effective way to manage the physical space within the classroom. It wasn’t engaging. It didn’t make students feel like they were important to the performance.

A good resource for managing the physical aspects of lecturing is this video from the Derek Bok Center on ‘The Act of Teaching’.


4. Engage the Students
The final thing to do is to make sure that lectures are not purely one-way. This is the biggest criticism of lectures and it can be avoided by building in opportunities for genuine student engagement during the 50 or so minutes you have in the typical lecture. There are some standard methods for doing this. The most obvious is to encourage students to take notes. This might seem incredibly old-fashioned, but I always emphasise it to students in my courses. The note-taking process forces students to cognitively engage with what is being said and to translate it into a language that makes sense to them. To some extent, it doesn’t even matter whether the students ever use the notes for revision purposes.

Other things you can do include: building discussion moments into the class when you pause to ask questions, get students to think about them, and then ask follow-up questions; using in-class demonstrations of key ideas and concepts; and using the peer-instruction model (pioneered by Eric Mazur) where you pose conceptual tests during the lecture and get students to answer in peer groups. Of these, my favourites are the first two. I like to pause during lectures to get students to think about some question for a minute; get them to discuss it with the person sitting next to them for another minute; and then to develop this into a classroom discussion. I find this to be the most effective technique for stimulating classroom discussion — much more so than simply posing a question to the group as a whole. Demonstrations can also work well, but only for particular subjects or ideas. I use game theory in some of my classes and I find that demonstrating how certain legal, political and commercial ‘games’ work, using volunteers from the class, is an effective way to facilitate student engagement.

Monday, April 10, 2017

Abortion and the People Seeds Thought Experiment




(Entry on the violinist thought experiment)

The most widely discussed argument against abortion focuses on the right to life. It starts from something like the following premise:


  • (1) If an entity X has a right to life, it is impermissible to terminate X’s existence.


This premise seems plausible but needs to be modified. It does not deal with the clash of rights. There are certain cases in which rights conflict and need to be balanced and traded off against each other. The most obvious case is the one in which one person’s right to life conflicts with another person’s right to life. In those cases (typically referred to as ‘self-defence’ cases) it may be permissible for one individual to terminate another individual’s existence. Abortion may occasionally be permitted on these grounds. For example, the foetus may pose a genuine threat to the life of the mother, and so her right to life might be taken to trump the foetus’s right to life (assuming, for the sake of argument, that it has such a right).

The more difficult case is where the foetus poses no threat to the life of the mother. The question then becomes whether the mother’s right to control what happens to her body trumps the foetus’ right to life. Judith Jarvis Thomson’s famous article ‘A Defense of Abortion’ argues for an affirmative answer to this question. It does so through a series of fanciful and ingenious thought experiments. The most widely discussed of those thought experiments is the violinist thought experiment, which supposedly shows that the right to control one’s body trumps the right to life in cases of pregnancy resulting from rape. I presented a lengthy analysis of that thought experiment in a recent post.

Less widely discussed is Thomson’s ‘People Seeds’ thought experiment, and it’s that thought experiment that I wish to discuss over the remainder of this post. I do so with some help from John Martin Fischer’s article ‘Abortion and Ownership’, as well as William Simkulet’s article ‘Abortion, Property and Liberty’.


1. People Seeds and Contraceptive Failure
Here is Thomson’s original presentation of the ‘People Seeds’ thought experiment.

[S]uppose it were like this: people-seeds drift about in the air like pollen, and if you open your windows, one may drift in and take root in your carpets or upholstery. You don’t want children, so you fix up your windows with fine mesh screens, the very best you can buy. As can happen, however, and on very, very rare occasions does happen, one of the screens is defective; and a seed drifts in and takes root. 
(Thomson 1971, 59)

Now ask yourself two questions about this thought experiment: (1) Do you have a right to remove the seed if it takes root? and (2) What is this scenario like?

In answer to the first question, Thomson suggests that the answer is ‘yes’. You have no duty to allow the people-seed to gestate on the floor of your house just because one happened to get through your mesh screens. Your voluntary opening of the windows does not give the people-seeds an insurmountable right to stay. In answer to the second question, it is supposed to be like the case of pregnancy resulting from contraceptive failure. Arguing by analogy, Thomson’s claim is that the moral principle governing the people-seeds case carries over to the case of pregnancy resulting from contraceptive failure. So just as the right to control what happens to one’s property trumps the people-seed’s right to life in the former, so too does the right to control what happens to one’s body trump the foetus’ right to life (assuming it has one) in the latter. I have tried to illustrate this reasoning in the diagram below.



This argument is significant, if it is right. Thomson’s violinist thought experiment could only establish the permissibility of abortion in cases of involuntary pregnancy (i.e. pregnancy resulting from rape). The ‘People-seeds’ thought experiment goes further and purports to establish the permissibility of abortion in cases of voluntary sexual intercourse involving contraceptive failure. Is the argument right?


2. Counter-Analogies to People-Seeds
I’m going to look at John Martin Fischer’s analysis of the people-seeds thought experiment. I’ll start with an important preliminary point. Whenever we develop and evaluate a thought experiment, we have to be careful to ensure that our intuitions about what is happening in the thought experiment are not being contaminated or affected by irrelevant variables.

Thomson’s stated goal in her article is to consider the permissibility of abortion if we take for granted that the foetus has a right to life. Obviously, this is a controversial assumption. Many people argue that the foetus does not have a right to life because the foetus is not a person (or other entity capable of having a right to life). Thomson is trying to set that controversy to the side. She is willing to accept that the foetus really does have a right to life. Consequently, it is important for her project that she uses thought experiments involving entities that clearly do have a right to life. The violinist thought experiment clearly succeeds in this regard. It involves a fully competent adult human being — an entity that uncontroversially has a right to life. It’s less clear whether the people-seeds thought experiment shares this quality. It could be that when people are imagining the scenario they don’t think of the people-seeds as entities possessing a right to life (perhaps they think of them as the equivalent of sperm cells getting lodged in your carpet - they will take a bit of time to become people). Consequently, their conclusion that there is nothing wrong with removing the people-seeds from the carpet might not be driven by intuitions regarding the trade-off between the right to life and the right to control one’s property but rather by intuitions about the right to control one’s property simpliciter.

Fischer thinks there is some evidence for this interpretation of the thought experiment. If you run an alternative, but quite similar, thought experiment involving an entity that clearly does possess a right to life, the conclusion Thomson wishes to draw is much less compelling. Here’s one such thought experiment coming from the philosopher Kelly Sorensen:

Imagine you live in a high-rise apartment. The room is stuffy, and so you open a window to air it out. You don’t want anyone coming in…so you fix up your windows with metal bars, the very best you can buy. As can happen, though, the bars and/or their installation are defective, and the Spiderman actor [who is filming in the local area]…falls in, breaks his back in a special way, and cannot be moved, without ending his life, for nine months. Are you morally required to let him stay? 
(Fischer 2013, 291)

The suggestion from Fischer is that you might be under such an obligation. But if this is right, then it possibly provides a better analogy with the case of pregnancy resulting from contraceptive failure and a reason to think that the right to control one’s body does not trump the right to life.

Another point that Fischer makes is that your role in causing the entity in question to become dependent on you (your body or your property) might make a relevant difference to our moral beliefs. Thus, the fact that Thomson’s thought experiment asks us to suppose that the people-seeds are just out there already, floating around on the breeze, waiting to take up residency on somebody’s carpet, might be affecting our judgment. In this world, you are constantly in a defensive posture, trying to block the invasion of the people-seeds. If we changed the scenario so that you actually play some positive causal role in drawing them into your house/apartment we might reach a different conclusion. So here’s a slight variation on Thomson’s thought experiment:

Suppose that you can get some fresh air by simply opening the window (with the fine mesh screen), but still, you would get so much more if you were to use your fan, suitably placed and positioned so that it is sucking air from outside into the room. The only problem is that this sucks people-seeds into the room along with the fresh air. 
(Fischer 2013, 292)

The suggestion is that this is much closer to the case of pregnancy resulting from contraceptive failure. After all, voluntarily engaging in sexual intercourse (even with contraception) involves playing a positive causal role in drawing into your body the sperm cells that make pregnancy possible.

In sum, then, we have two counter-analogies to Thomson’s people-seeds thought experiment. The suggestion is that both of these thought experiments are closer to pregnancy resulting from contraceptive failure, and so the moral principle that applies in both should carry over to that case. The right to control one’s body does not trump the right to life.




3. Analysis of the Counter-Analogies
There are two problems with these counter-analogies. The first is simply that they do not compare like with like. This is a problem with all thought experiments that are intended to provide analogies with pregnancy, including Thomson’s. Pregnancy is, arguably, a sui generis phenomenon: there are no good analogies with it, period. Consequently, it is very difficult to build a moral argument for (or against) abortion by simply constructing elaborate and highly artificial thought experiments that pump our intuitions about the right to life in various ways. Furthermore, even if you hold out some hope for the analogical strategy, there is something pretty obviously disanalogous about the two scenarios: all the thought experiments involve interferences with the right to property, not with the right to control one’s body. Perhaps one has a property right over one’s body. Even so, the degree of invasiveness and dependency involved in pregnancy is quite unlike someone taking up residency on your carpet.

Another problem with the thought experiments is the normative principles underlying them. The whole discussion about pregnancy and contraceptive failure is motivated by the belief that consent matters when it comes to determining the rights claims that others have over us. Pregnancy from rape is distinctive because it involves a lack of consent. One person impregnates another against their will. It seems intuitively plausible (irrespective of the ranking one has of different rights) to assume that duties cannot be easily imposed on someone without their consent. Pregnancy from contraceptive failure is different because (a) everyone knows that pregnancy is a possible (if not probable) result of sexual intercourse even when it takes place with contraceptive protection and (b) by consenting to the sexual intercourse it seems like you must be willing to run the risk of this possible result. Consequently, it doesn’t seem quite so far-fetched to suppose that you might be voluntarily incurring some duties by engaging in the activity.

This line of reasoning, as William Simkulet sees it, is motivated by the following consent principle:

Consent principle: When an agent A freely engages in action X, A consents to all possible foreseeable consequences of X.
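
Rendered a little more formally (this is my own gloss, not Simkulet’s notation), the principle quantifies over every foreseeable consequence of a freely performed action:

    \[
    \forall A\, \forall X:\ \mathrm{Free}(A, X) \rightarrow \big(\forall c \in \mathrm{Foreseeable}(X):\ \mathrm{Consents}(A, c)\big)
    \]

As becomes clear below, it is this unrestricted quantifier over foreseeable consequences that generates the absurd results; restricting it to reasonably probable consequences would block them.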

At first glance, this seems like a plausible principle and, if it is correct, it would seem to imply that A incurs certain obligations or duties with respect to the foreseeable consequences of X. But according to Simkulet (and Thomson) this consent principle cannot possibly be correct because it entails absurd consequences. It entails that women are ‘on the hook’ (so to speak) for all the possible pregnancies that might befall them (irrespective of whether they consented to the sexual activity that led to the pregnancy) because rape is a possible foreseeable consequence of being alive and walking about in the world, and hence women who refuse to get hysterectomies must have consented to the possibility of pregnancy resulting from rape. Thomson put it like this in her original article:

…by the same token anyone can avoid a pregnancy due to rape by having a hysterectomy, or anyway by never leaving home without a (reliable!) army. 
(Thomson 1971, 59)

And Simulket explained the idea in his article as follows:

The circumstances that we face are, largely, outside of our control. But whether we have invasive surgery to remove our reproductive organs is, largely, within our control. It is uncontroversially true that any of us might be raped at some point in the future. Therefore, according to this argument, women who realize that rape is possible but who do not have a hysterectomy have consented to becoming pregnant from sexual assault. 
(Simkulet 2015, 376)

Simkulet also suggests, along similar lines, that the consent principle, if true, would entail that we all consent to all the possible foreseeable misfortunes that befall us because we could have avoided them by committing suicide. It is, of course, absurd to assume that if we wish to avoid responsibility for what happens to us we must get hysterectomies or commit suicide; hence the consent principle must be wrong.

I’m not sure what to make of this. I agree with Simkulet and Thomson that the strong version of the consent principle — the one that holds that we are on the hook for all possible foreseeable consequences of what we do — must be wrong. But obviously some version of the consent principle must be correct (perhaps one that focuses on results that are reasonably foreseeable or probable). After all, it is essential to our systems of contract law and legal responsibility that we incur duties through our voluntary activity.

If this is correct, then maybe Thomson’s thought experiments succeed in showing that the right to control one’s body trumps the right to life of the foetus (assuming it has one) in cases of pregnancy resulting from contraceptive failure, but they do nothing to show whether the same result holds in cases of unprotected consensual sexual intercourse. Those cases might be covered by a suitably modified version of the consent principle. If we want to argue for a pro-choice stance in relation to those cases, we may need to focus once more on the question of who or what bears a right to life.

Sunday, April 2, 2017

New Paper - Could there ever be an app for that? Consent Apps and the Problem of Sexual Assault




I have a new paper coming out in Criminal Law and Philosophy. The final version won't be out for a few weeks, but you can access a pre-publication version at the links below.

Title: Could there ever be an app for that? Consent Apps and the Problem of Sexual Assault
Journal: Criminal Law and Philosophy
Links: Official; Academia.edu; Philpapers
Abstract:  Rape and sexual assault are major problems. In the majority of rape and sexual assault cases consent is the central issue. Consent is, to borrow a phrase, the ‘moral magic’ that converts an impermissible act into a permissible one. In recent years, a handful of companies have tried to launch ‘consent apps’ which aim to educate young people about the nature of sexual consent and allow them to record signals of consent for future verification. Although ostensibly aimed at addressing the problems of rape and sexual assault on university campuses, these apps have attracted a number of critics. In this paper, I subject the phenomenon of consent apps to philosophical scrutiny. I argue that the consent apps that have been launched to date are unhelpful because they fail to address the landscape of ethical and epistemic problems that would arise in the typical rape or sexual assault case: they produce distorted and decontextualised records of consent which may in turn exacerbate the other problems associated with rape and sexual assault. Furthermore, because of the tradeoffs involved, it is unlikely that app-based technologies could ever be created that would significantly address the problems of rape and sexual assault. 
 
 

Friday, March 31, 2017

Robot Rights: Intelligent Machines (Panel Discussion)





I participated in a debate/panel discussion about robot rights at the Science Gallery (Trinity, Dublin) on the 29th March 2017. A video from the event is above. Here's the description from the organisers:

What if robots were truly intelligent and fully self aware? Would we give them equal rights and the same protection under the law as we provide ourselves? Should we? But if a machine can think, decide and act on its own volition, if it can be harmed or held responsible for its actions, should we stop treating it like property and start treating it more like a person with rights?

Moderated by Lilian Alweiss from the philosophy department at Trinity College Dublin, panellists include Conor Mc Ginn, Mechanical & Engineering Department, Trinity College Dublin; John Danaher, Law department NUI Galway; and Eoghan O'Mahoney from McCann Fitzgerald.

Join us as we explore these issues as part of our HUMANS NEED NOT APPLY exhibition with a panel discussion featuring leaders in the fields of AI, ethics and law.

Tuesday, March 28, 2017

BONUS EPISODE - Pip Thornton on linguistic capitalism, Google's ad empire, fake news and poetry



[Note: This was previously posted on my Algocracy project blog; I'm cross-posting it here now. The audio quality isn't perfect but the content is very interesting. It is a talk by Pip Thornton, the (former) Research Assistant on the project].

My post as research assistant on the Algocracy & Transhumanism project at NUIG has come to an end. I have really enjoyed the five months I have spent here in Galway - I have learned a great deal from the workshops I have been involved in, the podcasts I have edited, the background research I have been doing for John on the project, and also from the many amazing people I have met both in and outside the university.

I have also had the opportunity to present my own research to a wide audience and most recently gave a talk on behalf of the Technology and Governance research cluster entitled A Critique of Linguistic Capitalism (and an artistic intervention) as part of a seminar series organised by the Whitaker Institute's Ideas Forum, which I managed to record.

Part of my research involves using poetry to critique linguistic capitalism and the way language is both written and read in an age of algorithmic reproduction. For the talk I invited Galway poet Rita Ann Higgins to help me explore the differing 'value' of words, so the talk includes Rita Ann reciting an extract from her award-winning poem Our Killer City, and my own imagining of what the poem 'sounds like' - or is worth - to Google. The argument central to my thesis is that the power held by the tech giant Google, as it mediates, manipulates and extracts economic value from the language (or more accurately the decontextualised linguistic data) which flows through its search, communication and advertising systems, needs both transparency and strong critique. Words are auctioned off to the highest bidder, and become little more than tools in the creation of advertising revenue. But there are significant side effects, which can be both linguistic and political. Fake news sites are big business for advertisers and Google, but also infect the wider discourse as they spread through social media networks and national consciousness. One of the big questions I am now starting to ask is just how resilient language is to this neoliberal infusion, and what it could mean politically. As the value of language shifts from conveyor of meaning to conveyor of capital, how long will it be before the linguistic bubble bursts?

You can download it HERE or listen below:



Track Notes



  • 0:00 - introduction and background
  • 4:30 - Google Search & autocomplete - digital language and semantic escorts
  • 6:20 - Linguistic Capitalism and Google AdWords - the wisdom of a linguistic marketplace?
  • 9:30 - Google Ad Grants - politicising free ads: the Redirect Method, A Clockwork Orange and the neoliberal logic of countering extremism via Google search 
  • 16:00 - Google AdSense - fake news sites, click-bait and ad revenue  -  from Chicago ballot boxes to Macedonia - the ads are real but the news is fake 
  • 20:35 - Interventions #1 - combating AdSense (and Breitbart News) - the Sleeping Giants Twitter campaign 
  • 23:00 - Interventions #2 - Gmail and the American Psycho experiment 
  • 25:30 - Interventions #3 - my own {poem}.py project - critiquing AdWords using poetry, cryptography and a second hand receipt printer 
  • 30:00 - special guest poet Rita Ann Higgins reciting Our Killer City 
  • 33:30 - Conclusions - a manifestation of postmodernism? sub-prime language - when does the bubble burst? commodified words as the master's tools - problems of method


Relevant Links


Monday, March 20, 2017

Abortion and the Violinist Thought Experiment




Here is a simple argument against abortion:


  • (1) If an entity (X) has a right to life, it is, ceteris paribus, not permissible to terminate that entity’s existence.
  • (2) The foetus has a right to life.
  • (3) Therefore, it is not permissible to kill or terminate the foetus’s existence.


Defenders of abortion will criticise at least one of the premises of this argument. Many will challenge premise (2). They will argue that the foetus is not a person and hence does not have a right to life. Anti-abortion advocates will respond by saying that it is a person or that it has some other status that gives it a right to life. This gets us into some abstruse questions on the metaphysics of personhood and moral status.

The other pro-choice strategy is to challenge premise (1) and argue that there are exceptions to the principle in question. Indeed, exceptions seem to abound. There are situations in which one right to life must be balanced against another, and in those situations it is permissible for one individual to kill another. This is the typical case of self-defence: someone immediately and credibly threatens to end your life and the only way to neutralise that threat is to end theirs. Killing them is permissible in these circumstances. A pro-choice advocate might argue that there are some circumstances in which pregnancy is analogous to the typical case of self-defence, i.e. there are cases where the foetus poses an immediate and credible threat to the life of the mother and the only way to neutralise that threat is to end the life of the foetus.

The trickier scenario is where the mother’s life is unthreatened. In those cases, if the foetus has a right to life, anti-abortionists will argue that the following duty holds:

Gestational duty: If a woman’s life is unthreatened by her being pregnant, she has a duty to carry the foetus to term.

The rationale for this is that the woman’s right to control her body cannot trump the foetus’ right to life. In the moral pecking order, the right to life ranks higher than the right to do with one’s body as one pleases.

It is precisely this understanding of the gestational duty that Judith Jarvis Thomson challenged in her famous 1971 article ‘A Defense of Abortion’. She did so by way of some ingenious thought experiments featuring sick violinists, expanding babies and floating ‘people-seeds’. Much has been written about those thought experiments in the intervening years. I want to take a look at some recent criticism and commentary from John Martin Fischer. He tries to show that Thomson’s thought experiments don’t provide as much guidance for the typical case of pregnancy as we initially assume, but this, in turn, does not provide succour for the opponents of abortion.

I’ll divide my discussion up over two posts. In this post, I’ll look at Fischer’s analysis of the Violinist thought experiment. In the next one, I’ll look at his analysis of the ‘people seeds’ thought experiment.


1. The Violinist Thought Experiment
The most famous thought experiment from Thomson’s article is the one about the violinist. Even if you know nothing about the broader abortion debate, you have probably come across this thought experiment. Here it is in all its original glory:

The Violinist: ‘You wake up in the morning and find yourself back to back in bed with an unconscious violinist. A famous unconscious violinist. He has been found to have a fatal kidney ailment, and the Society of Music Lovers has canvassed all the available medical records and found that you alone have the right blood type to help. They have therefore kidnapped you, and last night the violinist’s circulatory system was plugged into yours, so that your kidneys can be used to extract poisons from his blood as well as your own. The director of the hospital now tells you, “Look, we’re sorry the Society of Music Lovers did this to you — we would never have permitted it if we had known. But still, they did it, and the violinist is now plugged into you. To unplug you would be to kill him. But never mind, it’s only for nine months. By then he will have recovered from his ailment, and can safely be unplugged from you.”’ (1971: 132)

Do you have a duty to remain plugged into the violinist? Thomson argues that you don’t; that intuitively, in this case, it is permissible to unplug yourself from the violinist. That doesn’t mean we would praise you for doing it — we might think it is morally better for you to stay plugged in — but it does mean that we don’t think you are blameworthy for unplugging. In this case, your right to control your own body trumps the violinist’s right to life.

Where does that get us? The argument is that the case of the violinist is very similar to the case of pregnancy resulting from rape. In both cases you are involuntarily placed in a position whereby somebody else’s life is dependent on being attached to your body for nine months. By analogy, if your right to control your own body trumps the violinist’s right to life, it will also trump the foetus’ right to life:


  • (4) In the violinist case, you have no duty to stay plugged into the violinist (i.e. your right to control your own body trumps his right to life).
  • (5) Pregnancy resulting from rape is similar to the violinist case in all important respects.
  • (6) Therefore, probably, you have no duty to carry the foetus to term in the case of pregnancy resulting from rape (i.e. your right to control your own body trumps the foetus’ right to life).


Since it will be useful for later purposes, I’ve tried to map the basic logic of this argument from analogy in the diagram below. The diagram is saying that the two cases are sufficiently similar so that it is reasonable to suppose that the moral principle that applies to the first case carries over to the second.



2. Fischer’s Criticism of the Violinist Thought Experiment
In his article, ‘Abortion and Ownership’, Fischer challenges Thomson’s intuitive reaction to The Violinist. His argumentative strategy is subtle and interesting. He builds up a chain of counter-analogies (i.e. analogies in which the opposite principle applies) and argues that they are sufficient to cast doubt on the conclusion that your right to control your own body trumps the violinist’s right to life.

He starts with a thought experiment from Joel Feinberg:

Cabin Case 1: “Suppose that you are on a backpacking trip in the high mountain country when an unanticipated blizzard strikes the area with such ferocity that your life is imperiled. Fortunately, you stumble onto an unoccupied cabin, locked and boarded up for the winter, clearly somebody else’s private property. You smash in a window, enter, and huddle in a corner for three days until the storm abates. During this period you help yourself to your unknown benefactor’s food supply and burn his wooden furniture in the fireplace to keep warm.” (Feinberg 1978, 102)

Feinberg thinks that in this case you have a right to break into the house and use the available resources. The problem is that this clearly violates the cabin-owner’s right to control their property. Still, the fact that you are justified in violating that right tells us something interesting. It tells us that, in this scenario, the right to life trumps the right to control one’s own property.

So what? The right of the cabin-owner to control his/her property is very different from your right to control your body (in the case of the violinist and pregnancy-from-rape). For one thing, the violation in the case of the cabin owner is short-lived, lasting only three days, until the storm abates. Furthermore, it requires no immediate interference with their enjoyment of the property or with their body. We are explicitly told that the cabin is unoccupied at the time. So, at first glance, it doesn’t seem like Cabin Case 1 tells us anything interesting about abortion.

Fischer begs to differ. He tries to construct a series of thought experiments that bridge the gap between Cabin Case 1 and The Violinist. He does so by first imagining a case in which the property-owner is present at the time of the interference and in which the interference will continue for at least nine months:

Cabin Case 2: "You have secured a cabin in an extremely remote and inaccessible place in the mountains. You wish to be alone; you have enough supplies for yourself, and also some extras in case of an emergency. Unfortunately, a very evil man has kidnapped an innocent person and [left] him to die in the desolate mountain country near your cabin. The innocent person wanders for hours and finally happens upon your cabin…You can radio for help, but because of the remoteness and inaccessibility of your cabin and the relatively primitive technology of the country in which it is located, the rescue party will require nine months to reach your cabin…You can let the innocent stranger into your cabin and provide food and shelter until the rescue party arrives in nine months, or you can forcibly prevent him from entering your cabin and thus cause his death (or perhaps allow him to die)." (Fischer 1991, 6)

Fischer argues that, intuitively, in this case the innocent person still has the right to use your property and emergency resources, and you have a duty of beneficence to them. In other words, their right to life trumps your right to control and use your property. Of course, a fan of Thomson’s original thought experiment might still resist this by arguing that the rights violation in this second Cabin Case is different because it does not involve any direct bodily interference. So Fischer comes up with a third variation that involves precisely that:

Cabin Case 3: The same scenario as Cabin Case 2, except that the innocent person is tiny and injured and would need to be carried around on your back for the nine months. You are physically capable of doing this.

Fischer argues that the intuition doesn’t change in this case. He thinks we still have a duty of beneficence to the innocent stranger, despite the fact that it involves a nine-month interference with our right to control our property and our bodies. The right to life still trumps both. This is important because Cabin Case 3 is, according to Fischer, very similar to the Violinist.

What Fischer is arguing, then, is sketched in the diagram below. He is arguing that the principle that applies in Cabin Case 1 carries over to Cabin Case 3 and that there is no relevant moral difference between Cabin Case 3 and the Violinist. Thomson’s original argument is, thereby, undermined.



For what it's worth, I'm not entirely convinced by this line of reasoning. I don't quite share Fischer's intuition about Cabin Case 3. I think that if you really imagined the inconvenience and risk involved in carrying another person around on your back for nine months, you might not be so quick to affirm a duty of beneficence. That reveals one of the big problems with this debate: esoteric thought experiments can generate very different intuitive reactions.


3. What does this mean for abortion?
Let’s suppose Fischer is correct in his reasoning. What follows? One thing that follows is that the right to life trumps the right to control one’s body in the case of the Violinist. But does it thereby follow that the right to life trumps the right to control one’s body in the case of pregnancy from rape? Not necessarily. Fischer argues that there could be important differences between the two scenarios, overlooked in Thomson’s original discussion, that warrant a different conclusion in the rape scenario. A few examples spring to mind.

In the case of pregnancy resulting from rape, both the woman and the rapist will have a genetic link with the resulting child and will be its natural parents. The woman is likely to have some natural affection and feelings of obligation toward the child, but this may be tempered by the fact that the child (innocent and all as it is) is a potential reminder (trigger) of the trauma of the rape that led to its existence. The woman may give the child up for adoption — and thereby absolve herself of legal duties toward it — but this may not dissolve any natural feelings of affection and obligation.  Furthermore, the child may be curious about its biological parentage in later years and may seek a relationship with its natural mother or father (it may need to do so because it requires information about its genetic lineage). All of which is to say, that the relationship between the mother and child is very different from the relationship between you and the violinist or you and the tiny innocent person you have to carry on your back. Those relationships normatively and naturally dissolve after the nine-month period of dependency. This is not true in the case of the mother and her offspring. The interference with her rights lingers.

These differences may be sufficient to warrant a different conclusion in the case of pregnancy resulting from rape. But this is of little advantage to the pro-choice advocate, for it says nothing about other pregnancies. There are critics of abortion who are willing to concede that it should be an option in cases of rape, but who argue that this does not affect the gestational duty in the much larger range of cases where pregnancy results from consensual sexual intercourse. That's where Thomson's other thought experiment (People Seeds) comes into play. I'll look at that thought experiment, along with Fischer's analysis of it, in the next post.

Tuesday, March 14, 2017

How to Plug the Robot Responsibility Gap




Killer robots. You have probably heard about them. You may also have heard that there is a campaign to stop them. One of the main arguments that proponents of the campaign make is that they will create responsibility gaps in military operations. The problem is twofold: (i) the robots themselves will not be proper subjects of responsibility ascriptions; and (ii) as they gain autonomy, there is more separation between what they do and the acts of the commanding officers or developers who allowed their use, and so less ground for holding these people responsible for what the robots do. A responsibility gap opens up.

The classic statement of this ‘responsibility gap’ argument comes from Robert Sparrow (2007, 74-75):

…the more autonomous these systems become, the less it will be possible to properly hold those who designed them or ordered their use responsible for their actions. Yet the impossibility of punishing the machine means that we cannot hold the machine responsible. We can insist that the officer who orders their use be held responsible for their actions, but only at the cost of allowing that they should sometimes be held entirely responsible for actions over which they had no control. For the foreseeable future then, the deployment of weapon systems controlled by artificial intelligences in warfare is therefore unfair either to potential casualties in the theatre of war, or to the officer who will be held responsible for their use.

This argument has been debated a lot since Sparrow first propounded it. What is often missing from those debates is some application of the legal doctrines of responsibility. Law has long dealt with analogous scenarios — e.g. people directing the actions of others to nefarious ends — and has developed a number of doctrines that plug the potential responsibility gaps that arise in these scenarios. What’s more, legal theorists and philosophers have long analysed the moral appropriateness of these doctrines, highlighting their weaknesses, and suggesting reforms that bring them into closer alignment with our intuitions of justice. Deeper engagement with these legal discussions could move the debate on killer robots and responsibility gaps forward.

Fortunately, some legal theorists have stepped up to the plate. Neha Jain is one example. In her recent paper ‘Autonomous weapons systems: new frameworks for individual responsibility’, she provides a thorough overview of the legal doctrines that could be used to plug the responsibility gap. There is a lot of insight to be gleaned from this paper, and I want to run through its main arguments in this post.


1. What is an autonomous weapons system anyway?

To get things started we need a sharper understanding of robot autonomy and the responsibility gap. We'll begin with the latter. The typical scenario imagined by proponents of the gap is one in which a military officer or commander has authorised the battlefield use of an autonomous weapons system (AWS), and that AWS has then used its lethal firepower to commit some act that, had it been performed by a human combatant, would almost certainly be deemed criminal (or contrary to the laws of war).

There are two responsibility gaps that arise in this typical scenario. There is the gap between the robot and the criminal/illegal outcome. This gap arises because the robot cannot be a fitting subject for attributions of responsibility. I looked at the arguments that can be made in favour of this view before. It may be possible, one day, to create a robot that meets all the criteria for moral personhood, but this is not going to happen for a long time, and there may be reason to think that we would never take claims of robot responsibility seriously. The other gap arises because there is some normative distance between what the AWS did and the authorisation of the officer or commander. The argument here would be that the AWS did something that was not foreseeable or foreseen by the officer/commander, or acted beyond their control or authorisation. Thus, they cannot be fairly held responsible for what the robot did.

I have tried to illustrate this typical scenario, and the two responsibility gaps associated with it, in the diagram below. We will be focusing on the gap between the officer/commander and the robot for the remainder of this post.



As you can see, the credibility of the responsibility gaps hinges on how autonomous the robots really are. This prompts the question: what do we mean when we ascribe ‘autonomy’ to a robot? There are two competing views. The first describes robot autonomy as being essentially analogous to human autonomy. This is called ‘strong autonomy’ in Jain’s paper:

Strong Robot Autonomy: A robotic system is strongly autonomous if it is ‘capable of acting for reasons that are internal to it and in light of its own experience’ (Jain 2016, 304).

If a robot has this type of autonomy it is, effectively, a moral agent, though perhaps not a responsible moral agent due to certain incapacities (more on this below). A responsibility gap then arises between a commander/officer and a strongly autonomous robot in much the same way that a responsibility gap arises between two human beings.

A second school of thought rejects this analogy-based approach to robot autonomy, arguing that when roboticists describe a system as ‘autonomous’ they are using the term in a distinct, non-analogous fashion. Jain refers to this as emergent autonomy:

Emergent Robot Autonomy: A robotic system is emergently autonomous if its behaviour is dependent on 'sensor data (which can be unpredictable) and on stochastic (probability-based) reasoning that is used for learning and error correction' (Jain 2016, 305).

This type of autonomy has more to do with the dynamic and adaptive capabilities of the robot than with its powers of moral reasoning or its capacity for 'free' will. The robot is autonomous if it can be deployed in a variety of environments and can respond to the contingent variables in those environments in an adaptive manner. Emergent autonomy creates a responsibility gap because the behaviour of the robot is unpredictable and unforeseeable.

Jain’s goal is to identify legal doctrines that can be used to plug the responsibility gap no matter what type of autonomy we ascribe to the robotic system.


2. Plugging the Gap in the Case of Strong Autonomy
Suppose a robotic system is strongly autonomous. Does this mean that the officer/commander who deployed the system cannot be held responsible for what it does? No. Legal systems have long confronted this kind of problem and have developed two distinct doctrines for dealing with it. The first is the doctrine of innocent agency or perpetration; the second is the doctrine of command responsibility.



The doctrine of innocent agency or perpetration is likely to be the less familiar of the two. It describes a scenario in which one human being (the principal) uses another human being (or, as we will see, a human-run organisational apparatus) to commit a criminal act on their behalf. Consider the following example:

Poisoning-via-child: Grace has grown tired of her husband. She wants to poison him. But she doesn’t want to administer the lethal dose herself. She mixes the poison in with sugar and she asks her ten-year-old son to ‘put some sugar in daddy’s tea’. He dutifully does so.

In this example, Grace has used another human being to commit a criminal act on her behalf. Clearly that human being is innocent — he did not know what he was really doing — so it would be unfair or inappropriate to hold him responsible (contrast with a hypothetical case in which Grace hired a hitman to do her bidding). Common law systems allow for Grace to be held responsible for the crime through the doctrine of innocent agency. This applies whenever one human being uses another human being with some dispositional or circumstantial incapacity for responsibility to perform a criminal act on their behalf. The classic cases involve taking advantage of another person’s mental illness, ignorance or juvenility.

Similarly, but perhaps more interestingly, there is the civil law doctrine of perpetration. This doctrine covers cases in which one individual (the indirect perpetrator) gets another (the direct perpetrator) to commit a criminal act on their behalf. The indirect perpetrator uses the direct perpetrator as a tool, and hence the direct perpetrator must be at some sort of disadvantage or deficit relative to the indirect perpetrator. The German Criminal Code sets this out in Section 25, and the doctrine has some interesting features:

Section 25 of the Strafgesetzbuch: The Hintermann is the indirect perpetrator. He or she uses a Vordermann as the direct perpetrator. The Vordermann performs the act itself (and so has Handlungsherrschaft, or 'act hegemony'), while the Hintermann exercises Willensherrschaft (domination) over the will of the Vordermann.

Three main types of Willensherrschaft are recognised: (i) coercion; (ii) taking advantage of a mistake made by the Vordermann; or (iii) possessing control over some organisational apparatus (Organisationsherrschaft). The latter is particularly interesting because it allows us to imagine a case in which the indirect perpetrator uses some bureaucratic agency to carry out their will. It is also interesting because Article 25 of the Rome Statute establishing the International Criminal Court recognises the doctrine of perpetration, and the ICC has held in its decisions that this covers perpetration via an organisational apparatus.

Let's now bring it back to the issue at hand. How do these doctrines apply to killer robots and the responsibility gap? The answer should be obvious enough. If robots possess the strong form of autonomy, but have some deficit that prevents them from being responsible moral agents, then they are, in effect, like innocent agents or direct perpetrators. Their human officers/commanders can be held responsible for what they do, through the doctrine of perpetration, provided those officers/commanders intended for them to do what they did, or knew that they would do it.

The problem with this, however, is that it doesn’t cover scenarios in which the robot acts outside or beyond the authorisation of the officer/commander. To plug the gap in those cases you would probably need the doctrine of command responsibility. This is a better known doctrine, though it has been controversial. As Jain describes it, there are three basic features to command responsibility:

Command Responsibility: A doctrine allowing for ascriptions of responsibility in cases where (a) there is a superior-subordinate relationship where the superior has effective control over the subordinate; (b) the superior knew or had reason to know (or should have known) of the subordinates’ crimes and (c) the superior failed to control, prevent or punish the commission of the offences.

Command responsibility covers both military and civilian commanders, though it is usually applied more strictly in the case of military commanders. Civilian commanders must have known of the actions of the subordinates; military commanders can be held responsible for failing to know when they should have known (a so-called ‘negligence standard’).

Command responsibility is well-recognised in international law and has been enshrined in Article 28 of the Rome Statute on the International Criminal Court. For it to apply, there must be a causal connection between what the superior did (or failed to do) and the actions of the subordinates. There must also be some temporal coincidence between the superior’s control and the subordinates’ actions.

Again, we can see easily enough how this could apply to the case of the strongly autonomous robot. The commander who deploys that robot could be held responsible for what it does if they have effective control over the robot, if they knew (or ought to have known) that it was doing something illegal, and if they failed to intervene and stop it from happening.
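
To make the conjunctive structure of the doctrine vivid, here is a minimal sketch in Python. It is purely illustrative: the names CommanderRobotCase and command_responsibility_applies are my own, and nothing here is meant as a legal test.

```python
from dataclasses import dataclass

@dataclass
class CommanderRobotCase:
    """Hypothetical facts about a commander who deployed an AWS."""
    effective_control: bool            # (a) superior-subordinate relationship with effective control
    knew_or_should_have_known: bool    # (b) actual or constructive knowledge of the wrongful act
    failed_to_prevent_or_punish: bool  # (c) no attempt to control, prevent or punish the act

def command_responsibility_applies(case: CommanderRobotCase) -> bool:
    """The doctrine is conjunctive: all three conditions must hold."""
    return (case.effective_control
            and case.knew_or_should_have_known
            and case.failed_to_prevent_or_punish)

# A commander with effective control who ought to have known what the
# robot was doing and did nothing to stop it:
print(command_responsibility_applies(CommanderRobotCase(True, True, True)))   # True
# The same commander, but the robot's act was genuinely unforeseeable:
print(command_responsibility_applies(CommanderRobotCase(True, False, True)))  # False
```

The second call illustrates the pressure point: it is condition (b), the knowledge requirement, that unpredictable robot behaviour threatens to undermine.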

The problem with this, however, is that it assumes the robot acts in a rational and predictable manner — that its actions are ones that the commander could have known about and, perhaps, should have known about. If the robot is strongly autonomous, that might hold true; but if the robot is emergently autonomous, it might not.


3. Plugging the Gap in the Case of Emergent Autonomy
So we come to the case of emergent autonomy. Recall that the challenge here is that the robot behaves in a dynamic and adaptive manner: it responds to its environment in complex and unpredictable ways. The way in which it adapts and responds may be quite opaque to its human commanders (and even to its developers, if it relies on certain machine learning tools), and so they will be less willing and less able to second-guess its judgments.

This creates serious problems when it comes to plugging the responsibility gap. Although we could imagine using the doctrines of perpetration and/or command responsibility once again, we would quickly be forced to ask whether it was right and proper to do so. The critical questions will relate to the mental element required by both doctrines. I was a little sketchy about this in the previous section. I need to be clearer now.

In criminal law, responsibility depends on satisfying certain mens rea (mental element) conditions for an offence. In other words, in order to be held responsible you must have intended, known, or been reckless/negligent with respect to some fact or other. In the case of murder, for example, you must have intended to kill or cause grievous bodily harm to another person. In the case of manslaughter (a lesser offence) you must have been reckless (or in some cases grossly negligent) with respect to the chances that your action might cause another’s death.

If we want to apply either doctrine to the case of an emergently autonomous robot, we will have to do so via something like the recklessness or negligence mens rea standards. The traditional application of the perpetration doctrine does not allow for this: the principal or Hintermann must have intention or knowledge with respect to the elements of the offence committed by the Vordermann. The command responsibility doctrine does allow for recklessness and negligence: in the case of civilian commanders, a recklessness mental element is required; in the case of military commanders, a negligence standard is allowed. So if we wanted to apply perpetration to emergently autonomous robots, we would have to lower its mens rea standard.



Even if we did that it might be difficult to plug the gap. Consider recklessness first. There is no uniform agreement on what this mental element entails. The uncontroversial part of it is that in order to be reckless one must have recognised and disregarded a substantial risk that the criminal act would occur. The controversy arises over the standards by which we assess whether there was a consciously disregarded substantial risk. Must the person whose conduct led to the criminal act have recognised the risk as substantial? Or must he/she simply have recognised a risk, leaving it up to the rest of us to decide whether the risk was substantial or not? It makes a difference. Some people might have different views on what kinds of risks are substantial. Military commanders, for instance, might have very different standards from civilian commanders or members of the general public. What we perceive to be a substantial risk might be par for the course for them.

There is also disagreement as to whether the defendant must consciously recognise the specific type of harm that occurred or whether it is enough that they recognised a general category of harm into which the specific harm fits. So, in the case of a military operation gone awry, must the commander have recognised the general risk of collateral damage, or the specific risk that a particular, identified group of people would be collateral damage? Again, it makes a big difference. If it is the more general category that must be recognised and disregarded, it will be easier to argue that commanders are reckless.

Similar considerations arise in the case of negligence. Negligence covers situations in which a risk was not consciously recognised but ought to have been. It is all about standards of care and deviations therefrom. What would the reasonable person or, in the case of professionals, the reasonable professional have foreseen? Would the reasonable military commander have foreseen the risk of an AWS doing something untoward? What if the untoward act is completely unprecedented?

It seems obvious enough that the reasonable military commander must always foresee some risk when it comes to the use of AWSs. Military operations always carry some risk and AWSs are lethal weapons. But should that be enough for them to fall under the negligence standard? If we make it very easy for commanders to be held responsible, it could have a chilling effect on both the use and development of AWSs.

That might be welcomed by the Campaign to Stop Killer Robots, but not everyone will be so keen. Critics of such a low threshold will say that there are potential benefits to this technology (think of the arguments made in favour of self-driving cars) and that setting the mens rea standard too low will cut us off from those benefits.

Anyway, that’s it for this post.

Thursday, March 9, 2017

TEDx Talk: Symbols and Consequences in the Sex Robot Debate




The video from the TEDx talk I did last month is now available for your viewing pleasure. A text version is available here. Some people worry about the symbolic meaning of sex robots and their consequences for society. I argue that these worries may be misplaced.


Wednesday, March 8, 2017

Virtual Sexual Assault: A Classificatory Scheme


Party scene from Second Life


In 1993, Julian Dibbell wrote an article in The Village Voice describing the world’s first virtual rape. It took place in a virtual world called LambdaMOO, which still exists to this day. It is a text-based virtual environment. People in LambdaMOO create virtual avatars (onscreen ‘nicknames’) and interact with one another through textual descriptions. Dibbell’s article described an incident in which one character (Mr. Bungle) used a “voodoo doll” program to take control of two other users’ avatars and force them to engage in sexual acts.

In 2003, a similar incident took place in Second Life. Second Life is a well-known virtual world. It is visual rather than textual: people create virtual avatars that can interact with other users' avatars in a reasonably detailed virtual environment. In 2007, the Belgian Federal Police announced that they would be investigating this 'virtual rape' incident. Little is known about what actually happened, but taking control of another user's avatar and forcing it to engage in sexual acts was not unheard of in Second Life.

More recently, in October 2016 to be precise, the journalist Jordan Belamire reported how she had been sexually assaulted while playing the VR game QuiVR, using the HTC Vive. The HTC Vive (for those that don't know) is an immersive VR system: users don a headset that places a virtual environment into their visual field, and they interact with that environment from a first-person perspective. QuiVR is an archery game in which players fight off marauding zombies. It can be played online with multiple users. Players appear in disembodied form as a floating helmet and pair of hands. The only indication of gender comes through the choice of name and the voice used to communicate with other players. Jordan Belamire was playing the game in her home when another user — with the onscreen name 'BigBro442' — started to rub the area near where her breasts would be (if they were depicted in the environment). She screamed at him to 'stop!', but he proceeded to chase her around the virtual environment and then to rub her virtual crotch. Other female users of VR have reported similar experiences.

These three incidents raise important ethical questions. Clearly, there is something undesirable about this conduct. But how serious is it, and what should we do about it? As a first step towards answering these questions, it seems we need a classificatory scheme for categorising the different incidents of virtual rape and sexual assault. Maybe then we can say something useful about their ethical importance. Prima facie, there is something different about the virtual sexual assaults that took place in LambdaMOO and QuiVR, and these differences might be significant. In this post, I'm going to try to pin down these differences by developing a classificatory scheme.

In developing this scheme, I am heavily indebted to Litska Strikwerda's article "Present and Future Instances of Virtual Rape…". What I present here is a riff on the classificatory scheme she develops in that article.


1. Defining Virtual Sexual Assault
I’ll start with a couple of definitions. I’ll define ‘virtual sexual assault’ in the following manner:

Virtual Sexual Assault: Unwanted, forced, or nonconsensual sexually explicit behaviour that is performed by virtual representations acting in a virtual environment.

This is a pretty vague definition. This is deliberate: I want it to cover a range of possible scenarios. There are many different kinds of virtual representations and virtual environments and hence many forms that virtual sexual assault can take. Nevertheless, since the focus is on sexual behaviour, we have to assume that these virtual representations and environments include beings who are capable of symbolically representing sexual acts. The paradigmatic incident of virtual sexual assault would thus be a scenario like the one in Second Life where two humanoid avatars engage in sexual behaviour. You may wonder what it means for sexual behaviour to be ‘unwanted, forced or nonconsensual’ in a virtual environment. I’ll assume that this can happen in one of two ways. First, if one of the virtual representations is depicted as not wanting or not consenting to the activity (and/or one is depicted as exerting force on the other). Second, and probably more importantly, if the human who controls one of the virtual representations does not want or consent to the activity being represented.

That’s virtual sexual assault. What about virtual rape? This is much trickier to define. Rape is a sub-category of sexual assault. It is the most serious and violative kind of sexual assault. But its definition is contested. Most legal definitions of rape focus on penetrative sex and get into fine details about specific organs or objects penetrating specific bodily orifices. The classic definition is ‘penile penetration of the vagina’, but this has been broadened in most jurisdictions to include oral and anal penetration. As Strikwerda points out, these biologically focused definitions might seem to rule out the concept of ‘virtual rape’. They suggest that rape can only take place when the right biological organ violates the right biological orifice. This is not possible if actions take place through non-biological virtual representations.

So I’m going to be a bit looser and less biologically-oriented in my definition. I’m going to define a virtual rape as any virtual sexual assault in which the represented sexual behaviour depicts what would, in the real world, count as rape. Thus, for instance, a virtual sexual assault in which one character is depicted as sexually penetrating another, without that other’s consent (etc) would count as a ‘virtual rape’.

Due to its less contentious nature, I’ll focus mainly on virtual sexual assault in this post.


2. Who is the perpetrator and who is the victim?
These definitions bring us to the first important classificatory issue. When thinking about virtual sexual assault we need to think about who is the victim and who is the perpetrator. The three incidents described in the introduction involved humans interacting with other humans through the medium of a virtual avatar. Thus, the perpetrators and victims were, ultimately, human-controlled. But one of the interesting things about actions in virtual worlds is that they need not always involve human controlled agents. They could also involve purely virtual agents.* A couple of years back, I wrote a blogpost about the ethics of virtual rape. The blogpost focused on games in which human controlled players were encouraged to ‘rape’ onscreen characters. These raped characters were not being controlled by other human players. They existed solely within the game environment. It was a case of a human perpetrator and a virtual victim. We could also imagine the reverse happening — i.e. a situation where purely virtual characters sexually assault human controlled characters — as well as a case involving two purely virtual characters.

This suggests that we can categorise the possible forms of virtual rape and virtual sexual assault using a two-by-two matrix, with the categories varying according to whether the case involves a virtual or human perpetrator and a virtual or human victim. As follows:



In the top left-hand corner we have a case involving a virtual perpetrator and a virtual victim. In the top right-hand corner we have a case involving a virtual perpetrator and a human victim. In the bottom left-hand corner we have a case involving a human perpetrator and a virtual victim. And in the bottom right-hand corner we have a case involving a human perpetrator and human victim.

Is it worth taking all four of these cases seriously? My sense is that it is not. At least, not right now. The virtual-virtual case is relatively uninteresting. Unless we assume that virtual agents have a moral status and are capable of being moral agents/patients, the interactions they have with one another seem to be of symbolic significance only. That’s not to say that symbols are unimportant. They are important and I have discussed their importance on previous occasions. It is just that cases of virtual sexual assault involving at least one moral agent/patient seem like they are of more pressing concern. That’s why I suggest we limit our focus to cases involving at least one human participant.


3. How do the human agents interact with the virtual environment?
If we limit our focus in this way, we run into the next classificatory problem: how exactly do the human agents interact with the virtual environment? It seems like there are two major modes of interaction:

Avatar interaction: This is where the human creates a virtual avatar (character, on-screen representation) and uses this avatar to perform actions in the virtual world.

Immersive interaction: This is where the human dons some VR helmet and/or haptic clothing/controller and acts in the virtual world from a first person perspective (i.e. they act ‘as if’ they were really in the virtual world). They may still be represented in the virtual world as an avatar, but the immersive equipment enables them to see and potentially feel what is happening to that avatar from the first person perspective.

Avatar interaction is the historical norm, but immersive interaction is becoming more common with the advent of the Oculus Rift and rival technologies. As these technologies develop, we can expect the degree of immersion to increase. This is important because immersion reduces the psychological and physical distance between us and what happens in the virtual world. This 'distance' could have a bearing on how morally harmful or morally blameworthy the conduct is deemed to be.

Anyway, we can use the distinction between avatar and immersive interaction to construct another two-by-two matrix for classifying cases of virtual sexual assault. This one focuses on whether the human is the victim or the perpetrator and on the mode of interaction, and it is a little more complicated than the previous matrix. To interpret it, suppose that you are the human victim or perpetrator and that you interact with the virtual world either through an avatar or through immersive technology. If you are the human victim and interact using an avatar, for instance, there are two further scenarios that could arise: either you are assaulted by another human or by a virtual agent. This means that for each of the four boxes in the matrix there are two distinct scenarios to imagine.



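To make the scheme concrete, here is a minimal sketch in Python (the names are my own invention, purely for illustration). It enumerates the cases the scheme distinguishes, setting aside the purely virtual-on-virtual cases as suggested above, and it simplifies by attaching a single interaction mode to each case; where both perpetrator and victim are human they could, of course, use different modes.

```python
from dataclasses import dataclass
from enum import Enum
from itertools import product

class Party(Enum):
    HUMAN = "human"
    VIRTUAL = "virtual"

class Interaction(Enum):
    AVATAR = "avatar"
    IMMERSIVE = "immersive"

@dataclass(frozen=True)
class VirtualAssaultCase:
    perpetrator: Party
    victim: Party
    mode: Interaction  # how the human participant(s) interact with the virtual world

    @property
    def of_interest(self) -> bool:
        # Purely virtual-on-virtual cases are set aside by the scheme.
        return Party.HUMAN in (self.perpetrator, self.victim)

# Enumerate the cases the classificatory scheme covers.
for perp, vict, mode in product(Party, Party, Interaction):
    case = VirtualAssaultCase(perp, vict, mode)
    if case.of_interest:
        print(f"{perp.value} perpetrator / {vict.value} victim / {mode.value} interaction")
```

This yields six perpetrator/victim/mode combinations; the eight scenarios in the matrix above arise because the human-human combinations can be viewed from both the victim's and the perpetrator's perspective.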
This, then, is the classificatory scheme I propose for dealing with virtual sexual assault (and rape). I think it is useful because it focuses on cases involving at least one human agent and encourages us to think about the mode of interaction, the role of the human in that interaction (victim or perpetrator), and to consider the ethics of the interaction from the perspective of the victim or perpetrator. All of the scenarios covered by this classificatory scheme strike me as being of ethical and, potentially, legal interest. We should be interested in cases involving human perpetrators because what they do in virtual worlds probably says something about their moral character, even if the victim is purely virtual. And we should be interested in cases involving human victims because what happens to them (the trauma/violation they experience) is of ethical import, irrespective of whether the perpetrator was ultimately human or not. Finally, we should care about the mode of interaction because it can be expected to correlate with the degree of psychological/physical distance experienced by the perpetrator/victim and possibly with the degree of moral harm implicated by the experience.

There is one complication that I have not discussed in developing this scheme. That is the distinction that some people might like to draw between robotic interactions and virtual ones. Robotic interactions would involve embodied, artificial agents acting with humans or other robots in the real world. There are ways in which robot-human interactions can be distinguished from the virtual interactions discussed here (I wrote an article about some of the issues before). But there is one scenario that I think should fall under this classificatory scheme: the case where humans interact via robotic avatars (i.e. remote-controlled robots). I think these can be classed as avatar interactions or (if they involve haptic/immersive technologies) as immersive interactions. The big difference, of course, is that the effects of robotic interactions are directly felt in the real world.


That’s enough for now. In the future, I will try to analyse the different scenarios from an ethical and legal perspective. In the meantime, I’d be interested in receiving feedback on this classificatory scheme. Is it too simple? Does it miss something important? Or is it overly complicated?


* Of course, there is no such thing as a purely virtual agent. All virtual agents (for now) have ultimately been created or programmed into being by humans. What I mean by ‘purely’ virtual is that they are not under the immediate control of a human being, i.e. their actions in the game are somewhat autonomous.