Misbehaving corporations are in the news again. In the New York Times, Jack Ewing and Graham Bowley provide an interesting look into the ‘corporate culture’ behind Volkswagen’s emissions-cheating scandal. As Ewing and Bowley note, Volkswagen has blamed “a small group of engineers.” But as their reporting suggests, any anatomy of blame in the Volkswagen case should consider a wide range of social influences – for example, Volkswagen’s institutionalized commitment to aggression, and more local factors such as fear of those in positions of power on engineering teams.
But who is really at fault? It is natural to think that some individuals are responsible, at least in part. Is any individual wholly responsible? Or is it possible that the corporation – Volkswagen itself – bears some of the responsibility? A number of philosophers have recently suggested just that. They argue that above the level of individual agency there is such a thing as group agency. Groups (like Volkswagen) can be constituted by individuals (and also by historical and socio-structural features). Groups can intend to act – even when no member of the group has a similar intention – and can act intentionally. Two philosophers (Björnsson and Hess forthcoming) have even argued that corporations are full moral agents, capable of expressing emotions like guilt, and open to the same kinds of blaming and praising attitudes we typically direct at individuals.
I’m not sure whether that is right. Corporations may be less like full moral agents and more like extremely dangerous psychopaths – capable of manipulating their own responses to achieve the ends they truly value (e.g., maintaining profit margins). Or corporations may be capable of a kind of agency, but one very unlike our own – one that is obscured when we think of them by analogy with human agents. It is unclear whether all the features associated with human agency can appropriately be applied to corporations.
Stop killer robots now, UN asks: the UN special rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, has delivered a report about Lethal Autonomous Robots (LARs) arguing that there should be a moratorium on the development of autonomous killing machines, at least until we can figure out the ethical and legal issues. He notes that LARs raise far-reaching concerns about the protection of life during war and peace, including whether they can comply with humanitarian and human rights law, how to devise legal accountability, and the blunt objection that “robots should not have the power of life and death over human beings.”
Many of these issues have been discussed on this blog and elsewhere, but the report is a nice comprehensive review of the issues raised by the new technology. And while the machines do not yet have fully autonomous capabilities, the distance to them is chillingly short: dismissing the issue as science fiction is myopic, especially given how slowly legal agreements are actually reached. But does it make sense to say that robots should not have the power of life and death over human beings?