Misbehaving corporations are in the news again. In the New York Times, Jack Ewing and Graham Bowley provide an interesting look into the ‘corporate culture’ behind Volkswagen’s emissions-cheating scandal. As Ewing and Bowley note, Volkswagen has blamed “a small group of engineers.” But as their reporting suggests, any anatomy of blame in the Volkswagen case should consider a wide range of social influences – for example, Volkswagen’s institutionalized commitment to aggression, and more local factors such as fear of those in positions of power on engineering teams.
But who is really at fault? It is natural to think that some individuals bear at least part of the responsibility. But are any individuals responsible in whole? Or is it possible that the corporation – Volkswagen itself – bears some of the responsibility? A number of philosophers have recently suggested just this. They argue that above the level of individual agency, there is such a thing as group agency. Groups (like Volkswagen) can be constituted by individuals (and also by historical and socio-structural features). Groups can intend to act – even when no member of the group has a similar intention – and can act intentionally. Two philosophers (Björnsson and Hess forthcoming) have even argued that corporations are full moral agents, capable of expressing emotions like guilt, and open to the same kinds of blaming and praising attitudes we typically direct at individuals.
I’m not sure whether that is right. Corporations may be less like full moral agents, and more like extremely dangerous psychopaths – capable of manipulating their own responses to achieve the ends they truly value (i.e., maintaining profit margins). Or, corporations may be capable of a kind of agency, but one very unlike our own – one that is masked by thinking of them by analogy with human agents. It is unclear whether all the features associated with human agency are appropriately applied to the issue of corporate agency.
Written by Anke Snoek
In the UK, around 500 soldiers are dismissed each year because they fail drug tests. The substances they use are mainly recreational drugs like cannabis, ecstasy, and cocaine. Some call this a waste of resources, since new soldiers have to be recruited and trained, and call for a revision of the army's zero-tolerance policy on substance use.
This policy stems from the Vietnam War. During the First and Second World Wars, it was almost considered cruel to deny soldiers alcohol, which was seen as a necessary coping mechanism for facing the horrors of the battlefield. Public opinion on substance use by soldiers changed radically during the Vietnam War. Influenced by the anti-war movement, newspapers were dominated by stories of how stoned soldiers fired on their own people, and of how the Vietnamese sold opioids to soldiers to make them less capable of doing their jobs. Although Robins (1974) provided evidence that the soldiers used the opioids in a relatively safe way, and that the drugs were enhancing rather than impairing the soldiers' capacities, public opinion on unregulated drug use in the army was irrevocably changed.
Today, I noticed two news stories: BBC Future reported on Korean work on killer robots (autonomous gun turrets that can identify, track, and attack), and BBC News reported on the formation of a campaign to ban sex robots, clearly modelled on the existing campaign to stop killer robots.
Much of the robot discourse is of course just airing hopes and fears about the future, projected onto futuristic devices. But robots are also real things increasingly used for real applications, potentially posing actual threats and affecting social norms. When does it make sense to start a campaign to stop the development of robots that do X?
1. The fact that you disagree with the author’s conclusion is not a reason for advising against publication. Quite the contrary, in fact. You have been selected as a peer reviewer because of your eminence, which means (let’s face it) your conservatism. Accordingly, if you think the conclusion is wrong, it is far more likely to generate interest and debate than if you agree with it.
2. A very long review will simply indicate to the editors that you’ve got too much time on your hands. And if you have, that probably indicates that you’re not publishing enough yourself. Accordingly, excessive length indicates that you’re not appropriately qualified.
Authors: William Isdale & Julian Savulescu
Last week the Federal Government announced that there would be a review of Australia’s tissue and organ transplantation systems. The impetus for the review appears to be continually disappointing donation rates, despite the adoption of a national reform agenda in 2008.
Since 2008 there has been an increase from 12.1 dpmp (donations per million population) to a peak of 16.9 in 2013 – but the dip last year (to 16.1) indicates that new policies need to be considered if rates are to be substantially increased.
Australia’s donation levels remain considerably below the world’s best practice, even after adjusting for rates and types of mortality. At least twenty countries achieve better donation rates than Australia, including comparable countries like Belgium (29.9), the USA (25.9), France (25.5) and the UK (20.8).
The review will focus in particular on the role of the national Organ and Tissue Authority, which helps coordinate donation services. However, many of the key policy settings are in the hands of state and territory governments.
It is time to go beyond improving the mechanisms for implementing existing laws, and to consider more fundamental changes to organ procurement in Australia.
What to do with Google—nothing, break it up, nationalise it, turn it into a public utility, treat it as a public space, or something else?
Google has become a service that one cannot go without if one wants to be a well-adapted participant in society. For many, Google is the single most important source of information. Yet people have no understanding of the way Google individually curates content for its users. Its algorithms are secret. For the past year, as a result of the European Court of Justice’s ruling on the right to be forgotten, Google has been deciding which URLs to delist from its search results on the basis of personal information being “inaccurate, inadequate or no longer relevant.” The search engine has reported that it has received over 250,000 individual requests concerning 1 million URLs in the past year, and that it has delisted around 40% of the URLs it has reviewed. As was made apparent in a recent open letter from 80 academics urging Google to be more transparent, the criteria being used to make these decisions are also secret. We have no idea what sort of information typically gets delisted, or in which countries. The academics signing the letter point out that Google has been charged with the task of balancing privacy and access to information, thereby shaping public discourse, without facing any kind of public scrutiny. Google rules over us, but we have no knowledge of what the rules are.
A recent series of papers has constructed a biochemical pathway that allows yeast to produce opiates. It is not quite a sugar-to-heroin home brew yet, but putting together the pieces looks fairly doable in the very near term. I think I called the news almost exactly five years ago on this blog.
People, including the involved researchers, are concerned and think regulation is needed. It is an interesting case of dual-use biotechnology. While making opiates may be somewhat less frightening than making pathogens, it is still a problematic use of biotechnology: millions of people are addicted, and making it easier for them to get access would worsen the problem. Or would it?
Let us suppose we have a treatment and we want to find out if it works. Call this treatment drug X. While we have observational data that it works – that is, patients say it works, or it appears to work given certain tests – observational data can be misleading. As Edzard Ernst writes:
Whenever a patient or a group of patients receive a medical treatment and subsequently experience improvements, we automatically assume that the improvement was caused by the intervention. This logical fallacy can be very misleading […] Of course, it could be the treatment – but there are many other possibilities as well.
Written By Professor Jeff McMahan
On this day in the US, around thirty people will be killed with a gun, not including suicides. Many more will be wounded. I can safely predict this number because that is the average number of homicides committed with a gun in the US each day. Such killings have become so routine that they are barely noticed even in the local news. Only when a significant number of people are murdered, particularly when they include children or are killed randomly, is the event considered newsworthy.
Yet efforts to regulate the possession of guns in the US are consistently defeated.
Despite all the jokes there are, in fact, a lot of things that lawyers won’t do. Or at least shouldn’t do. In many jurisdictions qualified lawyers are subject to strict ethical codes which are self-policed, usually effectively, and policed too by alert and draconian regulatory bodies.
Is there any point, then, in law firms having their own ethics committees which would decide:
(a) how the firm should deal with ethical questions arising in the course of its work; and/or
(b) whether the firm should accept particular types of work, particular clients, or particular cases?