Internet

The Gulf Between Japanese and English Google Image Search

By Anri Asagumo, Oxford Uehiro/St Cross Scholar (with input from Dr Tom Douglas and Dr Carissa Véliz)

 

Trigger Warning: This article deals with sexual violence, which some readers may find upsetting.

Although Google claims in its policy to restrict the promotion of adult-oriented content, there is a corner of the online world where enforcement seems lax: Google image search in Japanese. If one looks up ‘reipu’, a Japanese word for rape, the screen fills with explicit thumbnails of porn films, manga, and pictures of women being raped by men. The short descriptions of the thumbnails are repugnant: ‘Raping a girl at my first workplace’, ‘Raping a junior high-school girl’, ‘Raping cute girls’, ‘Raping a female lawyer’, ‘Raping a girl in a toilet’. As if rape in itself were not repulsive enough, many descriptions go even further, implying child rape. Similar results appear for ‘reipu sareta’ (‘I was raped’). This is strikingly different from English Google image search, in which the top images usually send strong messages of support for victims and zero tolerance for sexual offenders. Another example of how the Japanese Google world differs from the English one is ‘Roshia-jin’ (‘Russian people’): searching in Japanese yields seventeen pictures of young, beautiful Russian women, while searching in English returns pictures of people of various ages and sexes.

Carissa Véliz on how our privacy is threatened when we use smartphones, computers, and the internet.

Smartphones are like spies in our pockets; we should cover the cameras and microphones of our laptops; it is difficult to opt out of services like Facebook that track us on the internet; IMSI-catchers can ‘vacuum’ data from our smartphones; data brokers may sell our internet profiles to criminals and/or future employers; and yes, we should protect people’s privacy even if they don’t care about it. Carissa Véliz (University of Oxford) warns us: we should act now, before it is too late. Privacy damages accumulate and, in many cases, are irreversible. We urgently need more regulation to protect our privacy.

Could ad hominem arguments sometimes be OK?

By Brian D. Earp

Follow Brian on Twitter by clicking here.

You aren’t supposed to make ad hominem arguments in academic papers — maybe not anywhere. To get us on the same page, here’s a quick blurb from Wikipedia:

An ad hominem (Latin for “to the man” or “to the person”), short for argumentum ad hominem, is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument. Ad hominem reasoning is normally categorized as an informal fallacy, more precisely as a genetic fallacy, a subcategory of fallacies of irrelevance.

Some initial thoughts. First, there are some clear-cut cases where an ad hominem argument is plainly worthless and simply distracting: it doesn’t help us understand things better; it doesn’t wend toward truth. Let’s say that a philosopher makes an argument, X, concerning (say) abortion, and her opponent points out that the philosopher is (say) a known tax cheat — an attempt to discredit her character. Useless. But let’s say that a psychologist makes an argument, Y, about race and IQ (i.e., that black people are less “intelligent” than white people), and his opponent points out that he used to be a member of the KKK. Well, it’s still useless in one sense, in that the psychologist’s prior membership in the KKK can’t by itself disprove his argument; but it does seem useful in another sense, in that it might give us at least a plausible reason to be a little more cautious in interpreting the psychologist’s results.


Let’s Talk About Death: Millennials and Advance Directives

Sarah Riad, College of Nursing and Health Sciences, University of Massachusetts Boston

Melissa Hickey, School of Nursing, Avila University 

Kyle Edwards, Uehiro Centre for Practical Ethics, University of Oxford

As advances in medical technology have greatly increased our ability to extend life, the conversation on end-of-life care ethics has become exceedingly complex. With greater options both to end life early and to extend it artificially, advance directives have arisen in an effort to preserve patient autonomy in situations in which the patient becomes incapable of making a medical decision. However, most people, especially young adults, do not think to plan for such moments of incapacity and the possibility of an untimely death. With a youthful sense of invincibility comes a lack of foresight that prevents us from confronting these issues. The reality is that unexpected events happen. When they do, it is often very difficult to imagine what a person would have wanted and to make medical decisions accordingly on his or her behalf. In this post, we suggest both a transition from action-based to value-based advance directives and an interactive website that would make the contemplation of these issues and the construction of a value-based advance directive appealing to and accessible for Millennials, the 20-somethings of today.

Censorship, pornography and divine swan-on-human action

The Prime Minister has declared that Internet service providers should block access to pornography by default, and that some “horrific” internet search terms should be “blacklisted” on the major search engines, returning no search results. The main motivation of the speech appears to be that access to pornography is “corroding childhood” by letting children inadvertently see images or visit websites their parents do not want them to see. There is no shortage of critics: anti-censorship groups, anti-surveillance groups, technology groups, and people concerned with actual harm reduction. There are two central problems: defining pornography, and finding its harms.

The censor and the eavesdropper: the link between censorship and surveillance

Cory Doctorow makes a simple but important point in the Guardian: censorship today is inseparable from surveillance. In modern media, preventing people from seeing proscribed information requires systems that monitor their activity. To implement copyright-protecting censorship in the UK, systems must be in place to track what people seek to access and compare it against a denial list, in whatever medium is used.
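The structural point can be made concrete with a minimal sketch. The following Python fragment is purely illustrative (the domain names and function names are hypothetical, not drawn from any real ISP system): to decide whether to block a request against a denial list, the filter must first observe every request, so blocking implies monitoring.

```python
from urllib.parse import urlparse

# Hypothetical denial list; real blocklists are far larger and
# maintained by courts, regulators, or rights-holders.
BLOCKED_DOMAINS = {"blocked.example.org", "infringing-site.example.net"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host appears on the denial list."""
    host = urlparse(url).hostname or ""
    return host in BLOCKED_DOMAINS

def handle_request(url: str, observed: list) -> str:
    # Doctorow's point in one line: the filter necessarily sees
    # (and here, records) every URL, blocked or not.
    observed.append(url)
    return "BLOCKED" if is_blocked(url) else "ALLOWED"

observed = []
print(handle_request("http://blocked.example.org/page", observed))  # BLOCKED
print(handle_request("http://harmless.example.com/", observed))     # ALLOWED
print(len(observed))  # 2: both requests were observed, only one was blocked
```

The design is the point: there is no way to write `handle_request` so that it blocks listed URLs without inspecting unlisted ones too.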


Cabs, censorship and cutting tools

The smith was working hard on a new tool. A passer-by looked at his work and remarked that it looked sharp and dangerous. The smith nodded: it needed to be very sharp to do its work. The visitor wondered why there was no cross-guard to prevent the user’s hand from sliding onto the blade, and why the design made it easy to accidentally grip the blade instead of the handle. The smith explained that the tool was intended for people who said they knew how to use it well. “But what if they were overconfident, sold it to somebody else, or had a bad day? Surely some safety measures would be useful?” “No”, said the smith, “my customers did not ask for them. I could add them with slight effort, but why bother?”

Would we say the smith was doing his job in an ethical manner?

Here are two other pieces of news: Oxford City Council has decided to make it mandatory for taxicabs in Oxford to have CCTV cameras and microphones recording conversations of the passengers. As expected, many people are outraged. The stated reason is to improve public safety, although the data supporting this decision doesn’t seem to be available. The surveillance footage will supposedly not be made available other than as evidence for crimes, and not stored for more than 28 days. Meanwhile in the US, there are hearings about the Stop Online Piracy Act (SOPA) and the PROTECT IP Act, laws intended to make it easier to block copyright infringement and counterfeiting. Besides concerns that critics and industries most affected by the laws are not getting access to the hearings, a serious set of concerns is that they would make it easy to censor websites and block business on fairly loose grounds, with few safeguards against false accusations (something that occurs regularly), little oversight, few remedies for the website, plus the fact that a domestic US law would apply internationally due to the peculiarities of the Internet and US legal definitions.


The unexpected turn: from the democratic Internet to the Panopticon

In the last ten years, ICTs (information and communication technologies) have been increasingly used by militaries, both to develop new weapons and to improve communication and propaganda campaigns. So much so that militaries often refer to ‘information’ as the fifth dimension of warfare, in addition to land, sea, air and space. Given this scenario, it is not surprising that the Pentagon would invest part of its resources in developing a new program called Social Media in Strategic Communication (SMISC), allegedly ‘to get better at both detecting and conducting propaganda campaigns on social media’, as reported a few days ago on Wired (http://www.wired.com/dangerroom/2011/07/darpa-wants-social-media-sensor-for-propaganda-ops/).

The program has two main functions: it will support the military in its propaganda, and it will allow for identifying the “formation, development and spread of ideas and concepts (memes)” in social groups. Namely, the program will be able to spot rumours or emerging themes on the web and figure out whether such themes are coming up at random or are the result of a propaganda operation by ‘adversary’ individuals or groups. To anyone even slightly concerned with ethical problems, all this rings more than one bell.

SMISC is one more surveillance tool empowered by ICTs. We all know that the information we put on the web, on social networks or on websites, even our queries on search engines, is mined and analysed for secondary purposes. But it becomes more worrying when the analysis is done by government agencies, for in this case the Internet becomes a tool for surveillance, a surveillance which may go far beyond the one we may already be accustomed to. The unexpected turn is that the Internet, long considered a ‘democratic place’ where anyone could express their thoughts and act more or less freely, could become the next Panopticon, providing the means to monitor both a wide range of information, from the newspaper one reads in the morning to one’s political commitments, and a vast number of people, virtually all web users.

This can have serious consequences. Consider the recent riots and revolutions in the Middle East. In most cases, the Internet was the medium through which people could talk about the political situation of their countries, organise protests, and describe their conditions to others all over the world. What would have happened if Middle Eastern governments had been able to spot the protest movements in their early days? Until now, governments, like the Egyptian one, have shut down the web in their countries to limit the circulation of information about what was happening; but the development of SMISC shows that a further step could soon be taken: the proactive use of the Internet by governments for surveillance purposes. In that case, as technologies for data mining evolve, the Internet may come to represent the most powerful surveillance and intelligence tool developed so far. If so, it seems it is time to start worrying about the rights of Internet users and to find ways of protecting them.
