
Political Campaigning, Microtargeting, and the Right to Information

Written by Cristina Voinea 

 

2024 is poised to be a challenging year, partly because of the important elections looming on the horizon – from the United States and various European countries to Russia (though, let us admit, surprises there might be few). With more than half of the global population on social media, much of political communication and campaigning has moved online. Enter the realm of online political microtargeting, a practice fueled by innovations in data and analytics that has changed the face of political campaigning.

Microtargeting, a form of online targeted advertising, relies on the collection, aggregation, and processing of both online and offline personal data to target individuals with the messages they are most likely to respond or react to. In political campaigns, microtargeting on social media platforms is used to deliver personalized political ads, attuned to the interests, beliefs, and concerns of potential voters. The objectives of political microtargeting are diverse, as it can be used to inform and mobilize or to confuse, scare, and demobilize. How does political microtargeting change the landscape of political campaigns? I argue that this practice is detrimental to democratic processes because it restricts voters’ right to information. (Privacy infringements are an additional reason, but they will not be the focus of this post.)


Cross Post: Brainpower: Use it or Lose it?

This is the first in a series of blogposts by the members of the Expanding Autonomy project, funded by the Arts and Humanities Research Council

Written By: J Adam Carter, COGITO, University of Glasgow

E-mail: adam.carter@glasgow.ac.uk

 

What are things going to be like in 100 years? Here’s one possible future, described in Michael P. Lynch’s The Internet of Us. He invites us to imagine:

smartphones are miniaturized and hooked directly into a person’s brain. With a single mental command, those who have this technology – let’s call it neuromedia – can access information on any subject ….

That sounds pretty good. Just think how quickly you could gain information you need, and how easy and intellectually streamlined the process would be. But here is the rest of the story:

Now imagine that an environmental disaster strikes our invented society after several generations have enjoyed the fruits of neuromedia. The electronic communication grid that allows neuromedia to function is destroyed. Suddenly no one can access the shared cloud of information by thought alone. . . . [F]or the inhabitants of this society, losing neuromedia is an immensely unsettling experience; it’s like a normally sighted person going blind. They have lost a way of accessing information on which they’ve come to rely.

Simulate Your True Self

Written by Muriel Leuenberger

A modified version of this post is forthcoming in Think edited by Stephen Law.

Spoiler warning: if you want to watch the movie Don’t Worry Darling, I advise you to not read this article beforehand (but definitely read it afterwards).

One of the most common recurring philosophical thought experiments in movies must be the simulation theory. The Matrix, The Truman Show, and Inception are only three of countless movies following the trope of “What if reality is a simulation?”. The most recent addition is Don’t Worry Darling by Olivia Wilde. In this movie, the main character, Alice, discovers that her idyllic 1950s-style housewife life in the company town of Victory, California, is a simulation. Some of the inhabitants of Victory (mostly men) are aware of this, such as her husband Jack, who forced her into the simulation. Others (mostly women) share Alice’s unawareness. Over the course of the movie, Alice’s memories of her real life return, and she manages to escape the simulation. This blog post is part of a series of articles in which Hazem Zohny, Mette Høeg, and I explore ethical issues connected to the simulation theory through the example of Don’t Worry Darling.

One question we may ask is whether living in a simulation, with a simulated and potentially altered body and mind, would entail giving up your true self or if you could come closer to it by freeing yourself from the constraints of reality. What does it mean to be true to yourself in a simulated world? Can you be real in a fake world with a fake body and fake memories? And would there be any value in trying to be authentic in a simulation?


Guest Post: Dear Robots, We Are Sorry

Written by Stephen Milford, PhD

Institute for Biomedical Ethics, Basel University

 

The rise of AI presents humanity with an interesting prospect: a companion species. Ever since our last hominid cousins went extinct on the island of Flores almost 12,000 years ago, Homo sapiens has been alone in the world.[i] AI, true AI, offers us the unique opportunity to regain what was lost to us. Ultimately, this is what has captured our imagination and drives our research forward. Make no mistake, our intentions with AI are clear: artificial general intelligence (AGI). A being that is like us, a personal being (whatever “person” may mean).

If any of us are in any doubt about this, consider Turing’s famous test. The aim is not to see how intelligent the AI can be, how many calculations it performs, or how it sifts through data. An AI will pass the test if it is judged by a person to be indistinguishable from another person. Whether this is artificial or real is academic; the result is the same: human persons will experience the reality of another person for the first time in 12,000 years, and we are closer now than ever before.

Track Thyself? Personal Information Technology and the Ethics of Self-knowledge

Written by Muriel Leuenberger

The ancient Greek injunction “Know Thyself” inscribed at the temple of Delphi represents just one among many instances where we are encouraged to pursue self-knowledge. Socrates argued that “examining myself and others is the greatest good”, and according to Kant, moral self-cognition is “the First Command of all Duties to Oneself”. Moreover, the pursuit of self-knowledge and how it helps us to become wiser, better, and happier is such a common theme in popular culture that you can find numerous lists online of the 10, 15, or 39 best movies and books on self-knowledge.


Judgebot.exe Has Encountered a Problem and Can No Longer Serve

Written by Stephen Rainey

Artificial intelligence (AI) is anticipated by many as having the potential to revolutionise traditional fields of knowledge and expertise. In some quarters, this has led to fears about the future of work, with machines muscling in on otherwise human work. Elon Musk is rattling cages again in this context with his imaginary ‘Teslabot’. Reports on the future of work have included these replacement fears for administrative jobs, service and care roles, manufacturing, medical imaging, and the law.

In the context of legal decision-making, a job well done includes reference to prior cases as well as statute. This is, in part, to ensure continuity and consistency in legal decision-making. The more that relevant cases can be drawn upon in any instance of legal decision-making, the better the possibility of good decision-making. But given the volume of legal documentation and the passage of time, there may be too much for legal practitioners to fully comprehend.


Ambient Intelligence

Written by Stephen Rainey

An excitingly futuristic world of seamless interaction with computers! A cybernetic environment that delivers what I want, when I want it! Or: a world built on vampiric databases, fed on myopic accounts of movements and preferences, loosely related to persons. Each is a possibility given ubiquitous ambient intelligence.

Oxford Uehiro Prize in Practical Ethics: What, if Anything, is Wrong About Algorithmic Administration?

This essay received an honourable mention in the undergraduate category.

Written by University of Oxford student, Angelo Ryu.

 

Introduction

The scope of modern administration is vast. We expect the state to perform an ever-increasing number of tasks, including the provision of services and the regulation of economic activity. This requires the state to make a large number of decisions in a wide array of areas. Inevitably, the scale and complexity of such decisions stretch the capacity of good governance.

In response, policymakers have begun to implement systems capable of automated decision making. For example, certain jurisdictions within the United States use an automated system to advise on criminal sentences. Australia uses an automated system for parts of its welfare program.

Such systems, it is said, will help address the costs of modern administration. It is plausibly argued that automation will lead to quicker, more efficient, and more consistent decisions – that it will ward off a return to the days of Dickens’ Bleak House.

Oxford Uehiro Prize in Practical Ethics: Why Is Virtual Wrongdoing Morally Disquieting, Insofar As It Is?

This essay was the winning entry in the undergraduate category of the 6th Annual Oxford Uehiro Prize in Practical Ethics.

Written by University of Oxford student, Eric Sheng.

In the computer game Red Dead Redemption 2 (henceforward, RDR2), players control a character in a virtual world. Among the characters represented by computer graphics but not controlled by a real-world player are suffragettes. Controversy arose when it became known that some players used their characters to torture or kill suffragettes. (One player’s character, for example, feeds a suffragette to an alligator.) In this essay, I seek to explain the moral disquiet – the intuition that things are awry from the moral perspective – that the players’ actions (call them, for short, ‘assaulting suffragettes’) provoke. The explanation will be an exercise in ‘moral psychology, philosophical not psychological’:[1] I seek not to causally explain our disquiet through the science of human nature, but to explain why things are indeed awry, and thus justify our disquiet.

My intention in posing the question in this way is to leave open the possibilities that our disquiet is justified although the players’ actions are not wrong, or that it’s justified but not principally by the wrongness of the players’ actions. These possibilities are neglected by previous discussions of virtual wrongdoing that ask: is this or that kind of virtual wrongdoing wrong? Indeed, I argue that some common arguments for the wrongness of virtual wrongdoing do not succeed in explaining our disquiet, and sketch a more plausible account of why virtual wrongdoing is morally disquieting insofar as it is, which invokes not the wrongness of the players’ actions but what these actions reveal about the players. By ‘virtual wrongdoing’ I mean an action by a player in the real world that intentionally brings about an action φV by a character in a virtual world V such that φV is wrong-in-V; and the criteria for evaluating an action’s wrongness-in-V are the same as those for evaluating an action’s wrongness in the real world.[2]
