
Twitter, Apps, and Depression

The Samaritans have launched a controversial new app that alerts Twitter users when someone they ‘follow’ on the site tweets something that may indicate suicidal thoughts.

To use the app, named ‘Samaritan Radar’, Twitter members must visit the Samaritans’ website and choose to activate the app on their device. Having entered one’s Twitter details on the site to authorize the app, Samaritan Radar then scans the tweets of the Twitter users that one ‘follows’, and uses an algorithm to identify phrases suggesting that the tweeter may be distressed. For example, the algorithm might identify tweets that include phrases like “help me”, “I feel so alone” or “nobody cares about me”. If such a tweet is identified, an email is sent to the user who signed up to Samaritan Radar asking whether the tweet should be a cause for concern; if so, the app then offers advice on what to do next.
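To make the mechanism concrete, here is a minimal sketch of the kind of keyword-based scan described above. The phrase list, data shapes, and alerting step are illustrative assumptions for clarity, not the Samaritans’ actual implementation.

```python
# Illustrative sketch only: a naive phrase-matching scan of the kind described
# in the announcement. The phrase list and the email/alert step are assumptions,
# not the Samaritans' actual algorithm.

DISTRESS_PHRASES = [
    "help me",
    "i feel so alone",
    "nobody cares about me",
]


def flag_concerning_tweets(tweets):
    """Return the tweets whose text contains any of the distress phrases.

    `tweets` is assumed to be an iterable of dicts with 'user' and 'text' keys.
    """
    flagged = []
    for tweet in tweets:
        text = tweet["text"].lower()
        if any(phrase in text for phrase in DISTRESS_PHRASES):
            flagged.append(tweet)
    return flagged


if __name__ == "__main__":
    sample = [
        {"user": "@a_friend", "text": "Great gig tonight!"},
        {"user": "@a_friend", "text": "I feel so alone lately"},
    ]
    for tweet in flag_concerning_tweets(sample):
        # In the real app, this step would email the follower who activated Radar.
        print(f"Possible cause for concern from {tweet['user']}: {tweet['text']}")
```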

Whilst no-one doubts the good intentions of the charity in developing the app, many Twitter users have strongly objected to Samaritan Radar, claiming that it raises major privacy concerns. Whilst Joe Ferns, the executive director of policy, research and development at Samaritans, told the BBC that “Radar is only picking up tweets that are public, giving you an opportunity to see tweets that you would have seen anyway”, opponents have argued that those being monitored by the system are not aware that this is the case.

I shall not press the privacy objection here; for the record, I do not believe that it is particularly convincing. One challenge the objection faces is to explain why we do not think it would be problematic for a concerned Twitter user to monitor a friend’s tweets for such language without the app’s assistance; after all, this too could happen without the friend being aware that their tweets are being monitored. Why, in other words, does the use of an app make such a moral difference here?

Whilst there may be ways that opponents of Samaritan Radar might meet this challenge, I shall not consider them here. Rather, I shall suggest a different kind of objection. In short, my worry is that the app might not only be ineffective, but might also have a significant indirect bad consequence.

To explain how, it is useful to consider the justification for the app. Presumably a major reason that the app has been developed is that very often people suffering from depression do not talk to their loved ones about their suffering; however, they may be willing to post messages that at least hint at their suffering on social media outlets. The app, we may presume, has been developed to identify these particular sufferers of depression. As one tweeter commented in support of the app, “sometimes the only way people reach out is through cryptic statuses (on social media).”

There are some interesting psychological questions, which I cannot fully address here, about why this might be the case. Here is one speculation, though: it seems plausible that part of the reason depressed individuals might be willing to post cryptically on social media is that they believe it does not ‘give anything away’ about themselves in the way that talking to a friend or a parent would. Further, they may feel that they get some benefit from engaging with an online community, or from venting their emotions in this forum. Perhaps they might even feel that someone would recognise that they are reaching out for help in their messages, if only someone cared and knew enough about them to work out what they are really saying in the posts.

In view of this, I see two potential problems with the effectiveness of Samaritan Radar if it becomes commonly used. First, if those suffering from depression are unwilling to talk about their suffering to a loved one, it seems unlikely that they would want to tweet anything that they believe might be detected by an algorithm that they know would alert family members or friends who might be using the app. This might lead them to avoid posting on social media altogether, depriving them of a space in which they are willing to communicate their feelings, however cryptically. Notice that if loved ones already pay attention to the social media statuses of friends they believe may be depressed, the app may result in them being unable to gain insight into their friends’ emotional lives through this medium.

Interestingly, the second problem turns the privacy objection considered above on its head. Recall that those who object to the app have claimed that part of their concern is that users would not be aware that they are being monitored; this, they claim, would be a problem for user privacy. However, I believe that something like the opposite may be more likely, and perhaps more problematic. If the app were to enjoy widespread use, it seems plausible that those suffering from depression (at least those who post on Twitter) might come to automatically assume that any attempt by a friend or loved one to enquire into their well-being is not borne of their friend’s intimate knowledge of them as an individual who is cared about and valued. Rather, they will most naturally assume, given the proclivities of their condition, that their friend’s concern is simply the upshot of an impersonal and uncaring algorithm; and this would serve only to reinforce their depressive beliefs about their self-worth.

Of course, one response to this objection is that a friend or family member would probably be motivated to use the app by their care for a certain individual. This might be true. However, the point I am raising here is that convincing a sufferer of depression that someone truly cares about their welfare and values their existence can, due to the very nature of the disease, be a hugely difficult task. Depression can prompt sufferers to reframe any display of affection towards them as one of apathy, so that it better fits their irrational depressive beliefs about their self-worth. Therefore, although the use of the app may be motivated by care and love for the sufferer, the fact that any enquiry into their well-being could have been prompted by the app gives the sufferer a basis to reframe that enquiry as being motivated ‘just by an algorithm’; this, in turn, will be taken as ‘further evidence’ of the ‘fact’ that nobody really cares enough about them to learn about their suffering through interactions with, and knowledge of, them individually.

Notice that this could be the case even if the friend’s concern was not prompted by the app or even by the user’s use of social media, and was borne out of an intimate care and knowledge of the sufferer. As I mentioned above, depressed individuals can be adept at reframing the available evidence so that it fits their depressive view that nobody cares about them. As such, this raises the worrying prospect that the app could indirectly affect the way in which non-users are able to help their loved ones.

I am not suggesting that these are knock-down objections to the use of the app, or that they are certain consequences; the two problems delineated above are based on speculation. However, I believe that these potential consequences merit attention in any consequentialist attempt to justify the app’s use. In assessing such arguments, we should be sensitive to the nature of the disease that the app is intended to identify, and to the sufferers we are hoping to help.


1 Comment on this post

  1. I’m not sure I buy the arguments you’re raising.

    Both of the arguments that you raise (that people might stop using social media to express themselves, and that depressed individuals will have lowered self-worth) are common arguments for not intervening when people are contemplating suicide. (If I confront someone who is contemplating suicide, they might not talk to me in the future when they really need my help. If I confront someone who is contemplating suicide, I might make them feel even worse because I noticed that they are depressed.)

    It doesn’t matter how or why people are intervening (app vs. noticing in real life): rationalizations that stop people from intervening are more harmful to suicidal people than the supposed harm that they would suffer because of the intervention.

    I don’t see the app doing anything much different than what a dispatcher through emergency services or a suicide hotline staff would do. Dealing with suicide can be broken down into discrete steps and interventions, and that is exactly what the app is doing for the user. It’s giving the user the “flow chart” that the emergency services dispatcher or suicide hotline staff has and helps a person walk through the steps.
