One late Friday night, I sipped away at my cup of coffee and mindlessly scrolled through dozens of Facebook posts.

Wedding images. Typical.

An adorable cat video. Predictable.

A status stating, “I HATE my life! No one cares, goodbye.”

Wait a minute. Let me backtrack there.

Such an alarming statement had been posted by a casual acquaintance of mine. What should I do?

Cue the entry of Facebook’s suicide prevention tools, released worldwide this past June (2016) in collaboration with the National Suicide Prevention Lifeline, Forefront, and Save.org [1].

Facebook users who are worried that someone may be suicidal can now report that person’s post and be offered resources, such as helplines. They can also be given carefully worded phrases, developed with suicide prevention experts, to offer help or support directly to the individual they are concerned about; in other words, they learn what to say and how to help. Alarmed users who are uncomfortable approaching the situation themselves can instead ask Facebook to look at the post directly.

Once the post is reported, Facebook directs the reporting user to a Support Inbox, where they can track the progress of their report. If Facebook considers the individual in question to be in distress, that individual is shown a series of screens the next time they log in, with suggestions for getting help, such as talking with a friend, contacting a helpline, and tips for supporting themselves [2].

The issue that arises with such tools on a social platform is that they are open to misinterpretation, both by the users doing the reporting and by the Facebook team that reviews the flagged posts. A rock lyric or a piece of darker poetry posted as a status can alarm others and lead them to mistakenly report their friends.

It is better to be safe than sorry, but how does this scrutiny of posts flagged by concerned users affect the efficiency with which reports that do not reflect genuine distress are weeded out?

In a statement put forth by Antigone Davis, Facebook’s Global Head of Safety, and Jennifer Guadagno, a researcher on Facebook’s Compassion team: “We have teams [consisting of hundreds of people] working around the world, 24/7, who carefully review reports that come in and decide on how to proceed. They prioritize the most serious reports, like self-injury” [1].

Thus, user reports and team member evaluations make the process of addressing suicidality less automated, relying on human judgment within an online platform. At the same time, no artificial intelligence or algorithm evaluates the posts, even though such systems might be able to perceive subtle written cues that would otherwise go unnoticed [3].

Facebook’s suicide prevention tools give members of the community the heavy responsibility of remaining constantly aware of and attentive to their online environment. They also give individuals a more personal voice for starting conversations about mental health and wellness, and stigma weakens as that dialogue grows.

This shift in attitudes about mental health has fostered a greater sense of connection within the online community by encouraging people to speak out about how life’s difficulties affect their mental health. However, without understanding the severity of a situation, reaching out to aid another person may do more harm than good.

Some affected users may feel this initiative is an unwelcome intrusion on their privacy. Many factors contribute to a person’s unwillingness to ask for help outright, which makes it imperative that users be equipped with adequate knowledge before approaching someone.

In the online world, Facebook’s efforts to bring greater focus to mental health and reshape how it is perceived are to be applauded. Employing the human touch to reach out to someone on a social platform has started a powerful movement. Extensive resources dedicated to mental health and wellness have been assembled and made available to Facebook’s users, but it remains to be seen how effective the use of human judgment online really is.

 

Written by: Shagun Kanwar

Edited by: Veerpal Bambrah

 

References:

[1] Davis, A., & Guadagno, J. (2016, June 14). Facebook rolling out suicide prevention tools worldwide. Message posted to https://www.facebook.com/fbsafety

[2] Belanger, L. (2016, June 16). Facebook’s Suicide Prevention Tools: Invasive or Essential? Entrepreneur. Retrieved from https://www.entrepreneur.com/article/277665

[3] Bach, D., & John, S. L. (n.d.). Forefront and Facebook launch suicide prevention tool. Forefront. Retrieved from https://www.intheforefront.org/forefront-and-facebook-launch-suicide-prevention-tool

[4] Lopatto, E. (2013, January 23). Swartz suicide propels Facebook search for danger signs. Bloomberg. Retrieved from http://www.bloomberg.com/news/articles/2013-01-23/swartz-suicide-propels-facebook-search-for-danger-signs
