While artificial intelligence often stirs fears of a dramatic, Terminator-style scenario, it can also save lives. That, at least, is the view of Facebook, which has been testing software in the United States for several months to detect suicidal tendencies among its users. According to the World Health Organization (WHO), suicide causes more than 800,000 deaths per year worldwide, or one death every 40 seconds. The organization also estimates that suicide is the second leading cause of death globally among 15-29 year olds.
Facebook’s tool relies on artificial intelligence to automatically identify posts that raise fears of suicidal impulses. To build it, the technology was trained on existing posts that human users had reported as suicidal. On the social network, the software watches for telltale questions such as “Are you okay?” or “Do you need help?”. If a post is judged suicidal, the software alerts the Facebook employees assigned to review such reports.
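Facebook has not published its model, but the approach described above can be illustrated with a minimal sketch: count how many comments on a post contain concerned phrases and queue the post for human review past a threshold. The phrase list, function name, and threshold here are invented for illustration; the real system is a trained classifier, not a keyword match.

```python
# Illustrative sketch only: Facebook's actual system is a classifier
# trained on human-reported posts. The phrases and threshold below
# are invented assumptions for this example.
CONCERN_PHRASES = ("are you okay", "do you need help", "i'm here for you")

def flag_for_review(comments: list[str], threshold: int = 1) -> bool:
    """Return True if the post should be queued for human review,
    based on how many comments contain a concerned phrase."""
    signals = sum(
        any(phrase in comment.lower() for phrase in CONCERN_PHRASES)
        for comment in comments
    )
    return signals >= threshold

# Example: two friends leave concerned comments, so the post is flagged.
comments = ["Are you okay?", "Great photo!", "Do you need help?"]
print(flag_for_review(comments))  # True
```

In the real pipeline, a flagged post is not acted on automatically: it is routed to the trained employees mentioned above, who decide what resources to offer.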
Global deployment… except in the European Union
To keep the affected user from acting on their impulses, Facebook provides them with resources, such as a hotline number and advice for getting through a difficult time, and suggests they call a trusted friend. The social network also encourages the user’s friends to help, either by calling a helpline for people in distress or by talking with other friends to find a solution. Facebook can even choose to contact emergency services directly if an urgent situation is identified.