Many of us have seen a call for help before, and as social media becomes ever more pervasive, it’s little surprise that these calls for help might materialise online. We’ve heard of suicide notes being posted on Facebook, and in one extreme example, someone self-harmed on a Facebook live stream.
As you may imagine, Facebook wants to do something about this and is deploying its artificial intelligence capability to intervene where appropriate.
This isn’t exactly new; Facebook has been doing this in the US for a while, but now the feature is being brought to other countries, except the EU, owing to its restrictive privacy rules.
Here’s how it works:
- Facebook proactively detects posts that may be talking about self-harm and suicide by using pattern recognition technology. The tech looks at text in the post itself and at comments like “Are you ok?” or “Can I help?”. Previously, someone had to report a post or account that seemed suicidal before Facebook could act on it.
- Facebook combines this with human reviewers and first responders so that posts can also be analysed in context.
- This helps get resources to people who seem to be in distress, whether by asking friends or family members to check up on them or by connecting them to local crisis hotlines or organisations that can help.
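To give a rough sense of the first step, here is a minimal, purely illustrative sketch of how pattern matching over a post and its comments could flag content for human review. The phrase lists and the `flag_for_review` function are hypothetical; Facebook’s actual system uses trained pattern-recognition models, not a simple keyword list.

```python
import re

# Hypothetical phrase lists for illustration only; a real classifier
# would be a trained model, not hand-written patterns.
POST_PATTERNS = [r"\bcan't go on\b", r"\bend it all\b", r"\bsuicid\w*"]
COMMENT_PATTERNS = [r"\bare you ok\b", r"\bcan i help\b"]

def flag_for_review(post_text, comments):
    """Return True if the post text or any comment matches a distress
    pattern, meaning a human reviewer should assess it in context."""
    post_hit = any(re.search(p, post_text.lower()) for p in POST_PATTERNS)
    comment_hit = any(
        re.search(p, c.lower()) for c in comments for p in COMMENT_PATTERNS
    )
    return post_hit or comment_hit

print(flag_for_review("I just can't go on anymore", []))            # True
print(flag_for_review("Great day at the beach!", ["Are you ok?"]))  # True
print(flag_for_review("Great day at the beach!", ["Nice photo!"]))  # False
```

Note that the comment signals matter as much as the post itself: concerned replies from friends are often the clearest indicator, which is why the sketch checks both.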
You can also still report a post or account that you think might need help, and Facebook will reach out to the person directly.
Of course, there are legitimate privacy concerns about this behaviour, which Facebook’s chief security officer, Alex Stamos, has addressed. He admits that the risk of malicious use of AI will always be there, but says the company is committed to “weighing data use versus utility”.
Hopefully, this will be a big help for people struggling with these issues, and they will get the help they need.
Support is available for anyone who may be distressed by phoning Lifeline 13 11 14; Mensline 1300 789 978; Kids Helpline 1800 551 800.