Instagram has launched a new tool in Europe and the UK to detect and remove posts that reference self-harm and suicide, the social media giant announced on its blog.
"We recognize that these are deeply personal issues for the people who are affected," said Adam Mosseri, Head of Instagram.
Fourteen-year-old Molly Russell took her own life in 2017, and her death seemed to be a wake-up call for tech giants to step up and protect children. After her death, Russell's family discovered that she had been viewing graphic imagery related to suicide and self-harm on the platform.
Molly's father, Ian Russell, called out Instagram for failing to protect its users, saying the platform "helped kill my daughter."
Last September, tech giants including Instagram, Google, Facebook, YouTube, Twitter, and Pinterest all signed up to new guidelines drawn up by the mental health charity Samaritans. As the BBC reported, the guidelines set high standards for tackling harmful online content.
"We want all sites and platforms to recognize that self-harm and suicide content has potential for serious harms," said Jacqui Morrisey. Morrisey is the Assistant Director of Research and Influencing.
The technology itself, however, is not entirely new: it has already been available to users in the US and the rest of the world.
How Does It Work?
The new tool scans captions, pictures, words, and videos for material that violates Instagram's community guidelines. In the most severe cases, a post is deleted automatically.
The algorithm, which previously could only crawl images, can now also hunt down words and text. Fictional depictions, such as drawings, memes, or other forms of imagery associated with suicide, will be taken down immediately.
Instagram claims its systems already find over 90% of such harmful content before it is reported, and the company is aiming for 100%. Depending on the severity, the tool either makes a post less visible or takes it down immediately.
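Instagram has not published how the system works internally, but the flow described above (scoring a post's text and image signals, then demoting borderline cases and removing severe ones) can be sketched in a few lines of Python. Everything below, including the keyword matchers, thresholds, and function names, is a hypothetical illustration and not Instagram's actual code.

    from dataclasses import dataclass

    @dataclass
    class Post:
        caption: str
        image_labels: list[str]  # labels a vision model might assign to the image

    # Naive keyword list as a stand-in for trained classifiers.
    HARMFUL_TERMS = {"suicide", "self-harm"}

    def text_score(caption: str) -> float:
        # Stand-in for a text classifier scanning captions and words.
        words = set(caption.lower().split())
        return 1.0 if words & HARMFUL_TERMS else 0.0

    def image_score(labels: list[str]) -> float:
        # Stand-in for an image model covering photos, drawings, and memes.
        return 1.0 if any(label in HARMFUL_TERMS for label in labels) else 0.0

    def moderate(post: Post) -> str:
        """Return an action: remove severe posts, demote borderline ones."""
        score = max(text_score(post.caption), image_score(post.image_labels))
        if score >= 0.9:   # most severe cases: automatic removal
            return "remove"
        if score >= 0.5:   # borderline cases: make the post less visible
            return "demote"
        return "allow"

    print(moderate(Post("a lovely sunset", ["sunset", "beach"])))  # -> allow

In a real system the scores would come from trained machine-learning models rather than keyword lists, but the two-tier decision, demoting some posts and removing others, is the behavior the announcement describes.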
Not the First Time
Over the years, Instagram, now owned by Facebook, has taken some serious steps to tackle the problem.
The first feature to tackle harmful content was rolled out in 2016, when the platform launched an option to anonymously report potential self-harm posts and started connecting people to helpful organizations.
Three years later, Instagram began hosting regular consultations with experts worldwide and expanded its policies to remove all graphic self-harm content from the platform. Several related hashtags have also been blocked.
In 2019, Mosseri announced that Instagram would remove the counter that displays the number of likes under a post, a decision made to improve the emotional and mental health of its users.
Instagram's enforcement report for the second quarter of 2020 states that the platform took action on over 275,000 pieces of such content, 94% of it detected before users reported it. Although those numbers may dip while moderators work from home during the pandemic, Instagram vows to keep improving how it tackles the issue.