Instagram: New tools to ban self-harm and suicide posts

Image caption: There are fears over the impact of self-harm and suicide content on young and vulnerable people (Image source: Getty Images)

Instagram has launched new technology to recognise self-harm and suicide content on its app in the UK and Europe.

The new tools can identify both images and words that break its rules on harmful posts.

The technology will make such posts less visible in the app and, in extreme cases, remove them automatically.

Facebook, which owns Instagram, said it was an "important step" but the company wanted to do "a lot more".

Head of Instagram Adam Mosseri detailed the new system, which uses artificial intelligence, in a blog post on its website.

Human referral

The technology already exists outside Europe on Facebook and Instagram.

Posts identified as harmful by the algorithm can be referred to human moderators, who choose whether to take further action - including directing the user to help organisations and informing emergency services.

But Instagram told the UK's Press Association news agency that human referral was not currently part of the new tools in the UK and Europe because of data privacy considerations linked to the General Data Protection Regulation (GDPR).

The social media firm said implementing a referral process would be its next step.

"In the EU at the moment, we can only use that mix of sophisticated technology and human review element if a post is reported to us directly by a member of the community," Instagram's public policy director in Europe, Tara Hopkins, said.

She added that because in a small number of cases a judgement would be made by a human reviewer on whether to send additional resources to a user, this could be considered by regulators to be a "mental health assessment" and therefore a part of special category data, which receives greater protection under GDPR.

Facebook and Instagram have come under fire in recent years for a lack of regulation over suicide and self-harm material.

Fears about the impact of this content on young and vulnerable people have been raised since the suicide of 14-year-old schoolgirl Molly Russell.

Molly's father, Ian, has previously said the "pushy algorithms" of social media "helped kill my daughter".

Media caption: After Molly Russell took her own life, her family discovered distressing material about suicide on her Instagram account

In September, social media companies including Facebook, Instagram, Google, YouTube, Twitter and Pinterest agreed to guidelines published by mental health charity Samaritans, in an effort to set industry standards on the issue.

"While we have seen a number of positive steps in the right direction in recent months, we know that there is still more work that needs to be done in order to tackle harmful online content," said Lydia Grace, Samaritans programme manager for online harm.

"We need regulation to ensure technology platforms take swift action to remove harmful content and that they can use the tools at their disposal to do this, while ensuring vulnerable users can access supportive content when they need it.

"Our Online Excellence Programme aims to develop a hub of excellence in suicide prevention and the online environment. As part of this, we recently launched our new guidelines for the technology industry to help sites and platforms to create safer online spaces by minimising access to potentially harmful content relating to self-harm and suicide, and maximising opportunities for support."

But Instagram said it also wanted to be a place where users could admit they have considered self-harm or suicide.

"It's okay to admit that and we want there to be a space on Instagram and Facebook for that admission," Ms Hopkins added.

"We're told by experts that can help to destigmatise issues around suicide. It's a balancing act and we're trying to get to the right spot where we're able to provide that kind of platform in that space, while also keeping people safe from seeing this kind of content if they're vulnerable."

This IS a step forward, but even one Instagram insider admitted to me that it represented only "limited progress".

So what's actually changed?

Well, before today in the EU, Instagram's complex algorithms could only hunt for images - now they can trawl for both images AND text.

That's really important because a seemingly innocent image can be accompanied by wording that breaches guidelines.

Once found, those posts will be pushed out of the "Explore" feature, making them harder but not impossible to find.

It also means that the very worst posts will be automatically deleted after 24 hours.

But the truth is even after these changes, users in the EU will not be protected from harmful content in quite the same way as those in the rest of the world.

The algorithms which analyse posts outside the EU automatically send suspect ones direct to human moderators for a final decision. Inside the EU that still won't happen.

That's because European data regulators are concerned about Instagram's ability to respect rules on privacy.
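The workflow described above - flag a post on its image or text, demote it from Explore, auto-delete the most severe cases, and refer the rest to human moderators only outside the EU - can be sketched as a toy model. Everything here is invented for illustration (the field names, the severity threshold, the action labels); Instagram has not published its actual logic:

```python
from dataclasses import dataclass

@dataclass
class Post:
    image_flagged: bool   # classifier thinks the image breaks the rules
    text_flagged: bool    # classifier thinks the caption breaks the rules
    severity: float       # 0.0 (benign) to 1.0 (most harmful) - hypothetical score

def moderate(post: Post, in_eu: bool) -> list:
    """Return the list of actions taken on a post under this sketch."""
    actions = []
    if not (post.image_flagged or post.text_flagged):
        return actions                           # nothing matched; leave the post alone
    actions.append("demote_from_explore")        # harder, but not impossible, to find
    if post.severity > 0.9:                      # "the very worst posts"
        actions.append("auto_delete_after_24h")
    elif not in_eu:
        actions.append("refer_to_human_moderator")  # the step GDPR concerns block in the EU
    return actions
```

The `elif` branch is the crux of the EU difference: inside the EU a flagged-but-not-deleted post simply stays demoted, with no automatic human referral.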

Privacy apart, Instagram globally is still struggling with a complex balancing act.

How does it allow support groups and those in recovery to post about self-harm and suicide, while at the same time protecting its most vulnerable users - many of them children?