Health Secretary Matt Hancock has summoned executives from Facebook, Google, Snapchat and Instagram to a “social media summit”, where he plans to tell them they should develop technology that can identify and tackle harmful content.
The meeting will focus on messages promoting self-harm and suicide, which have been under intense scrutiny since the death of teenager Molly Russell, who killed herself in 2017 after viewing content on social media linked to anxiety, depression, self-harm and suicide.
Mr Hancock will also ask the social media giants to adopt a “zero-tolerance approach to those who spread anti-vaccination messages online”.
Coming three weeks after the government announced plans to make tech companies and social networks more accountable for the material on their platforms, this marks the latest front in its attempt to pin down the tech giants.
The question that campaigners, charities and grieving parents will be asking is this: will it produce real results?
After the last social media summit, in February, Facebook and Instagram agreed that the proliferation of content glamorising eating disorders and suicide was causing real harm.
But journalists – including my colleague Jason Farrell – were quickly able to find material normalising, and even encouraging, self-harm.
Instagram introduced a raft of changes to decrease the visibility of suicide, self-harm and eating disorder content, by reducing the ability to search for it, stopping it being recommended, and removing hashtags relating to it from the search bar.
But if you type relevant hashtags into Instagram, many still appear. Some are prefaced by a warning that the material may be upsetting. But this is as easily clicked away as a cookie pop-up.
To be fair to the social media companies, it is not an entirely straightforward issue. Identifying which posts encourage self-harm, rather than just commenting on it, is not easy even for experienced observers.
To take an extreme example, Facebook could remove every single post containing the word “suicide”. But that would introduce a nightmare of ineffective censorship, which could end up interfering with attempts to tackle suicidal ideation.
Ruth Sutherland, CEO of the Samaritans, who will be attending the summit as part of a new partnership with the government, says there is “no black and white solution that protects the public from content on self-harm and suicide, as they are such specific and complex issues”.
But she adds that “that is why we need to work together with tech platforms to identify and remove harmful content whilst being extremely mindful that sharing certain content can be an important source of support for some”.
While these difficulties are undoubtedly very real, they are also very old. We’ve been hearing about them for years. So when will we have a solution?
The government’s proposed answer is an online “duty of care”, which would make tech giants and social networks legally accountable for harmful material on their platforms.
But that is many months from coming into force, and even then there is no guarantee it will be rigorously enforced.
And for grieving parents, feeling a terrible sense of urgency, this summit may well seem like the latest in a long line of performances without results.
Keith Watts, whose daughter Zoe killed herself after viewing images of self-harm online, told Sky News: “If they don’t start acting soon, putting this into place within a very short space of time, we are going to lose a lot more young children.
“We need to stop this and it needs to be stopped immediately.”