Facebook is to use a form of artificial intelligence to help spot posts expressing thoughts of suicide.
This follows criticism of the social network for allowing people to live-stream videos of violent events including their own deaths, self-harm and murder.
Guy Rosen, a strategist with the Silicon Valley firm, said a “pattern recognition” program would be used to enable reports about suicidal posts to reach local authorities “twice as quickly” as they otherwise would.
The technology will be available everywhere except in the European Union, where strict data protection rules apparently bar it from being deployed.
Rosen said: “We are starting to roll out artificial intelligence outside the US to help identify when someone might be expressing thoughts of suicide, including on Facebook Live.”
He added: “This approach uses pattern recognition technology to help identify posts and live streams as likely to be expressing thoughts of suicide.
“We continue to work on this technology to increase accuracy and avoid false positives before our team reviews.”
Comments left beneath posts will be analysed by the technology, as offers of help that appear there can sometimes be a sign that a user may be in danger.
“We use signals like the text used in the post and comments – for example, comments like ‘Are you OK?’ and ‘Can I help?’ can be strong indicators,” Rosen said.
He added: “In some instances, we have found that the technology has identified videos that may have gone unreported.”
Facebook already has a team of people in charge of reviewing reports about content posted to the site. This includes specialists who have been trained in speaking to those expressing thoughts of self-harm. The company also works with several mental health organisations worldwide.