Facebook Implements New Restrictions on Who Can Go Live to Stop the Spread of Disturbing Content

In the wake of the Christchurch terror attacks, in which the perpetrator broadcast his crimes live on Facebook, The Social Network scrambled to remove the footage as various users re-posted clips, and edited variations of them, in quick succession.

Facebook came under intense scrutiny over its perceived inability to limit the sharing of such content, with government officials from several nations calling for new regulations which would essentially hold Facebook executives accountable for extremist material hosted on their platform. 

But the very nature of live-streaming makes such rules near impossible to enforce – most of the wording around them has centered on holding Facebook to account if content is not removed in adequate time.

For example, under new laws implemented in Australia after the Christchurch attacks:

“Social media companies risk fines of up to 10% of the platform’s annual turnover if they fail to remove violent content ‘expeditiously’.”

But what does ‘expeditiously’ mean? It’s a fairly flexible parameter, with no firm legal definition. And given that content is being broadcast in real-time, with no filter between the user and the audience, it makes even less sense as a ruling.

So what can Facebook do?

This week, The Social Network has announced new restrictions on who can actually use Facebook Live, with users who’ve previously violated Facebook’s rules losing their Live privileges – for a month at first, then for longer periods with every subsequent infraction.

As explained by Facebook:

“We will now apply a ‘one strike’ policy to Live in connection with a broader range of offenses. From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.”
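In practice, that amounts to a strike counter with escalating timeouts on Live access. Here’s a minimal sketch of that logic in Python – note that Facebook hasn’t published its actual escalation schedule, so the 30-day base period and the doubling per repeat strike below are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

# Base restriction for a first serious violation - Facebook cites 30 days
# as an example of a "set period of time".
BASE_BLOCK = timedelta(days=30)

def live_blocked_until(strike_count, last_violation):
    """When a user's Live access is restored, given their strike history.

    The first strike triggers the base block; each further strike doubles
    it. The doubling is an assumption - Facebook says only that the
    restriction extends with each subsequent offense.
    """
    if strike_count == 0:
        return None  # clean record, no restriction
    return last_violation + BASE_BLOCK * (2 ** (strike_count - 1))

def can_go_live(strike_count, last_violation, now=None):
    """True if the user is currently allowed to broadcast on Live."""
    now = now or datetime.now(timezone.utc)
    until = live_blocked_until(strike_count, last_violation)
    return until is None or now >= until
```

Under that (hypothetical) schedule, a second strike would mean 60 days without Live access, a third 120, and so on.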

Will that work? Would that have stopped the Christchurch attacker, for example, from being able to broadcast his violent rampage?

According to reports, the attacker was not on any terror watch list, but there are suggestions that he had previously been reported for his activity on Facebook. That would mean that, under these new rules, he may well have lost Facebook Live access – but it’s just as likely that we’d then be talking about Periscope or Instagram Live, or another outlet through which he could have shared the same footage.

That doesn’t mean that Facebook should do nothing – any action The Social Network can take is a positive. It’s also considering expanding such restrictions to other Facebook tools – like ads – for those who break the platform’s rules.

It may help, but the actual impact is hard to predict – which is why Facebook’s also investing in new systems to better detect such content through automated means.

“One of the challenges we faced in the days after the Christchurch attack was a proliferation of many different variants of the video of the attack. People – not always intentionally – shared edited versions of the video, which made it hard for our systems to detect. Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realized that this is an area where we need to invest in further research.”
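Facebook hasn’t detailed how that matching technology works, but the general family of techniques is perceptual hashing: reduce each frame to a compact fingerprint, then compare fingerprints by Hamming distance, so that re-encoded or lightly edited copies still land near the original. Below is a toy sketch in plain Python – the hash size, match threshold, and majority rule are all illustrative assumptions, not Facebook’s parameters:

```python
def dhash(frame, size=8):
    """Difference hash: one bit per adjacent-pixel comparison in each row.

    `frame` is a 2D list of grayscale values, assumed already resized to
    `size` rows of (size + 1) pixels, so each row yields `size` comparisons.
    """
    bits = 0
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a, b):
    """Count of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_video(frame_hashes, known_hashes, max_distance=10):
    """Flag a clip if most of its frames sit near hashes of a known video."""
    hits = sum(
        1
        for fh in frame_hashes
        if any(hamming(fh, kh) <= max_distance for kh in known_hashes)
    )
    return hits / max(len(frame_hashes), 1) > 0.5
```

The weakness is exactly what Facebook describes: heavier edits – cropping, overlays, re-filming a screen – push fingerprints past any fixed threshold, which is why real systems layer in audio matching and learned embeddings, and why further research is where the money is going.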

For this, Facebook’s investing $7.5 million into new research partnerships with leading academics from the University of Maryland, Cornell University, and the University of California, Berkeley, to improve its automated detection tools.

Again, it’s impossible to predict how successful those efforts will be, but it’s another step towards improving Facebook’s systems – which, given the stakes, is most definitely worth the effort.

You can read more about Facebook’s new initiatives here.
