Facebook and its sister app Instagram made a record number of mistakes while policing their services this year as the coronavirus pandemic forced them to rely heavily on automated systems.
New data released by the company on Thursday showed a sharp rise in cases where it had wrongly taken down certain kinds of content and then reversed its decision following an audit or appeal.
The number of posts, images and videos being restored after they were branded hate speech soared by almost 300pc on Facebook and 100pc on Instagram between the end of March and the end of September.
Restorals of content judged to have praised or glorified hate organisations doubled from 65,000 to 127,200, while restorals of content removed as terrorist material shot up by 140pc during the summer before falling back below their original level.
Guy Rosen, Facebook’s vice president of product integrity, said much of the rise was due to a vast machine-powered moderation blitz which saw it take down 12m instances of coronavirus misinformation, 45m cases of hate speech and 18m pieces of terrorist content between March and October.
It came after more than 200 content moderators wrote to Facebook’s chief executive, Mark Zuckerberg, accusing him of risking their lives by "forcing" them to return to their offices after months of remote working.
Almost all of Facebook’s moderation force was sent home in March to protect them from the virus, straining the company’s ability to police its service and forcing it to rely more than ever before on artificial intelligence (AI).
Mr Rosen said: "As part of our response to Covid, and the reduced availability of our reviewer workforce, we provided people an ability to indicate that they think we made a mistake.
"Our teams look at those in the aggregate and understand where we’ve made mistakes – where the system has had an issue and has taken down a series of posts in error – and we restore those.
"Some of those numbers are growing in tandem with the growth of content actions. If you take more action, remove more content, there’s more opportunity also for those to be in error."
That echoed a warning from Facebook’s chief executive Mark Zuckerberg, who said in May: "We do unfortunately expect to make more mistakes until we’re able to ramp everything back up."
The latest statistics bore that out, adding tens of millions to the tally of removals in categories such as hate speech, terrorism and harassment. Other categories, such as drug sales and graphic violence, saw drops of a similar size.
Overall, the company reversed its decisions on as many as 135m pieces of content between April and September, compared to 130m during the previous six months, although both figures may include duplicates.
Hate speech in particular surged on both Facebook and Instagram as the anxiety and camaraderie of the pandemic’s early weeks gave way to the George Floyd protests, a spike in far-right terrorism and a bitterly fought US election campaign.
On Facebook, moderators took action against 9.5m instances of hate speech in the first three months of the year, compared with 22.1m in the three months to the end of September. On Instagram the rise was tenfold, from 578,000 to 6.5m.
A spokeswoman said that Instagram’s numbers were partly driven by the expansion of AI detection technology into Arabic, Indonesian and Spanish, and noted that the AI systems on both services continue to improve. Facebook’s AI clear-up rate was already very high at the start of the year.
For the first time, the company also estimated just how common hate speech actually is on its services, saying that about 1 in 1,000 views included such content.
While small, that number would make hate speech about twice as common as other types of rule violations such as nudity and graphic violence. Facebook has also said that only 6pc of what people see on its US news feed is political.
Addressing Facebook’s recent election crackdown, Mr Rosen said that it had imposed more than 180m warning labels on "debunked" misinformation, and removed 265,000 pieces of content for breaking its rules against voter interference.
In some areas, a rise in restorals did coincide with a jump in posts being removed. In other cases, however, such as terrorist content on Facebook and self-harm material on Instagram, the rate of errors – restorals as a proportion of removals – also increased, suggesting that Facebook’s moderation had become more zealous and less accurate.
Conversely, hate speech detection appeared to have grown more accurate, with the error rate dropping significantly over the past year.
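That distinction rests on comparing restorals against the volume of removals rather than counting them in isolation. The sketch below, written in Python with invented round numbers rather than Facebook’s published figures, illustrates how restorals can rise while the underlying error rate falls, and vice versa.

```python
# Back-of-the-envelope comparison of restorals against removals.
# All figures below are invented for illustration; they are not
# Facebook's published numbers.

def restoral_rate(restored: int, removed: int) -> float:
    """Share of removals later reversed - a rough proxy for the error rate."""
    return restored / removed

# Restorals can grow simply because enforcement grew...
print(restoral_rate(50_000, 5_000_000))    # 1.0pc of removals reversed
print(restoral_rate(120_000, 20_000_000))  # 0.6pc: more restorals, lower rate

# ...or because moderation genuinely became less accurate.
print(restoral_rate(120_000, 6_000_000))   # 2.0pc: error rate doubled
```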
Mr Rosen said: "We can talk about taking down content and all of these big numbers, but we also need to make sure we’re constantly balancing that with giving people the ability to indicate when we’ve made a mistake, and ensuring that we are held accountable to correcting those mistakes."
Facebook uses a combination of AI and human moderators to enforce its rules, with machines flagging up examples for humans to look at and then enforcing their decisions automatically across swathes of duplicated or very similar content.
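For illustration only, the sketch below outlines that kind of hybrid flow in Python; the thresholds, the fingerprint-based duplicate matching and every function name are assumptions rather than details Facebook has disclosed.

```python
# Illustrative sketch of a hybrid AI + human moderation flow.
# Not Facebook's code: the thresholds, the fingerprint-based duplicate
# matching and all function names here are assumptions.
import hashlib

REVIEW_THRESHOLD = 0.6   # assumed: borderline posts go to a human reviewer
AUTO_THRESHOLD = 0.97    # assumed: near-certain cases are actioned automatically

human_review_queue = []   # posts awaiting a moderator's decision
decided_fingerprints = {} # fingerprint -> "remove" or "keep"

def fingerprint(text: str) -> str:
    """Crude stand-in for real duplicate detection (e.g. media hashing)."""
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

def classify(text: str) -> float:
    """Stand-in for an ML classifier returning a violation probability."""
    return 0.99 if "banned phrase" in text.lower() else 0.1

def handle_post(text: str) -> str:
    fp = fingerprint(text)
    # 1. Reuse an earlier human decision on duplicated or very similar content.
    if fp in decided_fingerprints:
        return decided_fingerprints[fp]
    score = classify(text)
    # 2. Very confident model predictions are enforced automatically.
    if score >= AUTO_THRESHOLD:
        return "remove"
    # 3. Borderline cases are flagged for a human moderator.
    if score >= REVIEW_THRESHOLD:
        human_review_queue.append((fp, text))
        return "pending review"
    return "keep"

def record_human_decision(fp: str, decision: str) -> None:
    """A moderator's verdict is then applied to future duplicates."""
    decided_fingerprints[fp] = decision
```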