Posts disappear from Facebook, Twitter, YouTube as robots police content

March 18, 2020 | 5:15pm

The coronavirus isn’t the only bug going around.

Big tech companies that left content moderation efforts to their artificial intelligence systems after sending legions of workers home are now scrambling to fix systems that have been accidentally marking posts as spam.

Facebook’s anti-spam system has been automatically blocking the publication of links to news stories about the coronavirus pandemic, as well as to stories with information about school closings and other impacts of the virus.

Users took to Twitter to complain about Facebook’s moderation efforts, including one who said the site blocked them from sharing a New York Post story about a 21-year-old Spaniard who died of the virus.

Another wondered if it was a case of anti-conservative bias — an accusation that the Menlo Park, California-based company has strongly and repeatedly denied since the 2016 election.

“Facebook has a ‘Spam Bug’ that just so happened to take down all negative CoronaVirus articles about Trump. How convenient,” user MJGWrites posted, adding the hashtag “#FacebookCensorship.”

An executive at the social network said late Tuesday that posts were “incorrectly removed,” and that Facebook has since restored them, claiming that the mistake was due to a technical error and “unrelated to any changes in our content moderator workforce.”

“This was an issue with an automated system that removes links to abusive websites, but incorrectly removed a lot of other posts too,” VP of integrity Guy Rosen said on Twitter.

Facebook stopped one user from sharing a story from The Post on the death of Francisco Garcia, pictured, from coronavirus. (Atlético Portada Alta)

Facebook this week said it would send its human content moderators home with pay as the pandemic led many company employees to work remotely. The contracted content reviewers could not do their jobs from home because of security issues.

The social-media giant is now having more full-time employees police highly sensitive content such as child pornography, CEO Mark Zuckerberg said Wednesday.

But it’s not just Facebook that has been at the mercy of sometimes unpredictable robo-moderators. YouTube and Twitter — which have both begun to rely more heavily on automated content monitoring — have also incurred the wrath of users by blocking posts.

Twitter and YouTube both warned earlier in the week that they expect growing pains as they increase their usage of machine learning and automation to police their platforms.

As such, Twitter said that it would “not permanently suspend any accounts based solely on our automated enforcement systems.”

YouTube, meanwhile, has told users to expect an increase in video removals, and to prepare to see videos taken down “that may not violate policies.”

The video-sharing platform said users can appeal removals, but warned to expect slower-than-usual responses because it is short-staffed.

Videos under appeal that have not yet been reviewed will not be promoted by YouTube’s algorithm or appear in the site’s search results.

While Facebook did remove some posts by mistake, the company has taken a hard stance against content that spreads false claims and conspiracy theories about the coronavirus. It has also worked to crack down on ads promising to prevent or cure the disease.

With Post wires
