
Facebook's spam filter blocked the most popular articles about its 50m user breach

When news broke yesterday that Facebook had suffered a breach affecting at least 50,000,000 users, Facebook users (understandably) began to widely share links to articles about the breach.

The articles were so widely and quickly shared that they triggered Facebook's spam filters, which blocked the most popular stories about the breach, including an AP story and a Guardian story.

There's no reason to think that Facebook intentionally suppressed embarrassing news about its own business. Rather, this is a cautionary tale about the consequences of content filtering on big platforms.

Facebook's spam filter is concerned primarily with stopping spam, not with allowing through storm-of-the-century breaking news headlines that everyone wants to share. On a daily basis, Facebook gets millions of spam messages and (statistically) zero stories so salient that every Facebook user shares them at once. Any sanity-check on a spam filter that waved through things that merely appeared to be breaking news would be a crack in Facebook's spam defenses, letting through far more spam than legitimate everywhere-at-once stories: those stories almost never occur, while spam arrives every second of every minute of every hour of every day.
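To see why the two cases are hard to tell apart, here is a minimal sketch of a naive "same content, many posters" heuristic of the sort described above. The threshold and names are invented for illustration; this is not Facebook's actual system.

```python
# Hypothetical sketch: flag any URL posted by "too many" users at once.
from collections import Counter

SHARE_THRESHOLD = 10_000  # invented cutoff for this illustration


def flag_probable_spam(posted_urls, threshold=SHARE_THRESHOLD):
    """Return the set of URLs shared more often than the threshold."""
    counts = Counter(posted_urls)
    return {url for url, n in counts.items() if n >= threshold}


# A breaking-news URL shared by everyone at once trips the same rule
# that catches a spam link blasted out by a botnet:
posts = ["https://apnews.example/breach"] * 15_000 + ["https://cats.example"] * 3
print(flag_probable_spam(posts))  # the breaking-news URL gets flagged
```

The rule has no way to distinguish a botnet from a genuinely viral story; both look like "a lot of people posting the same content."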

And yet, storm-of-the-century stories are incredibly important (by definition) and losing our ability to discuss them -- or having that ability compromised by having to wait hours for Facebook to discover, diagnose and repair the problem -- is a very high price to pay.

It's a problem with the same underlying mechanics as the incident in which a man was sent an image of his mother's grave decorated with dancing cartoon characters and party balloons on the anniversary of her funeral. Facebook sends you these annual reminders a year after you post an image that attracts a lot of "likes," and images that attract a lot of likes are far more likely to be happy news than your mother's tombstone. You only bury your mother once, while you celebrate personal victories repeatedly.

So cartoon characters on your mother's grave are a corner case, an outlier -- just like a spam filter suppressing a story about a breach of 50,000,000 Facebook accounts. But they are incredibly important outliers, outliers that the system should never, ever miss.

It may not ever be possible to design a system with two billion users that doesn't involve these kinds of outliers: a one-in-a-billion outlier in a system with two billion users will happen twice a day, on average. We don't really know how to design a system that can address the majority of cases and also every one-in-a-billion corner-case.
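The "twice a day" figure follows from a simple back-of-the-envelope calculation, assuming each user independently has a one-in-a-billion chance of hitting a given corner case on any given day:

```python
# Expected daily incidents for a one-in-a-billion corner case
# across two billion users (assumption: per-user, per-day odds).
users = 2_000_000_000
p_per_user_per_day = 1 / 1_000_000_000

expected_incidents_per_day = users * p_per_user_per_day
print(expected_incidents_per_day)  # 2.0 -- twice a day, on average
```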

But the answer shouldn't be to shrug our shoulders and give up. If it's impossible to run a system for two billion users without committing grave, unforgivable sins on a daily basis, then we shouldn't have systems with two billion users.

Unfortunately, the rising chorus of calls for the platforms to filter their users is trapped in the idea that the platforms can fix their problems -- not that the platforms are the problems. Filtering for harassment will inevitably end up filtering out many discussions of harassment itself, in which survivors of harassment are telling their stories and getting support. Same goes for filtering for copyright infringement, libel, "extremist content" and other "bad speech" (including a lot of speech that I personally find distasteful and never want to see in my own online sessions).

It's totally true that filtering doesn't scale up to billion-user platforms -- which isn't to say that we should abandon our attempts to have civil and civilized online discussions, but that the problem may never be solved until we cut the platforms down to manageable scales.

When going to share the story to their news feed, some users, including members of the staff here at TechCrunch who were able to replicate the bug, were met with an error message that prevented them from sharing the story.

According to the message, Facebook is flagging the stories as spam because of how widely they are being shared -- or, as the message puts it, the system's observation that "a lot of people are posting the same content."

Facebook blocked users from posting some stories about its security breach [Taylor Hatmaker/Techcrunch]

from Boing Boing https://ift.tt/2NQigNZ
via IFTTT
