In its latest confusing decision, Twitter reinstates The New York Post

Rupert Murdoch’s tabloid The New York Post is back on Twitter, after Twitter updated its policy on policy changes. This story is going to be confusing, but not as confusing as Twitter’s attempts at moderation.

To recap: On October 14th, The New York Post published a story about Hunter Biden, the son of presidential candidate Joe Biden. (The story is contested and possibly part of a disinformation campaign, though that is absolutely not the point here.) Very little of the Post story's content is pertinent to the discussion we are about to have, except this: some of the materials in it, Twitter alleges, appear to be the result of hacking.

Twitter suspended The New York Post’s account for six tweets that linked to the story and blocked links to the story in question, citing its hacked materials policy, as well as a policy about private information. This caused, perhaps predictably, a massive uproar. On October 15th, Twitter’s trust and safety lead, Vijaya Gadde, tweeted that Twitter’s hacked materials policy would change, and the company would “no longer remove hacked content unless it is directly shared by hackers or those acting in concert with them.”

On October 16th, Jack Dorsey tweeted that blocking the URL “was wrong,” and a Twitter spokesperson told The New York Times that the information that was previously “private information” had spread so widely that it no longer counted as “private.” Therefore, the Post article no longer violated the private information policy.

Got all that so far? Great, there’s more. Despite inspiring the policy change on hacked materials and no longer violating the policy on private information, The New York Post remained suspended, because of a different policy. See, Twitter has a policy on policy changes. If you were, say, a tabloid that had been suspended under an old policy, a new policy wouldn’t supersede your suspension. Not even if you’d inspired the new policy.

So today, Twitter has updated its policy on policy changes, and The New York Post is taking a victory lap.

It didn’t have to go like this. Facebook, for instance, chose to limit the article’s reach while fact-checkers combed through it — but the company didn’t remove it. Essentially, Facebook triggered its “virality circuit breaker,” which, as Casey Newton points out, allowed The Post to post without giving the article unwarranted lift in case it turned out to be disinformation. That decision was also controversial, but it was less severe.

Pilfered documents are unquestionably part of the journalistic tradition. That tradition was particularly visible in the 2016 presidential election, when reporters published stories based on emails from the Democratic National Committee that had been obtained through hacking. As a result, platforms began planning for what they would do in case of a similar 2020 hack-and-leak operation. Twitter evidently felt that The New York Post’s article rose to that level.

Anyway, the Republican party called foul on the whole thing and made everyone sit through a tiresome Senate hearing on October 28th.

So, here we are, one Senate hearing and two policy changes later. Insofar as it is possible to draw a moral from this bizarre saga, it seems to be this: Twitter’s moderation still doesn’t make any damn sense. But congratulations to them on updating their policy on policy changes.



from The Verge - Tech https://ift.tt/35SAc0P
via IFTTT
