
MPs want tech giants to pay the police to find antisemitic and neo-Nazi content online (GOOG, FB, TWTR)


UK politicians have said that Google, Twitter, and Facebook should pay the Metropolitan Police to find extremist content on their sites, because they're not doing a good enough job by themselves.

MPs investigating the tech giants described them as "a disgrace" because they don't delete illegal material quickly enough.

The MPs are part of the Home Affairs Committee, which released a report today about hate speech online and its impact on the real world.

In the report, they used examples like MPs receiving antisemitic abuse online, Facebook hosting sexualised images of children, and YouTube hosting terrorist recruitment and neo-Nazi videos.

Social media companies, they said, should help fund the Metropolitan Police's online counter-terrorism unit, which finds extremist content on their behalf. The unit is currently funded by UK taxpayers and flags hateful content to Facebook, Twitter, and Google.

Pictured: Google's Peter Barron, Facebook's Simon Milner, and Twitter's Nick Pickles.

This is what the MPs proposed in their report:

"Football teams are obliged to pay for policing in their stadiums and immediate surrounding areas on match days. Government should now consult on adopting similar principles online— for example, requiring social media companies to contribute to the Metropolitan Police's CTIRU [counter-terrorism internet referral unit] for the costs of enforcement activities which should rightfully be carried out by the companies themselves."

The MPs also proposed "meaningful fines" if the tech giants failed to take down illegal content promptly, as well as quarterly reports showing how much hate speech they had removed from their platforms.

Committee chair Yvette Cooper added:

"The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe. Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law, and to keep their users and others safe."

At the moment, it doesn't look like the government will change the law to force tech giants to take hate speech more seriously. According to the report, MPs have pressured the trio to do more in a series of meetings. Last month, the three firms promised to develop new tools to identify terrorist propaganda online after meeting with home secretary Amber Rudd.

Facebook, Twitter, and Google did not immediately respond to a request for comment.





from Tech Insider http://ift.tt/2pkNS0i
via IFTTT
