MPs want tech giants to pay the police to find antisemitic and neo-Nazi content online (GOOG, FB, TWTR)

UK politicians have said that Google, Twitter, and Facebook should pay the Metropolitan Police to find extremist content on their sites, because they're not doing a good enough job by themselves.

MPs investigating the tech giants described them as "a disgrace" because they don't delete illegal material quickly enough.

The MPs are part of the Home Affairs Committee, which released a report today about hate speech online and its impact on the real world.

In the report, they used examples like MPs receiving antisemitic abuse online, Facebook hosting sexualised images of children, and YouTube hosting terrorist recruitment and neo-Nazi videos.

Social media companies, they said, should help fund the Metropolitan Police's online counter-terrorism unit, which finds extremist content on their behalf. That unit is currently funded by UK taxpayers and flags hateful content to Facebook, Twitter, and Google.

Google's Peter Barron, Facebook's Simon Milner and Twitter's Nick Pickles

This is what the MPs proposed in their report:

"Football teams are obliged to pay for policing in their stadiums and immediate surrounding areas on match days. Government should now consult on adopting similar principles online— for example, requiring social media companies to contribute to the Metropolitan Police's CTIRU [counter-terrorism internet referral unit] for the costs of enforcement activities which should rightfully be carried out by the companies themselves."

The MPs also proposed "meaningful fines" for tech giants that fail to take down illegal content within a short time, as well as quarterly reports showing how much hate speech they had removed from their platforms.

Committee chair Yvette Cooper added:

"The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe. Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law, and to keep their users and others safe."

At the moment, it doesn't look like the government will change the law to force tech giants to take hate speech more seriously. According to the report, MPs have pressured the trio to do more in a series of meetings. Last month, the three firms promised to develop new tools to identify terrorist propaganda online after meeting with home secretary Amber Rudd.

Facebook, Twitter, and Google did not immediately respond to a request for comment.
