
Meta will reportedly soon use AI for most product risk assessments instead of human reviewers

According to a report from NPR, Meta plans to shift the task of assessing its products' potential harms away from human reviewers, instead leaning more heavily on AI to speed up the process. Internal documents seen by the publication indicate that Meta is aiming to have up to 90 percent of risk assessments handled by AI, and that the company is considering using AI reviews even in areas such as youth risk and "integrity," which covers violent content, misinformation and more. Unnamed current and former Meta employees who spoke with NPR warned that AI may overlook serious risks a human team would have been able to identify.

Updates and new features for Meta's platforms, including Instagram and WhatsApp, have long been subject to human review before they reach the public, but Meta has reportedly doubled down on the use of AI over the last two months. Now, according to NPR, product teams must fill out a questionnaire about their product and submit it for review by the AI system, which generally provides an "instant decision" that includes the risk areas it has identified. The team must then address whatever requirements the system lays out before the product can be released.

A former Meta executive told NPR that reducing scrutiny "means you're creating higher risks. Negative externalities of product changes are less likely to be prevented before they start causing problems in the world." In a statement to NPR, Meta said it would still tap "human expertise" to evaluate "novel and complex issues," and leave the "low-risk decisions" to AI. Read the full report over at NPR.

The report comes a few days after Meta released its latest quarterly integrity reports, the first since the company changed its policies on content moderation and fact-checking earlier this year. The amount of content taken down has unsurprisingly decreased in the wake of those changes, per the reports, but there was a small rise in bullying and harassment, as well as violent and graphic content.

This article originally appeared on Engadget at https://ift.tt/sHre39R

