
Amazon discovered a 'high volume' of CSAM in its AI training data but isn't saying where it came from

The National Center for Missing and Exploited Children (NCMEC) said it received more than 1 million reports of AI-related child sexual abuse material (CSAM) in 2025. The "vast majority" of that content was reported by Amazon, which found the material in its training data, according to an investigation by Bloomberg. Amazon said only that the content came from external sources used to train its AI services, and claimed it could not provide any further details about where the CSAM originated.

"This is really an outlier," Fallon McNulty, executive director of NCMEC’s CyberTipline, told Bloomberg. The CyberTipline is where many types of US-based companies are legally required to report suspected CSAM. “Having such a high volume come in throughout the year begs a lot of questions about where the data is coming from, and what safeguards have been put in place.” She added that aside from Amazon, the AI-related reports the organization received from other companies last year included actionable data that it could pass along to law enforcement for next steps. Since Amazon isn’t disclosing sources, McNulty said its reports have proved “inactionable.”

"We take a deliberately cautious approach to scanning foundation model training data, including data from the public web, to identify and remove known [child sexual abuse material] and protect our customers," an Amazon representative said in a statement to Bloomberg. The spokesperson also said that Amazon aimed to over-report its figures to NCMEC in order to avoid missing any cases. The company said that it removed the suspected CSAM content before feeding training data into its AI models. 

The safety of minors has emerged as a critical concern for the artificial intelligence industry in recent months. AI-related CSAM reports have skyrocketed in NCMEC's records: compared with the more than 1 million the organization received last year, the total was 67,000 in 2024 and just 4,700 in 2023.

In addition to issues such as abusive content being used to train models, AI chatbots have also been implicated in several dangerous or tragic cases involving young users. OpenAI and Character.AI have both been sued after teenagers planned their suicides with those companies' platforms. Meta is also being sued for alleged failures to protect teen users from sexually explicit conversations with chatbots.

This article originally appeared on Engadget at https://ift.tt/F0Afveq


