Google is reportedly paying publishers thousands of dollars to use its AI to write stories

Google has been quietly striking deals with some publishers to use new generative AI tools to publish stories, according to a report in Adweek. The deals, reportedly worth tens of thousands of dollars a year, are apparently part of the Google News Initiative (GNI), a six-year-old program that funds media literacy projects, fact-checking tools, and other resources for newsrooms. But the move into generative AI publishing tools would be a new, and likely controversial, step for the company.

According to Adweek, the program is currently targeting a “handful” of smaller publishers. “The beta tools let under-resourced publishers create aggregated content more efficiently by indexing recently published reports generated by other organizations, like government agencies and neighboring news outlets, and then summarizing and publishing them as a new article,” Adweek reports.

It’s not clear exactly how much publishers are being paid under the arrangement, though Adweek says it’s a “five-figure sum” per year. In exchange, media organizations reportedly agree to publish at least three articles a day, one weekly newsletter, and one monthly marketing campaign using the tools.

Of note, publishers in the program are apparently not required to disclose their use of AI, nor are the outlets whose content is being aggregated informed that it is being used to create AI-written stories on other sites. The AI-generated copy reportedly uses a color-coded system to indicate the reliability of each section of text, helping human editors review the content before publishing.

Google didn’t immediately respond to a request for comment. In a statement to Adweek, the company said it was “in the early stages of exploring ideas to potentially provide AI-enabled tools to help journalists with their work.” The spokesperson added that the AI tools “are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.”

It’s not clear what Google is getting out of the arrangement, though it wouldn’t be the first tech company to pay newsrooms to use proprietary tools. The arrangement bears some similarities to the deals Facebook struck with publishers in 2016 to create live video content. The social media company made headlines as it paid publishers millions of dollars to juice its nascent video platform, and dozens of media outlets opted to “pivot to video” as a result.

Those deals later evaporated after Facebook discovered it had wildly miscalculated the number of views such content was getting. The social network ended its live video deals soon after and has since tweaked its algorithm to recommend less news content. The media industry’s “pivot to video” cost hundreds of journalists their jobs, by some estimates.

While the GNI program appears to be much smaller than what Facebook attempted nearly a decade ago with live video, it will likely raise fresh scrutiny over the use of generative AI tools by publishers. Publications like CNET and Sports Illustrated have been widely criticized for attempting to pass off AI-authored articles as written by human staffers.

This article originally appeared on Engadget at https://ift.tt/obxkMhJ