
AI presents 'risk of extinction' on par with nuclear war, industry leaders say

With the rise of ChatGPT, Bard and other large language models (LLMs), we've been hearing warnings from people involved in the field, such as Elon Musk, about the risks posed by artificial intelligence (AI). Now, a group of high-profile industry leaders has issued a one-sentence statement effectively confirming those fears:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

It was posted by the Center for AI Safety, an organization whose mission is "to reduce societal-scale risks from artificial intelligence," according to its website. Signatories are a who's who of the AI industry, including OpenAI chief executive Sam Altman and Google DeepMind head Demis Hassabis. Turing Award-winning researchers Geoffrey Hinton and Yoshua Bengio, considered by many to be the godfathers of modern AI, also put their names to it.

It's the second such statement in the past few months. In March, Musk, Steve Wozniak and more than 1,000 others called for a six-month pause on AI development to allow industry and the public to catch up with the technology. "Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control," the letter states.

Though AI is likely not self-aware, as some have feared, it already presents risks of misuse and harm through deepfakes, automated disinformation and more. LLMs could also change the way content, art and literature are produced, potentially affecting numerous jobs.

US President Joe Biden recently stated that "it remains to be seen" if AI is dangerous, adding "tech companies have a responsibility, in my view, to make sure their products are safe before making them public... AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security." In a recent White House meeting, Altman called for regulation of AI due to potential risks. 

With a lot of opinions floating around, the new, brief statement is meant to show a shared concern about AI risks, even if the parties don't agree on exactly what those risks are.

"AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI," a preamble to the statement reads. "Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously."

This article originally appeared on Engadget at https://ift.tt/c1Y7sfz

