
Anthropic wants to hire a weapons expert. It's not what you think.

Anthropic's AI, Claude

Many people first saw it on X: a most unusual, and unsettling, job posting. Some assumed it was a joke. Others were reminded of Cyberdyne Systems, the tech company in the Terminator franchise that accidentally invents Skynet.

But over on LinkedIn, where they speak a different language, Anthropic had merely posted a listing for a "Policy Manager, Chemical Weapons and High Yield Explosives." The job description added more detail.

"This role offers a unique opportunity to shape how AI systems handle sensitive chemical and explosives information," it read. "You'll work with leading AI safety researchers while tackling critical problems in preventing catastrophic misuse. If you're excited about using your expertise to ensure AI systems remain safe and beneficial, we want to hear from you."

Mashable reached out to Anthropic, and the company provided more context.

"Our usage policies prohibit the use of Anthropic products or services to develop or design weapons," a company spokesperson told us. "This role is for the Safeguards team which is responsible for preventing misuse of our models."

The spokesperson stressed that Anthropic explicitly prohibits its AI, or any of its technology, from being used for weapons creation. Instead, the New York-based manager will be tasked with building and enforcing safeguards to ensure weapons are not made with Anthropic's tech.

The company seeks to hire experts in sensitive fields who can ensure Anthropic's AI is kept from nefarious hands, the spokesperson said.

Anthropic recently found itself in a very public battle with the Department of War (a.k.a. the Department of Defense). The company says it's not budging in its demands that its AI not be used to build fully autonomous weapons or to establish mass surveillance on people.

Secretary of Defense Pete Hegseth responded to Anthropic's conditions by declaring the company a supply chain risk to America's national security, banning the Pentagon from using its tech after a six-month phase-out. The company then filed suit, according to a March 5 note from Anthropic CEO Dario Amodei.

Meanwhile, some in the Pentagon are reportedly finding it hard to abandon Claude, Anthropic's AI model.

Back in February, Anthropic announced an update to its AI safety policy, also known as its Responsible Scaling Policy. The company stated it was forced to rethink its safety policies — considered by some to be the strongest in the industry — due to several factors, including the federal government’s emphasis on economic growth over safety regulations.

Whoever ends up in that policy manager role, then, will find themselves at the center of an explosive debate, and, potentially, with a chance to help prevent a future Skynet threat.



from Mashable https://ift.tt/jhuZfGS
via IFTTT
