
OpenAI pulls ChatGPT feature that let user chats appear in Google Search results

[Image: iPhone with ChatGPT app icon on display]

Thousands of private ChatGPT conversations have been appearing in Google search results because of the chatbot's "Share" feature, which OpenAI has since removed following a backlash.

Fast Company reported this week that ChatGPT users may have inadvertently made their conversations with the AI chatbot public and searchable. The report found nearly 4,500 ChatGPT conversations in Google search results, some of them about mental health struggles, relationships, and other personal and sensitive topics. Fortunately, the indexed conversations did not identify the users behind them.

How did these conversations end up on the web?

Until recently, ChatGPT users could share chats with friends, family, or coworkers by making them public. The feature worked much like the sharing settings on a Google Doc: users received a public link to the chat that they could send to others. An additional checkbox let users make the chat "discoverable," specifically by Google, whether they realized it or not.

When users created a shareable link to one of their conversations, a pop-up appeared that read: "A public link to your chat has been created." Beneath it was a checkbox labeled "Make this chat discoverable," followed by a fine-print warning: "Allows it to be shown in web searches."

Checking that box allowed the shared conversation to be indexed by Google, meaning Google's web crawlers could find the page and make it eligible to appear in search results.
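For context on the mechanism (this is a general illustration, not a description of OpenAI's actual implementation): whether a public page can appear in search results typically comes down to standard crawler signals such as a robots meta tag in the HTML or an X-Robots-Tag response header. Below is a minimal Python sketch that checks a hypothetical shared-chat URL for those signals.

```python
# Illustrative only: inspect a public page for the standard "noindex" signals
# that search crawlers respect. The URL is a placeholder, not a real shared chat.
import requests

def indexing_signals(url: str) -> dict:
    """Fetch a page and report whether it carries common no-indexing directives."""
    resp = requests.get(url, timeout=10)
    html = resp.text.lower()

    return {
        # X-Robots-Tag header, e.g. "noindex, nofollow"
        "x_robots_tag": resp.headers.get("X-Robots-Tag", "(not set)"),
        # Crude check for <meta name="robots" content="noindex"> in the page body
        "meta_robots_noindex": '<meta name="robots"' in html and "noindex" in html,
    }

if __name__ == "__main__":
    print(indexing_signals("https://example.com/share/hypothetical-chat-id"))
```

If neither signal is present and the page is publicly linked, a crawler that discovers the URL is free to index it, which is how opted-in shared chats could surface in search results.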

After Fast Company published its report, OpenAI removed the feature, with one company leader calling it a "short-lived experiment."

OpenAI Chief Information Security Officer Dane Stuckey explained on X how the feature worked — and where it ultimately went wrong.

Even though ChatGPT users had to opt in for their chats to become public, the company decided the potential for user error was simply too high.

As Mashable has reported previously, OpenAI is required to save user conversations — even conversations users have actively deleted — because of an ongoing lawsuit from the New York Times. As part of this suit, OpenAI must retain all conversations indefinitely. (This does not apply to ChatGPT Enterprise or ChatGPT Edu customers, according to OpenAI.)

So, while ChatGPT users can toggle on a "Temporary Chat" feature that works much like a web browser's incognito mode, their chat data may still be retained.


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.



