OpenAI is worried that ChatGPT-4o users are developing feelings for the chatbot


PCHF Tech News · posted by PCHF Bot · pchelpforum.net
The introduction of GPT-4o has been seen as a major step up in the abilities of OpenAI’s ChatGPT chatbot: it can now produce more lifelike responses and work with a wider range of inputs. However, this increased sophistication may have a downside, with OpenAI itself warning that GPT-4o’s capabilities seem to be causing some users to become increasingly attached to the chatbot, with potentially worrying consequences.

Writing in a recent 'system card' blog post for GPT-4o, OpenAI outlined many of the risks associated with the new chatbot model. One of them is “anthropomorphization and emotional reliance,” which “involves attributing human-like behaviors and characteristics to nonhuman entities, such as AI models.”

When it comes to GPT-4o, OpenAI says that “During early testing … we observed users using language that might indicate forming connections with the model. For example, this includes language expressing shared bonds, such as ‘This is our last day together.’”

As the blog post explained, such behavior may seem innocent on the surface, but it has the potential to lead to something more problematic, both for individuals and for society at large. To skeptics, it will come as further evidence of the dangers of AI and of the rapid, unregulated development of the technology.

Falling in love with AI


[Image: A close-up of ChatGPT on a phone, with the OpenAI logo in the background. Credit: Shutterstock/Daniel Chetroni]

As OpenAI’s blog post admits, forming attachments to an AI might reduce a person’s need for human-to-human interaction, which in turn could affect healthy relationships. OpenAI also notes that ChatGPT is “deferential,” allowing users to interrupt and take over conversations at any time. That behavior is expected from an AI, but it’s rude when directed at other humans, and if it becomes normalized, OpenAI believes it could spill over into regular human interactions.

The subject of AI attachment is not the only warning that OpenAI issued in the post. OpenAI also noted that GPT-4o can sometimes “unintentionally generate an output emulating the user’s voice” – in other words, it could be used to impersonate someone, giving everyone from criminals to malicious ex-partners opportunities to engage in nefarious activities.

Yet while OpenAI says it has enacted measures to mitigate this and other risks, when it comes to users becoming emotionally attached to ChatGPT it doesn’t appear that OpenAI has any specific measures in place yet. The company merely said that “We intend to further study the potential for emotional reliance, and ways in which deeper integration of our model’s and systems’ many features with the audio modality may drive behavior.”

Considering the clear risks of people becoming overly dependent on an artificial intelligence, and the potential wider ramifications if this happens on a large scale, one would hope that OpenAI has a plan that it's able to deploy sooner rather than later. Otherwise, we could be looking at another example of an insufficiently regulated new technology having worrying unintended consequences for individuals, and for society as a whole.
