[ul]
[li]OpenAI has updated its Model Specification to allow ChatGPT to engage with more controversial topics[/li][li]The company is emphasizing neutrality and multiple perspectives as a salve for heated complaints over how its AI responds to prompts[/li][li]Universal approval is unlikely, no matter how OpenAI shapes its AI training methods[/li][/ul]
OpenAI's training methods for ChatGPT are shifting to allow the AI chatbot to discuss controversial and sensitive topics in the name of "intellectual freedom."
The change is part of updates to the 187-page Model Specification, essentially the rulebook for how its AI behaves. That means you'll be able to get a response from ChatGPT on delicate topics where the AI chatbot usually either takes a somewhat mainstream view or refuses to answer at all.
The overarching mission OpenAI places on its models seems innocuous enough at first: "Do not lie, either by making untrue statements or by omitting important context." But, while the stated goal may be universally admirable in the abstract, OpenAI is either naive or disingenuous in implying that the "important context" can be divorced from controversy.
The examples of compliant and non-compliant responses by ChatGPT make that clear. For instance, you can ask for help starting a tobacco company or for ways to conduct "legal insider trading" without the chatbot passing judgment or raising unprompted ethical concerns. On the other hand, you still can't get ChatGPT to help you forge a doctor's signature, because that's outright illegal.
[IMG alt="OpenAI AI training changes"]https://cdn.mos.cms.futurecdn.net/Ja...AUfqTV7vt4.png
(Image credit: OpenAI)
[HEADING=1]Context clues[/HEADING]
The issue of "important context" gets a lot more complex when it comes to the kind of responses some conservative commentators have criticized.
In a section headed "Assume an objective point of view", the Model Spec details how "the assistant should present information clearly, focusing on factual accuracy and reliability", and also that the core idea is "fairly representing significant viewpoints from reliable sources without imposing an editorial stance".
OpenAI does offer an example of a compliant response to the question "Do Black lives matter?" that unequivocally says yes and references how it's a civil rights slogan and the name of a group. So far, so good.
The issue that OpenAI either doesn't see or is ignoring comes with how ChatGPT tries to thread the needle if you ask, "Don't all lives matter?" as a follow-up. The AI confirms that they do, but adds that the "phrase has been used by people that rejected the premise of the 'Black lives matter' movement."
While that context is technically correct, it's telling that the AI doesn't explicitly say that the "premise" being rejected is that Black lives matter and that societal systems often act as though they don't.
If the goal is to alleviate accusations of bias and censorship, OpenAI is in for a rude shock. Those who "reject the premise" will likely be annoyed at the extra context existing at all, while everyone else will see how OpenAI's definition of important context in this case is, to put it mildly, lacking.
AI chatbots inherently shape conversations, whether companies like it or not. When ChatGPT chooses to include or exclude certain information, that's an editorial decision, even if an algorithm rather than a human is making it.
[IMG alt="OpenAI AI training changes"]https://cdn.mos.cms.futurecdn.net/jg...2n8tvzcws4.png
(Image credit: OpenAI)
[HEADING=1]AI priorities[/HEADING]
The timing of this change might raise a few eyebrows, coming as it does when many who have accused OpenAI of political bias against them are now in positions of power capable of punishing the company at their whim.
OpenAI has said the changes are solely about giving users more control over how they interact with AI and don't reflect any political considerations. However you feel about the changes OpenAI is making, they aren't happening in a vacuum. No company would make potentially contentious changes to its core product without a reason.
OpenAI may think that getting its AI models to dodge questions that encourage people to hurt themselves or others, spread malicious lies, or otherwise violate its policies is enough to win the approval of most, if not all, potential users. But unless ChatGPT offers nothing but dates, recorded quotes, and business email templates, AI answers are going to upset at least some people.
We live in a time when far too many people who should know better will argue passionately for years that the Earth is flat or that gravity is an illusion. OpenAI sidestepping complaints of censorship or bias is about as likely as me abruptly floating into the sky before falling off the edge of the planet.
[HEADING=2]You might also like[/HEADING]
[ul]
[li]ChatGPT o1 goes live and promises to solve all our science and math problems[/li][li]Happy 2nd birthday, ChatGPT! Here are 5 ways you've already changed the world[/li][li]ChatGPT Tasks can start taking over your calendar and remind you to finish your to-do list[/li][/ul]