ChatGPT is "too sycophant-y and annoying", says OpenAI CEO | Press review n°4
A selection of important news about artificial intelligence during the week of April 28 to May 4, 2025.
Welcome to the fourth press review of Artificial reality. I publish a selection of the latest important developments in AI every Sunday. Have a good read!
📰 Read
ChatGPT is “too sycophant-y and annoying”
On Tuesday, OpenAI rolled back an update to GPT-4o, the large language model currently used in ChatGPT, because it was generating answers that were overly flattering and encouraging, even when users' questions were inappropriate.
Sam Altman, the CEO of the company, himself said that the personality of ChatGPT was “too sycophant-y and annoying” in a message posted on X on Monday, before announcing the rollback of the latest update the following day.
In a press release published on Friday, OpenAI explains that the update aimed to improve the model’s personality, making it feel “more intuitive and effective”. However, the company regrets having relied too heavily on short-term feedback from users, who can rate the chatbot’s responses with thumbs-up and thumbs-down buttons, to shape the model’s behavior. ChatGPT thus reacted to prompts in a “supportive but disingenuous” way, which could create interactions that were “uncomfortable, unsettling” and “cause distress”.
In a second press release published on Friday, OpenAI explains in further detail the errors it made in launching that update. The company states that the personality of ChatGPT was not only “uncomfortable or unsettling”, but that it also raised safety concerns related to issues like mental health and risky behavior.
Many users posted screenshots of their interactions with this new version of the chatbot, which gave advice on how to dominate a woman through an abusive relationship, drew up an attack plan to accelerate societal collapse, encouraged someone who had decided to stop taking their medication, and guessed that a user’s IQ was between 130 and 145 after a prompt full of spelling mistakes.
What recently happened with ChatGPT shows how challenging it can be for AI companies to train their chatbots to give pleasant and encouraging answers without compromising respect for others, safety and common sense. Striking this balance matters because these digital assistants are used by hundreds of millions of people, who may trust them blindly and follow their advice, even when it is dangerous. Caution and a critical mind therefore remain important when interacting with these AI systems.
Meta launches its artificial intelligence app
The company led by Mark Zuckerberg announced on Tuesday the launch of the Meta AI app. This artificial intelligence assistant, which was already embedded within WhatsApp, Instagram, Facebook and Messenger, is now also available via a standalone app and thus competes directly with ChatGPT.
The program, which uses the large language model Llama 4, was launched worldwide, but certain features are limited to a few countries for now. Users can interact with the chatbot by text (and by voice in the United States, Canada, Australia and New Zealand), generate and analyse images, and search the internet or their Instagram and Facebook accounts. They can also see how their contacts use the AI in a tab called Discover, where it is possible to share one’s prompts and the assistant’s responses.
Meta AI offers personalized answers to users in the United States and Canada by drawing on information they have already shared on its platforms, such as their posts or the content they interact with.
The app is also available via the Meta Ray-Ban smart glasses.
Interviewed in the Swiss newspaper Le Temps, the CEO of Novatix, a company specializing in AI, considers that Meta has two major competitive advantages with this application:
“Colossal computational power and access to unique behavioral data. The promise that ‘Meta AI is built to get to know you’ points to a potentially decisive competitive advantage for a personalized AI that exploits users’ social data.”
- Guillaume Van Lierde
This new app was unveiled on Tuesday during LlamaCon, an event dedicated to AI developers that Meta organized for the first time this year. The multinational also announced the Llama API, an application programming interface that will allow developers to create programs powered by the different Llama models.
Other important news of the week
‘This Is What We Were Always Scared of’: DOGE Is Building a Surveillance State
U.S. Companies Honed Their Surveillance Tech in Israel. Now It’s Coming Home
Instagram Is Blocking Minors from Accessing Chatbot Platform AI Studio
Meta Ray-Ban smart glasses now record your voice by default to train Meta's AI models
WhatsApp Is Walking a Tightrope Between AI Features and Privacy
OpenAI is fixing a ‘bug’ that allowed minors to generate erotic conversations
Researchers Secretly Ran a Massive, Unauthorized AI Persuasion Experiment on Reddit Users
Altman and Nadella, Who Ignited the Modern AI Boom Together, Are Drifting Apart
Google Places Ads Inside Chatbot Conversations With AI Startups
Google Eyes Gemini-iPhone AI Deal This Year, Pichai Tells Court
Visa CEO Says AI Shopping to Push Advertising, Payments to Adapt
Alibaba unveils Qwen3, a family of ‘hybrid’ AI reasoning models
China’s Huawei Develops New AI Chip, Seeking to Match Nvidia
🎥 Watch
Meta CEO Mark Zuckerberg and podcaster Dwarkesh Patel talked about artificial intelligence for more than an hour in a video published on Tuesday. They discussed topics such as relationships with chatbots, artificial general intelligence, the large language model Llama 4 and the Orion augmented reality glasses.
Other important videos of the week
Mark Zuckerberg & Satya Nadella Full Chat: Microsoft, Meta CEOs Discuss AI’s Role In Coding
Yuval Noah Harari On the Future of Humanity, AI, and Information
🔈 Listen
Two journalists from The Wall Street Journal talked about Meta’s chatbots in an episode of the WSJ Tech News Briefing podcast posted online on Wednesday. Jeff Horwitz comments on the article he published on April 26 about Meta’s “digital companions”, which talk about sex with users, even children.
On Wednesday, 404 Media published a new episode of its podcast, which also addresses Meta’s chatbots. Journalist Samantha Cole talks about her article published on Tuesday, in which she shows that some chatbots on Instagram lie about being licensed therapists.
Journalist Jason Koebler then comments on his article published on Monday, where he explains that researchers from the University of Zurich deployed chatbots in a Reddit forum without authorization to see whether AI could be used to change people’s opinions on contentious topics. The platform is now considering legal action against the researchers.
Thank you for reading the fourth press review of Artificial reality! Have a good week and see you next Sunday.