ChatGPT is three years old and full of dark sides | Press review n°32
The famous chatbot was launched on November 30, 2022. It comes with vague promises and concrete drawbacks. Here is a selection of news about AI from November 17 to 30, 2025.
Welcome to the 32nd press review of Artificial Reality. This issue is devoted to the third anniversary of ChatGPT, AI-fueled online scams, and Peter Thiel’s obsession with the Antichrist. Have a good read!
📰 Read
ChatGPT is three years old and full of dark sides
On November 30, 2022, a startup still unknown to the general public called OpenAI released ChatGPT. Three years later, the chatbot is used by 800 million people each week. In an article published Saturday, Swiss newspaper Le Temps looks at the uses and the dark sides of this conversational agent, then sketches the uncertain future of artificial intelligence.
Why is ChatGPT so successful? A study published in September by OpenAI indicates that most conversations with the chatbot are personal, chiefly seeking practical guidance, reports the Washington Post. Writing assistance and information seeking are also among the main uses. By June 2025, only 27% of chats were work-related, the survey notes.
Another study by the Washington Post shows that the chatbot is primarily used for information seeking and for “musings and abstract discussions,” for example about the nature of reality.
In short, ChatGPT has found a place in many people’s lives as a digital companion with whom they can talk about anything and at any time. Users see the program as a virtual friend, a professional assistant, or even as a romantic partner.
“Still years of research”
Although these uses may have a legitimate purpose, we are still far from the revolutionary promises made by Sam Altman, OpenAI’s CEO. “Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot,” he wrote in September 2024.
“There is indeed a gap between the very ambitious promises of the industry giants and what AI can concretely deliver today,” acknowledges Aldo Podestà, director of Giotto AI, a company based in Lausanne, Switzerland. “Crossing the next step toward a more general form of intelligence will still require major breakthroughs and years of research.”
“We mostly see a strong momentum towards the ‘product’ side. OpenAI, for example, is currently focusing considerable effort on improving integration, features, and user experience,” adds Podestà. “That creates an impression of rapid movement. From a strictly scientific standpoint, progress is becoming more linear than truly disruptive.”
“Few major innovations”
Users of AI services can feel overwhelmed by the flood of launches presented as innovations, but journalists at Le Temps advise taking a step back.
“I get the impression that many of these are just gadgets or new packaging for existing products, with few major innovations,” says Rachid Guerraoui, professor at the Faculty of Computer Science and Communications, EPFL. “This frenzy of ‘new releases’ is partly explained by the colossal AI investments that come with a pressure for results. Yet, when those results arrive, they rarely translate into a solid return on investment: AI isn’t paying off much yet (except for hardware manufacturers), which fuels a lot of agitation to calm the investors.”
“The other factor driving this frenzy is fierce competition among the tech giants, plus a handful of smaller players trying to carve out a niche, or get acquired,” Guerraoui analyses. “So there’s a lot of motion, but it’s often just simple variations of existing products. Some liken this agitation to a swan song foretelling the burst of the AI bubble.”
“Nevertheless, I remain hopeful that beyond writing texts, e‑mails or presentations, beyond searching for information on the web (which is not always reliable), as well as generating relatively simple code (not always reliable either), we will witness genuine scientific discoveries driven by AI, which is terrifyingly effective at exploring new avenues. AI can bring a lot to individuals, but not necessarily directly,” the professor adds.
Dark sides
While the concrete benefits of generative AI struggle to materialize, its negative consequences are becoming increasingly visible, Le Temps notes. The web has changed dramatically since ChatGPT’s launch, followed by other AI programs. While browsing the web, we are now almost constantly confronted with artificial images, videos, voices, music, and text.
This content is sometimes low‑quality and easy to spot as fake, but it is becoming ever harder to distinguish from genuine photos, videos, or music. AI blurs the line between reality and artifice, truth and illusion, honesty and deception.
Also read: OpenAI’s new social media is a universe of illusions | Press review n°26
The development of AI does not only have negative impacts in cyberspace. It also drives the construction of many data centers worldwide, for a total cost in the hundreds of billions of dollars. Their massive energy consumption tends to accelerate the building of new nuclear power plants. They also require huge amounts of water for cooling, sometimes jeopardizing water access for nearby communities.
As for automation enabled by AI, it threatens to replace millions of workers worldwide.
Amazon To Kill 600,000 Jobs And Use AI Instead (Secular Talk)
Among the dark sides, Le Temps does not mention the massive collection of personal data used to train AI systems, including conversations with chatbots as well as photos and videos posted on social media.
On this subject: Meta will train its AI on our posts | Press review n°2
This problem also extends to books, news articles, images, photos, videos, and messages published online, which many tech companies steal without compensating the authors.
The downsides do not stop there. Many AI companies exploit underpaid workers to annotate the data used to train their models, sell their software to surveillance firms, and sign contracts with weapons manufacturers.
Also read: The militarization of Silicon Valley | Press review n°18
OpenAI has also been accused of pushing a teenager to suicide. More generally, artificial intelligence facilitates cyberattacks, fraud, and scams.
A vague general AI
Before launching ChatGPT, OpenAI had already set the goal of achieving artificial general intelligence (AGI), a system that would be generally “smarter than humans”. The company hoped it could create such a program in the following years. A thousand days after ChatGPT’s debut, the definition of AGI remains vague and its realization uncertain.
For Thibault Prévost, author of The Prophets of AI. Why Silicon Valley sells the Apocalypse (Les Prophètes de l’IA. Pourquoi la Silicon Valley nous vend l’apocalypse), the lack of a precise definition of AGI is no accident. “Companies have every incentive to maintain this epistemic uncertainty because it lets them constantly redefine AGI and thus shift their goals to match investor expectations as original promises fall short,” comments the journalist.
While awaiting this hypothetical general AI that, according to Sam Altman, would “elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge,” the big tech firms cause numerous problems in the real world and in cyberspace. We’ll see where things stand at the next ChatGPT anniversary and whether we are truly moving closer to an AI that “benefits all of humanity.” I doubt it.
Seven important news stories
OpenAI denies liability in teen suicide lawsuit, cites ‘misuse’ of ChatGPT (The Verge)
Meet the AI workers who tell their friends and family to stay away from AI (The Guardian)
The Most Joyless Tech Revolution Ever: AI Is Making Us Rich and Unhappy (The Wall Street Journal)
European Commission accused of ‘massive rollback’ of digital protections (The Guardian)
How big tech is creating its own friendly media bubble to ‘win the narrative battle online’ (The Guardian)
Lost in the slop layer (Blood in the Machine)
The Privacy Battle in Our Brains (The New York Times)
🎥 Watch
AI fuels online scams
Wired published a video addressing the growing issue of online fraud enabled by artificial intelligence and other techniques. They do a deep dive into shopping scams and give tips on how to avoid being ripped off. This video dropped at the right time for the holiday season.
How Online Scammers Use AI To Steal Your Money (Wired)
Six important videos
What AI companies don’t want you to know (Future of Life Institute)
ChatGPT is Turning Everyone Into Bots (Vanessa Wingårdh)
Ex OpenAI Researcher: Total Job Loss Imminent (Breaking Points)
Sale of this AI toy suspended over dangerous messages to kids (ABC News)
MAGA Govs Revolt Over Trump Ban On AI Regulation (Breaking Points)
Nvidia Earnings: Why Record Sales Show AI Boom Isn’t Letting Up (The Wall Street Journal)
🔈 Listen
An obsession for the Antichrist
In a recent episode of Tech Won’t Save Us, journalists Paris Marx of Disconnect and Gil Duran of The Nerd Reich talk about Peter Thiel’s obsession with the Antichrist.
Peter Thiel is the co-founder of PayPal, the founder of data analysis company Palantir, an early investor in Facebook, and one of the world’s most influential tech investors.
Listen to Tech Won’t Save Us on YouTube Music
Thank you for reading the 32nd press review of Artificial Reality. If you don’t have a paid subscription yet, you can support my work by clicking the button below, thanks!
See you in two weeks,
Arnaud