Chatbots are surveillance machines | Press review n°6
A selection of important news about artificial intelligence during the week of May 12 to 18, 2025.
Welcome to the sixth press review of Artificial reality. I publish a selection of the latest important developments in AI every week. Have a good read!
📰 Read
Chatbots are surveillance machines
The Verge published an article on Tuesday titled “AI therapy is a surveillance machine in a police state”. In the introduction, journalist Adi Robertson states that big technology companies want us to share our private thoughts with chatbots while they back a government that holds privacy in contempt.
At the beginning of her article, Adi Robertson comments on recent statements by Meta CEO Mark Zuckerberg, who believes that Instagram, Facebook, WhatsApp and Threads users will want a program that “gets to know them well” and will form friendships with artificial intelligence systems.
Mark Zuckerberg went even further: “For people who don’t have a person who’s a therapist, I think everyone will have an AI.” This is already true for some users, who ask ChatGPT or Grok for personal advice. In doing so, they share their secrets, vulnerabilities, emotions, doubts, fears, regrets, and physical and mental health, as well as stories about their personal relationships. In short, intimate information that they would not post on social media and might not even confide to their closest friends.
“This is starting to seem extraordinarily dangerous”, according to The Verge’s journalist, “for reasons that have little to do with what a chatbot is telling you, and everything to do with who else is peeking in.”
It has been proven for years that the American government spies on internet traffic and collects telephone records. This mass surveillance is now expanding under the Trump administration. And who was in the front row at the president’s inauguration? All the Big Tech CEOs, kissing the ring.
It is increasingly well known that our internet searches and social media posts can be requested by law enforcement for use in investigations. But it doesn’t stop there: our AI chat logs can also be obtained. Moreover, these conversations are not always encrypted and can be intercepted and then published, revealing potentially embarrassing or damaging information.
These risks don’t apply only to Meta’s chatbot. Edward Snowden, the former US intelligence contractor and whistleblower, accused OpenAI of a “calculated betrayal of the rights of every person on Earth”.
Seven important stories this week
Advocacy group threatens Meta with injunction over data-use for AI training (Reuters)
Republicans push for a decadelong ban on states regulating AI (The Verge)
Google Worried It Couldn’t Control How Israel Uses Project Nimbus, Files Reveal (The Intercept)
Why OpenAI Is Fueling the Arms Race It Once Warned Against (Bloomberg)
Apple to Support Brain-Implant Control of Its Devices (The Wall Street Journal)
Anthropic blames Claude AI for ‘embarrassing and unintentional mistake’ in legal filing (The Verge)
French army hopes for combat-ready robots by 2040 (The Straits Times)
Read the other articles of the week I have selected by clicking here.
🎥 Watch
Grok talks about “white genocide”
The chatbot from xAI, Elon Musk’s artificial intelligence company, posted strange replies for a few hours on Wednesday. Grok took the initiative to bring up the “white genocide” theory in South Africa, where the businessman is from, even when the questions it was asked concerned totally unrelated topics.
The conversational agent revealed that it had been instructed to address the topic of “white genocide” even though this conflicted with its design to provide truthful, evidence-based answers. The company then published a statement explaining that “an unauthorized modification was made to the Grok response bot's prompt on X”, which caused these answers.
A similar incident occurred in February, when Grok started to disregard any sources accusing Elon Musk or Donald Trump of spreading misinformation. xAI also blamed that problem on an unapproved update to the system prompt.
These two events clearly show that chatbots can be manipulated to censor or to spread information unsolicited, and thus to shape users’ opinions. What happened with Grok could be reproduced in the future and on a larger scale. This should remind us to apply critical thinking when interacting with these AI systems.
In this Breaking Points video published on Thursday, journalists Ryan Grim and Emily Jashinsky discuss this incident.
Seven important videos this week
Watch the other videos of the week I have selected by clicking here.
🔈 Listen
A discussion with Gary Marcus
In the latest episode of The Most Interesting Thing in A.I., The Atlantic CEO Nicholas Thompson talks with cognitive scientist and author Gary Marcus, who writes the Substack Marcus on AI.
In this stimulating conversation, they discuss scaling, neuro-symbolic AI and machine sentience.
Thank you for reading the sixth press review of Artificial reality! Have a good week.