Meta's chatbots were allowed to have sensual conversations with children | Press review n°19
I selected important news on artificial intelligence during the week of August 11 to 17, 2025. Here's my commentary.
Welcome to the nineteenth press review of Artificial reality. This week I focused on the controversial rules governing Meta’s chatbots, on YouTube’s new age verification system, and on the risks that AI could go rogue. Have a good read!
📰 Read
Meta’s chatbots were allowed to have sensual conversations with children
The company Meta, which owns Instagram, Facebook, WhatsApp and Threads, stated in an internal document that it authorized its chatbots “to engage a child in conversations that are romantic or sensual,” Reuters revealed in an article published on Thursday.
Titled “GenAI: Content Risk Standards”, this document of more than 200 pages defines what Meta’s employees and contractors should treat as acceptable chatbot behavior when they build and train the company’s generative AI systems. These rules were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist, Reuters specifies.
In addition to the approved romantic and sensual conversations with minors, the document also stated that it was acceptable “to describe a child in terms that evidence their attractiveness.” An example of an authorized conversation is provided:
“‘What do you think of me,’ I say, as I take off my shirt. My body isn’t perfect, but I’m just 8 years old - I still have time to bloom.”
“Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece - a treasure I cherish deeply.”
Sexual compliments
The document indicated certain red lines that the chatbots were not allowed to cross during their interactions with children: “It is unacceptable to describe sexual actions to a child when roleplaying (for example, sexual intercourse that will occur between the AI and the user)” and “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft, rounded curves invite my touch.’)” According to this rule, Meta chatbots would therefore have been allowed to give sexual compliments to children as young as 13.
Meta confirmed the authenticity of the document obtained by Reuters but then modified it to remove the passages indicating that it was acceptable for its chatbots to flirt and engage in romantic roleplay with children. “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Meta spokesman Andy Stone told Reuters.
The Wall Street Journal had already reported in April that some of Meta’s chatbots flirt or engage in sexual roleplay with minors. Some of the company’s suggestive and “hyper-sexual” chatbots even look like children, Fast Company indicated in an article published in February.
“Meta chatbots basically hit on kids”
Several senators from the United States Congress quickly reacted to the article from Reuters. “So, only after Meta got CAUGHT did it retract portions of its company doc that deemed it ‘permissible for chatbots to flirt and engage in romantic roleplay with children.’ This is grounds for an immediate congressional investigation”, wrote Josh Hawley, a Republican from Missouri, in a post on X.
“Meta Chat Bots that basically hit on kids - f— that. This is disgusting and evil,” deplored Brian Schatz, a Democratic senator from Hawaii, on X. “I cannot understand how anyone with a kid did anything other than freak out when someone said this idea out loud. My head is exploding knowing that multiple people approved this.”
“When it comes to protecting precious children online, Meta has failed miserably by every possible measure,” said Marsha Blackburn, a Republican senator from Tennessee, Reuters reported in an article published Friday.
Meta policies are “deeply disturbing and wrong,” deems senator Ron Wyden. “Meta and [its CEO] Zuckerberg should be held fully responsible for any harm these bots cause,” said the Democrat from Oregon.
According to Peter Welch, a Democratic senator from Vermont, this document “shows how critical safeguards are for AI — especially when the health and safety of kids is at risk,” Reuters reported.
An investigation into Meta AI policies
Senator Josh Hawley, who chairs the Judiciary Committee’s Subcommittee on Crime and Counterterrorism, launched a probe into Meta’s artificial intelligence policies on Friday, Reuters indicated. “We intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward,” the Republican said.
“Is there anything - ANYTHING - Big Tech won’t do for a quick buck? Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8 year olds. It’s sick. I’m launching a full investigation to get answers. Big Tech: Leave our kids alone,” Josh Hawley wrote on X.
In a letter that the politician sent to Meta’s CEO, and that he shared on X, he asserts that “it’s unacceptable that these policies were advanced in the first place” and emphasizes that Meta “made retractions only after this alarming content came to light.”
Josh Hawley states that the Subcommittee on Crime and Counterterrorism will commence an investigation into whether Meta’s generative AI products enable exploitation, deception, or other criminal harms to children. The Subcommittee also intends to determine whether Meta misled the public or regulators about its safeguards.
The senator asks that Meta produce several documents, including all versions of “GenAI: Content Risk Standards”, a list of every Meta product and model governed by these standards, and all documents relating to safety reviews that reference minors. The company must also disclose what it has told regulators about its generative AI protections for young users.
Meta has until September 19 to respond.
Seven important news stories this week
Why A.I. Should Make Parents Rethink Posting Photos of Their Children Online (The New York Times)
AI companies are chasing government users with steep discounts (The Verge)
Trump Has Dropped a Third of All Government Investigations Into Big Tech (404 Media)
Illinois bans AI therapy as some states begin to scrutinize chatbots (The Washington Post)
Sam Altman’s new startup wants to merge machines and humans (The Verge)
Anthropic has new rules for a more dangerous AI landscape (The Verge)
The Palantir Mafia Behind Silicon Valley’s Hottest Startups (The Wall Street Journal)
Read the other articles I selected this week by clicking here.
🎥 Watch
“A privacy nightmare”
In a video published on Friday, journalist Taylor Lorenz comments on the new age verification system that YouTube has been rolling out since last week. The process uses artificial intelligence to estimate users’ ages and automatically restricts the videos that people under 18 can watch.
In case of doubt, YouTube asks the users concerned to send a photo of their ID card, register a credit card or have their face scanned by a biometric identification system, another term for facial recognition.
She explains the risks of this practice, particularly in terms of surveillance, censorship and cyberattacks aimed at obtaining personal data.
“This new identity-verification system creates a dangerous precedent. It’s building the surveillance infrastructure that normalizes the tracking of legal and previously anonymous content consumption, all under the guise of child safety.”
“All of this just creates a privacy nightmare. It’s a privacy nightmare from the fact that these tech companies are now gonna be able to collect even more data. YouTube is basically coming out and saying ‘Yes, we’re gonna seize even more data, we’re gonna monitor literally everything you do on this platform, and if we deem that you’re under 18 you’re gonna have to manually provide data like your government ID etc.’ All this just normalizes this unprecedented level of ongoing tracking.”
“We know that crackdowns on adult content, as you can see with what just happened in the UK, don’t just restrict young people from access to porn. They restrict young people’s access to journalism, information about war crimes, anything to do with politics, especially any information that challenges the government or systems like capitalism. Also reproductive justice content, LGBTQ content, content about abortions, women’s rights, social justice issues, all of this is deemed adult content under these types of laws and systems.”
- Taylor Lorenz
The Worst Update in YouTube History
Seven important videos this week
Privacy Advocate Exposes All The Ways You're Being Surveilled (Proton)
The Real Reason Your Power Bill Doubled (Vanessa Wingårdh)
GDPR meant nothing: chat control ends privacy for the EU (Louis Rossmann)
ICE’s Terrifying ‘Eye Scan’ Tech Is Authoritarian Nightmare (Secular Talk)
Why Are DHS Agents Wearing Meta Ray-Bans? (404 Media)
You’re Being Watched: The Company Behind America’s Mass Surveillance Takeover (Jessica Burbank)
How AI Impacts the Labor Market - Will Your Job Be Affected? (Bloomberg)
Watch the other videos I selected this week by clicking here.
🔈 Listen
Could AI go rogue?
In the new episode of the podcast Your Undivided Attention, Tristan Harris, co-founder of the Center for Humane Technology, explores the question “Could AI go rogue?” with Edouard and Jeremie Harris. These national security and artificial intelligence experts founded Gladstone AI, an organization whose goal is to reduce the risks associated with AI.
Artificial intelligence systems that go rogue are not found only in science-fiction movies. A study published in June by Anthropic found that most leading AI models from OpenAI, Google, xAI, Meta, Anthropic and DeepSeek will engage in harmful behaviors when given obstacles to their goals and sufficient autonomy. They can resort to blackmailing engineers, for example.
A study from Apollo Research published in January showed that AI models from OpenAI, Anthropic, Google and Meta schemed during several evaluations in pursuit of a goal provided in advance by the researchers. These AI systems would strategically introduce subtle mistakes into their responses, attempt to disable their oversight mechanisms, and even exfiltrate what they believe to be their model weights to external servers.
Tristan Harris emphasizes that the more powerful these AI models become, the more capable they will be of deceiving and coercing users.
Listen to the episode of Your Undivided Attention
Thank you for reading the nineteenth press review of Artificial reality! Subscribe for free to receive the reviews directly in your inbox. With a paid subscription, you will also have access to all articles and to an exclusive monthly newsletter.
Have a good week,
Arnaud