OpenAI's new social media is a universe of illusions | Press review n°26
I selected important news on artificial intelligence during the week of September 29 to October 5, 2025.
Welcome to the twenty-sixth press review of Artificial reality. This week I focused on the new social media Sora, on AI slop, and on the growing divide between Big Tech and MAGA. Have a good read!
📰 Read
OpenAI’s new social media is a universe of illusions
OpenAI, the California company behind ChatGPT, has launched a new social network called Sora. Based on the video generator Sora 2, the app “makes disinformation extremely easy and extremely real,” warns the New York Times.
Sora lets users generate entirely artificial ten-second videos, audio included, from text prompts. They can also have their face and voice analyzed by the app in order to create a character in their likeness.
The interface resembles TikTok, with an endless stream of videos to scroll through by swiping the screen from bottom to top. It’s also possible to follow other accounts and interact with them.
Sora is currently available on an invite-only basis, exclusively on iPhone, and solely in the United States and Canada. It will be rolled out to the rest of the world in the future.
Sound on for Sora 2 (OpenAI)
Numerous risks
This new social network lets users generate playful videos featuring cartoon or video game characters, for example. But it can also create photorealistic videos that are virtually indistinguishable from reality. A technological feat that is not without risks.
The New York Times journalists give several problematic examples. Within a few days, Sora users created videos of ballot fraud, political gatherings, immigration arrests, protests, crimes, and attacks on city streets.
According to them, Sora—just like other similar programs such as Meta’s Vibes or Google’s Veo 3—could become “increasingly fertile breeding grounds for disinformation and abuse.” That’s because it is now easy to generate extremely realistic videos that many people will take to be genuine.
The list of dangers is long: exacerbating conflicts, defrauding consumers, swinging elections, framing people for crimes they did not commit, and the psychological toll of videos showing violence.
“It’s worrisome for consumers who every day are being exposed to God knows how many of these pieces of content. I worry about it for our democracy,” comments Hany Farid, a professor of computer science at the University of California, Berkeley, and a co-founder of GetReal Security.
Artificial intelligence systems that generate videos also let users create pornographic content, including deepfake porn. They will probably also make it possible to create monstrous and disturbing videos, as some image generators already do.
“Important concerns”
OpenAI assures that it launched Sora after extensive safety testing. “Sora 2’s ability to generate hyperrealistic video and audio raises important concerns around likeness, misuse and deception,” recognizes the company.
“Our usage policies prohibit misleading others through impersonation, scams or fraud, and we take action when we detect misuse,” the startup adds.
In tests done by The New York Times, the app refused to generate videos of famous people who had not given their permission and declined prompts asking for graphic violence. However, it created videos showing store robberies, home intrusions captured on doorbell cameras, bombs exploding on city streets, and other fake images of war.
Loss of trust
Until now, video has served as relatively reliable evidence that events actually took place. As generating artificial videos becomes increasingly accessible, the public risks losing all confidence in what they see, according to experts interviewed by the New York newspaper.
Videos were “somewhat hard to fake, and now that final bastion is dying,” said Lucas Hansen, a founder of CivAI, a nonprofit that studies the abilities and dangers of artificial intelligence. “There is almost no digital content that can be used to prove that anything in particular happened.”
The risk is in fact twofold: videos of events that really happened can now be dismissed as fake simply by claiming they were generated by AI.
Biometric data
The list of issues doesn’t stop there: by scanning their face to create a character in their image, users voluntarily hand over highly sensitive biometric data to OpenAI. It’s very likely that the vast majority of them never read the terms of use to find out which rights to this personal information they are ceding to the U.S. company.
Also read: Facial recognition is gaining ground | Press review n°20
Furthermore, generating artificial videos consumes a lot of energy, can infringe copyright, can be addictive, risks replacing many jobs in the film industry, and can be used for bullying.
We are entering a digital era full of illusions, where it will often be impossible to distinguish the real from the artificial, where widespread doubt will set in, and where truth won’t carry much weight anymore because AI will always make it possible to call it into question. This erosion of trust will probably further weaken the cohesion of our societies.
Seven important news items this week
ICE Wants to Build Out a 24/7 Social Media Surveillance Team (Wired)
ICE to Buy Tool that Tracks Locations of Hundreds of Millions of Phones Every Day (404 Media)
Amazon’s Ring plans to scan everyone’s face at the door (The Washington Post)
I broke ChatGPT’s new parental controls in minutes. Kids are still at risk. (The Washington Post)
Meta will soon use your AI chats to personalize your feeds (The Verge)
Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out (Wired)
This Startup Wants to Put Its Brain-Computer Interface in the Apple Vision Pro (Wired)
🎥 Watch
AI Slop Apocalypse
In this video, Taylor Lorenz talks with Drew Harwell from The Washington Post about AI slop, the low-quality media churned out with artificial intelligence tools such as Sora.
They point out that the amount of slop online is growing, then they explain who is behind this internet phenomenon and who profits from it.
The AI Slop Apocalypse: How The Slop Economy Took Over (Taylor Lorenz)
Seven important videos this week
Joseph Gordon-Levitt: Meta’s A.I. Chatbot Is Dangerous for Kids (The New York Times)
ICE’s Alarming Plan Exposed As Mass Smartphone Surveillance Tool Uncovered (The Damage Report)
The Real Reason Trump and Big Tech Want AI in Our Schools (More Perfect Union)
Microsoft Worker Exposes Israel After Cloud AI Shutdown (Katie Halper)
The AI Apocalypse Is Coming | The Kyle Kulinski Show (Secular Talk)
Data Centers Pillage Electricity For AI Video Slop (Breaking Points)
AI is Not your friend (Louis Rossmann)
🔈 Listen
The growing divide between Big Tech and MAGA
In the new episode of Tech Won’t Save Us, Paris Marx interviews Tina Nguyen, a senior reporter at The Verge and author of The MAGA Diaries: My Surreal Adventures Inside the Right-Wing (And How I Got Out).
They talk about the divisions between the Trump administration, the wider MAGA movement, and certain tech CEOs.
Listen to Tech Won’t Save Us on YouTube Music
Thank you for reading the twenty-sixth press review of Artificial reality!
Have a good week,
Arnaud