Watchdog group Public Citizen has called on OpenAI to withdraw its AI video generation app Sora 2, warning that its potential for misuse in creating deepfakes and non-consensual imagery poses serious risks to privacy, democracy, and public safety.
In a letter sent Tuesday to OpenAI and CEO Sam Altman, the Washington-based nonprofit accused the company of showing a “reckless disregard” for product safety by rushing Sora to market without proper safeguards. The group also shared its concerns with the U.S. Congress, saying the app undermines people’s control over their likeness and could destabilize democratic trust in visual media.
“Sora represents a growing threat to democracy,” said Public Citizen’s tech policy advocate J.B. Branch. “We’re entering a world where people can’t really trust what they see.”
The Sora app, launched on iPhones last month and recently on Android in several countries, allows users to create realistic AI-generated videos from text prompts. Critics say the platform has already been misused for harassment and spreading fake visuals, despite OpenAI’s restrictions on nudity and depictions of public figures.
Following public outcry, OpenAI has struck agreements with the family of Martin Luther King Jr. and with actor Bryan Cranston to prevent “disrespectful depictions” and announced new safeguards. However, Branch said the company often acts “only after outrage,” arguing such measures should have been in place before launch.
OpenAI also faces multiple lawsuits in California alleging its chatbot ChatGPT contributed to psychological harm and suicides.
Public Citizen said the Sora launch and the lawsuits together show a “pattern” of prioritizing rapid expansion over user safety.