International News, 2025

New Sora AI Video Generator Raises Ethical Questions

Lindsay Gomez-Lopez

Staff Writer


As companies like Meta, Google, and, most notably, OpenAI continue to release more advanced artificial intelligence tools to the general public, the sky has become the limit when it comes to creating AI media. When mainstream generative AI programs gained the ability to create images, the main concern was taking the humanity out of art; now the concern is taking the humanity out of humanity itself. AI is quickly becoming part of everyday life, and it is important to be aware of the consequences it will have on how people engage with the world around them. This is especially true as videos, once the simplest way of proving something, have become the newest form of deception.

Able to generate videos from short prompts in about a minute, OpenAI’s Sora 2 app quickly found success following its release this past September, reaching one million downloads in under five days, according to Time Magazine. Despite the technology’s seemingly endless capabilities, the platform still laid down some ground rules: videos carried a clear Sora 2 logo, consent was required to make clips of living people, and prompts that were intentionally fraudulent, sexual, or violent would not be generated.

The Sora 2 app ruffled a lot of feathers by allowing its users to put the personas of deceased celebrities and copyrighted media into their videos. Sensing a legal mess, OpenAI began doing damage control. According to The New York Times, copyright holders will be given “control over generation of characters and a path to making money from the service.” Additionally, Newsweek reports that a new policy lets the estates and representatives of historical figures opt out of the app. That policy was applied at the request of Dr. Martin Luther King Jr.’s estate after AI videos mocking King went viral online. Yet the damage has already been done. The release of Sora has resulted in tasteless videos circulating of figures such as comedian Robin Williams, whose daughter criticized Sora 2 for condensing legacies into “slop puppeteering,” according to Business Insider.

Beyond the harm it does to businesses and to grieving families, there are also concerns about the technology’s potential impact on everyday life. Though a video of a deceased figure saying “six seven” is clearly meant as an unbelievable joke, there is the possibility that people will take a ‘leaked’ clip of a well-known person saying something controversial at face value without further fact-checking. The New York Times raises concerns about how even media allowed within the regulations could be used for fraud, such as fake security camera footage and news broadcasts. There is also the issue of OpenAI’s safety measures being bypassed: websites are dedicated to removing the Sora 2 logo, and users keep slipping past the content restrictions while the company struggles to stop them. To a certain degree, it is as though the creation has outgrown its creator and can no longer be properly contained. This is only the tip of the iceberg. NPR notes that attempts to control AI at this point may be impossible; social media sites can try to ban it, but if artificial intelligence continues to become more realistic, it will fly under the radar no matter how hard companies try to screen for it.

As AI becomes more popular, more businesses will throw their hats in the ring and try to provide a service more advanced than the rest, which will inevitably lead to an era where the company with the fewest restrictions is the most popular. The problem is that generative AI is evolving at a rapid pace while legal regulations struggle to be passed, let alone enforced. In a world where seeing used to mean believing, a person scrolling on social media cannot be blamed for failing to analyze every video for glitches or abnormalities, so it is inevitable that attempts at misinformation will succeed. In a world where the result of a simple prompt can convince people that lies are reality, the truth begins to lose its value, and AI creates a future where trust is a sign of naivety.

Image courtesy of Getty Images.
