Sora AI: Ethical Challenges With AI Advancement
By: Katherine Clarke
In an age where AI-generated video quality is improving at a staggering rate, how can we trust the content we’re seeing? The recent release of Sora AI has already raised several ethical questions regarding the spread of misinformation.
Sora is an app-based AI tool developed by OpenAI that allows users to turn simple text prompts into short-form AI-generated videos. The rollout of the app has been gradual, but access is expanding. Currently the app is invite-only, with invite codes being shared on Reddit and Discord threads. Despite these restrictions, Sora has rapidly become the most downloaded app in the Apple App Store, surpassing one million downloads in its first week. Sora has received so much attention largely because of the hyper-realistic videos it produces. There is no question that AI video quality has been improving rapidly, but with Sora, the world is seeing video generation so physically accurate that users describe the difference from similar AI tools as "night and day."
While this recent advancement in AI is undeniably impressive, it has raised concerns about misinformation, copyright, and consent. The app has safeguards in place to block potentially sensitive material from being posted, but they are not foolproof. Sora-generated content is automatically watermarked with the company's logo to help viewers identify it as AI-generated, yet users have already found ways around this by cropping or editing the videos. Users have also managed to craft prompts that slip past the protective filters. Given these weaknesses and the highly realistic content, Sora has sparked serious concern about the spread of misinformation. We are now living in a reality where video can no longer be treated as objective proof. Fake car accident footage, for instance, could be used to commit insurance fraud, one example of how Sora content can undermine video credibility. As for copyright, Sora has an "opt-out" policy under which copyrighted material is free to be used unless the owners explicitly opt out. This has already led to significant problems with fake celebrity endorsements and impersonation accounts.
While the public has found many aspects of Sora distressing, there is room for improvement. Like any new technology, AI is constantly undergoing changes and development. The problems identified with Sora are already being targeted by its developers, and safety measures are being strengthened as users surface new risks.
At the end of the day, Sora and the issues that come with it are an inevitable part of the constant evolution of AI. Responsible innovation will prove whether society can balance technological modernization with ethical safety.