OpenAI introduced Sora 2, the upgraded version of its video generation model, on Sept. 30. Built on a diffusion system guided by a transformer-based architecture, Sora can transform simple text prompts into hyper-realistic videos. Its prevalence on social media platforms such as Instagram and TikTok has caught the attention of many, sparking both excitement and concern across the internet.
“The most significant impacts would be legal, as evidence could be both forged and dismissed with this new technology,” Russell Jin (12), math and coding enthusiast, said. “However, people may also be impacted on a more microscopic scale due to slander. I personally am wary of this technology. Even if I do produce informative content with AI later, I will stay far away from human-related topics.”
Beyond copyright infringement, one of the most significant concerns is that Sora 2 can create longer, more complex, and more realistic videos than its predecessor, raising questions about privacy, misinformation, and intellectual property. One of the newest features is the “Sora watermark,” which is applied to indicate that a video was AI-generated. However, some websites have enabled individuals to apply this watermark to real videos, allowing for easy manipulation and making it difficult to distinguish between real and synthetic content.
“Allowing Sora watermarks to be added to existing videos will shift the tech world toward greater transparency, but also more confusion since differentiating real and artificial content will become increasingly difficult,” Justin Yu (10), coding and AI enthusiast, said. “This will ultimately spark new standards for content authenticity, regulation, and trust.”
As AI-generated content becomes increasingly prevalent on social media, questions of media ethics and ownership will only grow more pressing.
