Techstory Australia

ChatGPT Fails to Spot 92% of Fake Videos Made by OpenAI’s Own Sora Tool

OpenAI has emphasized that ChatGPT is not intended to function as a definitive detector of AI-generated media.

By Sara Jones
January 28, 2026
in AI, Technology

In a development that has reignited debate over the risks of generative artificial intelligence, recent testing has revealed that ChatGPT fails to correctly identify nearly 92 percent of fake videos created using OpenAI’s video-generation tool, Sora. The findings underscore a growing gap between the rapid advancement of AI systems that generate realistic content and the limited ability of existing tools to reliably detect what is real and what is fabricated.
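The 92 percent figure is a miss rate over the test set. The article does not report the sample size, so the counts below are purely hypothetical; the sketch only shows how such a rate is computed:

```python
# Hypothetical illustration of the reported ~92% miss rate.
# The article does not state the sample size; 100 clips is an assumption.
total_fake_clips = 100   # Sora-generated videos shown to ChatGPT
correctly_flagged = 8    # clips ChatGPT labeled as AI-generated

missed = total_fake_clips - correctly_flagged
miss_rate = missed / total_fake_clips

print(f"miss rate: {miss_rate:.0%}")  # → miss rate: 92%
```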

Sora, OpenAI’s text-to-video model, is capable of producing short, highly detailed video clips from simple written prompts. These videos often feature realistic motion, lighting, environments, and human-like behavior, making them difficult for both humans and machines to distinguish from authentic footage. When a set of such videos was presented to ChatGPT and the system was asked to determine whether the clips were real or AI-generated, the model misclassified the vast majority, frequently labeling synthetic content as genuine.

What makes the result particularly striking is that both Sora and ChatGPT are products of the same company. This has raised questions about whether advanced language and multimodal models can meaningfully serve as safeguards against the misuse of generative media, even when they are closely related to the tools creating that content. Observers say the findings challenge common assumptions that artificial intelligence can police itself.

In many cases, ChatGPT not only failed to identify the videos as fake but also went further, describing the fabricated scenes as plausible real-world events. Some responses included confident explanations or contextual interpretations of scenes that never occurred, reinforcing concerns about AI “hallucinations” — instances in which systems produce authoritative-sounding but incorrect information. Such behavior can be especially problematic when users rely on AI tools to verify the authenticity of visual material.

Experts note that the limitations are partly rooted in how these systems are designed. ChatGPT is primarily optimized for understanding and generating language, not for forensic analysis of visual media. While it can interpret images and videos at a basic level, it lacks specialized capabilities to analyze subtle inconsistencies in physics, motion, lighting, or digital artifacts that might reveal synthetic origins. As AI-generated videos become increasingly polished, these cues are becoming harder to detect even for trained analysts.
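Forensic video analysis of the kind described above looks for statistical inconsistencies across frames. As a toy sketch only: real detectors rely on far richer cues (optical flow, sensor noise, codec artifacts), but a crude frame-to-frame intensity signal conveys the idea. The frames here are tiny hand-made grayscale grids, and the comparison is illustrative, not a working detector:

```python
# Toy sketch of one forensic signal: frame-to-frame intensity change.
# Frames are tiny grayscale grids (lists of pixel rows). Real forensic
# tools use far richer cues; this only illustrates the concept.

def frame_diff(a, b):
    """Mean absolute pixel difference between two equally sized frames."""
    pixels = [abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    return sum(pixels) / len(pixels)

def temporal_jitter(frames):
    """Average change between consecutive frames; abrupt jumps in an
    otherwise-still scene can hint at synthesis or splicing."""
    diffs = [frame_diff(frames[i], frames[i + 1])
             for i in range(len(frames) - 1)]
    return sum(diffs) / len(diffs)

steady = [[[10, 10], [10, 10]], [[11, 10], [10, 11]], [[10, 11], [11, 10]]]
jumpy  = [[[10, 10], [10, 10]], [[200, 10], [10, 200]], [[10, 10], [10, 10]]]

print(temporal_jitter(steady) < temporal_jitter(jumpy))  # → True
```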

The findings also highlight a broader industry-wide challenge. Detection technologies have consistently lagged behind generative tools, creating an imbalance that favors the creation of convincing fake content over the ability to identify it. As text-to-video systems like Sora improve, they are narrowing the perceptual gap between real and artificial media, making detection increasingly complex and resource-intensive.

The implications extend far beyond technical performance. Undetectable or easily misidentified fake videos pose serious risks to public trust, particularly in areas such as politics, journalism, finance, and social media. Deepfake videos could be used to spread disinformation, manipulate public opinion, fabricate evidence, or damage reputations, all while appearing credible to viewers and even to AI-based verification tools.

OpenAI has emphasized that ChatGPT is not intended to function as a definitive detector of AI-generated media. Instead, the company has relied on other safeguards, such as watermarking and metadata tagging, to signal when content is produced by tools like Sora. However, critics argue that these measures are insufficient on their own, as watermarks can be removed and metadata can be stripped during editing or re-uploading across platforms.
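The fragility of metadata tagging that critics point to can be sketched in a few lines. This is a minimal illustration, loosely inspired by real provenance schemes such as C2PA but not reflecting any actual format: the "video" is a stand-in byte string with a JSON provenance sidecar, and a re-upload pipeline that re-encodes only the media silently discards the label while the content survives:

```python
import json

# Minimal sketch of why metadata-based AI labeling is fragile.
# "content" stands in for video bytes; the provenance record travels
# alongside it as JSON (illustrative only, not a real standard's schema).

original = {
    "content": b"\x00fake-video-bytes\x01",
    "provenance": json.dumps({"generator": "sora", "ai_generated": True}),
}

def reupload(asset):
    """A platform pipeline that re-encodes media typically keeps only
    the pixels/samples and drops attached metadata."""
    return {"content": asset["content"], "provenance": None}

laundered = reupload(original)

print(laundered["content"] == original["content"])  # → True  (content intact)
print(laundered["provenance"])                      # → None  (label gone)
```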

The failure rate revealed by the tests has fueled calls for a more comprehensive approach to content authenticity. Researchers and policy experts argue that detection should not rely on a single AI model or method. Instead, they advocate for a layered system that combines technical standards, platform-level enforcement, independent verification tools, and public awareness. Some also stress the importance of educating users about the limitations of AI, warning against treating conversational systems as authoritative fact-checkers.

There are also growing demands for clearer labeling of AI-generated content and stronger regulatory frameworks. Governments and technology companies are under increasing pressure to establish rules that ensure transparency without stifling innovation. Proposals include mandatory disclosure of synthetic media, standardized provenance systems that track content from creation to distribution, and penalties for malicious misuse.
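A "standardized provenance system that tracks content from creation to distribution" is commonly built as a hash chain: each record commits to the one before it, so rewriting history is detectable. The sketch below is hypothetical (field names and structure are illustrative, not any real standard's schema):

```python
import hashlib
import json

# Hypothetical sketch of a provenance trail as a hash chain. Each record
# commits to the previous one, so any retroactive edit breaks verification.
# Field names are illustrative, not drawn from a real provenance standard.

def add_record(chain, action, actor):
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = {"action": action, "actor": actor, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    prev = "genesis"
    for rec in chain:
        body = {"action": rec["action"], "actor": rec["actor"],
                "prev": rec["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, "generated", "sora")
add_record(chain, "edited", "user-123")
print(verify(chain))          # → True
chain[0]["actor"] = "camera"  # retroactively claim a real-camera origin
print(verify(chain))          # → False
```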

At the same time, developers face a difficult balancing act. Improving detection capabilities without undermining creative tools or compromising user privacy remains a complex technical and ethical challenge. As generative models grow more capable, the line between real and artificial content is becoming less distinct, raising fundamental questions about authenticity in the digital age.

The revelation that ChatGPT fails to spot most fake videos produced by Sora serves as a reminder that artificial intelligence is not a neutral or infallible arbiter of truth. While AI tools can assist users in navigating information, they are not substitutes for critical thinking, human judgment, or robust verification systems.

As AI-generated video becomes more accessible and widespread, the stakes will only rise. The current gap between creation and detection suggests that society may be entering a period where seeing is no longer believing — and where even the most advanced AI systems struggle to tell the difference.

© 2023 Techstory Media. Editorial and Advertising Contact : hello@techstory.com.au
