Techstory Australia

ChatGPT Fails to Spot 92% of Fake Videos Made by OpenAI’s Own Sora Tool

OpenAI has emphasized that ChatGPT is not intended to function as a definitive detector of AI-generated media.

by Sara Jones
January 28, 2026
in AI, Technology

In a development that has reignited debate over the risks of generative artificial intelligence, recent testing has revealed that ChatGPT fails to correctly identify nearly 92 percent of fake videos created using OpenAI’s video-generation tool, Sora. The findings underscore a growing gap between the rapid advancement of AI systems that generate realistic content and the limited ability of existing tools to reliably detect what is real and what is fabricated.


Sora, OpenAI’s text-to-video model, is capable of producing short, highly detailed video clips from simple written prompts. These videos often feature realistic motion, lighting, environments, and human-like behavior, making them difficult for both humans and machines to distinguish from authentic footage. When a set of such videos was presented to ChatGPT and the system was asked to determine whether the clips were real or AI-generated, the model misclassified the vast majority, frequently labeling synthetic content as genuine.

What makes the result particularly striking is that both Sora and ChatGPT are products of the same company. This has raised questions about whether advanced language and multimodal models can meaningfully serve as safeguards against the misuse of generative media, even when they are closely related to the tools creating that content. Observers say the findings challenge common assumptions that artificial intelligence can police itself.

In many cases, ChatGPT not only failed to identify the videos as fake but also went further, describing the fabricated scenes as plausible real-world events. Some responses included confident explanations or contextual interpretations of scenes that never occurred, reinforcing concerns about AI “hallucinations” — instances in which systems produce authoritative-sounding but incorrect information. Such behavior can be especially problematic when users rely on AI tools to verify the authenticity of visual material.

Experts note that the limitations are partly rooted in how these systems are designed. ChatGPT is primarily optimized for understanding and generating language, not for forensic analysis of visual media. While it can interpret images and videos at a basic level, it lacks specialized capabilities to analyze subtle inconsistencies in physics, motion, lighting, or digital artifacts that might reveal synthetic origins. As AI-generated videos become increasingly polished, these cues are becoming harder to detect even for trained analysts.


The findings also highlight a broader industry-wide challenge. Detection technologies have consistently lagged behind generative tools, creating an imbalance that favors the creation of convincing fake content over the ability to identify it. As text-to-video systems like Sora improve, they are narrowing the perceptual gap between real and artificial media, making detection increasingly complex and resource-intensive.

The implications extend far beyond technical performance. Undetectable or easily misidentified fake videos pose serious risks to public trust, particularly in areas such as politics, journalism, finance, and social media. Deepfake videos could be used to spread disinformation, manipulate public opinion, fabricate evidence, or damage reputations, all while appearing credible to viewers and even to AI-based verification tools.

OpenAI has emphasized that ChatGPT is not intended to function as a definitive detector of AI-generated media. Instead, the company has relied on other safeguards, such as watermarking and metadata tagging, to signal when content is produced by tools like Sora. However, critics argue that these measures are insufficient on their own, as watermarks can be removed and metadata can be stripped during editing or re-uploading across platforms.
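The critics' point about metadata fragility is easy to demonstrate. As an illustrative sketch (not OpenAI's actual provenance scheme), the snippet below strips text-metadata chunks from a PNG file using only the Python standard library; provenance records embedded in video containers are vulnerable to the same kind of removal during editing or re-encoding. The function name is our own, and the parsing relies only on the published PNG chunk layout.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"  # fixed 8-byte PNG file signature

def strip_text_chunks(png: bytes) -> bytes:
    """Return a copy of `png` with tEXt/zTXt/iTXt metadata chunks removed.

    Each PNG chunk is: 4-byte big-endian length, 4-byte type,
    `length` bytes of data, 4-byte CRC. Text chunks (where tools
    often record a 'Software' tag) are ancillary, so dropping them
    leaves a valid image.
    """
    if not png.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    out = bytearray(PNG_SIG)
    pos = len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        chunk = png[pos:pos + 12 + length]  # length + type + data + CRC
        if ctype not in (b"tEXt", b"zTXt", b"iTXt"):
            out += chunk  # keep everything that is not text metadata
        pos += 12 + length
    return bytes(out)
```

A few lines like these, or simply re-encoding or screenshotting the media, silently discard metadata-based provenance signals, which is why critics argue such safeguards cannot stand alone.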

The failure rate revealed by the tests has fueled calls for a more comprehensive approach to content authenticity. Researchers and policy experts argue that detection should not rely on a single AI model or method. Instead, they advocate for a layered system that combines technical standards, platform-level enforcement, independent verification tools, and public awareness. Some also stress the importance of educating users about the limitations of AI, warning against treating conversational systems as authoritative fact-checkers.

There are also growing demands for clearer labeling of AI-generated content and stronger regulatory frameworks. Governments and technology companies are under increasing pressure to establish rules that ensure transparency without stifling innovation. Proposals include mandatory disclosure of synthetic media, standardized provenance systems that track content from creation to distribution, and penalties for malicious misuse.

At the same time, developers face a difficult balancing act. Improving detection capabilities without undermining creative tools or compromising user privacy remains a complex technical and ethical challenge. As generative models grow more capable, the line between real and artificial content is becoming less distinct, raising fundamental questions about authenticity in the digital age.

The revelation that ChatGPT fails to spot most fake videos produced by Sora serves as a reminder that artificial intelligence is not a neutral or infallible arbiter of truth. While AI tools can assist users in navigating information, they are not substitutes for critical thinking, human judgment, or robust verification systems.

As AI-generated video becomes more accessible and widespread, the stakes will only rise. The current gap between creation and detection suggests that society may be entering a period where seeing is no longer believing — and where even the most advanced AI systems struggle to tell the difference.

Tags: AI-generated media, Artificial Intelligence, ChatGPT, OpenAI, Sora, tech news, techstory
© 2023 Techstory Media. Editorial and Advertising Contact : hello@techstory.com.au
