The Deepfakes Are Everywhere: How to Spot AI-Generated Videos

AI-generated videos are more common than ever. These videos have taken over social media, from cute animal videos to out-of-this-world content, and they’re becoming more realistic every day. Although it might have been easy to spot a “fake” video a year ago, these AI tools have become sophisticated enough to fool millions of people.
New AI tools, including OpenAI’s Sora, Google’s Veo 3 and Nano Banana, have erased the boundary between reality and AI-generated fantasy. Today, we’re swimming in a sea of AI-generated videos and deepfakes, from fake celebrity endorsements to fake disaster broadcasts.
If you’re having trouble separating reality from AI, you’re not alone. Here are some tips to help you cut through the noise and uncover the truth behind every AI-inspired creation. For more, check out our look at the energy needs of AI video and what we need to do in 2026 to avoid more AI slop.
Why it’s hard to spot Sora AI videos
From a technical perspective, Sora videos are impressive compared with competitors such as Midjourney V1 and Google Veo 3. They have high resolution, synchronized sound and surprising creativity. Sora’s most popular feature, called “cameo,” lets you take other people’s likenesses and insert them into almost any AI-generated scene. It’s an impressive tool, and it produces eerily realistic videos.
Sora joins Google’s Veo 3, another technically impressive AI video generator. These are two of the most popular tools, but certainly not the only ones. Generative media became an area of focus for many major tech companies in 2025, with image and video models poised to give any company an edge in the race to build the most advanced AI across every modality. Google and OpenAI both released flagship image and video models this year, with the apparent aim of outdoing each other.
This is why so many experts are worried about Sora and other AI video generators. The Sora app makes it easy for anyone to create realistic videos featuring its users’ likenesses. Public figures and celebrities are particularly vulnerable to these deepfakes, and unions like SAG-AFTRA have pushed OpenAI to strengthen its guardrails. Other AI video generators pose similar risks, raising concerns that they will flood the internet with AI slop and become dangerous tools for spreading misinformation.
Identifying AI content is an ongoing challenge for tech companies, social media platforms and everyone who uses them. But it’s not totally hopeless. Here are some things to look for to determine whether a video was made with Sora.
Look for the Sora watermark
Every video made on the Sora iOS app includes a watermark when you download it. It’s the white Sora logo – a cloud icon – bouncing around the edges of the video. This is similar to how TikTok videos are watermarked. Watermarking content is one of the main ways AI companies can help us visually spot AI-generated content. Google’s Gemini Nano Banana automatically watermarks its images. Watermarks are great because they make it clear that the content was created with the help of AI.
But watermarks aren’t perfect. For one thing, a static watermark (one that doesn’t move) can easily be cropped out. Even moving watermarks like Sora’s can be defeated; there are apps designed specifically to remove them, so a watermark alone can’t be fully trusted. When OpenAI CEO Sam Altman was asked about this, he said the company will have to adapt to a world where anyone can create fake videos of anyone. Of course, before Sora, there was no popular, easily accessible, skill-free way to make these videos. But he raises a valid point about the need to rely on other methods to verify authenticity.
Check the metadata
I know what you’re probably thinking: There’s no way you’re going to check a video’s metadata to determine whether it’s real. I understand where you’re coming from. It’s an extra step, and you may not know where to start. But it’s a great way to determine whether a video was made with Sora, and it’s easier to do than you might think.
Metadata is a set of information automatically attached to a piece of content when it’s created. It tells you more about how an image or video was made. It can include the type of camera used to take a photo, the location, the date and time the video was captured, and the file name. Every photo and video contains metadata, whether it was created by a human or by AI, and much AI-generated content also carries content credentials that point to its AI origins.
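If you want to poke at a file’s metadata yourself before reaching for a dedicated verification tool, a rough first pass is possible from the command line. Here’s a minimal Python sketch, assuming you have the free ExifTool utility installed and on your PATH; it dumps a file’s metadata and flags any fields that mention C2PA or common AI-generation markers. A clean result doesn’t prove a video is real (metadata is easy to strip), and ExifTool may not surface C2PA data for every container format.

```python
import json
import subprocess

# Markers that hint at AI provenance: C2PA manifests, generator names and
# the IPTC "trainedAlgorithmicMedia" digital source type.
AI_MARKERS = ("c2pa", "openai", "sora", "generative", "trainedalgorithmicmedia")

def inspect_metadata(path: str) -> None:
    """Dump a file's metadata with ExifTool and flag AI-related fields.

    Assumes the ExifTool command-line utility is installed. This is a
    rough screen, not a full cryptographic check of content credentials.
    """
    result = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    )
    metadata = json.loads(result.stdout)[0]  # one object per input file

    for field, value in metadata.items():
        text = f"{field}={value}".lower()
        if any(marker in text for marker in AI_MARKERS):
            print(f"Possible AI marker -> {field}: {value}")

inspect_metadata("downloaded_video.mp4")  # hypothetical file name
```

If nothing turns up, that tells you little on its own, which is why a dedicated verification tool is worth the extra step.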
OpenAI is part of the Coalition for Content Provenance and Authenticity, which means Sora videos include C2PA metadata. You can use the Content Authenticity Initiative’s verification tool to check the metadata of a video, image or document. Here’s how. (The Content Authenticity Initiative is part of C2PA.)
How to check the metadata of a photo, video or document
1. Go to this URL: https://verify.contentauthenticity.org/
2. Upload the file you want to check, then click Open.
3. Check the information in the right-hand panel. If the file is AI-generated, that should appear in the content summary section.
When you run a Sora video through this tool, it will say the video was “issued by OpenAI” and note that it’s AI-generated. All Sora videos carry these content credentials, which let you confirm they were created with Sora.
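If you’re curious what those credentials look like under the hood, here’s a short Python sketch that reads a C2PA manifest exported as JSON (the open-source c2patool can produce one) and pulls out the issuer and the AI-generated marker. The field names here (active_manifest, signature_info, c2pa.actions) reflect c2patool’s current output and can vary between versions, so treat this as an illustration of the structure rather than a guaranteed schema.

```python
import json

# IPTC digital source type that C2PA uses to mark generative-AI content.
AI_SOURCE_TYPE = "trainedAlgorithmicMedia"

def summarize_credentials(manifest_json: str) -> None:
    """Print the issuer and AI markers from a C2PA manifest store (JSON).

    Field names mirror c2patool's JSON output and may differ by version.
    """
    store = json.loads(manifest_json)
    active_label = store.get("active_manifest")
    manifest = store.get("manifests", {}).get(active_label, {})

    issuer = manifest.get("signature_info", {}).get("issuer", "unknown")
    print(f"Credentials issued by: {issuer}")  # e.g. OpenAI for Sora videos

    # Walk the actions assertion looking for the AI-generated source type.
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for action in assertion.get("data", {}).get("actions", []):
                if AI_SOURCE_TYPE in action.get("digitalSourceType", ""):
                    print(f"AI-generated action: {action.get('action')}")

with open("manifest.json") as f:  # hypothetical export from c2patool
    summarize_credentials(f.read())
```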
This tool, like all AI detectors, isn’t perfect. There are still ways for AI videos to avoid detection. Videos made with other generators may not carry the metadata signals the tool needs to determine whether they’re AI-created; in my testing, videos made with Midjourney, for example, weren’t flagged. And even if a video was created with Sora, running it through a third-party app (like a watermark remover) and re-uploading it makes it less likely the tool will flag it as AI.
The Content Authenticity Initiative’s verification tool correctly reported that a video I made with Sora was AI-generated, and it showed the date and time I created it.
Look for other AI labels, and add your own
If you’re on one of Meta’s social media platforms, like Instagram or Facebook, you can get a little help determining whether something is AI. Meta has internal systems that help flag AI content and label it as such. These systems aren’t perfect, but flagged posts carry clearly visible labels. TikTok and YouTube have similar policies for labeling AI content.
The only truly reliable way to know if something is AI-generated is if the creator discloses it. Many social media platforms now offer settings that allow users to label their posts as AI-generated. Even a simple credit or mention in your caption can go a long way in helping everyone understand how something was created.
You know as you scroll through Sora that nothing is real. However, once you leave the app and share AI-generated videos, it becomes our collective responsibility to disclose how a video was created. As AI models like Sora continue to blur the line between reality and AI, it’s up to all of us to make it as clear as possible whether something is real or AI.
Above all, stay vigilant
There is no foolproof way to tell at a glance whether a video is real or AI-generated. The best thing you can do to avoid being deceived is to stop automatically believing everything you see online. Trust your gut: If something feels off, it probably is. In these unprecedented, AI-filled times, your best defense is to take a closer look at the videos you watch. Don’t just glance and mindlessly scroll on. Look for mangled text, disappearing objects and physics-defying movements. And don’t beat yourself up if you get fooled from time to time. Even the experts get it wrong.
(Disclosure: Ziff Davis, parent company of CNET, filed a lawsuit in April against OpenAI, alleging that it violated Ziff Davis’ copyrights in the training and operation of its AI systems.)



