Deepfake Videos Are More Realistic Than Ever. Here’s How to Spot if a Video Is Real or AI

Remember when “fake” on the internet meant a poorly photoshopped image? Ah, simpler times. Now we’re all swimming in a sea of AI-generated videos and deepfakes, from fake celebrity videos to fake disaster broadcasts, and it’s becoming almost impossible to know what’s real.
And it’s about to get worse. Sora, OpenAI’s AI video tool, has already muddied the waters. Now its viral new social media app, Sora 2, is the hottest ticket on the internet. Here’s the thing: It’s an invite-only, TikTok-style feed where everything is 100% fake.
I’ve already called it a “fake fever dream,” and that’s exactly what it is. It’s a platform that gets better every day at making fiction look like reality, and the risks are enormous. If you’re having trouble separating the real from the AI, you’re not alone.
Here are some tips to help you cut through the noise and figure out whether a video is real or AI-generated.
From a technical perspective, Sora videos are impressive compared with competitors such as Midjourney’s V1 and Google’s Veo 3. They have high resolution, synchronized audio and surprising creativity. Sora’s most popular feature, called “cameo,” lets you insert other people’s likenesses into almost any AI-generated scene. It’s an impressive tool, and it produces unnervingly realistic videos.
This is why so many experts are worried about Sora. The app makes it easier for anyone to create dangerous deepfakes, spread misinformation and blur the line between what’s real and what’s not. Public figures and celebrities are particularly vulnerable to these deepfakes, and unions like SAG-AFTRA have pushed OpenAI to strengthen its safeguards.
Identifying AI content is an ongoing challenge for tech companies, social media platforms, and everyone. But it’s not totally hopeless. Here are some things to look for to determine if a video was made using Sora.
Look for the Sora watermark
Every video made on the Sora iOS app includes a watermark when you download it. It’s the white Sora logo – a cloud icon – bouncing around the edges of the video. This is similar to how TikTok videos are watermarked.
Watermarking content is one of the main ways AI companies can help us visually spot AI-generated content. Google’s Gemini “nano banana” model, for example, automatically watermarks its images. Watermarks are great because they make it clear that the content was created with the help of AI.
But watermarks aren’t foolproof. For one thing, if a watermark is static (it doesn’t move), it can easily be cropped out. Even moving watermarks like Sora’s can be stripped by apps designed specifically to remove them, so a watermark alone can’t be fully trusted. When OpenAI CEO Sam Altman was asked about this, he said society will have to adapt to a world where anyone can create fake videos of anyone. Of course, before OpenAI’s Sora, there wasn’t a popular, easily accessible, skill-free way to create those videos. Still, his argument raises a valid point: We need other methods to verify authenticity.
Check the metadata
I know, you’re probably thinking there’s no way you’re actually going to check a video’s metadata. I get it; it’s an extra step, and you may not know where to start. But it’s a great way to determine whether a video was made with Sora, and it’s easier than you think.
Metadata is a set of information automatically attached to a piece of content when it’s created. It tells you more about how an image or video was made. It can include the type of camera used to take a photo, the location, the date and time the video was captured, and the file name. Every photo and video contains metadata, whether it was made by a human or by AI. And much AI-generated content also carries credentials that flag its AI origins.
OpenAI is part of the Coalition for Content Provenance and Authenticity, which means Sora videos include C2PA metadata. You can use the Content Authenticity Initiative’s verification tool to inspect the metadata of a video, image or document. (The Content Authenticity Initiative is a member of the C2PA.) Here’s how.
How to check the metadata of a photo, video or document:
1. Go to this URL: https://verify.contentauthenticity.org/
2. Upload the file you want to check.
3. Click Open.
4. Check the information in the right-hand panel. If the file is AI-generated, that should be noted in the content summary section.
When you run a Sora video through this tool, it will say the video was “issued by OpenAI” and note that it’s AI-generated. All Sora videos should carry these credentials, letting you confirm they were created with Sora.
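If you’d rather do a quick local check before uploading a file to the web tool, one rough approach is to scan the file’s raw bytes for markers that C2PA Content Credentials typically leave behind. This is a heuristic sketch of my own, not a real verifier: the marker strings are assumptions based on how C2PA manifests are commonly embedded, and a proper check should still go through the verification tool above.

```python
import sys

# Heuristic only: C2PA Content Credentials are embedded in a JUMBF box
# inside the file, and the raw bytes often contain these marker strings.
# Their absence doesn't prove a file is human-made, and their presence
# doesn't validate the credentials -- use a real C2PA verifier for that.
C2PA_MARKERS = (b"c2pa", b"jumb")


def may_have_content_credentials(path: str) -> bool:
    """Return True if the file contains byte patterns typical of C2PA data."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in C2PA_MARKERS)


if __name__ == "__main__":
    # Usage: python check_c2pa.py video.mp4
    path = sys.argv[1]
    if may_have_content_credentials(path):
        print("Possible C2PA Content Credentials found -- verify them online.")
    else:
        print("No C2PA markers found (this alone proves nothing).")
```

Treat a positive hit as a reason to run the file through the verification tool, not as a verdict on its own.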
This tool, like all AI detectors, isn’t perfect. There are plenty of ways for AI videos to evade detection. Non-Sora videos may not carry the metadata signals the tool needs to determine whether they’re AI-made. AI videos created with Midjourney, for example, aren’t flagged, as I confirmed in my testing. And even a genuine Sora video, once run through a third-party app (like a watermark remover) and re-uploaded, is less likely to be flagged as AI.
The Content Authenticity Initiative’s verification tool correctly reported that a video I made with Sora was AI-generated, along with the date and time it was created.
Look for other AI labels, and add your own
If you’re on one of Meta’s social media platforms, like Instagram or Facebook, you might get a little help determining whether something is AI. Meta has internal systems that help flag AI content and label it as such. Those systems aren’t perfect, but flagged posts are clearly labeled. TikTok and YouTube have similar policies for labeling AI content.
The only truly reliable way to know if something is AI-generated is if the creator discloses it. Many social media platforms now offer settings that allow users to label their posts as AI-generated. Even a simple credit or mention in your caption can go a long way in helping everyone understand how something was created.
You know as you scroll through Sora that nothing is real. But once you leave the app and share AI-generated videos, it’s our collective responsibility to disclose how a video was created. As AI models like Sora continue to blur the line between reality and AI, it’s up to all of us to make it as clear as possible whether something is real or AI.
Above all, stay vigilant
There is no foolproof way to tell at a glance whether a video is real or AI-made. The best thing you can do to avoid being deceived is to stop automatically, unquestioningly believing everything you see online. Trust your gut: If something feels fake, it probably is. In these unprecedented, AI-saturated times, your best defense is to take a closer look at the videos you watch. Don’t just glance and mindlessly scroll on. Look for mangled text, disappearing objects and physics-defying movements. And don’t beat yourself up if you get fooled from time to time; even the experts get it wrong.
(Disclosure: Ziff Davis, CNET’s parent company, filed a lawsuit in April against OpenAI, alleging that it violated Ziff Davis’ copyrights in the training and operation of its AI systems.)


