
Google is expanding its content transparency tools within the Gemini app. It is now possible to verify videos generated by the company’s own artificial intelligence models. Keep in mind, this works only for content made with Google’s AI, because the check relies on an invisible watermark that only Google’s own detector can read.
It’s becoming increasingly difficult to tell if a video you receive is a genuine recording or something cooked up by a computer, but you can now upload it and find out. The process is pretty straightforward. You just upload the video to Gemini and ask a simple question like, “Was this generated using Google AI?”
Gemini then gets to work scanning for something called SynthID. This is Google’s proprietary digital watermarking technology. It embeds signals into AI-generated content that are imperceptible to humans but easily detectable by the software. The tool is thorough, checking the entire file to see if AI was used for the background music, the footage itself, or both.
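Google frames this as an in-app feature, but if you are curious what the same interaction might look like in code, here is a minimal sketch using the google-genai Python SDK. A couple of assumptions up front: Google has not confirmed that the API path performs the same SynthID verification as the Gemini app, and the model and file names below are purely illustrative.

```python
# Minimal sketch: upload a clip and ask the SynthID question via the
# google-genai SDK (pip install google-genai). Assumes GEMINI_API_KEY is
# set; whether the API runs the same check as the app is an assumption.
import time

from google import genai

client = genai.Client()

# Upload the video; video files are processed asynchronously, so poll
# until the file is ready to be referenced in a prompt.
video = client.files.upload(file="clip.mp4")  # illustrative file name
while video.state.name == "PROCESSING":
    time.sleep(2)
    video = client.files.get(name=video.name)

# Ask the same question you would type into the Gemini app.
response = client.models.generate_content(
    model="gemini-2.5-flash",  # illustrative model choice
    contents=[video, "Was this generated using Google AI?"],
)
print(response.text)
```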
The response you get isn’t just a simple yes or no, either. Gemini uses its own reasoning to give you context, which I think is helpful. It even specifies which segments of the content contain AI-generated elements. For example, you might see a response that says, “SynthID detected within the audio between 10-20 secs. No SynthID detected in the visuals.” That level of detail is a great feature for anyone trying to figure out what’s real and what isn’t in a piece of media.
There are some practical limits you should know about if you plan on using this tool frequently. Right now, the files you upload can’t be more than 100 MB in size, and they can’t run longer than 90 seconds. This means you won’t be checking full-length movies, but it’s perfectly sufficient for verifying short clips and social media content.
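If you plan to script checks like the sketch above, a quick pre-flight against those limits will save you failed uploads. Here is one way to do it, assuming the ffprobe tool (part of FFmpeg) is installed; the thresholds are simply the 100 MB and 90-second limits described above.

```python
# Pre-flight check against Gemini's stated upload limits (100 MB, 90 s).
# Assumes the ffprobe binary from FFmpeg is available on the PATH.
import os
import subprocess

MAX_BYTES = 100 * 1024 * 1024  # 100 MB size ceiling
MAX_SECONDS = 90.0             # 90-second duration ceiling

def within_limits(path: str) -> bool:
    """Return True if a clip fits Gemini's size and duration limits."""
    if os.path.getsize(path) > MAX_BYTES:
        return False
    # Read the container duration, in seconds, with ffprobe.
    result = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(result.stdout.strip()) <= MAX_SECONDS

if __name__ == "__main__":
    print(within_limits("clip.mp4"))  # illustrative file name
```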
This new video verification capability is an expansion of a tool Google launched earlier for images. The company has been pushing its SynthID technology for a while now, looking to establish transparency in the content generated by its tools. Google says it has watermarked more than 20 billion pieces of AI-generated content since SynthID’s introduction in 2023. With that much marked content in circulation, if an image came from a Google generator, Gemini can almost certainly spot it. This expansion brings the same level of scrutiny to video and audio.
We have to talk about the massive caveat, though. This tool is strictly limited to content that was generated or edited using Google’s own AI tools. If an image or a video was created using a non-Google AI model, Gemini won’t be able to tell you anything about it, and the absence of a SynthID watermark doesn’t mean a clip is authentic. This means the tool is really only useful for transparency within Google’s own ecosystem.
Google wants you to rely on Gemini for these checks instead of having to run images and videos through third-party checkers. While the company is making it easier to identify its own creations, the lack of support for outside models means this shouldn’t be considered a universal AI detection tool.
Source: Google