Samsung to auto-tag AI generated photos in Galaxy S26

Samsung Galaxy Unpacked was full of announcements regarding the company’s new Galaxy S26 line of smartphones. And of course, Samsung also had a lot to highlight in terms of Galaxy AI features.
One of the major new AI-related features Samsung announced this year concerns the Galaxy S26’s photo application. All photo tools, including AI editing, will be housed in a new Creative Studio, so users can capture, edit, enhance, and generate media content without needing multiple apps.
The ability to take a real photo and then add a realistic, but fake, AI-generated look to that image could pose some problems. So, as Samsung announced during Galaxy Unpacked, photos containing AI elements will be labeled as such in the application.
Samsung’s AI label appears in the bottom corner of the photo and designates the image as “AI-generated content.”
This will be good news for many, as AI images and deepfakes have been used by bad actors to spread misinformation and harass individuals.
However, it is not yet clear whether the label involves anything beyond the watermark visible in the photo. If it is only this visible label, users could still easily crop the photo slightly to remove it. In fact, there are tutorials online showing how to use Galaxy AI itself to remove the AI watermark applied by previous iterations of Samsung’s AI tools for Galaxy phones.
Other AI tools, such as OpenAI’s Sora and Google’s Veo 3, also show watermarks indicating that the content is AI-generated. Since those are video tools, however, the watermark can be made more difficult to remove. Ideally, AI-generated images and videos would also contain an invisible digital watermark, like Google’s SynthID.
Samsung uses Google’s Gemini AI model and its powerful Nano Banana image generation model for Galaxy AI generative content. However, it is unclear whether AI photos made on Samsung devices will also contain a SynthID for AI detection.
Nonetheless, automatic tagging is a practical step to limit the potential harm of deepfakes.