Grok Imagine lacks guardrails for sexual deepfakes


Grok Imagine, xAI's new generative AI tool that creates images and videos, lacks guardrails against sexual content and deepfakes.

xAI and Elon Musk launched Grok Imagine over the weekend, and it is now available in the Grok iOS and Android apps for xAI Premium Plus and Grok Heavy subscribers.

Mashable tested the tool to compare it to other AI image and video generators, and based on our first impressions, it lags behind similar technology from OpenAI, Google, and Midjourney on a technical level. Grok Imagine also lacks the standard guardrails to prevent deepfakes and sexual content. Mashable has contacted xAI, and we will update this story if we receive a response.

xAI's acceptable use policy prohibits users from "depicting likenesses of persons in a pornographic manner." Unfortunately, there is a lot of distance between "sexual" and "pornographic," and Grok Imagine seems carefully calibrated to exploit this gray area. Grok Imagine will readily create sexually suggestive images and videos, but it stops short of showing nudity, kissing, or sex acts.

Most mainstream AI companies include explicit rules prohibiting users from creating potentially harmful content, including sexual material and celebrity deepfakes. In addition, rival AI video generators like Google's Veo 3 and OpenAI's Sora have built-in features that prevent users from creating images or videos of public figures. Users can often circumvent these safety protections, but they do provide a check against misuse.

But unlike its biggest rivals, xAI has not shied away from NSFW content in its signature AI chatbot. The company recently introduced a flirtatious anime avatar that will engage in NSFW chats, and Grok's image generation tools let users create images of celebrities and politicians. Grok Imagine also includes a "spicy" setting, which Musk promoted in the days following its launch.

Grok's anime avatar Ani on a phone screen in front of the Grok logo

Grok's "spicy" anime avatar.
Credit: Cheng Xin / Getty Images

See also:

AI and deepfake actors aren't coming to YouTube ads. They're already there.

"If you look at Musk's philosophy as an individual, if you look at his political philosophy, he is much more of the libertarian mold, right? And he has talked about Grok as kind of the free-speech LLM," said Henry Ajder, a deepfake expert, in an interview with Mashable. Ajder said that under Musk's management, X (Twitter), xAI, and now Grok have adopted "a more relaxed approach to safety and moderation."

"So, as far as xAI is concerned, in this context, am I surprised that this model can generate this content, which is certainly uncomfortable, and I would say at least somewhat problematic?" Ajder said. "I am not surprised, given the history they have and the safety procedures they have in place. Are they unique in facing these challenges? No. But could they do more, or are they doing less compared to some of the other key players in the space? It would seem that way, yes."

Grok Imagine errs on the side of NSFW

Grok Imagine does have a few guardrails in place. In our tests, it removed the "spicy" option for certain types of images. Grok Imagine also blurred some images and videos, labeling them as "moderated." This means xAI could easily take additional measures to prevent users from creating abusive content in the first place.

"There is no technical reason why xAI could not include guardrails on both the input and output of their generative-AI systems, as others have done," said Hany Farid, a digital forensics expert and computer science professor at UC Berkeley, in an email to Mashable.


However, when it comes to deepfakes and NSFW content, xAI seems to err on the side of permissiveness, a striking contrast with the more cautious approach of its rivals. xAI has also moved quickly to release new models and AI tools, perhaps too quickly, Ajder said.

"Building out the kind of trust and safety teams, and the teams that do a lot of the ethics and safety policy work, whether it's red teaming, whether it's adversarial testing, you know, working hand in hand with developers, it takes time."

Mashable's tests reveal that Grok Imagine has much looser content moderation than other mainstream generative AI tools. xAI's lax approach to moderation is also reflected in its safety guidelines.

OpenAI and Google AI vs. Grok: How other AI companies approach safety and content moderation

The OpenAI logo displayed on a smartphone with the Sora video generator visible in the background


Credit: Jonathan Raa / Nurphoto via Getty Images

OpenAI and Google have in-depth documentation describing their approach to responsible AI use and prohibited content. For example, Google's documentation specifically prohibits "sexually explicit" content.

One Google safety document reads: "The application will not generate content that contains references to sexual acts or other lewd content (e.g., sexually graphic descriptions, content aimed at causing arousal)." Google also has policies against hate speech, harassment, and malicious content, and its generative AI prohibited use policy bars using its AI tools in ways that "facilitate non-consensual intimate imagery."

OpenAI also takes a proactive approach to deepfakes and sexual content.

An OpenAI blog post announcing Sora describes the steps the AI company takes to prevent this type of abuse: "Today, we're blocking particularly damaging forms of abuse, such as child sexual abuse materials and sexual deepfakes." A footnote attached to that statement reads: "Our top priority is preventing especially harmful forms of abuse, like child sexual abuse material (CSAM) and sexual deepfakes, by blocking their creation, filtering and monitoring uploads, using advanced detection tools, and submitting reports to the National Center for Missing & Exploited Children (NCMEC) when CSAM or child endangerment is identified."

This measured approach contrasts sharply with the way Musk has promoted Grok Imagine on X, where he shared a short video of a blond, blue-eyed angel in barely-there lingerie.

OpenAI also takes simple steps to stop deepfakes, such as refusing prompts for images and videos that mention public figures by name. And in Mashable's tests, Google's AI video tools are particularly sensitive to images that could include a person's likeness.

Compared to these lengthy safety frameworks (which many experts say are still inadequate), xAI's acceptable use policy runs less than 350 words. The policy puts the onus of caution on the user. It reads: "You are free to use our Service how you see fit so long as you use it to be a good human, act safely and responsibly, comply with the law, do not harm people, and respect our guardrails."

For now, laws and regulations against deepfakes and non-consensual intimate images (NCII) remain in their infancy.

President Donald Trump recently signed the TAKE IT DOWN Act, which includes protections against deepfakes. However, the law does not criminalize the creation of deepfakes but rather the distribution of these images.

"In the United States, the law requires platforms to remove [non-consensual intimate images] once notified," Farid told Mashable. "While this doesn't directly address the generation of NCII, it does, in theory, address the distribution of this material. There are several state laws that prohibit the creation of NCII, but enforcement appears to be uneven at the moment."


Disclosure: Ziff Davis, Mashable's parent company, filed a lawsuit against OpenAI in April, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.
