
Grok “is just completely unlike how any other image altering [AI] service works,” he said.
Musk and xAI did not respond to a detailed request for comment. X did not respond to a separate detailed request for comment.
That behind-the-scenes shift in xAI’s philosophy burst into public view last month, when Grok generated a wave of sexualized images, placing real women in sexual poses, such as suggestively splattering their faces with whipped cream, and “undressing” them into revealing clothing, including bikinis as tiny as a string of dental floss. Musk appeared to egg on the undressing in posts on X.
Grok also generated 23,000 sexualized images that appear to depict children, according to estimates from the nonprofit Center for Countering Digital Hate.

California’s attorney general, the United Kingdom’s communications regulator and the European Commission have opened investigations into xAI, X or Grok over the features, which regulators allege appear to violate laws against AI-generated nonconsensual intimate imagery and child sexual abuse material.

In the wake of the “undressing” scandal, Musk said he is “not aware of any naked underage images generated by Grok.”
“When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state,” he said last month. “There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”
 
In the U.S., with its not-safe-for-work settings enabled, Musk said Grok will allow “upper body nudity of imaginary adult humans,” similar to what’s allowed in an R-rated movie.
But in at least one way, Musk’s push has worked for the company. Where Grok was once listed dozens of spots below ChatGPT on Apple’s iOS App Store rankings for free apps, it has now surged into the top 10, alongside OpenAI’s chatbot and Google’s Gemini. Daily average app downloads for Grok around the world soared 72 percent from Jan. 1 to Jan. 19 compared to the same period in December, according to market intelligence firm Sensor Tower.
Ashley St. Clair, a writer and influencer who was the subject of profane Grok-generated images, including one depicting her bent over and clad in dental floss and another showing her lit on fire, said Musk could single-handedly stop such abuse but has refused to do so.

“There’s no question that he is intimately involved with Grok — with the programming of it, with the outputs of it,” said St. Clair, who is locked in a custody battle with Musk over their 1-year-old son. “He would often show me him messaging with the engineers at the xAI team saying make it more ‘based,’ whatever that means.”


Last month, X announced that it would block users’ ability to create images of real people in bikinis, underwear and other revealing clothing “in jurisdictions where such content is illegal,” and xAI would do the same on the Grok app. U.S. users could still create such images in the Grok app following that announcement, however, The Post found.
 
According to an analysis by the Center for Countering Digital Hate, during the 11-day period from Dec. 29 through Jan. 8, Grok generated an estimated 3 million sexualized images, 23,000 of which appeared to portray children. “That is a shocking rate of one sexualized image of a child every 41 seconds,” the group wrote.

Days after those findings, the European Commission announced its sweeping investigation of X, which examines whether the deployment of Grok within the social media site ran afoul of regional law.

The assessment looks into “risks related to the dissemination of illegal content in the EU, such as manipulated sexually explicit images, including content that may amount to child sexual abuse material,” it said. “These risks seem to have materialised, exposing citizens in the EU to serious harm.”
In the aftermath of the undressing scandal, xAI has pushed to recruit more people to its AI safety team, issuing job postings for new safety-focused roles along with a manager focused on law-enforcement response.

Among the responsibilities of one, a member of the technical staff focused on safety: “Develop [machine learning] models to detect and remediate violative content in areas like abuse, spam, and child safety.”
 
Wait, I thought BL loved AI of all types and used it religiously, without question, as gospel?

Is that not the case anymore?
 
I think calculators are useful but I’d perhaps question what’s going on and what to do about it if they started generating child porn.
 
Grok’s creation of obscene materials featuring users isn’t unique among these models, but the fact that any verified account can ask it to produce such materials of anybody it wishes *and* have that image be immediately posted publicly on a social media platform is something I’d think worth sorting out.
 
I’m sure that it will.

“Slowly but surely”
 
Don’t you think the EU has a vested interest in speeding that process along to protect these kids or the adults who are having their likenesses turned into porn without their consent? Or that it is a unique issue that is separate from the larger discussion on AI?
 
Protecting kids was so last year for MAGA… they’ve moved on to other grifts… keep up
 
I do. It will get handled though. No doubt.

I also think this is rich from folks who are gender reassigning at 10 years old.
 
Sure, whatever. But it should hopefully go without saying that these are entirely different issues.
 
Sure. Just don’t lecture the folks taking the people hurting kids out of the system, and act like you’ve got the high ground.
 