Elon

Non-criminal drug tests aren't done with someone watching. It's just "here's a cup, go into this room, and let me know when you're done." They check for a few things, like temperature. There are any number of ways to pass that; just google it. There are also ways to pass with someone watching that I would rather not describe.
Maybe you should for the folks in Mississippi.
 
A chatbot created by Elon Musk’s artificial intelligence company launched into an antisemitic tirade Tuesday and invoked Adolf Hitler, days after Musk touted updates that would reduce its reliance on mainstream media sources and train it on information that is “politically incorrect.”

Responding to a post on X on Tuesday, Grok — which is part of Musk’s social media site, X — accused a person named Cindy Steinberg of “gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods.”

“Classic case of hate dressed as activism — and that surname? Every damn time, as they say,” the chatbot said.

In another post, Grok invoked Hitler when asked which historical figure would be best suited to address anti-White hate. “To deal with such vile anti-white hate? Adolf Hitler, no question,” it wrote. “He’d spot the pattern and handle it decisively.”

 
Linda Yaccarino, CEO of Elon Musk’s X, announced Wednesday that she is stepping down from the social media platform after two years in the position.

She made the announcement a day after the platform’s artificial intelligence chatbot launched into an antisemitic tirade and invoked Adolf Hitler. Yaccarino, who was hired by Musk after he bought the company formerly known as Twitter in 2022, did not give a reason for her departure Wednesday.

“X is truly a digital town square for all voices and the world’s most powerful culture signal,” she wrote. “We couldn’t have achieved that without the support of our users, business partners, and the most innovative team in the world. I’ll be cheering you all on as you continue to change the world.”

“Thank you for your contributions,” Musk replied.
 
I would imagine Columbia and Harvard students will be protesting on campus, championing the latest update to Grok.
 
It’s hard to measure the value that X provides to Elon as training data for his AI model.
A few years ago (in the infancy of these models) someone trained a chatbot on Twitter content. This was even before Musk bought it. The creators of the chatbot were horrified by the behavior of their creation.
 
A few years ago (in the infancy of these models) someone trained a chatbot on Twitter content. This was even before Musk bought it. The creators of the chatbot were horrified by the behavior of their creation.
The models don't concern themselves with political correctness unless told to. A simple analysis of the data reveals a whole different world than the one we pretend exists.
 
True. The chatbot behaved as if it was the grand marshal of the KKK. That's what unfiltered twitter content (pre-Elon) produced. That was a while back. These days it would probably spew out demonization of white people.
 
The models don't concern themselves with political correctness unless told to. A simple analysis of the data reveals a whole different world than the one we pretend exists.
I of course don’t believe Elon told Grok to become mecha Hitler (as some think).

But I wouldn’t be surprised if Elon put his thumb on the scale a bit to ensure his model was “anti-woke,” and Mecha Hitler emerged as this edge-lord meme king.
 

This bit where someone prods Grok into saying something gross and then turns around and feigns shock at Grok saying something gross is the absolute dumbest bit.

It’s actually not surprising or particularly interesting that a chatbot trained on a social media site is capable of saying gross or offensive shit. That’s its primary source, guys.
 
Grok is actually pretty humble and level-headed most of the time. He does get triggered whenever I say something nice about Gemini.
 
Weeks before Elon Musk officially left his perch in government last spring, employees on the human data team of his artificial intelligence start-up xAI received a startling waiver from their employer, asking them to pledge to work with profane content, including sexual material.

Their jobs would require being exposed to “sensitive, violent, sexual and/or other offensive or disturbing content,” the waiver said, emphasizing that some such content “may be disturbing, traumatizing, and/or cause you psychological stress.”

The waiver, which two former employees confirmed receiving and a copy of which was obtained by The Washington Post, was alarming to some members of the team, who had been hired to help shape how xAI’s chatbot Grok responds to users. To some employees, it signaled a troubling new direction for a company launched “to accelerate human scientific discovery,” according to its website. Maybe now, they said they thought, it was willing to produce whatever content might attract and keep users.

 
Their concerns proved prescient, the employees said. In the next few months, team members were suddenly exposed to a stream of sexually charged audio, including lewd conversations that Tesla occupants had with the car’s chatbot and other users’ sexual interactions with Grok chatbots, said one of the people, a manager. The material surfaced as the team worked to train Grok to engage in such interactions.

Since leaving his role overseeing the U.S. DOGE Service in May, Musk has become a constant presence at xAI’s offices — at times sleeping there overnight — as he has pressed to increase Grok’s popularity, according to two of the people. In meeting after meeting he has championed a new metric, “user active seconds,” to granularly measure how long people spent conversing with the chatbot, according to two of the people.

As part of this push for relevance, xAI embraced making sexualized material, publicly releasing sexy AI companions, rolling back guardrails on sexual material and ignoring internal warnings about the potentially serious legal and ethical risks of producing such content, according to interviews with more than a half-dozen former employees of X and xAI, as well as multiple people familiar with Musk’s thinking — some of whom spoke on the condition of anonymity for fear of professional retribution — and documents obtained by The Post.
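For what it's worth, a "user active seconds" metric like the one described above is easy to sketch. Assuming it means summing the gaps between consecutive messages in a session, ignoring gaps long enough to count as idle time (that definition and the 30-second cutoff are my guesses, not anything reported about xAI's actual metric), it might look like this:

```python
# Hypothetical sketch of a "user active seconds" engagement metric.
# The idle cutoff and the per-session model are assumptions for illustration.

IDLE_CUTOFF = 30  # seconds; a gap longer than this counts as idle, not engagement

def user_active_seconds(timestamps, idle_cutoff=IDLE_CUTOFF):
    """timestamps: sorted message times (in seconds) for one chat session."""
    active = 0
    for prev, curr in zip(timestamps, timestamps[1:]):
        gap = curr - prev
        if gap <= idle_cutoff:  # only short gaps count as active conversation
            active += gap
    return active

# A session with a long pause in the middle: only the short gaps count.
print(user_active_seconds([0, 5, 12, 600, 610]))  # 5 + 7 + 10 = 22
```

The point of a metric like this, versus raw session length, is that it rewards keeping the user actually talking rather than leaving a tab open.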
 
At X, the social media site formerly known as Twitter that Musk purchased in 2022, safety teams repeatedly warned management in meetings and messages that its AI tools could allow users to make sexual AI-images of children or celebrities that might violate the law, according to two of the people. Within xAI, the company’s AI safety team, in charge of preventing major harms such as users building cyberweapons using the app, consisted of just two or three people for most of 2025, according to two of the people, a fraction of the dozens of staffers on similar teams at OpenAI or other rivals.

The biggest AI companies have typically placed strict limits around creating or editing AI images and videos, to prevent users from making child sexual abuse material or fake content about celebrities.

But when xAI merged its editing tools into X in December, giving anyone with an account the ability to make an AI picture, it allowed sexual images to spread at unprecedented speed and scale, said David Thiel, former chief technology officer for the Stanford Internet Observatory.
 