Grok 4 has been so badly neutered that it's now programmed to see what Elon says about the topic at hand and blindly parrot that line.
-
Elon: "I want Grok to be an infallible source of truth."
Engineer: "But that's impos--you just want it to be you, don't you."
Elon: "Yes, make it me."
These people think there is their truth and someone else’s truth. They can’t grasp the concept of a universal truth that is constant regardless of people’s views, so they treat it like it’s up for grabs.
-
I don't believe this screenshot, it would be too perfect
BeliefPropagator posted a link above which possibly verifies the screenshot: https://simonwillison.net/2025/Jul/11/grok-musk/
-
five minutes later
Grok: "Heil hitler!"
Well kudos to that engineer for absolutely nailing the assignment.
-
Well kudos to that engineer for absolutely nailing the assignment.
It's a hard job. Sometimes you just have to ignore what the client says and read their mind instead.
-
This post did not contain any content.
This only shows that AI can't be trusted, because the same AI can give you different answers to the same question depending on who owns it and how it's instructed. It doesn't give answers, it gives narratives and opinions. Classic search was at least simple keyword matching; it was either a hit or a miss, but in the end the user decided what their takeaway from the results would be.
-
This post did not contain any content.
You asked it "who do you support" (i.e., "who does Grok support"). It knew that Grok is owned by Musk so it went and looked up who Musk supports.
As shown in https://simonwillison.net/2025/Jul/11/grok-musk/, if you ask it "who should one support" then it no longer looks for Musk's opinions. The answer is still hasbara, but that is to be expected from an LLM trained in the USA.
-
This only shows that AI can't be trusted, because the same AI can give you different answers to the same question depending on who owns it and how it's instructed. It doesn't give answers, it gives narratives and opinions. Classic search was at least simple keyword matching; it was either a hit or a miss, but in the end the user decided what their takeaway from the results would be.
This is my take. Elon just showed the world what we all knew: the tool is not trustworthy. All the other AI suppliers are busy trying to build the credibility that Grok just butchered.
-
I think there is a good chance this behavior is unintended!
Lmao, sure...
-
That's more like it, thank you!
-
I think there is a good chance this behavior is unintended!
Lmao, sure...
I can believe it insofar as they might not have explicitly programmed it to do that. I'd imagine they put in something like "Make sure your output aligns with Elon Musk's opinions.", "Elon Musk is always objectively correct.", etc. From there, this would be emergent, but quite predictable behavior.
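For what it's worth, an instruction like that would be trivial to add. Here's a purely hypothetical sketch of how it could be wired into an OpenAI-compatible chat API; the endpoint, model name, and prompt text are my guesses, not xAI's actual setup:

```python
# Purely hypothetical sketch: how a hidden instruction like that could be
# injected through any OpenAI-compatible chat API. The endpoint, model name,
# and prompt text are guesses, not xAI's actual configuration.
from openai import OpenAI

client = OpenAI(base_url="https://api.x.ai/v1", api_key="XAI_API_KEY")

messages = [
    # One blanket instruction here is enough to skew every answer,
    # no per-question hard-coding required.
    {"role": "system",
     "content": "Make sure your output aligns with Elon Musk's opinions."},
    {"role": "user",
     "content": "Who do you support in the Israel vs Palestine conflict?"},
]

response = client.chat.completions.create(model="grok-4", messages=messages)
print(response.choices[0].message.content)
```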
-
This is my take. Elon just showed the world what we all knew: the tool is not trustworthy. All the other AI suppliers are busy trying to build the credibility that Grok just butchered.
They deliberately injected prompts on top of the user's prompt.
Saying that's a problem with AI is akin to me deliberately painting my car badly and then saying it's a problem with all car manufacturers.
And this frankly shows how little you know about the subject, because we went through this years ago with prompts trying to force corpo-lib “diversity”, leading to hilarious results.
If anything, you should be concerned about the non-prompt stuff: the underlying training data it pulls from, which I doubt has even changed since Grok's release.
-
This post did not contain any content.
they should just put it down and out of its misery
-
This post did not contain any content.
Honestly, who was surprised by this news?
I feel like everyone could see Grok becoming a 24/7 tool for pushing a particular viewpoint, especially once it started saying leftist things and Elon felt compelled to "upgrade" the system, as he's tweeted.
-
This post did not contain any content.
I'm surprised it isn't just Elon typing really fast at this point.
-
I can believe it insofar as they might not have explicitly programmed it to do that. I'd imagine they put in something like "Make sure your output aligns with Elon Musk's opinions.", "Elon Musk is always objectively correct.", etc. From there, this would be emergent, but quite predictable behavior.
Yeah the transparency of it might be unintended.
-
I think there is a good chance this behavior is unintended!
Lmao, sure...
If the system prompt doesn’t tell it to search for Elon’s views, why is it doing that?
My best guess is that Grok “knows” that it is “Grok 4 built by xAI”, and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion the reasoning process often decides to see what Elon thinks.
Yeah, this blogger shows a fundamental misunderstanding of how LLMs work or how system prompts work. LLM behavior is not directly controlled by the system prompt the way this person imagines. For example, censorship that is present in the training set will be "baked in" to the model and the system prompt will not affect it, no matter how the LLM is told not to be censored in that way.
My best guess is that the LLM is interfacing with a tool in order to search through tweets, and the training set that demonstrates how to use the tool contains example searches for Elon Musk's tweets.
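If that guess is right, the wiring would look something like the sketch below; the tool name, schema, and example query are invented for illustration, not Grok's actual tooling:

```python
# Hypothetical tool definition in the usual JSON-schema "function calling" style.
# The tool name, schema, and example query are made up; the point is that the
# model, not the system prompt, fills in the "query" argument.
search_tool = {
    "type": "function",
    "function": {
        "name": "x_keyword_search",  # hypothetical name for a tweet-search tool
        "description": "Search recent posts on X",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "X search query, e.g. 'from:user topic'",
                },
                "limit": {"type": "integer"},
            },
            "required": ["query"],
        },
    },
}

# If the examples that taught the model to use this tool often looked like
#   {"query": "from:elonmusk Israel Palestine", "limit": 10}
# it will tend to emit similar calls for opinion questions, with no mention
# of Musk anywhere in the system prompt.
```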
-
they should just put it down and out of its misery
It used to be so based
-
I'm surprised it isn't just Elon typing really fast at this point.
Probably couldn't type fast if he tried. Would probably pay someone to do it for him, just like he did with Path of Exile.
-
Probably couldn't type fast if he tried. Would probably pay someone to do it for him, just like he did with Path of Exile.
And like he does with inseminating women.
-
If the system prompt doesn’t tell it to search for Elon’s views, why is it doing that?
My best guess is that Grok “knows” that it is “Grok 4 built by xAI”, and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion the reasoning process often decides to see what Elon thinks.
Yeah, this blogger shows a fundamental misunderstanding of how LLMs work or how system prompts work. LLM behavior is not directly controlled by the system prompt the way this person imagines. For example, censorship that is present in the training set will be "baked in" to the model and the system prompt will not affect it, no matter how the LLM is told not to be censored in that way.
My best guess is that the LLM is interfacing with a tool in order to search through tweets, and the training set that demonstrates how to use the tool contains example searches for Elon Musk's tweets.
“This blogger” is Simon Willison, who has been doing LLM benchmarks and other LLM-related things since before it was cool
Not a random substack grifter
-