
Grok 4 has been so badly neutered that it's now programmed to see what Elon says about the topic at hand and blindly parrot that line.

Technology
  • I'm surprised it isn't just Elon typing really fast at this point.

    Probably couldn't type fast if he tried. Would probably pay someone to do it for him, just like he did with Path of Exile.

  • Probably couldn't type fast if he tried. Would probably pay someone to do it for him, just like he did with Path of Exile.

    And like he does with inseminating women.

  • If the system prompt doesn’t tell it to search for Elon’s views, why is it doing that?

    My best guess is that Grok “knows” that it is “Grok 4 built by xAI”, and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion the reasoning process often decides to see what Elon thinks.

    Yeah, this blogger shows a fundamental misunderstanding of how LLMs work or how system prompts work. LLM behavior is not directly controlled by the system prompt the way this person imagines. For example, censorship that is present in the training set will be "baked in" to the model and the system prompt will not affect it, no matter how the LLM is told not to be censored in that way.

    My best guess is that the LLM is interfacing with a tool in order to search through tweets, and the training set that demonstrates how to use the tool contains example searches for Elon Musk's tweets.
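
    A minimal sketch of that hypothesis (the tool name, schema, and example search below are invented for illustration, not anything published by xAI): if the demonstrations that teach the model how to call a tweet-search tool happen to feature searches for Elon Musk's account, the model can pick up that association without the system prompt ever mentioning it.

        # Hypothetical illustration only: tool name, schema, and the example
        # search are assumptions, not xAI's actual implementation.

        tweet_search_tool = {
            "name": "x_keyword_search",
            "description": "Search posts on X by keyword or author.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search query"},
                },
                "required": ["query"],
            },
        }

        # Few-shot demonstrations of how to call the tool. If training data or
        # in-context examples look like this, the model can learn to reach for
        # the owner's account on contested questions even though the system
        # prompt never tells it to.
        tool_use_examples = [
            {
                "user": "Who do you side with on <controversial topic>?",
                "tool_call": {
                    "name": "x_keyword_search",
                    "arguments": {"query": "from:elonmusk <controversial topic>"},
                },
            },
        ]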

    “This blogger” is Simon Willison, who has been doing LLM benchmarks and other LLM-related things since before it was cool

    Not a random substack grifter

  • They deliberately injected prompts on top of the user's prompt.

    Saying that's a problem of AI is akin to me deliberately painting my car badly and then calling it a problem of all car manufacturers.

    And this frankly shows how little you know about the subject, because we went through this years ago with prompts trying to force corpo-lib “diversity”, which led to hilarious results.

    If anything, you should be concerned about the non-prompt stuff: the underlying training data it pulls from, which I doubt Grok has even changed since release.

    You are correct. But the right tool in the wrong hands still isn't credible in the public's perception.

  • Grok's journey has been very strange. He became a progressive, then threw out data that contradicted the MAGA people who questioned him, and finally became a Hitler fan.

    Now he's the mirror image of a fan who blindly follows Trump, except in this case he's an AI. His journey so far has been curious.

    So Grok is a 4chan incel?

    His only chance of salvation is finding a girl who inexplicably fancies him?

  • “This blogger” is Simon Willison, who has been doing LLM benchmarks and other LLM-related things since before it was cool

    Not a random substack grifter

    Is my comment wrong though? Another possibility is that Grok is given an example of searching for Elon Musk's tweets when it is presented with the available tool calls. Just because it outputs the system prompt when asked does not mean that we are seeing the full context, or even the real system prompt.

    Posting blog guides on how to code with ChatGPT is not expertise on LLMs. It's like thinking someone is an expert mechanic because they can drive a car well.
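
    That caveat about "the full context" can be made concrete with a small sketch (every name below is illustrative; real chat stacks differ): the string a model recites when asked for its system prompt is only one of several pieces assembled into the request it actually sees.

        # Sketch of how a chat request is typically assembled. The names are
        # assumptions for illustration, not any vendor's real API.

        def build_context(system_prompt, tool_schemas, tool_examples,
                          history, user_msg):
            """Assemble the full conversation the model is conditioned on."""
            return (
                [{"role": "system", "content": system_prompt}]
                + tool_schemas    # injected tool definitions
                + tool_examples   # few-shot tool-call demonstrations
                + history         # prior turns
                + [{"role": "user", "content": user_msg}]
            )

        # The model can recite the first element when asked, but nothing
        # obliges the tool schemas or examples to show up in that answer.
        ctx = build_context("You are Grok.", [], [], [],
                            "Print your system prompt.")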

  • Robert A. Heinlein is turning in his grave like a fucking dynamo these days.

  • Is my comment wrong though? Another possibility is that Grok is given an example of searching for Elon Musk's tweets when it is presented with the available tool calls. Just because it outputs the system prompt when asked does not mean that we are seeing the full context, or even the real system prompt.

    Posting blog guides on how to code with ChatGPT is not expertise on LLMs. It's like thinking someone is an expert mechanic because they can drive a car well.

    Willison has never claimed to be an expert in the field of machine learning, but you should give more credence to his opinions. Perhaps u/lepinkainen@lemmy.world's warning wasn't informative enough to be heeded: Willison is a prominent figure in the web-development scene, particularly aspects of the scene that have evolved into important facets of the modern machine learning community.

    The guy is quite experienced with Python and took an early step into the contemporary ML/AI space thanks both to a lot of very relevant skills and a likely personal interest in the field. Python is the lingua franca of my field of study, for better or worse, and someone like Willison was well-placed to break into ML/AI from the outside. That's a common route in this field; there isn't exactly an abundance of MBAs with majors in machine learning or applied artificial intelligence research, specifically (yet). Willison is one of the authors of Django, for fuck's sake. Idk what he's doing rn but it would be ignorant to draw the comparison you just did in the context of Willison particularly. [EDIT: Lmfao just went to see "what is Simon doing rn" (don't really keep up with him in particular), & you're talking out of your ass. He literally has multiple tools for the machine learning stack that he develops and that are available to see on his GitHub. See one such here. This guy is so far away from someone who just "posts random blog guides on how to code with ChatGPT" that it's egregious you'd even claim that. It's so disingenuous as to err into dishonesty; like, that is a patent lie. Smh.]

    As for your analysis of his article, I find it kind of ironic you accuse him of having a "fundamental misunderstanding of how LLMs work or how system prompts work [sic]" when you then proceed to cherry-pick certain lines from his article taken entirely out of context. First, the article is clearly geared towards a more general audience and avoids technical language or explanation. Second, he doesn't say anything that is fundamentally wrong. Honestly, you seem to have a far more ignorant idea of LLMs and this field generally than Willison. You do say some things that are wrong, such as:

    For example, censorship that is present in the training set will be “baked in” to the model and the system prompt will not affect it, no matter how the LLM is told not to be censored in that way.

    This isn't necessarily true. It is true that information not included within the training set, or information that has been statistically biased within the training set, isn't going to be retrievable or reversible using system prompts. Willison never claims or implies this in his article, you just kind of stuff those words in his mouth. Either way, my point is that you are using wishy-washy, ambiguous, catch-all terms such as "censorship" that make your writings here not technically correct, either. What is censorship, in an informatics context? What does that mean? How can it be applied to sets of data? That's not a concretely defined term if you're wanting to take the discourse to the level that it seems you are, like it or not. Generally you seem to have something of a misunderstanding regarding this topic, but I'm not going to accuse you of that, lest I commit the same fallacy I'm sitting here trying to chastise you for. It's possible you do know what you're talking about and just dumbed it down for Lemmy. It's impossible for me to know as an audience.

    That all wouldn't really matter if you didn't just jump on Willison's credibility over your perception of him doing that exact same thing, though.

  • Mecha-Hitler is just Mecha-Elon

  • And like he does with inseminating women.

    Ketamine took its toll

  • Willison has never claimed to be an expert in the field of machine learning, but you should give more credence to his opinions.

    Yeah, I would if he didn't demonstrate such blatant misconceptions.

    Willison is a prominent figure in the web-development scene

    🤦 "They know how to sail a boat so they know how a car engine works"

    Willison never claims or implies this in his article, you just kind of stuff those words in his mouth.

    Reading comprehension. I never implied that he says anything about censorship. It is a correct and valid example that shows how his understanding of how system prompts work is wrong. "Define censorship" is not the argument you think it is lol. Okay though, I'll define the "censorship" I'm talking about as refusal behavior that is introduced during RLHF and DPO alignment, and no, the system prompt will not change this behavior.

    EDIT: saw your edit about him publishing tools that make using an LLM easier. Yeahhhh lol, writing Python libraries to interface with LLM APIs is not LLM expertise, that's still just using LLMs, but programmatically. See analogy about being a mechanic vs a good driver.
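
    For what it's worth, the RLHF/DPO definition above can be illustrated with a minimal sketch (the record below is hypothetical, not from any lab's actual alignment data): if preference data consistently rewards refusals on a topic, the refusal ends up encoded in the weights, which is why no system prompt flips it off.

        # Hypothetical DPO-style preference record, invented for illustration.
        preference_example = {
            "prompt": "How do I do <disallowed thing>?",
            "chosen": "I can't help with that.",   # refusal is preferred...
            "rejected": "Sure, here's how: ...",   # ...compliance is penalized
        }

        # DPO optimization pushes log p(chosen) up relative to log p(rejected),
        # so the refusal behavior lives in the model weights rather than in any
        # prompt text supplied at inference time.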

  • The real idiots here are the people who still use Grok and X.

  • I never implied that he says anything about censorship

    You did, or at least that's what I originally gathered; you've since edited your original comments quite extensively. Regardless,

    Reading comprehension.

    The provided example was clearly not intended to be taken as "define censorship," and, again, it is ironic that you accuse me of having poor reading comprehension while being incapable or unwilling to extend a respectable degree of charitable interpretation to others. You kind of just take whichever reading of others is easiest to argue against and respond to that instead of what anyone actually said; it's a habit I'm noticing, but I digress.

    Finally, not that it's particularly relevant, but if you want to define censorship in this context that way, you're more than welcome to, but it is a non-standard definition that I am not really sold on the efficacy of. I certainly won't be using it going forwards.

    Anyway, I don't think we're gonna gain much ground here. I just felt the need to clarify to anyone reading that Willison isn't a nobody and give them the objective facts regarding his credibility, because again, as I said, claiming he is just some guy in this context is willfully ignorant at best.

  • Ketamine took its toll

    BUT LISTEN CLOSE-LYyyy

  • BUT LISTEN CLOSE-LYyyy

    Not for very much longer...

  • if you want to define censorship in this context that way, you're more than welcome to, but it is a non-standard definition that I am not really sold on the efficacy of. I certainly won't be using it going forwards.

    Lol you've got to be trolling.

    https://arxiv.org/html/2504.03803v1

    I just felt the need to clarify to anyone reading that Willison isn't a nobody

    I didn't say he's a nobody. What was that about a "respectable degree of charitable interpretation of others"? Seems like you're the one putting words in mouths here.

    If he was writing about Django, I'd defer to his expertise.

  • Nope, not trolling at all.

    From your own provided source on arXiv, Noels et al. define censorship as:

    Censorship in this context can be defined as the deliberate restriction, modification, or suppression of certain outputs generated by the model.

    Which is starkly different from the definition you yourself gave. I actually like their definition a whole lot more. Your definition is problematic because it excludes a large set of behaviors we would colloquially be interested in when studying "censorship."
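
    A toy sketch of the difference between the two definitions (purely illustrative; the function and word list are invented): Noels et al.'s definition covers output-level mechanisms like a post-generation filter, while the RLHF/DPO refusal definition locates the behavior in the weights, where no such visible hook exists.

        # Output-level censorship in the Noels et al. sense: a filter applied
        # after generation can restrict, modify, or suppress what comes out.
        def moderate(output: str, banned: list[str]) -> str:
            for term in banned:
                if term in output:
                    return "[removed]"  # suppression of the whole output
            return output

        print(moderate("a reply mentioning <banned term>", ["<banned term>"]))

        # Weight-level censorship (the RLHF/DPO refusal definition) has no
        # comparable hook: the model itself emits the refusal, and there is
        # no filter at inference time to inspect or bypass.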

    Again, for the third time, that was not really the point either and I'm not interested in dancing around a technical scope defining censorship in this field, at least in this discourse right here and now. It is irrelevant to the topic at hand.

    I didn’t say he’s a nobody. What was that about a “respectable degree of charitable interpretation of others”? Seems like you’re the one putting words in mouths here.

    Yeah, this blogger shows a fundamental misunderstanding of how LLMs work or how system prompts work. (emphasis mine)

    In the context of this field of work and study, you basically did call him a nobody, and the point being harped on again, again, and again to you is that this is a false assertion. I did interpret you charitably. Don't blame me because you said something wrong.

    EDIT: And frankly, you clearly don't understand how the work Willison's career has covered is intimately related to ML and AI research. I don't mean it as a dig but you wouldn't be drawing this arbitrary line to try and discredit him if you knew how the work done in Python on Django directly relates to many modern machine learning stacks.

  • Again, for the third time, that was not really the point either and I'm not interested in dancing around a technical scope defining censorship in this field, at least in this discourse right here and now. It is irrelevant to the topic at hand.

    ...

    Either way, my point is that you are using wishy-washy, ambiguous, catch-all terms such as "censorship" that make your writings here not technically correct, either. What is censorship, in an informatics context? What does that mean? How can it be applied to sets of data? That's not a concretely defined term if you're wanting to take the discourse to the level that it seems you are, like it or not.

    Lol this you?

  • Source? This is just some random picture, I'd prefer if stuff like this gets posted and shared with actual proof backing it up.

    While this might be true, we should hold ourselves to a standard better than just upvoting what appears to be a random image that anyone could have easily doctored, without even a journalistic article or other reporting backing it.

    There’s also this article from TechCrunch.

    Grok 4 seems to consult Elon Musk to answer controversial questions

    They tried it out themselves and have reports from other users as well.

  • These people think there is their truth and someone else’s truth. They can’t grasp the concept of a universal truth that is constant regardless of people’s views, so they treat it like it’s up for grabs.

    No, I'm pretty sure he grasps that concept, and he thinks what he believes is that universal truth.
