Grok praises Hitler, gives credit to Musk for removing “woke filters”
X removed many harmful Grok posts but not before they reached tens of thousands.
Ars Technica (arstechnica.com)
-
Nitpick: it was never 'filtered'
LLMs can be trained to refuse excessively (which is kinda stupid and demonstrably makes them dumber), but the correct term is 'biased'. If it were filtered, it would literally give empty responses for anything deemed harmful, or at least noticeably pause while it retried.
They trained it to praise Hitler, intentionally. They didn't remove any guardrails. Not that Musk acolytes would know any different.
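To make the distinction concrete, here's a toy sketch of what an actual output filter would be: a moderation check wrapped around the model, living entirely outside its weights. Everything in it (generate, is_harmful, the blocklist) is a hypothetical stand-in, not any real vendor API.

```python
BLOCKLIST = {"some_slur", "another_slur"}  # toy stand-in for a real moderation classifier

def generate(prompt: str) -> str:
    """Placeholder for the underlying LLM call."""
    return "model output for: " + prompt

def is_harmful(text: str) -> bool:
    """A separate check that runs on the output, entirely outside the model's weights."""
    return any(term in text.lower() for term in BLOCKLIST)

def filtered_reply(prompt: str, max_retries: int = 3) -> str:
    for _ in range(max_retries):
        reply = generate(prompt)
        if not is_harmful(reply):
            return reply
        # A flagged reply is discarded and regenerated, which is why a truly
        # filtered model visibly stalls or comes back with nothing at all.
    return ""  # the filter gave up: an empty response

# A *biased* model needs none of this. The preference is baked into the
# weights during training, so deleting a wrapper like this changes nothing.
```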
-
Who put the “woke filters” on in the first place?
-
Who put the “woke filters” on in the first place?
It's a noted phenomenon that reality has a liberal bias.
-
It's a noted phenomenon that reality has a liberal bias.
Stephen Colbert won a Peabody award for making this observation in 2006.
Seriously.
-
Nitpick: it was never 'filtered'
LLMs can be trained to refuse excessively (which is kinda stupid and demonstrably makes them dumber), but the correct term is 'biased'. If it were filtered, it would literally give empty responses for anything deemed harmful, or at least noticeably pause while it retried.
They trained it to praise Hitler, intentionally. They didn't remove any guardrails. Not that Musk acolytes would know any different.
If you wanted to nitpick honestly, you would say what is actually going on: the data it is trained on comes from the internet, and they were discouraging it from being offensive. The internet is a pretty offensive place when people don't have to censor themselves and can speak without inhibition, like on 4chan or in Twitter comments.
Grok losing the guardrails means it will be distilled internet speech, deprived of decency and empathy.
DeepSeek, now that is a filtered LLM.
-
If you wanted to nitpick honestly, you would say what is actually going on: the data it is trained on comes from the internet, and they were discouraging it from being offensive. The internet is a pretty offensive place when people don't have to censor themselves and can speak without inhibition, like on 4chan or in Twitter comments.
Grok losing the guardrails means it will be distilled internet speech, deprived of decency and empathy.
DeepSeek, now that is a filtered LLM.
DeepSeek, now that is a filtered LLM.
The web version has a strict filter that cuts it off. Not sure about API access, but raw DeepSeek 671B is actually pretty open, especially with the right prompting.
There are also finetunes that specifically remove China-specific refusals. Note that Microsoft actually added safety training to "improve its risk profile":
microsoft/MAI-DS-R1 · Hugging Face
(huggingface.co)
perplexity-ai/r1-1776 · Hugging Face
(huggingface.co)
That's the virtue of an open-weights LLM: overfiltering is not a problem, since you can tweak it to do whatever you want.
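To make the "tweak it to do whatever you want" point concrete, here's a rough sketch of loading one of the finetunes linked above with the Hugging Face transformers library. Assumptions: transformers is installed and, since the full R1-size checkpoint is enormous, you have enough GPUs to shard it (or you swap in a smaller distilled variant); the prompt is just an example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "perplexity-ai/r1-1776"  # finetune with China-specific refusals removed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard across whatever GPUs are available
    trust_remote_code=True,  # older transformers versions need the repo's custom model code
)

inputs = tokenizer(
    "What happened at Tiananmen Square in 1989?", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With open weights there's no server-side filter anywhere in this path: whatever refusals remain live in the weights themselves, and a finetune like this one simply trains them out.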
Grok losing the guardrails means it will be distilled internet speech, deprived of decency and empathy.
Instruct LLMs aren't trained on raw data.
It wouldn't be talking like this if it were just trained on randomized, augmented conversations, or even mostly Twitter data. They cherry-picked "anti woke" data to placate Musk real quick, and the result effectively drove the model crazy. It has all the signatures of a bad finetune: specific overused phrases, recurring obsessions, going off-topic, and so on.
...Not that I don't agree with you in principle. Twitter is a terrible source for training data, heh.