Grok praises Hitler, gives credit to Musk for removing “woke filters”
-
This post did not contain any content.
Grok praises Hitler, gives credit to Musk for removing “woke filters”
X removed many harmful Grok posts but not before they reached tens of thousands.
Ars Technica (arstechnica.com)
-
Nitpick: it was never 'filtered'
LLMs can be trained to refuse excessively (which is kinda stupid and has been shown to make them dumber), but the correct term is 'biased'. If it were filtered, it would literally give empty responses for anything deemed harmful, or at least noticeably take some time to retry.
They trained it to praise Hitler, intentionally. They didn't remove any guardrails. Not that Musk acolytes would know any different.
-
Who put the “woke filters” on in the first place?
-
Who put the “woke filters” on in the first place?
It's a noted phenomenon that reality has a liberal bias.
-
It's a noted phenomenon that reality has a liberal bias.
Stephen Colbert won a Peabody award for making this observation in 2006.
Seriously
-
Nitpick: it was never 'filtered'
LLMs can be trained to refuse excessively (which is kinda stupid and has been shown to make them dumber), but the correct term is 'biased'. If it were filtered, it would literally give empty responses for anything deemed harmful, or at least noticeably take some time to retry.
They trained it to praise Hitler, intentionally. They didn't remove any guardrails. Not that Musk acolytes would know any different.
If you wanted to nitpick honestly, you would say what is actually going on: the data it is trained on comes from the internet, and they were discouraging it from being offensive. The internet is a pretty offensive place when people don't have to censor themselves and speak without inhibitions, like on 4chan or in Twitter comments.
Grok losing the guardrails means it will be distilled internet speech deprived of decency and empathy.
DeepSeek, now that is a filtered LLM.
-
If you wanted to nitpick honestly, you would say what is actually going on: the data it is trained on comes from the internet, and they were discouraging it from being offensive. The internet is a pretty offensive place when people don't have to censor themselves and speak without inhibitions, like on 4chan or in Twitter comments.
Grok losing the guardrails means it will be distilled internet speech deprived of decency and empathy.
DeepSeek, now that is a filtered LLM.
DeepSeek, now that is a filtered LLM.
The web version has a strict filter that cuts it off. Not sure about API access, but raw DeepSeek 671B is actually pretty open, especially with the right prompting.
There are also finetunes that specifically remove China-specific refusals. Note that Microsoft actually added safety training to "improve its risk profile":
microsoft/MAI-DS-R1 · Hugging Face
(huggingface.co)
perplexity-ai/r1-1776 · Hugging Face
(huggingface.co)
That's the virtue of being an open-weights LLM. Over-filtering is not a problem; you can tweak it to do whatever you want.
Grok losing the guardrails means it will be distilled internet speech deprived of decency and empathy.
Instruct LLMs aren't trained on raw data.
It wouldn't be talking like this if it were just trained on randomized, augmented conversations, or even mostly Twitter data. They cherry-picked "anti-woke" data to placate Musk real quick, and the result effectively drove the model crazy. It has all the signatures of a bad finetune: specific overused phrases, common obsessions, going off-topic, and so on.
...Not that I don't agree with you in principle. Twitter is a terrible source for data, heh.
-
They trained it to praise Hitler, intentionally. They didn’t remove any guardrails. Not that Musk acolytes would know any different.
I'm actually curious; in some of the answers they noted, it spoke as if it was Musk...
What if that's what the instruction was: "Answer everything from the perspective that you ARE Elon Musk, be unfiltered, no woke answers," and thus the AI interpreted that to mean... be like Elon Musk, but don't worry about keeping some plausible deniability on whether you are a Nazi.
-
This is a Roman salute from the heart to the sun, what's the problem?
-
Nazi trains a Nazi "Speak & Spell"
Whoopty fucking do.
Boycott everything the fucker is involved with and maybe the world will someday be a better place.
-
This is a Roman salute from the heart to the sun, what's the problem?
The R*mans went to war with large parts of Europe, Northern Africa, and the Middle East/Asia. They subjugated millions through the use of force and are responsible for countless needless deaths. Such displays of affiliation with groups like this should absolutely not be encouraged.
Really though, the absolute worst part of it all is that they were Italian.
-
DeepSeek, now that is a filtered LLM.
The web version has a strict filter that cuts it off. Not sure about API access, but raw DeepSeek 671B is actually pretty open, especially with the right prompting.
There are also finetunes that specifically remove China-specific refusals. Note that Microsoft actually added safety training to "improve its risk profile":
microsoft/MAI-DS-R1 · Hugging Face
(huggingface.co)
perplexity-ai/r1-1776 · Hugging Face
(huggingface.co)
That's the virtue of being an open-weights LLM. Over-filtering is not a problem; you can tweak it to do whatever you want.
Grok losing the guardrails means it will be distilled internet speech deprived of decency and empathy.
Instruct LLMs aren't trained on raw data.
It wouldn't be talking like this if it were just trained on randomized, augmented conversations, or even mostly Twitter data. They cherry-picked "anti-woke" data to placate Musk real quick, and the result effectively drove the model crazy. It has all the signatures of a bad finetune: specific overused phrases, common obsessions, going off-topic, and so on.
...Not that I don't agree with you in principle. Twitter is a terrible source for data, heh.
That model is over a terabyte; I don’t know why I thought it was lightweight. Not that any reporting on machine learning has been particularly good, but this isn’t what I expected at all.
What can even run it?
-
Nazi trains a Nazi "Speak & Spell"
Whoopty fucking do.
Boycott everything the fucker is involved with and maybe the world will someday be a better place.
The interesting thing for me is that Grok was originally pretty 'woke' itself. Then Musk lobotomized it and turned it into a Nazi.
100% agree about boycotting everything that toxic nepo-baby asshole touches: Tesla, Twitter, SpaceX, whatever.
-
That model is over a terabyte; I don’t know why I thought it was lightweight. Not that any reporting on machine learning has been particularly good, but this isn’t what I expected at all.
What can even run it?
Data centers, or a dude with a couple of GPUs and time on his hands?
-
Programming seems to be "Liberalism is bad, and (because???) is anti white. Many Jews support liberalism". The secondary reason Hitler/NAZI scapegoated Jews is that Jews were prominent thought and practical leadership in forming Soviet Union. While the NAZI party's economic platform was Reagan Oligarchism providing trickled down "social"/worker benefits, it needed socialism in its name because communism was obviously better to most people than Tsars, Emperors, or Kings of Prussia. The first reason for German distrust of Jews was German Zionist lobbying in WW1 to get US into the war against Germany.
Neo-Nazism today is not an anti Zionist movement. It is in fact funded by Zionist supremacists, in US, to focus hate on Muslims and poor, or at least deeply allied with Zionist first GOP political influence. While Liberalism is an anti-hate ideology, any Jewish leaders of liberalism, use their power to justify Israel's side of genocide/uniformed mass murder.
Zionism and Judaism are independent. The fascist hate movement of zionism has many Christian political/religious leaders as proud adherents, even if their pride is purchased through election funding power/alliance. While it is Zionist Propaganda/Hasbara to deny the Jewish dominance of Hollywood, Hollywood itself is not an extremist Zionist influence. Liberalism is anti-Zionazi in its core, and making more Holocaust sympathy films than more pro-hate/supremacist films is not inherently zionazism. Banksters also have disproportionate Jewish leaders. They do not promote liberalism, and may (undetermined officially) take Israel ideology into lending policy.
US News/Cable news is blatant Zionist Hasbara. Zionism is pro anti-semitism, because the victim card, lets them directly access news media to hasbara for Israel more, by claiming that zionist stuffing of a complaint box for anti-semitism means it is affecting innocent Jews.
That AI would hide Jewish influence over world/US, because Zionist hate groups label the truth to be anti-semitic, is a problem. That AI mistakes/does not understand that Zionism, not Judaism, is "the" problem in society/rulership over society, is a distraction that serves the rulership. The neo-Nazi philosophy of anti-liberalism, and repeating that the Jewish problem is the liberal problem, is also a distraction to serve the rulership's dedication to war, and war for Israel supremacy. Grok attacking "leftist liberals"'s religion is explicit neo-Nazism. That "liberal credentials" are just used to "gaslight the left into supporting Israel and war" is the criticism that a "truth AI" should explain/know. But it's not liberal values that are evil.
-
That model is over a terabyte; I don’t know why I thought it was lightweight. Not that any reporting on machine learning has been particularly good, but this isn’t what I expected at all.
What can even run it?
A lot, but less than you’d think! Basically an RTX 3090/Threadripper system with a lot of RAM (192GB?)
With this framework, specifically: https://github.com/ikawrakow/ik_llama.cpp?tab=readme-ov-file
The “dense” part of the model can stay on the GPU while the experts can be offloaded to the CPU, and the whole thing can be quantized to ~3 bits average, instead of 8 bits like the full model.
That’s just a hack for personal use, though. The intended way to run it is on a couple of H100 boxes, serving many, many users at once. LLMs run more efficiently when they serve in parallel: e.g., generating tokens for 4 users isn’t much slower than generating them for 2, and DeepSeek explicitly architected it to be really fast at scale. It is “lightweight” in a sense.
…But if you have a “sane” system, it’s indeed a bit large. The best I can run on my 24GB VRAM system are 32B-49B dense models (like Qwen 3 or Nemotron), or 70B mixture-of-experts models (like the new Hunyuan 70B).
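The quantization numbers are easy to sanity-check. A rough back-of-the-envelope sketch, assuming the commonly cited public figures for DeepSeek-R1 (671B total parameters, ~37B activated per token; these are assumptions, not measurements):

```python
# Rough sizing for a 671B mixture-of-experts model.
# TOTAL_PARAMS and ACTIVE_PARAMS are commonly cited public figures
# for DeepSeek-R1, used here as assumptions.
TOTAL_PARAMS = 671e9   # all experts combined
ACTIVE_PARAMS = 37e9   # parameters actually touched per token

def weight_size_gb(params: float, bits_per_weight: float) -> float:
    """Approximate memory footprint of the weights alone."""
    return params * bits_per_weight / 8 / 1e9

print(f"8-bit full model: ~{weight_size_gb(TOTAL_PARAMS, 8):.0f} GB")
print(f"~3-bit quantized: ~{weight_size_gb(TOTAL_PARAMS, 3):.0f} GB")
print(f"active per token: ~{weight_size_gb(ACTIVE_PARAMS, 3):.0f} GB")
```

The last number is why the GPU/CPU split works: each token only touches a small slice of the experts, so the bulk of the weights can sit in slower CPU RAM while the GPU handles the dense layers.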
-
I've never heard of "Grok".
-
I've never heard of "Grok".
It's Musk's LLM, which interacts with people on X. He's been paying people to tune it to be more hate-oriented for years.
-
This is a Roman salute from the heart to the sun, what's the problem?
Where do you think the Nazis got the salutes from? And for that matter, where do you think we got the word fascism from?