Elon Musk wants to rewrite "the entire corpus of human knowledge" with Grok
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
What the fuck? This is so unhinged. Genuine question: is he actually this dumb, or is he just saying complete bullshit to boost stock prices?
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
Fuck Elon Musk
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
Yes! We should all wholeheartedly support this GREAT INNOVATION! There is NOTHING THAT COULD GO WRONG, so this will be an excellent step to PERMANENTLY PERFECT this WONDERFUL AI.
-
That's not how knowledge works. You can't just have an LLM hallucinate to fill in the gaps in knowledge and call it good.
SHH!! Yes you can, Elon! Recursively training your model on itself definitely has NO DOWNSIDES
-
We will take the entire library of human knowledge, cleanse it, and ensure our version is the only record available.
The only comfort I have is knowing that anything that is true can be relearned by observing reality through the lens of science, which is itself reproducible from observing how we observe reality.
Have some more comfort
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
Huh. I'm not sure if he's understood the alignment problem quite right.
-
What the fuck? This is so unhinged. Genuine question: is he actually this dumb, or is he just saying complete bullshit to boost stock prices?
my guess is yes.
-
The thing that annoys me most is that there have been studies done on LLMs showing that, when trained on their own output, they produce increasingly noisy output.
Sources (unordered):
- What is model collapse?
- AI models collapse when trained on recursively generated data
- Large Language Models Suffer From Their Own Output: An Analysis of the Self-Consuming Training Loop
- Collapse of Self-trained Language Models
Whatever nonsense Muskrat is spewing is factually incorrect. He won't be able to successfully retrain any model on generated content. At least, not an LLM, if he wants a successful product. If anything, he will be producing a model that is heavily trained on censored datasets.
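To make the cited effect concrete, here is a minimal toy sketch of a self-consuming training loop (a categorical distribution repeatedly refit on its own samples; purely illustrative, not the actual setup of any of the papers above or of any real LLM pipeline):

```python
# Toy illustration of model collapse: each "generation" fits a categorical
# distribution to samples drawn from the previous generation's fit.
# Any token that happens to get zero samples vanishes permanently, so
# diversity only ever shrinks -- the tails of the data are forgotten.
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 100
probs = np.full(vocab_size, 1 / vocab_size)  # generation 0: uniform "real" data

for generation in range(10):
    samples = rng.choice(vocab_size, size=200, p=probs)  # the model's "output"
    counts = np.bincount(samples, minlength=vocab_size)
    probs = counts / counts.sum()                        # retrain on that output
    surviving = int((probs > 0).sum())
    print(f"gen {generation}: {surviving}/{vocab_size} tokens still have nonzero probability")
```

The surviving count only ever goes down: once a token draws zero samples it can never come back, which is the tail-forgetting behaviour the model collapse papers formalize.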
i think musk is annoying and a bad person but everyone responding with these papers is being disingenuous because it’s
- a solved problem at this point,
- clearly not what musk is planning on doing and
- you guys who post these studies misunderstand what the model collapse papers actually say and either haven’t read them yourself or just read the abstract and saw “AI bad” then ran with it bc it made easy sense with your internal monologue. if you’re wondering what these papers all actually imply… go read them! they’re actually, surprise, very interesting! if you’ve already read the sources linked in these comment chains then… you understand why they’re not particularly relevant here and wouldn’t cite them!! like ffs your sources are all “unordered” not because it’d be too much work but because you just went out and found things that vaguely sound like they corroborate what you’re saying and you don’t actually know how you’d order them
idk why people seem to think oligarchs would be dumb enough to invest billions into something and miss some very obvious and widely publicized “gotcha”… that would be fucking stupid and they know that just as well as you?? people get really caught up on the schadenfreude of “haha look at the dumb rich people” without taking a moment to stop and think “wait, does this make sense in the first place?”
it’s why people circulate these machine learning papers so confidently with incorrect quips/opinions attached, it’s why when people do interact with these papers they misunderstand them on a fundamental level, and it’s why our society is collectively regressing like it’s 1799. guys i get your brain gives you dopamine to dunk on people but don’t do it at the price of your agency and rational ability.
-
I read about this in a popular book by some guy named Orwell
Wasn't he the children's author who published the book about talking animals learning the value of hard work or something?
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
remember when grok called e*on and t**mp a nazi? good times
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
Dude wants to do a lot of things and fails to accomplish what he says he's going to do, or ends up half-assing it. So let him take Grok and run it right into the ground like an autopiloted Cybertruck rolling over into a flame trench of an exploding Starship rocket still on the pad, shooting flames out of tunnels made by the Boring Company.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
Lol turns out elon has no fucking idea about how llms work
-
The thing that annoys me most is that there have been studies done on LLMs showing that, when trained on their own output, they produce increasingly noisy output.
Sources (unordered):
- What is model collapse?
- AI models collapse when trained on recursively generated data
- Large Language Models Suffer From Their Own Output: An Analysis of the Self-Consuming Training Loop
- Collapse of Self-trained Language Models
Whatever nonsense Muskrat is spewing is factually incorrect. He won't be able to successfully retrain any model on generated content. At least, not an LLM, if he wants a successful product. If anything, he will be producing a model that is heavily trained on censored datasets.
It's not so simple; there are papers on zero-data 'self play' and other schemes for using other LLMs' output.
Distillation is probably the only one you'd want for a pretrain, specifically.
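For reference, distillation in that sense means training a student model to match a teacher model's soft output distribution rather than its sampled text, which avoids training directly on hard, possibly hallucinated generations. A minimal sketch, assuming PyTorch and placeholder logits (nothing here reflects xAI's actual setup):

```python
# Knowledge-distillation loss sketch: the student is pushed toward the
# teacher's softened (temperature-scaled) token distribution via KL divergence.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then penalize divergence from the teacher.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Dummy example: batch of 4 positions over a 10-token vocabulary.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```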
-
And again: read my reply. I refuted this idiotic take.
You allowed yourselves to be dumbed down to this point.
You had started to make a point; now you are just being a dick.
-
Wasn't he the children's author who published the book about talking animals learning the value of hard work or something?
The very one!
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
So they’re just going to fill it with Hitler’s world view, got it.
Typical and expected.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
He knows more ... about knowledge... than... anyone alive now
-
Lol turns out elon has no fucking idea about how llms work
It's pretty obvious where the white genocide "bug" came from.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
So just making shit up.
-
“Deleting Errors” should sound alarm bells in your head.
And the “adding missing information” part doesn't? Isn't that just saying we are going to make shit up?