Elon Musk wants to rewrite "the entire corpus of human knowledge" with Grok
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 17:35, last edited
By the way, when you refuse to band together, organize, and dispose of these people, they entrench themselves further in power. Everyone ignored Kari Lake as a harmless kook, and she just destroyed Voice of America. That loudmouthed MAGA asshole in your neighborhood is going to commit a murder.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 17:36, last edited
What the fuck? This is so unhinged. Genuine question: is he actually this dumb, or is he just saying complete bullshit to boost stock prices?
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 17:44, last edited
Fuck Elon Musk
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 18:11, last edited
Yes! We should all wholeheartedly support this GREAT INNOVATION! There is NOTHING THAT COULD GO WRONG, so this will be an excellent step to PERMANENTLY PERFECT this WONDERFUL AI.
-
That's not how knowledge works. You can't just have an LLM hallucinate filler into the gaps in human knowledge and call it good.
wrote on 23 June 2025, 18:16, last edited
SHH!! Yes you can, Elon! Recursively training your model on itself definitely has NO DOWNSIDES
-
We will take the entire library of human knowledge, cleanse it, and ensure our version is the only record available.
The only comfort I have is knowing that anything true can be relearned by observing reality through the lens of science, which is itself reproducible from observing how we observe reality.
wrote on 23 June 2025, 18:20, last edited
Have some more comfort
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 18:21, last edited
Huh. I'm not sure he's understood the alignment problem quite right.
-
What the fuck? This is so unhinged. Genuine question: is he actually this dumb, or is he just saying complete bullshit to boost stock prices?
wrote on 23 June 2025, 18:27, last edited
My guess is yes.
-
The thing that annoys me most is that there have been studies done on LLMs showing that, when trained on subsets of their own output, they produce increasingly noisy output.
Sources (unordered):
- What is model collapse?
- AI models collapse when trained on recursively generated data
- Large Language Models Suffer From Their Own Output: An Analysis of the Self-Consuming Training Loop
- Collapse of Self-trained Language Models
Whatever nonsense Muskrat is spewing, it is factually incorrect. He won't be able to successfully retrain any model on generated content. At least, not an LLM if he wants a successful product. If anything, he will be producing a model that is heavily trained on censored datasets.
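The failure mode those sources describe is easy to show in miniature (a toy illustration, not any of the cited papers' actual setups): resample a corpus from its own empirical distribution each "generation," and rare tokens vanish and never come back.

```python
import random

rng = random.Random(42)

# Toy "human" corpus: 100 distinct tokens with Zipf-like frequencies,
# so tokens 50..99 each appear exactly once (the rare tail).
corpus = [t for t in range(100) for _ in range(100 // (t + 1))]

support_sizes = []
for generation in range(200):
    support_sizes.append(len(set(corpus)))
    # "Train" = estimate the empirical token distribution;
    # "generate" = sample a same-sized synthetic corpus from it.
    corpus = rng.choices(corpus, k=len(corpus))

# A resampled corpus can only contain tokens that survived the previous
# generation, so the vocabulary shrinks monotonically: the tail dies first.
print(support_sizes[0], support_sizes[-1])
```

Each generation throws away whatever tail mass it happened not to sample, which is the same dynamic the model-collapse papers measure with perplexity and distributional distance.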
wrote on 23 June 2025, 18:32, last edited by jwmgregory@lemmy.dbzer0.com
i think musk is annoying and a bad person but everyone responding with these papers is being disingenuous because it's
1. a solved problem at this point,
2. clearly not what musk is planning on doing, and
3. you guys who post these studies misunderstand what the model collapse papers actually say and either haven't read them yourself or just read the abstract and saw "AI bad", then ran with it because it made easy sense with your internal monologue.
if you're wondering what these papers actually imply... go read them! they're actually, surprise, very interesting! if you've already read the sources linked in these comment chains, then... you understand why they're not particularly relevant here and wouldn't cite them!! like ffs, your sources are all "unordered" not because it'd be too much work but because you just went out and found things that vaguely sound like they corroborate what you're saying, and you don't actually know how you'd order them
idk why people seem to think oligarchs would be dumb enough to invest billions into something and miss some very obvious and widely publicized “gotcha”… that would be fucking stupid and they know that just as well as you?? people get really caught up on the schadenfreude of “haha look at the dumb rich people” without taking a moment to stop and think “wait, does this make sense in the first place?”
it’s why people circulate these machine learning papers so confidently with incorrect quips/opinions attached, it’s why when people do interact with these papers they misunderstand them on a fundamental level, and it’s why our society is collectively regressing like it’s 1799. guys i get your brain gives you dopamine to dunk on people but don’t do it at the price of your agency and rational ability.
-
I read about this in a popular book by some guy named Orwell
wrote on 23 June 2025, 18:41, last edited
Wasn't he the children's author who published the book about talking animals learning the value of hard work or something?
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 18:51, last edited
remember when grok called e*on and t**mp a nazi? good times
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 18:56, last edited
Dude wants to do a lot of things and fails to accomplish what he says he's going to do, or ends up half-assing it. So let him take Grok and run it right into the ground, like an autopiloted Cybertruck rolling over into the flame trench of an exploding Starship rocket still on the pad, shooting flames out of tunnels made by the Boring Company.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 19:00, last edited
Lol, turns out Elon has no fucking idea how LLMs work
-
The thing that annoys me most is that there have been studies done on LLMs showing that, when trained on subsets of their own output, they produce increasingly noisy output.
Sources (unordered):
- What is model collapse?
- AI models collapse when trained on recursively generated data
- Large Language Models Suffer From Their Own Output: An Analysis of the Self-Consuming Training Loop
- Collapse of Self-trained Language Models
Whatever nonsense Muskrat is spewing, it is factually incorrect. He won't be able to successfully retrain any model on generated content. At least, not an LLM if he wants a successful product. If anything, he will be producing a model that is heavily trained on censored datasets.
wrote on 23 June 2025, 19:26, last edited by brucethemoose@lemmy.world
It's not so simple; there are papers on zero-data "self-play" and other schemes for using other LLMs' output.
Distillation is probably the only one you'd want for a pretrain, specifically.
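For anyone following along, the core of distillation is training a student to match the teacher's temperature-softened output distribution rather than hard labels. A minimal sketch (toy numbers; the "teacher" here is just a made-up table of logits, not a real model):

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical teacher: fixed logits over a 3-token vocabulary, per input.
teacher_logits = {0: [2.0, 0.5, -1.0], 1: [-1.0, 1.5, 0.5]}

# Student starts from uniform logits and learns the teacher's soft labels.
student_logits = {x: [0.0, 0.0, 0.0] for x in teacher_logits}
lr, T = 0.5, 2.0  # learning rate and distillation temperature

for step in range(2000):
    for x, t_logits in teacher_logits.items():
        target = softmax(t_logits, T)         # teacher's soft labels
        pred = softmax(student_logits[x], T)  # student's current distribution
        # Gradient of cross-entropy(target, softmax(z/T)) w.r.t. z is
        # (pred - target) / T; plain gradient descent on each logit.
        for k in range(3):
            student_logits[x][k] -= lr * (pred[k] - target[k]) / T
```

Because the targets are full distributions, the student also sees the teacher's relative confidence across wrong answers, which is why distillation transfers more signal than hard labels and why it's the variant you'd pick for a pretrain.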
-
asdf
wrote on 23 June 2025, 19:51, last edited
You had started to make a point; now you are just being a dick.
-
Wasn't he the children's author who published the book about talking animals learning the value of hard work or something?
wrote on 23 June 2025, 19:59, last edited
The very one!
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 20:03, last edited
So they're just going to fill it with Hitler's worldview, got it.
Typical and expected.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 20:05, last edited
He knows more... about knowledge... than... anyone alive now
-
Lol, turns out Elon has no fucking idea how LLMs work
wrote on 23 June 2025, 20:08, last edited
It's pretty obvious where the white genocide "bug" came from.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 20:09, last edited
So just making shit up.