Elon Musk wants to rewrite "the entire corpus of human knowledge" with Grok
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 06:00, last edited by
Every single endeavor that Musk is involved in is toxic af. He, and all of his businesses, are cancers metastasizing within our society. We really should remove them.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 06:02, last edited by
Because neural networks aren't known to suffer from model collapse when using their output as training data. /s
Most billionaires are mediocre sociopaths, but Elon Musk takes it to "Emperor's New Clothes" levels of intellectual destitution.
-
If we had direct control over how our tax dollars were spent, things would be different pretty fast. Might not be better, but different.
wrote on 23 June 2025, 06:04, last edited by
More guns?
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 06:08, last edited by
What a loser. He'll keep rewriting it until it fits his worldview.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 06:17, last edited by
Like all Pointy-Haired Bosses throughout history, Elon has not heard of (or consciously chooses to ignore) one of the fundamental laws of computing: garbage in, garbage out.
-
"If we take this 0.84 accuracy model and train another 0.84 accuracy model on it that will make it a 1.68 accuracy model!"
~Fucking Dumbass
wrote on 23 June 2025, 06:44, last edited by
1.68 IQ move
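For what it's worth, the sarcasm is directionally right: if you chain a model trained on another model's output and, very roughly, treat their error rates as independent, accuracies multiply rather than add. A toy back-of-envelope sketch, purely illustrative, with made-up variable names:

```python
# Toy illustration: accuracy of a pipeline of two independent 84%-accurate steps.
teacher_accuracy = 0.84
student_accuracy = 0.84  # hypothetical model trained only on the teacher's output

# Probability that both steps are right, assuming independent errors.
combined = teacher_accuracy * student_accuracy
print(combined)  # 0.7056 -- nowhere near 1.68
```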
-
wrote on 23 June 2025, 06:49, last edited by
in the unable-to-reason-effectively sense
That's all LLMs by definition.
They're probabilistic text generators, not AI. They're fundamentally incapable of reasoning in any way, shape or form.
They just take a text and produce the most probable word to follow it according to their trained model; that's all.
What Musk's plan will produce (using an LLM to regurgitate as much of its model as it can, expunging all references to Musk being a pedophile and whatnot from the resulting garbage, adding some racism and disinformation for good measure, and training a new model exclusively on that slop) is a significantly more limited model, more prone to hallucinations, that occasionally spews racism and disinformation.
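A minimal sketch of what "produce the most probable word to follow it" looks like mechanically. The probability table below is invented for illustration; a real LLM learns a far larger conditional distribution from its training data, but the sampling step is analogous, and nothing in it consults facts:

```python
import random

# Made-up next-word probabilities keyed by the last two words of the context.
# (Illustrative numbers only -- not taken from any real model.)
next_word_probs = {
    ("the", "entire"): {"corpus": 0.6, "world": 0.3, "thing": 0.1},
    ("entire", "corpus"): {"of": 0.9, "is": 0.1},
}

def next_word(context, probs):
    """Sample the next word given the last two words of the context."""
    dist = probs[tuple(context[-2:])]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

print(next_word(["rewrite", "the", "entire"], next_word_probs))
# Usually prints "corpus": just conditional probability, no notion of truth.
```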
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 07:16, last edited by
We have never been at war with Eurasia. We have always been at war with East Asia.
-
So where will Musk find that missing information and how will he detect "errors"?
wrote on 23 June 2025, 07:26, last edited by
I expect he'll ask Grok and believe the answer.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 07:34, last edited by
Elon should seriously see a medical professional.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 07:37, last edited by
First error to correct:
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing ~~information~~ errors and deleting ~~errors~~ information.
-
Like all Pointy-Haired Bosses throughout history, Elon has not heard of (or consciously chooses to ignore) one of the fundamental laws of computing: garbage in, garbage out.
wrote on 23 June 2025, 07:42, last edited by
Garbage out is what he aims for.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 07:53, last edited by firewire400@lemmy.world
How high on ketamine is he?
3.5 (maybe we should call it 4)
I think calling it 3.5 might already be too optimistic
-
Elon should seriously see a medical professional.
wrote on 23 June 2025, 08:04, last edited by
He should be locked up in a mental institution. Indefinitely.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 08:17, last edited by
Whatever. The next generation will have to learn to judge whether the material is true or not by using sources like Wikipedia or books by well-regarded authors.
The other thing that he doesn't understand (and most "AI" advocates don't either) is that LLMs have nothing to do with facts or information. They're just probabilistic models that pick the next word(s) based on context. Anyone trying to address the facts and information produced by these models is completely missing the point.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 08:28, last edited by
Grok will round up physics constants and pi as well... nothing will work, but Musk will say that humanity is dumb.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
wrote on 23 June 2025, 08:28, last edited by
I believe it won't work.
They would have to change so much info that it won't make a coherent whole. So many alternative facts clash with so many other aspects of life that asking about any of it would cause errors because of the conflicts.
Sure, it might work for a bit, but it would quickly degrade and would be much slower than other models, since it would need to error-correct constantly.
Another thing is that their training data will also be very limited, and they would have to check every other source thoroughly for "false info", increasing their manual labour.
-
Whatever. The next generation will have to learn to judge whether the material is true or not by using sources like Wikipedia or books by well-regarded authors.
The other thing that he doesn't understand (and most "AI" advocates don't either) is that LLMs have nothing to do with facts or information. They're just probabilistic models that pick the next word(s) based on context. Anyone trying to address the facts and information produced by these models is completely missing the point.
wrote on 23 June 2025, 08:29, last edited by
Thinking Wikipedia or other unbiased sources will still be available in a decade or so is wishful thinking. Once the digital stranglehold kicks in, it'll be mandatory sign-in with a gov-vetted identity provider, and your sources will be limited to what that gov allows you to see. MMW.
-
Whatever. The next generation will have to learn to judge whether the material is true or not by using sources like Wikipedia or books by well-regarded authors.
The other thing that he doesn't understand (and most "AI" advocates don't either) is that LLMs have nothing to do with facts or information. They're just probabilistic models that pick the next word(s) based on context. Anyone trying to address the facts and information produced by these models is completely missing the point.
wrote on 23 June 2025, 08:38, last edited by aaron@infosec.pub
asdf
-
Training an AI model on AI output? Isn't that like the one big no-no?
wrote on 23 June 2025, 08:39, last edited by
They used to think that, but it is actually not that bad.
As with other iterative machine-learning setups, you can train it on optimized output.
That isn't the dumb part about this.
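For context on the "model collapse" worry raised earlier in the thread: the effect is easiest to see if you treat the "model" as nothing more than a word-frequency table and refit it, generation after generation, only on samples of its own output. A minimal, purely illustrative sketch with made-up words and numbers:

```python
import random
from collections import Counter

# Toy model-collapse demo: each "generation" is a frequency table estimated
# only from samples drawn from the previous generation's table.
random.seed(1)

model = {"the": 0.5, "corpus": 0.2, "knowledge": 0.15, "errors": 0.1, "truth": 0.05}
sample_size = 30

for generation in range(1, 31):
    words, weights = zip(*model.items())
    samples = random.choices(words, weights=weights, k=sample_size)
    counts = Counter(samples)
    model = {w: counts[w] / sample_size for w in counts}  # refit on own output
    print(f"gen {generation:2d}: surviving words = {sorted(model)}")

# Rare words tend to drop out after a few generations, and once a word's
# estimated probability hits zero it can never come back: the distribution
# narrows with every round of training on its own output.
```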