Elon Musk wants to rewrite "the entire corpus of human knowledge" with Grok
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
How does anyone consider him a "genius"? This guy is just so stupid.
-
Isn't everyone just sick of his bullshit though?
US taxpayers clearly aren't since they're subsidising his drug habit.
-
The plan to "rewrite the entire corpus of human knowledge" with AI sounds impressive until you realize LLMs are just pattern-matching systems that remix existing text. They can't create genuinely new knowledge or identify "missing information" that wasn't already in their training data.
Remember the "white genocide in South Africa" nonsense? That kind of rewriting of history.
-
He means rewrite every narrative to his liking, like the benevolent god-sage he thinks he is.
Let's not beat around the bush here, he wants it to spout fascist propaganda.
-
adding missing information
Did you mean: hallucinate on purpose?
Wasn't he going to lay off the ketamine for a while?
Edit: ... i hadn't seen the More Context and now i need a fucking beer or twenty fffffffffu-
Yeah, let's take a technology already known for filling in gaps with invented nonsense and use that as our new training paradigm.
-
He’s done with Tesla, isn’t he?
-
That's not how knowledge works. You can't just have an LLM hallucinate in missing gaps in knowledge and call it good.
-
I'm interested to see how this turns out. My prediction is that the AI trained from the results will be insane, in the unable-to-reason-effectively sense, because we don't yet have AIs capable of rewriting all that knowledge and keeping it consistent. Each little bit of it considered in isolation will fit the criteria that Musk provides, but taken as a whole it'll be a giant mess of contradictions.
Sure, the existing corpus of knowledge doesn't all say the same thing either, but the contradictions in it can be identified with deeper consistent patterns. An AI trained off of Reddit will learn drastically different outlooks and information from /r/conservative comments than it would from /r/news comments, but the fact that those are two identifiable communities means that it'd see a higher order consistency to this. If anything that'll help it understand that there are different views in the world.
-
That's not how knowledge works. You can't just have an LLM hallucinate in missing gaps in knowledge and call it good.
And then retrain on the hallucinated knowledge
-
"Adding missing information"
Like... From where?
-
adding missing information and deleting errors
Which is to say, "I'm sick of Grok accurately portraying me as an evil dipshit, so I'm going to feed it a bunch of right-wing talking points and get rid of anything that hurts my feelings."
-
I see Mr. Musk has started using intracerebrally.
-
US taxpayers clearly aren't since they're subsidising his drug habit.
If we had direct control over how our tax dollars were spent, things would be different pretty fast. Might not be better, but different.
-
I've seen what happens when image generating AI trains on AI art and I can't wait to see the same thing for "knowledge"
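(Side note for the curious: here's a minimal sketch of the failure mode being described, i.e. retraining a model on its own output. It has nothing to do with xAI's actual pipeline; the character-bigram "model" and the toy corpus are stand-ins purely for illustration.)

```python
# Toy "model collapse" demo: retrain a tiny statistical model on its own
# samples and watch the diversity of the corpus shrink each generation.
import random
from collections import defaultdict

def train(corpus):
    """Count which character follows which (a character-bigram model)."""
    model = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        model[a].append(b)
    return model

def sample(model, length, seed_char):
    """Generate text by repeatedly picking a random observed successor."""
    out = [seed_char]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: this character was never followed by anything
        out.append(random.choice(successors))
    return "".join(out)

corpus = "the quick brown fox jumps over the lazy dog " * 20
for generation in range(6):
    print(f"gen {generation}: {len(corpus)} chars, {len(set(corpus))} distinct")
    model = train(corpus)
    # Each round trains only on the previous round's output.
    corpus = sample(model, len(corpus), corpus[0])
```

Rare transitions drop out a little more each round, which is the text version of the smeared-out AI art the comment is pointing at.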
-
The plan to "rewrite the entire corpus of human knowledge" with AI sounds impressive until you realize LLMs are just pattern-matching systems that remix existing text. They can't create genuinely new knowledge or identify "missing information" that wasn't already in their training data.
But Grok 3.5/4 has Advanced Reasoning
-
iamverysmart
-
I'm interested to see how this turns out. My prediction is that the AI trained from the results will be insane, in the unable-to-reason-effectively sense, because we don't yet have AIs capable of rewriting all that knowledge and keeping it consistent. Each little bit of it considered in isolation will fit the criteria that Musk provides, but taken as a whole it'll be a giant mess of contradictions.
Sure, the existing corpus of knowledge doesn't all say the same thing either, but the contradictions in it can be identified with deeper consistent patterns. An AI trained off of Reddit will learn drastically different outlooks and information from /r/conservative comments than it would from /r/news comments, but the fact that those are two identifiable communities means that it'd see a higher order consistency to this. If anything that'll help it understand that there are different views in the world.
LLMs are prediction tools. What this will produce is a corpus that doesn't use certain phrases, or uses others more heavily, but has the same aggregate statistical "shape".
It'll also be preposterously hard for them to work out, since the data it was trained on always has someone eventually disagreeing with the racist fascist bullshit they'll get it to focus on. Eventually it'll start saying things that contradict whatever it was supposed to be saying, because statistically eventually some manner of contrary opinion is voiced.
They won't be able to check the entire corpus for weird stuff like that, or delights like MLK speeches being rewritten to be anti-integration, so the next version will have the same basic information, but passed through a filter that makes it sound like a drunk incel talking about asian women.
-
But Grok 3.5/4 has Advanced Reasoning
Surprised he didn’t name it Giga Reasoning or some other dumb shit.