Elon Musk wants to rewrite "the entire corpus of human knowledge" with Grok
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
-
Spoiler: He's gonna fix the "missing" information with MISinformation.
She sounds Hot
-
She sounds Hot
She unfortunately can't see you because of financial difficulties. You gotta give her money like I do. One day, I will see her in person.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
I wonder how many papers he's read since ChatGPT was released about how bad it is to train AI on AI output.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
Hmm... this doesn't sound great.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
Delusional and grasping for attention.
-
That's not what I said. It's absolutely dystopian how Musk is trying to tailor his own reality.
What I did say (and I've been doing AI research since the AlexNet days...) is that LLMs aren't old-school ML systems, and we're at the point where simply scaling up to insane levels has yielded results that no one expected, but it was the lowest-hanging fruit at the time. Few-shot learning -> novel-space generalization is very hard, so the easiest method was to just take what was currently done and make it bigger (à la ResNet back in the day).
Lemmy is almost as bad as reddit when it comes to hiveminds.
You literally called it borderline magic.
Don't do that? They're pattern recognition engines, they can produce some neat results and are good for niche tasks and interesting as toys, but they really aren't that impressive. This "borderline magic" line is why they're trying to shove these chatbots into literally everything, even though they aren't good at most tasks.
-
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
::: spoiler More Context
Source.
:::
Grandiose delusions from a ketamine-rotted brain.
-
So just making shit up.
Don't forget the retraining on the made-up shit part!
-
I mean, this is the same guy who said we'd be living on Mars in 2025.
In a sense, he's right. I miss good old Earth.