Large Language Model Performance Doubles Every 7 Months
-
By 2030, AI will greatly outperform humans in some complex intellectual tasks. Discover how LLMs are doubling their capabilities every seven months.
-
By 2030, AI will greatly outperform humans in some complex intellectual tasks. Discover how LLMs are doubling their capabilities every seven months.
I doubt it
-
By 2030, AI will greatly outperform humans in some complex intellectual tasks. Discover how LLMs are doubling their capabilities every seven months.
*with 50 percent reliability.
Heck of an asterisk on this claim.
-
By 2030, AI will greatly outperform humans in some complex intellectual tasks. Discover how LLMs are doubling their capabilities every seven months.
This is like measuring the increasing speeds of cars in the early years and extrapolating that they would be supersonic by now by ignoring the exponential impact that air resistance has.
-
By 2030, AI will greatly outperform humans in some complex intellectual tasks. Discover how LLMs are doubling their capabilities every seven months.
"performance"
-
I doubt it
Then why share it?
-
By 2030, AI will greatly outperform humans in some complex intellectual tasks. Discover how LLMs are doubling their capabilities every seven months.
Is the performance increase related to computing power? I suspect the underlying massive datacenters running the cloud-based LLMs are expanding at a similar rate...
-
By 2030, AI will greatly outperform humans in some complex intellectual tasks. Discover how LLMs are doubling their capabilities every seven months.
2 X 0 = 0
-
By 2030, AI will greatly outperform humans in some complex intellectual tasks. Discover how LLMs are doubling their capabilities every seven months.
This is such bullshit. Models have already consumed all available data and have nothing left to consume, while needing exponentially more data for progressive advancements.
-
*with 50 percent reliability.
Heck of an asterisk on this claim.
All that power used for a fucking coin flip.
-
Then why share it?
So we can mock it!
-
By 2030, AI will greatly outperform humans in some complex intellectual tasks. Discover how LLMs are doubling their capabilities every seven months.
How is completely fucking up literally 50% of the time outperforming exactly???
-
Then why share it?
Do you not see any value in engaging with views you don't personally agree with? I don't think agreeing with it is a good barometer for whether it's post-worthy
-
*with 50 percent reliability.
Heck of an asterisk on this claim.
That sounds like a coin flip, but 50% reliability can be really useful.
If a model has 50% chance of completing a task that would cost me an hour - and I can easily check it was completed correctly - on average, I'm saving half of the time it would take to complete this.
That said, exponentials don't exist in the real world, we're just seeing the middle of a sigmoid curve, which will soon yield diminishing returns.
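The expected-value argument above can be sketched numerically. A minimal sketch, assuming a 1-hour task, an illustrative 5-minute verification cost, and that we fall back to doing the task by hand when the model fails (all numbers are hypothetical, not from the article):

```python
def expected_time_spent(p_success, task_minutes=60, verify_minutes=5):
    """Expected minutes spent if we try the model once, verify the
    result, and do the task ourselves whenever the model failed."""
    # Success: we only pay the verification cost.
    # Failure: we pay verification plus the full manual task.
    return (p_success * verify_minutes
            + (1 - p_success) * (verify_minutes + task_minutes))

baseline = 60  # minutes to do the task entirely by hand
spent = expected_time_spent(0.5)
print(f"expected minutes spent: {spent}")           # 35.0
print(f"expected minutes saved: {baseline - spent}")  # 25.0
```

With zero verification cost this comes out to exactly half the time, matching the comment; a nonzero checking cost eats into the saving, which is why "easily check" is doing real work in the argument.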
-
How is completely fucking up literally 50% of the time outperforming exactly???
You see, in 7 months, they'll fuck up literally 100% of the time! Progress.
-
This is such bullshit. Models have already consumed all available data and have nothing left to consume, while needing exponentially more data for progressive advancements.
Apparently, throwing more data at it won't help much from now on... but that's what they're saying anyway, and I can't trust the snake oil seller; he's suspicious...
-
By 2030, AI will greatly outperform humans in some complex intellectual tasks. Discover how LLMs are doubling their capabilities every seven months.
Is it just me, or is this graph (first graph in the article) completely unintelligible?
The X-axis being time is self-explanatory, but the Y-axis is somehow exponential time but then also mapping random milestones of performance, meaning those milestones are hard-linked to that time-based Y-axis? What?
-
This is like measuring the increasing speeds of cars in the early years and extrapolating that they would be supersonic by now by ignoring the exponential impact that air resistance has.
Very good analogy. They're also ignoring that getting faster and faster at reaching a 50% success rate (a totally unacceptable success rate for meaningful tasks) doesn't imply ever achieving consistently acceptable success.
-
This is like measuring the increasing speeds of cars in the early years and extrapolating that they would be supersonic by now by ignoring the exponential impact that air resistance has.
Air resistance has a cubic, not exponential, impact.
-
Do you not see any value in engaging with views you don't personally agree with? I don't think agreeing with it is a good barometer for whether it's post-worthy
Good point, thank you. I figured that sharing poor scientific articles essentially amounts to spreading misinformation (which I think is a fair point as well), but I like your perspective too.
-