ChatGPT "Absolutely Wrecked" at Chess by Atari 2600 Console From 1977
-
machine designed to play chess beats machine not designed to play chess at chess!
Fascinating news!
Why people are upvoting this drivel is beyond me.
48-year-old machine designed to play chess*
-
This post did not contain any content.
ChatGPT "Absolutely Wrecked" at Chess by Atari 2600 Console From 1977
Despite all its advances, ChatGPT still, seemingly, is less smart than an Atari simulator on beginner mode.
Futurism (futurism.com)
I think people in the replies acting fake surprised are missing the point.
it is important news, because many people see LLMs as black boxes of superintelligence (almost as if that’s what they’re being marketed as!)
you and i know that’s bullshit, but the students asking chatgpt to solve their math homework instead of using wolfram alpha don’t.
so yes, it is important to demonstrate that this "artificial intelligence" is so much not an intelligence that it’s getting beaten by 1979 software on 1977 hardware
-
It's AI, not AGI. LLMs are good at generating language, just like chess engines are good at chess. ChatGPT doesn't have the capability to keep track of all the pieces on the board.
-
It's because of all the people saying that LLMs can reason and think, that the human brain works just like an LLM, and... some other ridiculous claim.
This shows some of the limitations of LLMs.
But humans not trained (or built) for chess would make stupid mistakes too.
-
In other news, my toaster absolutely wrecked my T.V. at making toast.
-
How did AlphaGo do?
-
It's AI, not AGI. LLMs are good at generating language, just like chess engines are good at chess. ChatGPT doesn't have the capability to keep track of all the pieces on the board.
They're literally selling credulous investors on AGI being around the corner, when this, and to a lesser extent Large Action Models, is the only viable product they've got. It's just a demo of how far they are from their promises.
-
They're literally selling credulous investors on AGI being around the corner, when this, and to a lesser extent Large Action Models, is the only viable product they've got. It's just a demo of how far they are from their promises.
Is there a link where I could see them making these claims myself? This is something I’ve only heard from AI critics, but never directly from the AI companies themselves. I wouldn’t be surprised if they did, but I’ve just never seen them say it outright.
-
This is useful for dispelling the hype around ChatGPT and for demonstrating the limits of general-purpose LLMs.
But that's about it. This is not a "win" for old-school chess engines over new ones: Stockfish uses a neural network for its position evaluation (NNUE) and is one of the strongest chess engines in the world.
EDIT: what would actually be interesting is seeing whether GPT could be fine-tuned to play chess. Which is something many people have been doing: https://scholar.google.com/scholar?hl=en&q=finetune+gpt+chess
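For a rough sense of what that fine-tuning setup tends to look like: a common approach in work like the papers linked above is to serialize games as move sequences and train the model to predict the next move from the moves so far. Everything below (function name, the toy opening, the prompt/completion format) is an illustrative assumption, not a specific paper's recipe.

```python
# Illustrative sketch: turning a chess game (a list of SAN move strings)
# into prompt/completion pairs for fine-tuning a language model on
# next-move prediction.

def game_to_samples(moves):
    """Build one (context, next_move) training pair per position."""
    samples = []
    for i in range(1, len(moves)):
        samples.append({
            "prompt": " ".join(moves[:i]),  # moves played so far
            "completion": moves[i],         # move the model should predict
        })
    return samples

# First moves of the Ruy Lopez, as a toy example game.
opening = ["e4", "e5", "Nf3", "Nc6", "Bb5"]
for pair in game_to_samples(opening):
    print(pair)
```

Note that even a model trained this way only learns to imitate plausible move text; unlike an engine, nothing forces its output to be a legal move in the current position, which is exactly the failure mode the article describes.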
-
Is there a link where I could see them making these claims myself? This is something I’ve only heard from AI critics, but never directly from the AI companies themselves. I wouldn’t be surprised if they did, but I’ve just never seen them say it outright.
"We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies" https://blog.samaltman.com/reflections
"We fully intend that Gemini will be the very first AGI"
https://venturebeat.com/ai/at-google-i-o-sergey-brin-makes-surprise-appearance-and-declares-google-will-build-the-first-agi/
"If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it's probably next year, within two years" -Elon Musk https://www.reuters.com/technology/teslas-musk-predicts-ai-will-be-smarter-than-smartest-human-next-year-2024-04-08/
-
"We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies" https://blog.samaltman.com/reflections
"We fully intend that Gemini will be the very first AGI"
https://venturebeat.com/ai/at-google-i-o-sergey-brin-makes-surprise-appearance-and-declares-google-will-build-the-first-agi/
"If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it's probably next year, within two years" -Elon Musk https://www.reuters.com/technology/teslas-musk-predicts-ai-will-be-smarter-than-smartest-human-next-year-2024-04-08/
Thanks.
Well, I don’t think OpenAI knows how to build AGI, so that’s false. Otherwise, Sam’s statement there is technically correct, but kind of misleading - he talks about AGI and then, in the next sentence, switches back to AI.
Sergey’s claim that they will achieve AGI before 2030 could turn out to be true, but again, he couldn’t possibly know that. I’m sure it’s their intention, but that’s different from reality.
Elon’s statement doesn’t even make sense. I’ve never heard anyone define AGI like that. A thirteen-year-old with an IQ of 85 is generally intelligent. Being smarter than the smartest human definitely qualifies as AGI, but that’s just a weird bar. General intelligence isn’t about how smart something is - it’s about whether it can apply its intelligence across multiple unrelated fields.
-
It's AI, not AGI. LLMs are good at generating language, just like chess engines are good at chess. ChatGPT doesn't have the capability to keep track of all the pieces on the board.
LLMs would be great as an interface to more specialized machine-learning programs in a combined platform. We need AI to perform tasks humans aren't capable of, instead of replacing them.
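A minimal sketch of that "LLM as interface" idea: the language model only decides which specialized tool should handle a request, and the specialized program (a chess engine, a computer algebra system) does the actual work. The keyword router below stands in for the LLM's routing decision; the tool names and rules are illustrative assumptions, not a real API.

```python
# Toy dispatcher for the combined-platform idea: route each request to a
# specialized backend instead of letting the general model answer itself.

def route_request(text):
    """Pick a backend for a user request. A real system would let the
    LLM emit a structured tool call; keywords stand in for that here."""
    text = text.lower()
    if "chess" in text or "move" in text:
        return "chess_engine"      # e.g. a dedicated engine like Stockfish
    if "integral" in text or "solve" in text:
        return "computer_algebra"  # e.g. a CAS, as the homework comment suggests
    return "llm"                   # plain language tasks stay with the LLM

print(route_request("What's the best chess move here?"))  # chess_engine
print(route_request("Solve this integral for me"))        # computer_algebra
print(route_request("Summarize this article"))            # llm
```

The design point is the one made upthread: the LLM's strength is language, so it should translate intent and delegate, rather than simulate a chess board token by token.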