ChatGPT 'got absolutely wrecked' by Atari 2600 in beginner's chess match — OpenAI's newest model bamboozled by 1970s logic
-
Does the author think ChatGPT is in fact an AGI? It's a chatbot. Why would it be good at chess? It's like saying an Atari 2600 running a dedicated chess program can beat Google Maps at chess.
I think that's generally the point: most people think ChatGPT is this sentient thing that knows everything and… no.
-
This post did not contain any content.
This isn't the strength of gpt-o4; the model has been optimised for tool use as an agent. That's why it's so good at image gen relative to other models: it uses tools to construct an image piece by piece, similar to a human. Also probably poor system prompting. An LLM is not a universal thinking machine, it's a universal process machine. An LLM understands the process and uses tools to accomplish it, hence its strengths in writing code (especially as an agent).
It's similar to how a monkey is infinitely better at remembering a sequence of numbers than a human could ever be, but is totally incapable of even comprehending writing numbers down.
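To make the tool-use point concrete, here's a rough sketch of what "outsource the chess to a real program" could look like. It assumes the python-chess library; the function name and the surrounding agent/tool-calling loop are made up for illustration, since the point is just that move legality is the cheap, deterministic part:

```python
# Sketch: the LLM only proposes a move as text; a deterministic tool owns the board
# and enforces the rules. Requires the python-chess library (pip install python-chess).
import chess

def validate_move(fen: str, proposed_san: str) -> dict:
    """Hypothetical tool an agent could call instead of trusting its own chess."""
    board = chess.Board(fen)
    try:
        move = board.parse_san(proposed_san)  # rejects invented pieces and teleporting kings
    except ValueError:
        return {
            "ok": False,
            "error": f"'{proposed_san}' is not a legal move here",
            "legal_moves": [board.san(m) for m in board.legal_moves],
        }
    board.push(move)
    return {"ok": True, "new_fen": board.fen(), "game_over": board.is_game_over()}

if __name__ == "__main__":
    start = chess.STARTING_FEN
    print(validate_move(start, "Qh5"))  # illegal from the start -> rejected with legal options
    print(validate_move(start, "e4"))   # legal -> returns the updated position
```

Wired in as a tool like that, the model never gets to conjure pieces out of thin air; the worst it can do is pick a bad move from the legal list.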
-
The Atari chess program can play chess better than the Boeing 747 too. And better than the North Pole. Amazing!
Are either of those marketed as powerful AI?
-
The Atari chess program can play chess better than the Boeing 747 too. And better than the North Pole. Amazing!
Neither of those things are marketed as being artificially intelligent.
-
AI, including ChatGPT, is being marketed as super awesome at everything, which is why it and similar AI are being forced into absolutely everything and sold as a replacement for people.
Something marketed as AGI should be treated as AGI when proving it isn't AGI.
Not to help the AI companies, but why don't they program them to look up math programs and outsource chess to other programs when they're asked for that stuff? It's obvious they're shit at it, why do they answer anyway? It's because they're programmed by know-it-all programmers, isn't it.
-
Not to help the AI companies, but why don't they program them to look up math programs and outsource chess to other programs when they're asked for that stuff? It's obvious they're shit at it, why do they answer anyway? It's because they're programmed by know-it-all programmers, isn't it.
I think they're trying to do that. But AI can still fail at that lol
-
Not to help the AI companies, but why don't they program them to look up math programs and outsource chess to other programs when they're asked for that stuff? It's obvious they're shit at it, why do they answer anyway? It's because they're programmed by know-it-all programmers, isn't it.
...or a simple counter to count the r's in strawberry.
Because that's more difficult than one might think, and they are starting to do this now.
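To be fair, the counting itself is trivial once it's handed off to ordinary code rather than token prediction; a one-line Python sketch:

```python
# Exact letter counting is a one-liner for regular code, unlike for a token-based LLM.
print("strawberry".count("r"))  # 3
```

The hard part is getting the model to reliably recognise when it should hand off instead of guessing.
-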
AI, including ChatGPT, is being marketed as super awesome at everything, which is why it and similar AI are being forced into absolutely everything and sold as a replacement for people.
Something marketed as AGI should be treated as AGI when proving it isn't AGI.
I don't think AI is being marketed as awesome at everything. It's got obvious flaws. Right now it's not good at stuff like chess, probably not even tic-tac-toe. It's a language model; it's hard for it to calculate the playing field. But AI is in development; it might not need much to start playing chess.
-
A strange game. How about a nice game of Global Thermonuclear War?
Lmao!
that made me spit!!
-
A strange game. How about a nice game of Global Thermonuclear War?
Frak off, toaster
-
This isn't the strength of gpt-o4; the model has been optimised for tool use as an agent. That's why it's so good at image gen relative to other models: it uses tools to construct an image piece by piece, similar to a human. Also probably poor system prompting. An LLM is not a universal thinking machine, it's a universal process machine. An LLM understands the process and uses tools to accomplish it, hence its strengths in writing code (especially as an agent).
It's similar to how a monkey is infinitely better at remembering a sequence of numbers than a human could ever be, but is totally incapable of even comprehending writing numbers down.
Do you have a source for that re:monkeys memorizing numerical sequences? What do you mean by that?
-
I'm often impressed at how good chatGPT is at generating text, but I'll admit it's hilariously terrible at chess. It loves to manifest pieces out of thin air, or make absurd illegal moves, like jumping its king halfway across the board and claiming checkmate
Yeah! I've loved watching GothamChess's videos on these. They've always been good for a laugh.
-
Not to help the AI companies, but why don't they program them to look up math programs and outsource chess to other programs when they're asked for that stuff? It's obvious they're shit at it, why do they answer anyway? It's because they're programmed by know-it-all programmers, isn't it.
Because they're fucking terrible at designing tools to solve problems, and they are obviously less and less good at pretending this is an omnitool that can do everything with perfect coherency (and if it isn't working right, it's because you're not believing or paying hard enough).
-
Do you have a source for that re:monkeys memorizing numerical sequences? What do you mean by that?
That threw me as well.
-
Neither of those things are marketed as being artificially intelligent.
Marketers aren't intelligent either, so I see no reason to listen to them.
-
This post did not contain any content.
While you guys suck at using tools, I'm making up for my lack of coding experience with AI, and successfully simulating the behavior of my aether (fuck you guys. Your search for a static ether is irrelevant to how mine behaves, and you shouldn't have dismissed everybody from Diogenes to Einstein), showing soliton-like structure emergence and particle-like interactions (with 1D relativistic constraints [I'm gonna need a fucking supercomputer to scale to 3D]). Anyways, whether you're wrong about your latest fun fact, cutting your thumb off trying to split a 2x4, or believing any idiot you talk to, this is user error, bro. Creating functional code for my simulator has saved me months, if not years, of my life. Just setting up a GUI was ridiculous for a novice like me, let alone translating walls of relativistic equation results (mainly stress-energy tensor) into code a computer can use. Side note: y'all don't give a fuck about facts. Come on. We're primates. Social status is the name of the game.
-
Marketers aren't intelligent either, so I see no reason to listen to them.
You’re not going to slimeball investors out of three hundred billion dollars with that attitude, mister.
-
Prepare to be delighted. Full disclosure: my Atari isn't hooked up, and even if it was, I don't have the Video Chess cart, so this was fetched from Google Images.
Can confirm.
And if you play it on expert mode, you can leave for college and get your degree before it’s your turn again.
-
I don't think AI is being marketed as awesome at everything. It's got obvious flaws. Right now it's not good at stuff like chess, probably not even tic-tac-toe. It's a language model; it's hard for it to calculate the playing field. But AI is in development; it might not need much to start playing chess.
What the tech is being marketed as and what it's capable of are not the same, and likely never will be. In fact, things are very rarely marketed according to how they truly behave, and that's intentional.
Everyone is still trying to figure out what these Large Reasoning Models and Large Language Models are even capable of; Apple, one of the largest companies in the world, just released a white paper this past week describing the "illusion of reasoning". If it takes a scientific paper to understand what these models are and are not capable of, I assure you they'll be selling snake oil for years after we fully understand every nuance of their capabilities.
TL;DR: Rich folks want them to be everything, so they'll be sold as capable of everything until we repeatedly refute that they can.