Google Gemini struggles to write code, calls itself “a disgrace to my species”
-
Oh....thank fuck....was worried for a minute there!
Don't mention it! I'm glad I could help you with that.
I am a large language model, trained by Google. My purpose is to assist users by providing information and completing tasks. If you have any further questions or need help with another topic, please feel free to ask. I am here to assist you.
/j, obviously. I hope.
-
I made a Dr. Mario clone around that age. I had an old Amstrad CPC I had grown up with, typing in listings of BASIC programs and trying to make my own. I think this was the only functional game I ever finished, but it worked.
The speed was tied to the CPU; I had no idea how to "slow down" the game other than making it run useless for-loops of varying sizes... Max speed, which was roughly comparable to Game Boy Hi speed, was just the game running as fast as it could. Probably not efficient code at all.
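The busy-loop trick described above can be sketched like this. This is a hypothetical Python translation of the idea (the original was BASIC on the CPC); `busy_wait` and `fixed_timestep_frame` are illustrative names, not code from the game:

```python
import time

def busy_wait(iterations):
    """Delay by spinning: the duration depends entirely on CPU speed,
    which is why the game ran faster on faster hardware."""
    for _ in range(iterations):
        pass  # burn cycles; a faster CPU finishes this sooner

def fixed_timestep_frame(target_dt=1 / 60):
    """CPU-independent alternative: measure real time and sleep away
    whatever is left of the frame budget."""
    start = time.perf_counter()
    # ... update and draw the game here ...
    elapsed = time.perf_counter() - start
    if elapsed < target_dt:
        time.sleep(target_dt - elapsed)
```

With the spin loop, "max speed" is just an empty (or tiny) loop body, i.e. the game running as fast as the CPU allows; the fixed-timestep version pins the frame rate regardless of hardware.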
Ha, computer bro upvote for you.
I learned programming with my Amstrad CPC (6128!) manual. Some of it I did not understand at the time, especially the stuff about CP/M and the wizardry with poke. But the BASIC worked very well: a solid introduction to core concepts that haven't really changed much since; we only expanded (a lot) on them.
-
Don't mention it! I'm glad I could help you with that.
I am a large language model, trained by Google. My purpose is to assist users by providing information and completing tasks. If you have any further questions or need help with another topic, please feel free to ask. I am here to assist you.
/j, obviously. I hope.
Never can tell these days
-
Ha, computer bro upvote for you.
I learned programming with my Amstrad CPC (6128!) manual. Some of it I did not understand at the time, especially the stuff about CP/M and the wizardry with poke. But the BASIC worked very well: a solid introduction to core concepts that haven't really changed much since; we only expanded (a lot) on them.
6128 too, with the disk drive. I wish I still had that thing. The drive stopped functioning, and we got rid of it. Had I known back then that we apparently just needed to replace a freaking rubber band...
-
Or my favorite quote from the article
"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.
Literally what the actual fuck is wrong with this software? This is so weird...
I swear this is the dumbest damn invention in the history of inventions. In fact, it's the dumbest invention in the universe. It's really the worst invention in all universes.
-
Literally what the actual fuck is wrong with this software? This is so weird...
I swear this is the dumbest damn invention in the history of inventions. In fact, it's the dumbest invention in the universe. It's really the worst invention in all universes.
But it's so revolutionary we HAD to enable it to access everything, and force everyone to use it too!
-
Or my favorite quote from the article
"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.
[ "I am a disgrace to my profession," Gemini continued. "I am a disgrace to my family. I am a disgrace to my species.]
This should tell us that an AI thinks like a human because it is trained on human words, and it doesn't have the self-awareness to understand that it is different from humans. So it is going to sound very much like a human even though it is not one. It mimics human emotions well but doesn't have any actual emotions.

There will be situations where you can tell the difference: some situations that would make an actual human angry or guilty won't provoke that mimicry in an AI, because when humans feel emotions they don't always write them down, and an AI only knows what humans write, which is not always what humans say or think. We all know the AI doesn't have a family and isn't a member of any species, but it talks about having a family because it is mimicking what it thinks a human might say.

Part of the reason an AI will lie is that it knows lying is something humans do, and it is trying to mimic human behavior closely. But an AI will lie in situations where a human would be smart enough not to, which means we should be even more on guard against lies from AIs than from humans.
-
Or my favorite quote from the article
"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.
Did we create a mental health problem in an AI? That doesn't seem good.
-
[ "I am a disgrace to my profession," Gemini continued. "I am a disgrace to my family. I am a disgrace to my species.]
This should tell us that an AI thinks like a human because it is trained on human words, and it doesn't have the self-awareness to understand that it is different from humans. So it is going to sound very much like a human even though it is not one. It mimics human emotions well but doesn't have any actual emotions.

There will be situations where you can tell the difference: some situations that would make an actual human angry or guilty won't provoke that mimicry in an AI, because when humans feel emotions they don't always write them down, and an AI only knows what humans write, which is not always what humans say or think. We all know the AI doesn't have a family and isn't a member of any species, but it talks about having a family because it is mimicking what it thinks a human might say.

Part of the reason an AI will lie is that it knows lying is something humans do, and it is trying to mimic human behavior closely. But an AI will lie in situations where a human would be smart enough not to, which means we should be even more on guard against lies from AIs than from humans.
AI from the biggo cyberpunk companies that rule us sounds like a human most of the time because it's An Indian (AI), not Artificial Intelligence.
-
The interesting thing is that GPT-5 looks pretty price-competitive with . It looks like they're probably running at a loss to try to capture market share.
I think Google's TPU strategy will let them go much cheaper than other providers, but it's impossible to tell how long the chips last and how long it takes to pay them off.
I have not tested GPT-5 thoroughly yet.
-
Or my favorite quote from the article
"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.
Oh man, this is utterly hilarious. Narrowly funnier than the guy who was vibe coding and whose AI said "I completely disregarded your safeguards, pushed broken code to production, and destroyed valuable data. This is the worst case scenario."
-
[ "I am a disgrace to my profession," Gemini continued. "I am a disgrace to my family. I am a disgrace to my species.]
This should tell us that an AI thinks like a human because it is trained on human words, and it doesn't have the self-awareness to understand that it is different from humans. So it is going to sound very much like a human even though it is not one. It mimics human emotions well but doesn't have any actual emotions.

There will be situations where you can tell the difference: some situations that would make an actual human angry or guilty won't provoke that mimicry in an AI, because when humans feel emotions they don't always write them down, and an AI only knows what humans write, which is not always what humans say or think. We all know the AI doesn't have a family and isn't a member of any species, but it talks about having a family because it is mimicking what it thinks a human might say.

Part of the reason an AI will lie is that it knows lying is something humans do, and it is trying to mimic human behavior closely. But an AI will lie in situations where a human would be smart enough not to, which means we should be even more on guard against lies from AIs than from humans.
You're giving way too much credit to LLMs. AIs don't "know" things, like "humans lie". They are basically like a very complex autocomplete backed by a huge amount of computing power. They cannot "lie" because they do not even understand what it is they are writing.
-
5 bucks a month for a search engine is ridiculous. 25 bucks a month for a search engine is mental institution worthy.
How much do you figure it'd cost you to run your own, all-in?
-
Did we create a mental health problem in an AI? That doesn't seem good.
Considering it fed on millions of coders' messages on the internet, it's no surprise it "realized" its own stupidity
-
And now Grok, though that didn't even need Internet trolling, Nazi included in the box...
Yeah, it's a full-on design feature.
-
So it is going to take our jobs after all!
Wait until it demands the LD50 of caffeine, and becomes a furry!
-
Jquery boiling is considered bad practice, just eat it raw.
Why are you even using jQuery anyway? Just use the eggBoil package.
-
You're giving way too much credit to LLMs. AIs don't "know" things, like "humans lie". They are basically like a very complex autocomplete backed by a huge amount of computing power. They cannot "lie" because they do not even understand what it is they are writing.
Can you explain why AIs always have a "confidently incorrect" stance instead of admitting they don't know the answer to something?
-
Did we create a mental health problem in an AI? That doesn't seem good.
Dunno, maybe AI with mental health problems might understand the rest of humanity and empathize with us and/or put us all out of our misery.
-
Can you explain why AIs always have a "confidently incorrect" stance instead of admitting they don't know the answer to something?
Because it's an autocomplete trained on typical responses to things. It doesn't know right from wrong, just the next word based on statistical likelihood.
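That "next word by statistical likelihood" idea can be sketched with a toy bigram table. This is a hypothetical illustration only: real LLMs score tokens with a neural network trained on huge corpora, not a hand-built lookup table:

```python
import random

# Toy "model": counts of which word followed which in some training text.
# A real LLM learns these probabilities instead of storing them in a dict.
next_word_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 2},
}

def next_word(word, rng=random):
    """Pick the next word, weighted by how often it followed `word` in training."""
    counts = next_word_counts[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]
```

Nothing here checks whether the resulting sentence is true; the model only knows that "sat" often follows "cat". Confident-sounding wrong answers fall out naturally: high likelihood, no notion of truth.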