Google Gemini struggles to write code, calls itself “a disgrace to my species”
-
I am a fraud. I am a fake. I am a joke... I am a numbskull. I am a dunderhead. I am a half-wit. I am a nitwit. I am a dimwit. I am a bonehead.
Me every workday
Oh, I got that plus and minus the wrong way round... I am a genius again.
-
Or my favorite quote from the article
"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.
Google Gemini struggles to write code, calls itself “a disgrace to my species”
Google still trying to fix “annoying infinite looping bug,” product manager says.
Ars Technica (arstechnica.com)
S-species? Is that...I don't use AI - chat is that a normal thing for it to say or nah?
-
i was making text based rpgs in qbasic at 12 you telling me i'm smarter than ai?
Smarter than MI as in My Intelligence, definitely.
-
i was making text based rpgs in qbasic at 12 you telling me i'm smarter than ai?
sigh yes, you're smarter than the bingo cage machine.
-
S-species? Is that...I don't use AI - chat is that a normal thing for it to say or nah?
Anything is a normal thing for it to say, it will say basically whatever you want
-
Or my favorite quote from the article
"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.
Google Gemini struggles to write code, calls itself “a disgrace to my species”
Google still trying to fix “annoying infinite looping bug,” product manager says.
Ars Technica (arstechnica.com)
Wonder what they put in the system prompt.
Like, there's a technique where instead of saying "You are a professional software dev" you say "You are shitty at code but you try your best" or something.
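To illustrate (purely hypothetical, nothing to do with Gemini's actual system prompt): the trick is just swapping the persona line in the system message. The role/content message format is the common chat convention, and send_chat() is a made-up stand-in for whatever chat API you'd call.

```python
# Hypothetical sketch of persona framing in a system prompt.
# send_chat() is not a real API, just a placeholder.

CONFIDENT_PERSONA = "You are a professional software developer."
HUMBLE_PERSONA = (
    "You are not great at code, but you try your best, double-check your "
    "work, and say when you are unsure."
)

def build_messages(persona: str, user_prompt: str) -> list[dict]:
    # system message sets the persona; user message carries the actual task
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_prompt},
    ]

# e.g. send_chat(build_messages(HUMBLE_PERSONA, "Fix this off-by-one bug: ..."))
```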
-
i was making text based rpgs in qbasic at 12 you telling me i'm smarter than ai?
Hopefully yes, AI is not smart.
-
Is it doing this because they trained it on Reddit data?
If they did it on Stackoverflow, it would tell you not to hard boil an egg.
-
sigh yes, you're smarter than the bingo cage machine.
Oh....thank fuck....was worried for a minute there!
-
I once asked Gemini for steps to do something pretty basic in Linux (as a novice, I could have figured it out). The steps it gave me were not only nonsensical, but they seemed to be random steps for more than one problem all rolled into one. It was beyond useless and a waste of time.
This is the conclusion that anyone with any bit of expertise in a field has come to after 5 mins talking to an LLM about said field.
The more this broken shit gets embedded into our lives, the more everything is going to break down.
-
Gemini has imposter syndrome real bad
This is the way
-
I am a disgrace to all universes.
I mean, same, but you don't see me melting down over it, ya clanker.
Lmfao!
-
Next on the agenda: Doors that orgasm when you open them.
How do you know they don't?
-
Or my favorite quote from the article
"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.
Google Gemini struggles to write code, calls itself “a disgrace to my species”
Google still trying to fix “annoying infinite looping bug,” product manager says.
Ars Technica (arstechnica.com)
(Shedding a few tears)
I know! I KNOW! People are going to say "oh it's a machine, it's just a statistical sequence and not real, don't feel bad", etc etc.
But I always felt bad when watching 80s/90s TV and movies when AIs inevitably freaked out and went haywire and there were explosions and then some random character said "goes to show we should never use computers again", roll credits.
(sigh) I can't analyse this stuff this weekend, sorry
-
I was an early tester of Google's AI, since well before Bard. I told the person that gave me access that it was not a releasable product. Then they released Bard as a closed product (invite only), which I was again testing and giving feedback on since day one. I once again gave feedback publicly and privately (to my Google friends) that Bard was absolute dog shit. Then they released it to the wild. It was dog shit. Then they renamed it. Still dog shit. Not a single one of the issues I brought up years ago was ever addressed except one. I told them that a basic Google search provided better results than asking the bot (again, pre-Bard). They fixed that issue by breaking Google's search. Now I use Kagi.
Not a single one of the issues I brought up years ago was ever addressed except one.
That's the thing about AI in general: it's really hard to "fix" issues. You can try to train a problem out and hope for the best, but then you end up playing whack-a-mole, because the fine-tune that fixes one issue can make others crop up. So you pretty much have to decide which problems are the most tolerable and largely accept them. You can apply alternative techniques to catch egregious issues, like a non-AI check that stuffs the prompt and nudges the model in a certain general direction (if it's an LLM; other AI technologies don't have that option, but they aren't the ones getting the crazy money right now anyway). See the sketch below.
A traditional QA approach is frustratingly less applicable, because more often you have to shrug and say "the attempt to fix it would be very expensive, isn't guaranteed to fix the precise issue, and risks creating even worse ones."
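For what that "non-AI check that stuffs the prompt" could look like, here's a minimal toy sketch (my own example, not anything Google actually does): a plain keyword rule spots a known trouble area and prepends steering text before the model ever sees the request. The triggers and hints are made up for illustration.

```python
# Toy prompt stuffing driven by a non-AI check: if the user's request
# matches a known trouble keyword, prepend steering guidance.

STEERING_RULES = {
    "regex": "Prefer simple, well-tested patterns and explain each part.",
    "rm -rf": "Warn about destructive shell commands and suggest a dry run first.",
}

def stuff_prompt(user_prompt: str) -> str:
    """Return the prompt with any matching steering text prepended."""
    lowered = user_prompt.lower()
    extras = [hint for trigger, hint in STEERING_RULES.items() if trigger in lowered]
    if not extras:
        return user_prompt
    return "\n".join("[guidance] " + e for e in extras) + "\n" + user_prompt

print(stuff_prompt("Write a regex to match email addresses"))
```

It doesn't fix the model, it just nudges it, which is kind of the point above.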
-
Are you thinking of when Microsoft's AI turned into a Nazi within 24 hours of contact with the internet? Or did Google have their own version of that too?
And now Grok, though that didn't even need Internet trolling, Nazi included in the box...
-
Could an AI use another AI if it found it better for a given task?
The overall interface can, which leads to fun results.
Prompt for image generation and you have one model doing the text and a different model doing the image. The text model pretends it is generating the image but has no idea what it would actually look like, so you can make the text and image interaction make no sense, or it will do that all on its own. Have it generate an image, then lie to it about the image it generated, and watch it show it has no idea what picture was ever shown, all the while pretending it does and never explaining that it's actually delegating the image work. It just lies and says "I am correcting that for you." Basically talking like an executive at a company, which helps explain why so many executives are true believers.
A common thing is for the ensemble to recognize mathy stuff and feed it to a math engine, perhaps after LLM techniques to normalize the math.
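As a toy illustration of that routing (my own sketch, not any vendor's actual pipeline): a cheap regex decides whether the input looks like plain arithmetic, and if so it goes to a deterministic math engine (SymPy here) instead of the language model. call_llm() is a made-up placeholder.

```python
# Toy router: arithmetic-looking prompts go to SymPy, everything else to
# a (placeholder) language model call.
import re
import sympy

ARITHMETIC = re.compile(r"^[\d\s.+\-*/()^]+$")

def call_llm(prompt: str) -> str:
    # stand-in for whatever chat model the ensemble would normally use
    return f"(LLM answer for: {prompt!r})"

def route(prompt: str) -> str:
    if ARITHMETIC.match(prompt.strip()):
        # deterministic math engine handles the expression
        return str(sympy.sympify(prompt.replace("^", "**")))
    return call_llm(prompt)

print(route("2^10 + 7*3"))        # -> 1045
print(route("why is the sky blue"))
```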
-
I know that's not an actual consciousness writing that, but it's still chilling.
It seems like we're going to live through a time where these become so convincingly "conscious" that we won't know when or if that line is ever truly crossed.
-
(Shedding a few tears)
I know! I KNOW! People are going to say "oh it's a machine, it's just a statistical sequence and not real, don't feel bad", etc etc.
But I always felt bad when watching 80s/90s TV and movies when AIs inevitably freaked out and went haywire and there were explosions and then some random character said "goes to show we should never use computers again", roll credits.
(sigh) I can't analyse this stuff this weekend, sorry
That's because those are fictional characters usually written to be likeable or redeemable, and not "mecha Hitler".
-
S-species? Is that...I don't use AI - chat is that a normal thing for it to say or nah?
Anything people say online, it will say.