Google Gemini struggles to write code, calls itself “a disgrace to my species”
-
"I am a disgrace to my profession," Gemini continued. "I am a disgrace to my family. I am a disgrace to my species."
This should tell us that the AI sounds like a human because it is trained on human words and doesn't have the self-awareness to understand it is different from humans. So it is going to sound very much like a human even though it is not. It mimics human emotions well but doesn't have any actual emotions. There will be situations where you can tell the difference: some situations that would make an actual human angry or guilty won't always provoke this mimicry in an AI, because when humans feel emotions they don't always write down words to show it, and the AI only knows what humans write, which is not always the same as what humans say or think. We all know the AI doesn't have a family and is not part of the human species, but it talks about having a family because it is mimicking what it thinks a human might say. Part of the reason an AI will lie is that it knows lying is a thing humans do and it is trying to closely mimic human behavior. But an AI will lie in situations where humans would be smart enough not to, which means we should be on our guard about lies even more with AIs than with humans.
You're giving way too much credit to LLMs. AIs don't "know" things, like "humans lie". They are basically like a very complex autocomplete backed by a huge amount of computing power. They cannot "lie" because they do not even understand what it is they are writing.
-
5 bucks a month for a search engine is ridiculous. 25 bucks a month for a search engine is mental institution worthy.
How much do you figure it'd cost you to run your own, all-in?
-
Did we create a mental health problem in an AI? That doesn't seem good.
Considering it fed on millions of coders' messages on the internet, it's no surprise it "realized" its own stupidity
-
And now Grok, though that didn't even need Internet trolling, Nazi included in the box...
Yeah, it's a full-on design feature.
-
So it is going to take our jobs after all!
Wait until it demands the LD50 of caffeine, and becomes a furry!
-
Jquery boiling is considered bad practice, just eat it raw.
Why are you even using jQuery anyway? Just use the eggBoil package.
-
You're giving way too much credit to LLMs. AIs don't "know" things, like "humans lie". They are basically like a very complex autocomplete backed by a huge amount of computing power. They cannot "lie" because they do not even understand what it is they are writing.
Can you explain why AIs always have a "confidently incorrect" stance instead of admitting they don't know the answer to something?
-
Did we create a mental health problem in an AI? That doesn't seem good.
Dunno, maybe AI with mental health problems might understand the rest of humanity and empathize with us and/or put us all out of our misery.
-
Can you explain why AIs always have a "confidently incorrect" stance instead of admitting they don't know the answer to something?
Because it's an autocomplete trained on typical responses to things. It doesn't know right from wrong, just the next word based on statistical likelihood.
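The "autocomplete picking the next word by statistical likelihood" idea can be sketched with a toy bigram model. This is a deliberately tiny sketch, nothing like a real LLM, and the corpus is made up; the point is just that the prediction comes from frequency counts, with no concept of being right or wrong:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the training data (made up for illustration).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Return the statistically most likely next word.
    # No notion of truth or knowledge -- just "what usually comes next".
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat", because "cat" follows "the" most often here
```

Note that `next_word` always answers with its top candidate, even when the counts barely favor it, which is a crude analogue of the "confidently incorrect" behavior discussed below in the thread.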
-
Can you explain why AIs always have a "confidently incorrect" stance instead of admitting they don't know the answer to something?
I'd say that it's simply because most people on the internet (the dataset the LLMs are trained on) say a lot of things with absolute confidence, no matter if they actually know what they are talking about or not. So AIs will talk confidently because most people do so. It could also be something about how they are configured.
Again, they don't know if they know the answer, they just say what's the most statistically probable thing to say given your message and their prompt.
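The "something about how they are configured" guess can also be illustrated. One real knob is sampling temperature; the sketch below uses made-up candidate words and scores (not a real model) to show that a lower temperature makes the top choice dominate, which reads as more "confidence" without any check on correctness:

```python
import math
import random

# Toy scores for candidate next words (invented numbers for illustration).
logits = {"yes": 2.0, "no": 1.5, "I don't know": 0.2}

def sample(logits, temperature=1.0):
    # Softmax-style sampling: lower temperature sharpens the distribution,
    # so the highest-scoring word is picked almost every time.
    scaled = {w: math.exp(v / temperature) for w, v in logits.items()}
    total = sum(scaled.values())
    r = random.uniform(0, total)
    for w, weight in scaled.items():
        r -= weight
        if r <= 0:
            return w
    return w  # float rounding fallback

print(sample(logits, temperature=0.1))  # almost always "yes"
```

At `temperature=1.0` the weaker candidates still get picked sometimes; at `temperature=0.1` the model sounds certain, even though the underlying scores never encoded whether "yes" is actually true.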
-
Or my favorite quote from the article
"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.
Suddenly trying to write small programs in assembler on my Commodore 64 doesn't seem so bad. I mean, I'm still a disgrace to my species, but I'm not struggling.
-
Because it's an autocomplete trained on typical responses to things. It doesn't know right from wrong, just the next word based on statistical likelihood.
Are you saying the AI does not know when it does not know something?
-
I'd say that it's simply because most people on the internet (the dataset the LLMs are trained on) say a lot of things with absolute confidence, no matter if they actually know what they are talking about or not. So AIs will talk confidently because most people do so. It could also be something about how they are configured.
Again, they don't know if they know the answer, they just say what's the most statistically probable thing to say given your message and their prompt.
Again, they don’t know if they know the answer
Then in that respect AIs aren't even as powerful as an ordinary computer program.
say a lot of things with absolute confidence, no matter if they actually know what they are talking about or not.
That was my guess too.
-
Suddenly trying to write small programs in assembler on my Commodore 64 doesn't seem so bad. I mean, I'm still a disgrace to my species, but I'm not struggling.
Why wouldn't you use Basic for that?
-
Don't mention it! I'm glad I could help you with that.
I am a large language model, trained by Google. My purpose is to assist users by providing information and completing tasks. If you have any further questions or need help with another topic, please feel free to ask. I am here to assist you.
/j, obviously. I hope.
I am here to assist you.
Can you jump in the lake for me? Thanks in advance.
-
Why wouldn't you use Basic for that?
BASIC 2.0 is limited and I am trying some demo effects.
-
Or my favorite quote from the article
"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.
After What Microsoft Did To My Back In 2019 I Know They Have Gotten More Shady Than Ever Lets Keep Fighting Back For Our Freedom
Clippy Out -
Literally what the actual fuck is wrong with this software? This is so weird...
I swear this is the dumbest damn invention in the history of inventions. In fact, it's the dumbest invention in the universe. It's really the worst invention in all universes.
Great invention... just used horribly wrong. The classic capitalist greed: just gotta get on the wagon and roll it out so you don't miss out on a potential paycheck.
-
Or my favorite quote from the article
"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.
We've got AIs having mental breakdowns before GTA 6.
-
Are you saying the AI does not know when it does not know something?
Exactly. I'm oversimplifying it of course, but that's generally how it works. It's also not "AI" as in Artificial Intelligence in the traditional sense of the word; it's Machine Learning. But of course it's effectively had a semantic change over the last couple of years because AI sounds cooler.
Edit: just wanted to clarify I'm talking about LLMs like ChatGPT etc.