It's 2025, the year we decided we need a widespread slur for robots
-
This post did not contain any content.
In the Manual it says you can whip your clankers as hard as you want. If they don't malfunction within 3 hours then it's all covered under warranty!
-
My dude.
I'm not arguing about empathy itself. I'm arguing that technology is entirely incapable of genuine empathy on its own.
"AI", in the most basic definition, is nothing more than a program running on a computer. That computer might be made of many, many computers with a shitton of processing power, but the principle is the same. It, like every other kind of technology out there, is only capable of doing what it's programmed to do. And genuine empathy cannot be programmed. Because genuine empathy is not logical.
You can argue against this until you're blue in the face, but it will not change the fact that computers do not have human feelings.
Actually, a lot of non-LLM AI development (and even LLMs, in a sense) is fundamentally based on concepts of negative and positive reinforcement.
In such situations... pain and pleasure are essentially the scoring rubrics for a generated strategy, and fairly often, in group scenarios... something resembling mutual trust, concern for others, 'empathy' arises as a stable strategy, especially if agents can detect or are made aware of the pain or pleasure of other agents, and if goals are achieved more successfully through cooperation.
This really shouldn't be surprising... as our own human (mammalian, really) empathy is fundamentally just a biological sort of 'answer' to the same sort of 'question.'
It is actually quite possible to base an AI more fundamentally on a simulation of empathy than on a simulation of expansive knowledge.
Unfortunately, the people in charge of throwing human money at LLM AI are all largely narcissistic sociopaths... so of course they chose to emulate themselves, not the basic human empathy that they lack.
Their wealth only exists and is maintained by their construction and refinement of elaborate systems for confusing, destroying, and misdirecting the broad empathy of normal humans.
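To make the "empathy arises as a stable strategy" point concrete, here's a toy sketch (entirely my own illustration, not anyone's actual training setup; all names and numbers are made up): two independent Q-learners repeatedly play a prisoner's dilemma, and an `empathy` weight mixes the other agent's payoff into each agent's own reinforcement signal. With the weight at zero, defection dominates; once each agent "feels" the other's reward, cooperation becomes the learned strategy.

```python
import random

# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def train(empathy, episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Two Q-learners in a repeated one-shot dilemma.

    `empathy` is the weight each agent puts on the *other* agent's
    payoff when computing its own reinforcement signal.
    """
    rng = random.Random(seed)
    q = [{"C": 0.0, "D": 0.0}, {"C": 0.0, "D": 0.0}]
    for _ in range(episodes):
        acts = []
        for i in range(2):
            if rng.random() < eps:           # occasional exploration
                acts.append(rng.choice("CD"))
            else:                            # otherwise act greedily
                acts.append(max(q[i], key=q[i].get))
        r0, r1 = PAYOFF[(acts[0], acts[1])]
        # Each agent's "pain/pleasure" signal includes the other's payoff.
        rewards = (r0 + empathy * r1, r1 + empathy * r0)
        for i in range(2):
            q[i][acts[i]] += alpha * (rewards[i] - q[i][acts[i]])
    return [max(qi, key=qi.get) for qi in q]

# Purely selfish agents settle on mutual defection; agents that
# share in each other's payoff settle on mutual cooperation.
print(train(empathy=0.0), train(empathy=1.0))
```

The mechanism is just arithmetic: with `empathy=1`, the effective payoff for mutual cooperation (3 + 3 = 6) beats exploiting a cooperator (5 + 0 = 5), so cooperating strictly dominates and the learners converge on it.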
-
At the end of the day, LLM/AI/ML/etc is still just a glorified computer program. It also happens to be absolutely terrible for the environment.
Insert "fraction of our power" meme here
-
Didn't we agree on clanker recently?
Oof with the hard r and all huh
-
Yes, they're all computer programs; no, they're not all as spectacularly energy-, water-, and money-intensive, or as reliant on mass plagiarism, as LLMs.
AI is a much, much more varied field of research than just LLMs... or, well, rather, it was, until the entire industry decided to go all in on what 5 years ago was just one of many, many radically different approaches, such that people now basically think AI and LLM are the same thing.
-
will leave you to die for its own self-preservation, no matter how kind you are
Should any creature sacrifice their self-preservation because someone is kind?
Yes. We do this literally every day. We pay taxes on what we earn to support those less fortunate. We share food with coworkers and tools with neighbors. We have EMTs, firemen, and SAR teams who willfully run into danger to help people they've never met. It's literally the foundation of society.
-
This post did not contain any content.
Silis (pronounced "sillies")
Like silicon.
-
This post did not contain any content.
We already have "clankers" thanks to Clone Wars. What more do we need?
-
Technically Star Wars coined it as a slur way the fuck back in one of the prequels. Shit ain't even from 2025.
-
Jesus fucking christ on a bike. You people are dense.
Oh, I get it now. You are incapable of empathy or a basic level of decency and are upset because you thought people would at least rather put up with you than a computer. If computers can mimic a basic level of human respect then what chance do you have?
-
What the fuck is the jump to personal attacks?
This is the comment that started this entire chain:
I refuse to participate in this. I love all robots.
And that’s totally not because AI will read every comment on the Internet someday to determine who lives and who does not in future robotic society.
I made an equally tongue-in-cheek comment in response, and apparently people took that personally, leading to personal attacks. You can fuck right off.
-
Well, that's a bad argument. This is all a guess on your part that is impossible to prove. You don't know how empathy or the human brain works, so you don't know it isn't computable; if you can explain these things in detail, enjoy your Nobel Prize. Until then, what you're saying is baseless conjecture with the pre-baked assumption that the human brain is special.
Conversely, I can't prove that it is computable, sure, but you're asserting those feelings you have as facts.
-
That's pathetic.
-
Oof with the hard r and all huh
This is gonna haunt me one day, just wait.
-