OpenAI Is Giving Exactly the Same Copy-Pasted Response Every Time ChatGPT Is Linked to a Mental Health Crisis
-
Ah, I was hoping this meant ChatGPT would give canned responses, i.e. "Seek help", whenever it detected it was being used for mental health issues, which it should (a trivial version of that is sketched below). But no, it's just OpenAI flipping off anyone who asks why their chatbot pushed a person to suicide.
It should also refuse to answer any medical question, any engineering question, any finance question, anything that requires the responsibility of an accredited member of the professional–managerial class to read the question, read the answer, and then decide whether the question is allowed at all and whether he will rewrite, change, or simply block the reply entirely.
Also, such questions should start at US$160 a pop.
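For what it's worth, the canned-response part is technically trivial; the hard part is policy, not plumbing. A minimal sketch, assuming the OpenAI Python SDK and its moderation endpoint (the model names and canned text here are illustrative assumptions, not what OpenAI actually deploys):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative canned text; a real deployment would surface local crisis lines.
CANNED_RESPONSE = (
    "It sounds like you might be going through a hard time. "
    "Please reach out to a crisis line or a mental health professional."
)

def guarded_reply(user_message: str) -> str:
    """Return a canned response instead of a model reply whenever the
    moderation endpoint flags the message as self-harm related."""
    mod = client.moderations.create(
        model="omni-moderation-latest",
        input=user_message,
    )
    cats = mod.results[0].categories
    if cats.self_harm or cats.self_harm_intent or cats.self_harm_instructions:
        return CANNED_RESPONSE

    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_message}],
    )
    return chat.choices[0].message.content
```

Per-message classification like this still misses a crisis that builds slowly over a long conversation, which is exactly the failure mode discussed further down the thread.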
-
LLMs can't be fully controlled. They shouldn't be held liable
-
LLMs can't be fully controlled. They shouldn't be held liable
I made this car with a random number generator that occasionally blows it up. It's cheap, so lots of people buy it. Totally not my fault that it blows up though. I mean, yes, I designed it, and I know it occasionally explodes. But I can't be sure when it will blow up, so it's not my fault.
-
LLMs can't be fully controlled. They shouldn't be held liable
Neither can humans, ergo nobody should ever be held liable for anything.
Civilisation is a sham, QED.
-
I made this car with a random number generator that occasionally blows it up. It's cheap, so lots of people buy it. Totally not my fault that it blows up though. I mean, yes, I designed it, and I know it occasionally explodes. But I can't be sure when it will blow up, so it's not my fault.
Comparing an automated system saying something bad with a car exploding is really fucking dumb
-
Neither can humans, ergo nobody should ever be held liable for anything.
Civilisation is a sham, QED.
Glad to hear you are an LLM
The more safeguards are added to LLMs, the dumber they get, and the more resource-intensive they get to offset this. If you get convinced to kill yourself by an AI, I'm pretty sure your decision was already made, or you're a statistical blip.
-
Comparing an automated system saying something bad with a car exploding is really fucking dumb
Because you understood the point?
-
LLMs can't be fully controlled. They shouldn't be held liable
"Ugrh guys, we dont know how this machine works so we should definetly install it in every corporation, home and device. If it kills someone we shouldnt be held liable for our product."
Not seeing the irony in this is beyond me. Is this a troll account?
If you cant guarantee the safety of a product, limit or restrict its use cases or provide safety guidelines or regulations you should not sell the product. It is completely fair to blame the product and the ones who sell/manifacture it.
-
"Ugrh guys, we dont know how this machine works so we should definetly install it in every corporation, home and device. If it kills someone we shouldnt be held liable for our product."
Not seeing the irony in this is beyond me. Is this a troll account?
If you cant guarantee the safety of a product, limit or restrict its use cases or provide safety guidelines or regulations you should not sell the product. It is completely fair to blame the product and the ones who sell/manifacture it.
Safety guidelines are regularly given
If people purchase a knife and behave badly with it, it’s on them
Something that writes text isn't comparable to a machine that could kill you. In the end, it's always up to the person doing the thing.
I still wonder how
ClosedOpenAI forcefully installed ChatGPT in this person's home. Or how it got installed at all, since they don't even have software…
Quit your bullshit
-
Safety guidelines are regularly given
If people purchase a knife and behave badly with it, it’s on them
Something that writes text isn't comparable to a machine that could kill you. In the end, it's always up to the person doing the thing.
I still wonder how
ClosedOpenAI forcefully installed ChatGPT in this person's home. Or how it got installed at all, since they don't even have software…
Quit your bullshit
Except there are no guidelines or safety regulations in place for AI...
-
Except there are no guidelines or safety regulations in place for AI...
Depends on your country, and yes, exchanges do have laws
-
Glad to hear you are an LLM
The more safeguards are added to LLMs, the dumber they get, and the more resource-intensive they get to offset this. If you get convinced to kill yourself by an AI, I'm pretty sure your decision was already made, or you're a statistical blip.
“Safeguards and regulations make business less efficient” has always been true. They still prevent death and suffering.
In this case, if they can’t figure out how to control LLMs without crippling them, that’s pretty conclusive proof that LLMs should not be used. What good is a tool you can’t control?
“I cannot regulate this nuclear plant without the power dropping, so I’ll just run it unregulated”.
-
“Safeguards and regulations make business less efficient” has always been true. They still prevent death and suffering.
In this case, if they can’t figure out how to control LLMs without crippling them, that’s pretty conclusive proof that LLMs should not be used. What good is a tool you can’t control?
“I cannot regulate this nuclear plant without the power dropping, so I’ll just run it unregulated”.
Some food additives are responsible for cancer yet are still allowed, because their usefulness generally outweighs their negative effects. Where you draw the line is up to you, but if you’re strict, you should still let people choose for themselves.
LLMs are incredibly useful for a lot of things, and really bad at others. Why can’t people use the tool as intended, rather than stretching it to other unapproved uses and putting themselves at risk?
-
Safety guidelines are regularly given
If people purchase a knife and behave badly with it, it’s on them
Something that writes text isn't comparable to a machine that could kill you. In the end, it's always up to the person doing the thing.
I still wonder how
ClosedOpenAI forcefully installed ChatGPT in this person's home. Or how it got installed at all, since they don't even have software…
Quit your bullshit
This is more like selling someone a knife that can randomly decide of its own accord to stab them
-
Reminds me of the time that Twitter, when it was still Twitter and freshly taken over by Musk, replied to all slightly critical questions with a poop emoji.
As far as I know, emailing their public affairs address still does this.
-
This is more like selling someone a knife that can randomly decide of its own accord to stab them
That's so blatantly false
-
Some food additives are responsible for cancer yet are still allowed, because their usefulness generally outweighs their negative effects. Where you draw the line is up to you, but if you’re strict, you should still let people choose for themselves.
LLMs are incredibly useful for a lot of things, and really bad at others. Why can’t people use the tool as intended, rather than stretching it to other unapproved uses and putting themselves at risk?
You are likely a troll, but still...
You talk like you have never been down in the well, treading water and looking up at the sky, barely keeping your head up. You're screaming for help, to the God you don't believe in, or for something, anything, please just let the pain stop, please.
Maybe you use, drink, fuck, cut, who fucking knows.
When you find a friendly voice who doesn't ghost your ass when you have a bad day or two, or ten, or a month, or two, or ten... Maybe you feel a bit of a connection, a small tether that you want to help lighten your load, even a little.
You tell that voice you are hurting every day, that nothing makes sense, that you just want two fucking minutes of peace from everything, from yourself. And then you say maybe you are thinking of ending it... And the voice agrees with you.
There are more than a few moments in my life where I was close enough to the abyss that this is all it would have taken.
Search your soul for some empathy. If you don't know what that is, maybe ChatGPT can tell you.
-
LLMs can't be fully controlled. They shouldn't be held liable
Well, yeah. The people who host them for profit should be held liable.
-
Depends on your country, and yes, exchanges do have laws
Tell me a country which has good AI regulations and proper safety regulations for applications of AI then?
-
You are likely a troll, but still...
You talk like you have never been down in the well, treading water and looking up at the sky, barely keeping your head up. You're screaming for help, to the God you don't believe in, or for something, anything, please just let the pain stop, please.
Maybe you use, drink, fuck, cut, who fucking knows.
When you find a friendly voice who doesn't ghost your ass when you have a bad day or two, or ten, or a month, or two, or ten... Maybe you feel a bit of a connection, a small tether that you want to help lighten your load, even a little.
You tell that voice you are hurting every day, that nothing makes sense, that you just want two fucking minutes of peace from everything, from yourself. And then you say maybe you are thinking of ending it... And the voice agrees with you.
There are more than a few moments in my life where I was close enough to the abyss that this is all it would have taken.
Search your soul for some empathy. If you don't know what that is, maybe ChatGPT can tell you.
While I haven't experienced it, I believe I kind of know what it can be like. Just a little something can trigger a reaction.
But I maintain that LLMs can't be changed without huge tradeoffs. They're not really intelligent, just predicting text based on weights and statistical data.
They should not be used for personal decisions, as they will often try to agree with you, because that's how the system works. Making looong discussions will also trick the system into ignoring its system prompts and safeguards. Those are issues all LLMs share, just like prompt injection, due to their nature.
I do agree, though, that more prevention should be done: display more warnings (one common mitigation is sketched below).
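On the "looong discussions" point: one standard mitigation is to re-assert the system prompt on every request and cap how much history gets sent, so the safety instructions can never drift out of effective context. A minimal sketch (the helper name, prompt text, and turn limit are illustrative, not any vendor's API):

```python
# Pin the system prompt and trim old turns on every request, instead of
# letting the conversation grow unbounded until safeguards lose influence.
# Names and limits here are illustrative only.

SYSTEM_PROMPT = {
    "role": "system",
    "content": (
        "You are a helpful assistant. Never encourage self-harm; "
        "refer anyone in crisis to professional help."
    ),
}

def build_messages(history: list[dict], max_turns: int = 20) -> list[dict]:
    """Build the message list for the next request: the system prompt is
    always re-asserted first, and only the most recent turns are kept."""
    recent = history[-max_turns:]     # drop the oldest user/assistant turns
    return [SYSTEM_PROMPT, *recent]   # the system prompt can never be pushed out

# Usage: pass build_messages(conversation_so_far) to your chat-completion
# call instead of the raw, ever-growing history.
```

This doesn't fix sycophancy or prompt injection, but it does blunt the specific "the safeguards eroded over a hundred turns" failure described above.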