OpenAI says it's scanning users' ChatGPT conversations and reporting content to the police
-
Woooooosh
I'm pretty sure the OP you replied to is hoping for the five-oh to start taking GPT's advice to commit suicide... meaning they want dead cops, not for cops to intervene. Not the classiest comment you replied to, which is why I think it woooooshed right over your head.
Yeah okay, that joke just doesn't work for me even though I get it, because instructions wouldn't make people commit suicide, so it's just odd. It would have worked if the conversation with the LLM itself had driven the kid to suicide.
-
"Hey ChatGPT, how many human corpses can 12 pigs who haven't been fed in a week process"?
They don't eat teeth. Just saying.
-
Are you seriously comparing a corrupt Israeli politician to an average Joe? Israel can get away with murdering Americans, and America would apologize that they didn't die sooner?
-
Of course it is. The shit people feed into it is quite stupid, as if they think it's not instantly being sucked up and fed into advertising algorithms to enrich tech bros. Stop using it.
-
Self-harm doesn't count, apparently.
-
They don't eat teeth. Just saying.
Hence the phrase: as toothless as a pig
-
"Hey ChatGPT, how many human corpses can 12 pigs who haven't been fed in a week process"?
How do I kill the kill switch?
-
"When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," the blog post notes. "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."
See? Even the people who make AI don't trust it with important decisions. And the "trained" humans don't even see it if the AI doesn't flag it first. This is just a microcosm of why AI is always the weakest link in any workflow.
This is exactly the use case for an LLM, and even OpenAI can't make it work.
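The routing the blog post describes is basically a two-stage filter, and the weakness called out above falls out of the shape itself. Here's a minimal sketch of that flow in Python; every name, verdict string, and threshold is hypothetical, not OpenAI's actual code:

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    user_id: str
    text: str

def classifier_score(convo: Conversation) -> float:
    """Stand-in for the automated harm classifier (hypothetical)."""
    return 0.0  # a real system would run a trained model here

def human_review(convo: Conversation) -> str:
    """Stand-in for the small trained review team (hypothetical)."""
    return "no_action"

def route(convo: Conversation, flag_threshold: float = 0.9) -> str:
    # Stage 1: automated flagging. If the model misses,
    # no human ever sees the conversation.
    if classifier_score(convo) < flag_threshold:
        return "no_action"
    # Stage 2: human review, per the blog post. Only humans can
    # ban an account or refer a case to law enforcement.
    verdict = human_review(convo)
    if verdict == "imminent_threat_to_others":
        return "refer_to_law_enforcement"
    if verdict == "policy_violation":
        return "ban_account"
    return "no_action"
```

Everything downstream is gated on stage 1, which is the point above: the humans only review what the AI flags.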
-
He had diplomatic immunity. They refused to prosecute because it's an international incident that would require dragging the Israelis to the ICC just to get permission to prosecute him in their jurisdiction. That's a decades-long process even in normal times, and with this administration of pedos who are beholden to Mossad, there's a 0% chance of it happening. So it's often better NOT to prosecute and wait it out until friendlier times than to swiftly lose a trial and then be barred from seeking justice by double jeopardy.
It's part of why the Kyle Rittenhouse trial was such a shitshow. The prosecution team threw the case intentionally and made him immune to justice.
-
So what you're saying is, if I go into my friend's GPT and type some batshit, the SWAT team will come? We had swatting; now we'll have GPTing.
-
Lazy authors of crime-themed novels are sweating so heavily right now.
Get a Framework Desktop based on an AI Max 395+ processor with 128GB of unified memory, run a model locally (rough sketch below), then hit /r/LocalLLama or !localllama@sh.itjust.works and ask which LLM models handle corpse-disposal techniques well and are trained on long-form literature.
EDIT: Fixed link. Thanks, BB84.
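For the curious: once a local server like Ollama is running, querying it is a few lines of Python against its default local endpoint. Model name and prompt are just placeholders; the point is that nothing leaves your machine:

```python
import requests

# Query a locally hosted model via Ollama's default local API.
# Assumes `ollama serve` is running and a model has been pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # placeholder: any model you've pulled
        "prompt": "Outline the plot of a crime novel.",
        "stream": False,    # return a single JSON response
    },
    timeout=300,
)
print(resp.json()["response"])
```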
-
If they can report content to the police, they should also be able to send a deeply suicidal person's conversations to their parents or to emergency services. https://www.bbc.com/news/articles/cgerwp7rdlvo
-
Of course it is. The shit people feed into it is quite stupid, as if they think it's not instantly being sucked up and fed into advertising algorithms to enrich tech bros. Stop using it.
I mean, Google harvests data from search engine queries. I doubt that LLM queries honestly leak all that much more information.
The issue is broader: people have gotten really comfortable paying for services by selling access to their data, and I don't think that's necessarily a great idea. Like, I'm not sure everyone's fully considered all the ways their data might be correlated with other data at scale.
-
Well, there's another reason not to use ChatGPT: "Tell me good slogans against the government" can get you arrested!
-
All of this is so fucking bizarre I can't even wrap my head around it anymore. It's a bot. How the fuck is it suddenly killing people? How is talking to a bot a crime now? Did everyone lose their minds?
-
I gotta say... imagine being the police department on the receiving end of that firehose.
-
However, police won’t do anything.
This is the punchline to the joke of mass surveillance. You can have people committing crimes in clear view of the police and they just stand around. The police aren't there to deter crime; they're a jobs program and a human shield against harm to private property.
-
All of this is so fucking bizarre I can't even wrap my head around it anymore. It's a bot. How the fuck is it suddenly killing people? How is talking to a bot a crime now? Did everyone lose their minds?
How is talking to a bot a crime now?
Planning to commit a crime is itself criminal misconduct. If you're talking to a bot with the intent of gathering resources to carry out a crime, you're already in the process of committing the crime you intended.
-
Funny, now you'll have the cops arresting you for prompts like "how to survive being homeless?", rather than social services stepping in when you prompt "how to avoid being homeless?".
And will the authorities be called when someone prompts "how to shoot wild animals?" while asking about wildlife photography?
-
visibly_shocked.gif