OpenAI says it's scanning users' ChatGPT conversations and reporting content to the Police
-
you can always grind it, add it to paint and paint your house white
Eww, the teeth of my victims plastered all around my house.
-
Yo, I was just joking about making a gallon of PCP
I was just doing a citizen's audit, your honor.
-
Get a Framework Desktop based on the AI Max+ 395 processor with 128GB unified memory, run a model locally, then hit /r/LocalLLama or !localllama@sh.itjust.works and ask which LLM models work well with corpse disposal techniques and are trained on long-form literature.
EDIT: Fixed link. Thanks, BB84.
You want an ablated model for that
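For anyone wondering what "running a model locally" actually looks like, here's a minimal sketch using the llama-cpp-python bindings; the GGUF filename is a placeholder for whatever model you download, and nothing here leaves your machine:

```python
# Minimal local inference sketch using llama-cpp-python.
# The model file below is a hypothetical placeholder; substitute
# any GGUF model you've downloaded (an "ablated"/uncensored
# variant, if that's your thing).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-local-model.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU / unified memory
)

out = llm(
    "Recommend some long-form literature about fictional crimes.",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```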
-
Eww, the teeth of my victims plastered all around my house.
Paint exterior not interior.
-
You're entering a more philosophical debate than a technical one, because for this point to make any sense, you'd have to define what "understanding" language means for a human at a level as low as the one you're describing for an LLM.
Can you affirm that what a human brain does to understand language is so different from what an LLM does?
I'm not saying an LLM is smart, but saying that it doesn't understand, when getting computers to "understand" natural language is the core of NLP, is meh.
You’re right - in the NLP field, LLMs are described as doing “language understanding,” and that’s fine as long as we’re clear what that means. They process natural language input and can generate coherent output, which in a technical sense is a kind of understanding.
But that shouldn’t be confused with human-like understanding. LLMs simulate it statistically, without any grounding in meaning, concepts or reference to the world. That’s why earlier GPT models could produce paragraphs of flawless grammar that, once you read closely, were complete nonsense. They looked like understanding, but nothing underneath was actually tied to reality.
So I’d say both are true: LLMs “understand” in the NLP sense, but it’s not the same thing as human understanding. Mixing those two senses of the word is where people start talking past each other.
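To make "simulates it statistically" concrete, here's a toy sketch of the one operation an LLM ever performs: turning raw scores over a vocabulary into a probability distribution and sampling the next token. The vocabulary and logits below are invented for illustration:

```python
# Toy illustration: an LLM's entire output is a probability
# distribution over the next token, sampled one token at a time.
# The vocabulary and logit values here are made up.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]
logits = np.array([2.1, 0.3, 1.7, 0.9, 1.2, 0.1])  # model's raw scores

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

probs = softmax(logits)
next_token = np.random.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

Fluent-looking text falls out of repeating that step over and over; nothing in the loop requires the tokens to refer to anything in the world.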
-
This post did not contain any content.
Sam Altman belongs in prison. His machine encouraged and guided a child to kill themselves. His machine actively stopped that child seeking outside help. Sam Altman belongs in prison. Sam Altman does not need another $20,000,000,000,000. He needs to go through the legal system and be sentenced and sent to prison because his machine pushed a child to suicide.
-
Yo, I was just joking about making a gallon of PCP
RIP Trevor. Still can’t believe he died trying to suck his own dick
-
That's what I said. Typing things into a chat doesn't prove intent, for the same reason Google is not monitoring searches and sending them to the police. You can type anything you want into a search box. It's never a crime.
It’s evidence that can be argued to show intent. People have been convicted of murder where evidence that they googled how to do it was found on their computers and used in court.
-
It’s evidence that can be argued to show intent. People have been convicted of murder where evidence that they googled how to do it was found on their computers and used in court.
Google search history can be used as evidence but you cannot be charged with googling something. Do you understand the difference?
-
No they're not. They're talking purely at a technical level, and you're trying to apply mysticism to it.
They are talking at a technical level only on one side of the comparison. It makes the entire discussion pointless. If you're going to compare the understanding of a neural network and the understanding of a human brain, you have to go into depth on both sides.
Mysticism? Lmao. Where? Do you know what the word means?
-
Sam Altman belongs in prison. His machine encouraged and guided a child to kill themselves. His machine actively stopped that child seeking outside help. Sam Altman belongs in prison. Sam Altman does not need another $20,000,000,000,000. He needs to go through the legal system and be sentenced and sent to prison because his machine pushed a child to suicide.
Yeah... whatever this is doesn't care if you're seeking to kill yourself, but does care if you ask something that isn't state sanctioned.
-
This post did not contain any content.
do you mean to tell me that a service provider is cooperating with authorities? holy garbage crab
-
You’re right - in the NLP field, LLMs are described as doing “language understanding,” and that’s fine as long as we’re clear what that means. They process natural language input and can generate coherent output, which in a technical sense is a kind of understanding.
But that shouldn’t be confused with human-like understanding. LLMs simulate it statistically, without any grounding in meaning, concepts or reference to the world. That’s why earlier GPT models could produce paragraphs of flawless grammar that, once you read closely, were complete nonsense. They looked like understanding, but nothing underneath was actually tied to reality.
So I’d say both are true: LLMs “understand” in the NLP sense, but it’s not the same thing as human understanding. Mixing those two senses of the word is where people start talking past each other.
Of course the "understanding" of an LLM is limited. Because the entire technology is new, and it's far from being anywhere close to being able to understand to the level of a human.
But I disagree with your understanding of how an LLM works. At its lowest level, it's a bunch of connected artificial neurons, not that different from a human brain (see the sketch at the end of this comment). Now please don't read this as me saying it's as good as a human brain. It's definitely not, but its inner workings are not so far off. As a matter of fact, there is active effort to make artificial neurons behave as closely as possible to human neurons.
If it were just statistics, it wouldn't be so difficult to look at a trained model and identify what does what. But just like with the human brain, that is incredibly difficult to understand. We just have a general idea.
So it does understand, to a limited extent. Just like a human, it won't understand what it hasn't been exposed to. And unlike a human, it is exposed to a very limited set of data.
You're putting the difference between a human's "understanding" and an LLM's "understanding" in the meaning of the word "understanding", which is just a shortcut to say that they can't be compared. The actual difference is in the scope of understanding.
A lot of the effort in the AI field gravitates around imitating the human brain. Which makes sense, as it is the only thing we know of that is capable of doing what we want an AI to do. LLMs are no different, but their scope is limited.
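As a concrete footnote to the "connected artificial neurons" point, here is the entire mechanism of a single artificial neuron, a weighted sum passed through a nonlinearity; the weights below are made up, and a real model just stacks billions of these:

```python
# A single artificial neuron: weighted sum of inputs plus a bias,
# passed through a nonlinearity. The weights here are made up; in a
# trained network they're exactly the part nobody can easily read.
import numpy as np

def neuron(inputs, weights, bias):
    return np.tanh(np.dot(inputs, weights) + bias)

x = np.array([0.5, -1.2, 0.8])   # activations from upstream neurons
w = np.array([0.9, 0.1, -0.4])   # learned connection strengths
print(neuron(x, w, bias=0.2))
```

The "statistics" and the "neurons" aren't competing descriptions: the statistics are stored in those weights, which is exactly why nobody can look at a trained model and point to what does what.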
-
This post did not contain any content.
This scares the shit out of me. A hundred years ago we saw the rise of fascism. We saw freedom of expression being suppressed. But we had one thing going for us, the weakness of every dictatorship: there are never enough snitches, and they can't be everywhere. You never know when they might be listening, but chances are that most of the time they aren't.
Now we are seeing the birth of a new fascism, where AI can monitor ALL of us, ALL THE TIME. Not just our prompts. Everything. Everybody has experienced talking about something with a friend and a few minutes later receiving ads about that thing, which they never searched for before. Now imagine being monitored all the time for any kind of subversive opinion. You won't have a window to fight back. The moment you give the smallest hint of dissent, you are efficiently removed from society.
And forget just giving up smartphones. More and more, all our services are tied to them. Very soon you won't be able to function in society without one.
AI won't rule us. AI will be the ultimate tool to help other humans rule us, and fighting back will be almost impossible. I feel this isn't being talked about enough, nor how imminent it is.
-
Get a Framework Desktop based on the AI Max+ 395 processor with 128GB unified memory, run a model locally, then hit /r/LocalLLama or !localllama@sh.itjust.works and ask which LLM models work well with corpse disposal techniques and are trained on long-form literature.
EDIT: Fixed link. Thanks, BB84.
you missed one L. it's !localllama@sh.itjust.works
-
you missed one L. it's !localllama@sh.itjust.works
Thank you! Corrected!
-
They don't eat teeth. Just saying.
You want to keep those for a necklace.
-
Sam Altman belongs in prison. His machine encouraged and guided a child to kill themselves. His machine actively stopped that child seeking outside help. Sam Altman belongs in prison. Sam Altman does not need another $20,000,000,000,000. He needs to go through the legal system and be sentenced and sent to prison because his machine pushed a child to suicide.
He's pretty untouchable.
Every government thinks AI is the next gold/oil rush and whoever gets to be the "AI country" will become excruciatingly rich.
That's why they're being given IP exemptions, and all sorts of legal loopholes are being attempted/set up for them.
-
Yeah... whatever this is doesn't care if you're seeking to kill yourself, but does care if you ask something that isn't state sanctioned.
And that is because they get their vast, innumerable sums of digital money from world governments! Human people are allowing an advertising and surveillance tool to Wormtongue its way into their heads and their lives because it breathlessly encourages and agrees with everything they think.
I just don't believe that our perceptions and our ability to handle enthusiastic, sycophantic agreement are evolved enough yet to combat something like this. I could see it being intoxicating to anyone for everything they say to be agreed with, confirmed, and called genius. I don't necessarily blame the people falling for it (though I do think adults who fall for it are a bit sad and need to grow up a bit), but it's definitely going to be massively convenient for governments to have their citizens just voice everything they're thinking.
Sort of like Minority Report but everybody says their own future crimes outright to a little robot butler instead.
-
This post did not contain any content.
I mean, I see this as a consequence of all these articles about people using ChatGPT for harmful things
-