OpenAI says it's scanning users' ChatGPT conversations and reporting content to the Police
-
This post did not contain any content.
Local models. Can't be surveilled if your AI isn't on the internet.
-
This is exactly the use-case for an LLM
I don't think it is. An LLM is a language-generating tool, not a language-understanding one.
That is actually incorrect. It is also a language understanding tool. You don't have an LLM without NLP. NLP includes processing and understanding natural language.
-
I mean isn't that the same as searching "how to kill president" on google using their computer
But with AI
-
Are you seriously comparing a corrupt Israeli politician to an average Joe? Israel can get away with murdering Americans, and the US would apologize that they didn't die earlier.
The USS Liberty remembers.
-
They haven't refused to charge him. He has a hearing scheduled on September 3rd.
That's state charges... This was a fed op, and the feds always charge under this fact pattern.
The fact that the feds didn't charge means that somebody at the DoJ decided to go against policy.
-
He had diplomatic immunity. They refused to prosecute because it is an international incident that would require dragging the Israelis to the ICC just to get permission to prosecute him in their jurisdiction. That's always a decades-long process in normal times - and with this administration of pedos who are beholden to Mossad, there's a 0% chance of it happening. So it's often better NOT to prosecute and wait it out until friendlier times than it is to swiftly lose a trial and then be prevented from seeking justice by double jeopardy.
It's part of why the Kyle Rittenhouse trial was such a shitshow. The prosecution team threw the case intentionally and made him immune to justice.
You're making factually incorrect statements. Please do some basic diligence.
But yeah, the regime are pedos, no doubt about it.
-
Did the feds charge the other people caught in the same sting? I'm not seeing any articles about the fed vs state charges.
-
That is actually incorrect. It is also a language understanding tool. You don't have an LLM without NLP. NLP includes processing and understanding natural language.
But it doesn’t understand - at least not in the sense humans do. When you give it a prompt, it breaks it into tokens, matches those against its training data, and generates the most statistically likely continuation. It doesn’t “know” what it’s saying, it’s just producing the next most probable output. That’s why it often fails at simple tasks like counting letters in a word - it isn’t actually reading and analyzing the word, just predicting text. In that sense it’s simulating understanding, not possessing it.
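The "most statistically likely continuation" mechanism described above can be sketched with a toy bigram model. This is a deliberately tiny stand-in, not a real LLM (which uses a neural network over subword tokens, not bigram counts), but the predict-the-next-token loop is the same idea:

```python
from collections import Counter, defaultdict

# Toy stand-in for next-token prediction: count which token follows
# which in a tiny corpus, then always emit the statistically most
# likely continuation of the prompt.
corpus = "the cat sat on the mat . the dog sat on the mat .".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def continue_text(prompt, steps=4):
    tokens = prompt.split()
    for _ in range(steps):
        candidates = following.get(tokens[-1])
        if not candidates:
            break
        # Pick the most probable next token; there is no notion of
        # meaning here, only frequency in the training data.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(continue_text("the cat"))  # → "the cat sat on the mat"
```

Note that this model also never sees individual letters, only whole tokens, which is the same kind of gap behind the letter-counting failures mentioned above.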
-
Did the feds charge the other people caught in the same sting? I'm not seeing any articles about the fed vs state charges.
I am not sure.
I tried looking more into it, but I can't even find the articles I saw last week now.
Recent reports did say the Israeli skipped his court hearing yesterday, though.
-
But it doesn’t understand - at least not in the sense humans do. When you give it a prompt, it breaks it into tokens, matches those against its training data, and generates the most statistically likely continuation. It doesn’t “know” what it’s saying, it’s just producing the next most probable output. That’s why it often fails at simple tasks like counting letters in a word - it isn’t actually reading and analyzing the word, just predicting text. In that sense it’s simulating understanding, not possessing it.
You're entering a more philosophical debate than a technical one, because for this point to make any sense, you'd have to define what "understanding" language means for a human at a level as low as what you're describing for an LLM.
Can you affirm that what a human brain does to understand language is so different from what an LLM does?
I'm not saying an LLM is smart, but saying that it doesn't understand, when having computers "understand" natural language is the core of NLP, is meh.
-
This post did not contain any content.
Of course they are. They are a literal data farm. People need to stop using it.
-
Local models. Can't be surveilled if your AI isn't on the internet.
Exactly.
This is going to be the next Google-searches thing, isn't it? People being ignorant of, or forgetting, that corporations are saving everything they say or do, and then being all shocked when they get exploited for profit or reported to the authorities for doing shady things.
Rinse and repeat.
-
You're entering a more philosophical debate than a technical one, because for this point to make any sense, you'd have to define what "understanding" language means for a human at a level as low as what you're describing for an LLM.
Can you affirm that what a human brain does to understand language is so different from what an LLM does?
I'm not saying an LLM is smart, but saying that it doesn't understand, when having computers "understand" natural language is the core of NLP, is meh.
No, they're not. They're talking purely at a technical level, and you're trying to apply mysticism to it.
-
This post did not contain any content.
Yo, I was just joking about making a gallon of PCP
-
This post did not contain any content.
This sounds like PR deflection from egging the kid on toward suicide. But I still don't doubt it.
-
This post did not contain any content.
There are anonymous GPT versions out there.
Example:
https://duckduckgo.com/?q=DuckDuckGo+AI+Chat&ia=chat&duckai=1&atb=v382-1
I've seen several others as well.
-
This post did not contain any content.
Pretty much every corporately owned service on the Internet actively spies on you for the police.
An important thing to understand as authoritarians take control of governments and start using this comprehensive spying apparatus to target political opponents.
Learn to use your computer. Use open sourced tools and software, invest in your own hardware and host your own services. It doesn’t require years of learning or study, you can often get by with a video or two.
My Jellyfin server doesn’t call the police. My local language models don’t store everything I’ve ever written. Nobody is scanning my NextCloud server or mining my Signal/Matrix/Jami contacts to determine my social graph.
All of this is running on cheap leftover hardware (with some new hard drives) and I save over $100/mo on the equivalent services. And way more if you consider access to every streaming service with exclusive content.
Windows is spying on you, Meta is spying on you, Google is spying on you, Amazon is spying on you, OpenAI is spying on you.
They do this because it makes their software slightly easier to use, and so people give up every bit of privacy and autonomy for their entire lives just to avoid reading a wiki or learning a technical skill.
I don’t think that that is a good deal.
-
you can always grind it, add it to paint and paint your house white
Eww, the teeth of my victims plastered all around my house.
-
Yo, I was just joking about making a gallon of PCP
I was just doing a citizen's audit, your honor.
-
Get a Framework Desktop based on an AI Max 395+ processor with 128GB of unified memory, run a model locally, then hit /r/LocalLLama or !localllama@sh.itjust.works and ask which LLM models work well with corpse-disposal techniques and are trained on long-form literature.
EDIT: Fixed link. Thanks, BB84.
You want an ablated model for that