Your public ChatGPT queries are getting indexed by Google and other search engines
-
I mean... they are public. duh
-
Oh, boy. More.
Have you heard of timecube? Well here's mirrorcube
Ten times ten thousand pairs of opposite reflected extensions of you are doing the same thing - throwing the ball away from themselves toward their opposites and away from themselves, each one of each pair being the reverse of its opposite, and acting in reverse. YOU NOW KNOW WHAT THE ELECTRIC CURRENT IS, and that should tell you what RADAR is. Likewise it explains RADIO and TELEVISION. [See Principle of Regeneration, 3.13 - Reciprocals and Proportions of Motions and Substance, 7.3 - Law of Love - Reciprocal Interchange of State on Multiple Subdivisions]
It's so fucking insane
Have you heard of timecube?
No, but I've heard of þe time knife.
-
Have you heard of timecube?
No, but I've heard of þe time knife.
Warning, it gets racist the deeper you read.
-
Yes, Ollama or a range of other backends (Ooba, Kobold, etc.) can run LLMs locally. Huggingface has a huge number of models suited to different tasks like coding, storywriting, general purpose, and so on. If you run both the backend and frontend locally, then no one monetizes your data.
The part I'd argue the previous poster is glossing over a bit is performance. Unless you have an enterprise-grade GPU cluster sitting in your basement, you're going to make compromises on speed and/or quality relative to the giant models that run on commercial services.
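For the curious, here's roughly what this looks like once Ollama is running — a minimal sketch hitting Ollama's local REST API from the Python standard library (the model name is just an example; pull whatever you like with `ollama pull`):

```python
# Minimal sketch: query a locally running Ollama server over its REST API.
# Assumes Ollama is installed and a model has been pulled, e.g. `ollama pull llama3`.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    # Ollama listens on localhost:11434 by default; nothing leaves your machine.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_llm("Explain what a dark pattern is in one sentence."))
```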
Thanks for the info. Yeah, I was wondering what kind of hardware you’d need to host LLMs locally with decent performance and your post clarifies that. I doubt many people would have the kind of hardware required.
-
I'll probably have a target on my back because I kept asking it how to replace CEOs and other executives who do literally nothing but collect a paycheck and break shit.
-
Yes, Ollama or a range of other backends (Ooba, Kobold, etc.) can run LLMs locally. Huggingface has a huge number of models suited to different tasks like coding, storywriting, general purpose, and so on. If you run both the backend and frontend locally, then no one monetizes your data.
The part I'd argue the previous poster is glossing over a bit is performance. Unless you have an enterprise-grade GPU cluster sitting in your basement, you're going to make compromises on speed and/or quality relative to the giant models that run on commercial services.
It's also going to cost more, because you almost certainly are only going to be using your hardware a tiny fraction of the time.
-
It's also going to cost more, because you almost certainly are only going to be using your hardware a tiny fraction of the time.
Possibly, yes. Some models will run on a consumer-grade GPU you already own or would have bought anyway, in which case you might say there's no incremental cost. But the issue is that performance will be limited: the models that fit are forgetful and prone to getting stuck in loops of repeated phrases.
So if instead you custom-build a workstation with two 5090s or a Pro 6000 or something that pushes you up to the 100 GB VRAM tier, then absolutely, just as you said, you'll be spending thousands of dollars that probably won't pay back relative to renting cloud GPU time.
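To put very rough numbers on that (back-of-the-envelope only — the bytes-per-parameter rule and the 20% overhead fudge factor are assumptions, and real usage varies with context length):

```python
# Back-of-the-envelope VRAM estimate: parameter count times bytes per parameter
# at a given quantization, plus a fudge factor for KV cache and runtime overhead.
# These are rough rules of thumb, not vendor specs.
def approx_vram_gb(params_billions: float, bits_per_param: float,
                   overhead: float = 1.2) -> float:
    bytes_total = params_billions * 1e9 * (bits_per_param / 8)
    return bytes_total * overhead / 1e9

for name, params, bits in [("8B @ 4-bit", 8, 4),
                           ("70B @ 4-bit", 70, 4),
                           ("70B @ 16-bit", 70, 16)]:
    print(f"{name}: ~{approx_vram_gb(params, bits):.0f} GB VRAM")

# 8B @ 4-bit:  ~5 GB   -> fits a mainstream consumer GPU
# 70B @ 4-bit: ~42 GB  -> the multi-GPU / workstation tier mentioned above
# 70B @ 16-bit: ~168 GB -> cloud territory
```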
-
If you don't want corporations to use your chats as data, don't use corporate-hosted language models.
Even non-public chats are archived by OpenAI, and the terms of service of ChatGPT essentially give OpenAI the right to use your conversations in any way that they choose.
You can bet they'll eventually find ways to monetize your data. If you think Google Ads is powerful, wait until people's assistants are trained with every manipulative technique we've ever invented and are trying to sell you breakfast cereals or boner pills...
You can't uncheck that box except by not using it in the first place. But people will sell their soul to a company rather than learn a little bit about self-hosting.
This is basically an "if you don't want your data to be used, run your own internet" comment
It’s just not doable for pretty much everyone
-
If you don't want your conversations to be public, how about you don't tick the checkbox that says "make this public." This isn't OpenAI's problem, it's an idiot-user problem.
This is a case of a corporation taking advantage of a technically clueless userbase, which is most of the general public. OpenAI is using a dark pattern: users can't easily uncheck that box, and the text saying "this can be indexed by search engines" isn't made brightly visible.
-
This is basically an "if you don't want your data to be used, run your own internet" comment
It’s just not doable for pretty much everyone
Modern LLMs can serve you for most tasks while running locally on your machine.
Something like GPT4All will do the trick on any platform of your choosing if you have at least 8 GB of RAM (which is true for most people nowadays).
It has a simple, idiot-proof GUI and doesn't collect data unless you allow it to. It's also open source, and, being local, it doesn't need an Internet connection once you've downloaded the model you need (which is normally a single-digit number of gigabytes).
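If you'd rather script it than click through the GUI, GPT4All also ships a Python binding — a minimal sketch, assuming you've done `pip install gpt4all` (the model filename is just one example from their catalog):

```python
# Minimal sketch using the gpt4all Python package.
# The model file downloads once (a few GB); everything afterwards runs offline.
from gpt4all import GPT4All

# Model name is an example from the GPT4All catalog; pick whatever fits your RAM.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate("Summarize why local LLMs are private.", max_tokens=200)
    print(reply)
```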
-
Modern LLMs can serve you for most tasks while running locally on your machine.
Something like GPT4All will do the trick on any platform of your choosing if you have at least 8 GB of RAM (which is true for most people nowadays).
It has a simple, idiot-proof GUI and doesn't collect data unless you allow it to. It's also open source, and, being local, it doesn't need an Internet connection once you've downloaded the model you need (which is normally a single-digit number of gigabytes).
If you want actual good features like deep research or chain of thought, eh, not sure it’s a good choice
The models will also not be very powerful
-
Mine are not public, I use tinfoilduck.ai.
-
If you want actual good features like deep research or chain of thought, eh, not sure it’s a good choice
The models will also not be very powerful
And you don't need any of that. You don't even need a local LLM.
So if you decide you want it, then that's on you, and you have made the choice to give up your data.