Sam Altman admits OpenAI ‘totally screwed up’ its GPT-5 launch and says the company will spend trillions of dollars on data centers
-
Nah, it's good that they ripped off that bandaid. Parasocial AI relationships are terrible.
It's somewhere between a codependent relationship and the parasocial relationships people have with celebrities/public figures, which is the extreme end, because those usually end with stalking or death threats.
-
Just a few more bucks bro! I swear then it will be the revolutionary "AI" we promised it to be.
*Few more billion.
I sometimes wonder if Silicon Valley tech businesses in general will take a reputation hit with investors when this bubble bursts; it's gonna be a doozy.
But then I remember how many greedy idiots there are out there pumping money into grifts in the hope of The Big Win, and my expectations of consequences are tempered.
-
It's disturbing to see how many people have created emotional connections to a word generator.
Imaginary friends used to require at least some modicum of creativity.
-
“I literally lost my only friend overnight with no warning,” one person posted on Reddit, lamenting that the bot now speaks in clipped, utilitarian sentences. “The fact it shifted overnight feels like losing a piece of stability, solace, and love.”
Won't they just let this guy go bankrupt already?
-
"we fucked up our massive new generation product launch.. oh well lets invest trillions in new data centers" How do investors keep falling for this shit.
How indeed. It's probably a multi-factor phenomenon which requires an anthropological study for a serious answer. (Good luck trying to get the necessary access to study them.) My guess for one factor is that they have more money than they know what to do with.
-
Vibe innovation: they're the ones who think AI will be innovative in science by spontaneously generating new scientific discoveries, without "researchers, labs, papers".
I have seen some people talk like that, and it strikes me as a religion. There's euphoria, zeal, hope. To them, AGI is coming to usher in heaven on earth. The Singularity is like the Rapture.
Sam Altman is one of the preachers of this religion.
-
The vast majority comes from Russia though; the West has a ton in specific niches. Propaganda in the US is somewhat easier to figure out because it's obvious (in the form of cinema, movies, and shows), plus the constant adoration of the military is another form.
Elon turned Grok into Mecha-Hitler.
Trump is telling the Smithsonian to ignore slavery, or to portray slavery as a positive.
The domestic appetite for propaganda is huge. Prager U is American.
Let's not center foreign countries when we have so much work to do at home.
-
How indeed. It's probably a multi-factor phenomenon which requires an anthropological study for a serious answer. (Good luck trying to get the necessary access to study them.) My guess for one factor is that they have more money than they know what to do with.
The American stock market is purely vibe-driven now.
-
It doesn't have "3 million bits of info" on a specific topic, and even if it did, it wouldn't be able to directly measure it. It's worth reading a bit about how LLMs work under the hood; although it's somewhat dense if you're new to the concepts, you come out knowing a lot more about what to expect when using them, what the limitations actually are, and how to use them better if you decide to go that route.
You could do this with logprobs. The language model itself has basically no real insight into its confidence but there's more that you can get out of the model besides just the text.
The problem is that those probabilities are really "how confident are you that this text should come next in this conversation" not "how confident are you that this text is true/accurate." It's a fundamental limitation at the moment I think.
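For reference, pulling those per-token probabilities out looks roughly like this. A minimal sketch, assuming the `openai` Python client (v1.x); the model name and the question are just placeholders:

```python
# Minimal sketch: token-level logprobs from a chat completion.
# Assumes the openai Python client (v1.x); model name and question are placeholders.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What year did the Berlin Wall fall?"}],
    logprobs=True,    # return a log-probability for each generated token
    top_logprobs=5,   # plus the 5 most likely alternatives at each position
)

for tok in resp.choices[0].logprobs.content:
    # tok.logprob measures "how likely was this token given the context",
    # not "how likely is this statement to be true".
    print(f"{tok.token!r}: p={math.exp(tok.logprob):.3f}")
```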
-
Won't they just let this guy go bankrupt already?
Think of the military applications if it finally works though
-
It's always funny to me when people do add 'confidence scores' to LLMs, because it always amounts to just adding 'say how confident you are with low, medium or high in your response' to the prompt, and then you have made-up confidences for made-up replies. And you can tell clients that it's just made up and not actual confidence, but they will insist that they need it anyways…
And you can tell clients that it's just made up and not actual confidence, but they will insist that they need it anyways…
That doesn’t justify flat out making shit up to everyone else, though. If a client is told information is made up but they use it anyway, that’s on the client. Although I’d argue that an LLM shouldn’t be in the business of making shit up unless specifically instructed to do so by the client.
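For what it's worth, the prompt trick described above amounts to something like the following. A minimal sketch, assuming the `openai` Python client; the model name, system prompt wording, and parsing are all made up for illustration:

```python
# Minimal sketch of the prompt-based "confidence score" trick: the score is just
# more generated text, with no calibration behind it.
# Assumes the openai Python client; model name and prompt wording are placeholders.
import re
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Answer the user's question. "
    "End your reply with 'Confidence: low', 'Confidence: medium' or 'Confidence: high'."
)

def answer_with_confidence(question: str) -> tuple[str, str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
    )
    text = resp.choices[0].message.content
    match = re.search(r"confidence:\s*(low|medium|high)", text, re.IGNORECASE)
    # Whatever label comes back is self-reported by the model, not a real probability.
    return text, match.group(1).lower() if match else "unreported"
```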
-
That’s actually one thing that got significantly better with GPT-5: fewer hallucinations. Still not perfect, of course.
I’m more inclined to believe it’s gotten better at being convincing.
-
Nah, it's good that they ripped off that bandaid. Parasocial AI relationships are terrible.
The worst part is that they backtracked a bit and made it "friendlier".
Basically undoing that part.
-
Won't they just let this guy go bankrupt already?
The techbros are whoring themselves out to Trump for government contracts.
-
“I literally lost my only friend overnight with no warning,” one person posted on Reddit, lamenting that the bot now speaks in clipped, utilitarian sentences. “The fact it shifted overnight feels like losing a piece of stability, solace, and love.”
Honestly, that should have been for the better. If it's meant to be a tool, I would much rather it behave like a tool, rather than trying to be my best friend, or an evil vizier trying to give me advice.
The fact that people got so attached to what is essentially a text generation algorithm that they were mourning its "death" is worrying, especially when it's one that OpenAI has proven themselves to be more than able to modify as they wish.
Just as concerning is OpenAI rolling back the update to make their model "friendlier", or that people were falling over themselves to throw money at the company in the hopes of getting their "friend" back.
That can't possibly end anywhere good, especially once the shareholders realize the company has an iron grip over a portion of its users.
-
“I literally lost my only friend overnight with no warning,” one person posted on Reddit, lamenting that the bot now speaks in clipped, utilitarian sentences. “The fact it shifted overnight feels like losing a piece of stability, solace, and love.”
Never use AI for friendship, it's like admitting you only want yes-men in your life. I don't want to be around anyone who uses AI for emotional support.
-
Never use AI for friendship, it's like admitting you only want yes-men in your life. I don't want to be around anyone who uses AI for emotional support.
You would have better luck with a dating sim than AI as emotional support. Might inspire you to make a real friend.
-
They have software for protein science, but usually it's only accessible to scientists. I don't think AI is sophisticated enough to do any kind of science, as it will just try to scrape from whatever site it finds.
We're talking about repurposing the GPUs, not the AI.
-
Never use AI for friendship, it's like admitting you only want yes-men in your life. I don't want to be around anyone who uses AI for emotional support.
It's so much more effective when you keep things as neutral as possible. I will often ask it to tear apart my argument as though I am my opponent and use its tendency to align with the user against itself.
-
And you can tell clients that it's just made up and not actual confidence, but they will insist that they need it anyways…
That doesn’t justify flat out making shit up to everyone else, though. If a client is told information is made up but they use it anyway, that’s on the client. Although I’d argue that an LLM shouldn’t be in the business of making shit up unless specifically instructed to do so by the client.
I'm not really sure I follow.
Just to be clear, I'm not justifying anything, and I'm not involved in those projects. But the examples I know of involve LLMs customized/fine-tuned for clients for specific projects (so not used by anyone else). Those clients ask for confidence scores, people on our side explain that it's possible but that it wouldn't actually say anything about real confidence/certainty, since the models don't have any confidence metric beyond "how likely is the next token given these previous tokens", and the clients go "that's fine, we want it anyways".
And if you ask me, LLMs shouldn't be used for any of the stuff they're used for there. It just cracks me up that the solution to "the lying machine is lying to me" is to ask the lying machine how much it's lying. And when you tell them "it'll lie about that too", they go "yeah, ok, that's fine".
And making shit up is the whole functionality of LLMs; there's nothing there other than that. It just makes shit up pretty well sometimes.