The Swedish prime minister has come under fire after admitting that he regularly consults AI tools for a second opinion
-
Bad news, friend. The number of atheist heads of state is surprisingly low.
-
This post did not contain any content.
‘We didn’t vote for ChatGPT’: Swedish PM under fire for using AI in role
Tech experts criticise Ulf Kristersson as newspaper accuses him of falling for ‘the oligarchs’ AI psychosis’
the Guardian (www.theguardian.com)
"That's right voters I'm spineless and have no original ideas" -every politician
-
This post did not contain any content.
‘We didn’t vote for ChatGPT’: Swedish PM under fire for using AI in role
Tech experts criticise Ulf Kristersson as newspaper accuses him of falling for ‘the oligarchs’ AI psychosis’
the Guardian (www.theguardian.com)
Anyone who has an immediate kneejerk reaction the moment someone mentions AI is no better than the people they’re criticizing. Horseshoe theory applies here too - the most vocal AI haters are just as out of touch as the people who treat everything an LLM says as gospel.
-
There’s a certain irony in people reacting in an extremely predictable way - spewing hate and criticism the moment someone mentions AI - while seemingly not realizing that they’re reflexively responding to a prompt without any real thought, just like an LLM.
A tool isn’t bad just because it doesn’t do what you thought it would do. You just take that into account and adjust how you use it. A hammer isn’t a scam just because it can’t drive in screws.
-
It really can't. It does not understand things.
How is "not understanding things" preventing an LLM from bringing up a point you hadn't thought of before?
-
But it doesn't know anything. At all. Does Sweden not have a fuck ton of people that are trained to gather intelligence?
It doesn’t understand things the way humans do, but saying it doesn’t know anything at all isn’t quite accurate either. This thing was trained on the entire internet and your grandma’s diary. You simply don’t absorb that much data without some kind of learning taking place.
It’s not a knowledge machine, but it does have a sort of “world model” that’s emerged from its training data. It “knows” what happens when you throw a stone through a window or put your hand in boiling water. That kind of knowledge isn’t what it was explicitly designed for - it’s a byproduct of being trained on data that contains a lot of correct information.
It’s not as knowledgeable as the AI companies want you to believe - but it’s also not as dumb as the haters want you to believe either.
-
there absolutely is something wrong with sending the basis for decisions in matters of state to a foreign actor, though.
-
Anyone who has an immediate kneejerk reaction the moment someone mentions AI is no better than the people they’re criticizing. Horseshoe theory applies here too - the most vocal AI haters are just as out of touch as the people who treat everything an LLM says as gospel.
Absolutely incorrect. Bullshit. And horseshoe theory itself is largely bullshit.
(Succinct response taken from Reddit post discussing the topic)
"Horseshoe Theory is slapping "theory" on a strawman to simplify WHY there's crossover from two otherwise conflicting groups. It's pseudo-intellectualizing it to make it seem smart."
This ignores the many, many reasons we keep telling you why we find it dangerous, inaccurate, and distasteful. You don't offer a counterargument in your response, so I can only assume it's along the lines of "technology is inevitable; would you have said the same about the Internet?" Which is also a fallacious argument. But go ahead, give me something better if I assume wrong.
I can easily see why people would be furious their elected leader is abdicating thought and responsibility to an often wrong, unaccountably biased chat bot.
Furthermore, your insistence continues to push acceptance of AI on those who clearly don't want it, contributing to the anger we feel at having it forced upon us.
-
Anyone who has an immediate kneejerk reaction the moment someone mentions AI is no better than the people they’re criticizing. Horseshoe theory applies here too - the most vocal AI haters are just as out of touch as the people who treat everything an LLM says as gospel.
here's my kneejerk reaction: my prime minister is basing his decisions partly on the messages of an unknown foreign actor, and sending information about state internals to that unknown foreign actor.
whether it's AI or not is a separate issue.
-
Anyone who has an immediate kneejerk reaction the moment someone mentions AI is no better than the people they’re criticizing. Horseshoe theory applies here too - the most vocal AI haters are just as out of touch as the people who treat everything an LLM says as gospel.
If someone says they got a second opinion from a physician known for being wrong half the time would you not wonder why they didn’t choose someone more reliable for something as important as their health? AI is notorious for providing incomplete, irrelevant, heavily slanted, or just plain wrong info. Why give it any level of trust to make national decisions? Might as well, I dunno…use a bible? Some would consider that trustworthy.
-
“You have to be very careful,” Simone Fischer-Hübner, a computer science researcher at Karlstad University, told Aftonbladet, warning against using ChatGPT to work with sensitive information.
I mean, sending queries to a search engine and sending them to an LLM are about the same in terms of exposing one's queries.
If the guy were complaining about information from an LLM not being cited or something, then I think I could see where he was coming from more.
It's a woman
-
If someone says they got a second opinion from a physician known for being wrong half the time would you not wonder why they didn’t choose someone more reliable for something as important as their health? AI is notorious for providing incomplete, irrelevant, heavily slanted, or just plain wrong info. Why give it any level of trust to make national decisions? Might as well, I dunno…use a bible? Some would consider that trustworthy.
I often ask ChatGPT for a second opinion, and the responses range from “not helpful” to “good point, I hadn’t thought of that.” It’s hit or miss. But just because half the time the suggestions aren’t helpful doesn’t mean it’s useless. It’s not doing the thinking for me - it’s giving me food for thought.
The problem isn’t taking into consideration what an LLM says - the problem is blindly taking it at its word.
-
there absolutely is something wrong with sending the basis for decisions in matters of state to a foreign actor, though.
As I wrote in another comment, you can run an LLM locally on your own computer - not ChatGPT itself, but an open-weight model - with no internet involved.
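For anyone wondering what that looks like in practice, here is a minimal sketch, assuming the Ollama runtime and its Python client are installed and an open-weight model such as llama3 has already been pulled; the model name and the prompt are placeholders, not anything the PM is claimed to have used:

```python
# Rough sketch of "local LLM, no internet": querying an open-weight model
# through Ollama's Python client. Assumes the Ollama daemon is running on
# this machine, `pip install ollama`, and a model pulled beforehand with
# `ollama pull llama3`. Model name and prompt are placeholders.
import ollama

response = ollama.chat(
    model="llama3",  # any locally pulled open-weight model works here
    messages=[
        {
            "role": "user",
            "content": "Play devil's advocate: what is the weakest point "
                       "in the following policy proposal? <paste draft here>",
        }
    ],
)

# The client talks to a daemon on localhost, so the request and the answer
# never leave the local machine.
print(response["message"]["content"])
```

The trade-off, of course, is that models small enough to run on a single machine are noticeably weaker than the hosted ones.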
-
Absolutely incorrect. Bullshit. And horseshoe theory itself is largely bullshit.
(Succinct response taken from Reddit post discussing the topic)
"Horseshoe Theory is slapping "theory" on a strawman to simplify WHY there's crossover from two otherwise conflicting groups. It's pseudo-intellectualizing it to make it seem smart."
This ignores the many, many reasons we keep telling you why we find it dangerous, inaccurate, and distasteful. You don't offer a counterargument in your response, so I can only assume it's along the lines of "technology is inevitable; would you have said the same about the Internet?" Which is also a fallacious argument. But go ahead, give me something better if I assume wrong.
I can easily see why people would be furious their elected leader is abdicating thought and responsibility to an often wrong, unaccountably biased chat bot.
Furthermore, your insistence continues to push acceptance of AI on those who clearly don't want it, contributing to the anger we feel at having it forced upon us.
You opened with a flat dismissal, followed by a quote from Reddit that didn’t explain why horseshoe theory is wrong - it just mocked it. That’s not an argument, that’s posturing.
From there, you shifted into responding to claims I never made. I didn’t argue that AI is flawless, inevitable, or beyond criticism. I pointed out that reflexive, emotional overreactions to AI are often as irrational as the blind techno-optimism they claim to oppose. That’s the context you ignored.
You then assumed what I must believe, invited yourself to argue against that imagined position, and finished with vague accusations about me “pushing acceptance” of something people “clearly don’t want.” None of that engages with what I actually said.
-
What use is an opinion that can neither be explained nor defended by the person giving it? How is that useful to a person making decisions for millions of people?
LLMs can defend whatever you tell them to defend. What are you on about?
-
This post did not contain any content.
‘We didn’t vote for ChatGPT’: Swedish PM under fire for using AI in role
Tech experts criticise Ulf Kristersson as newspaper accuses him of falling for ‘the oligarchs’ AI psychosis’
the Guardian (www.theguardian.com)
I’m not against the technology, I’m against the people who run it. I have a problem with how they train their LLMs on code, user data, music, books and websites, all without the authors’ or users’ consent - and worse, even when authors and users have explicitly said NO to scraping or to their work being used for training.
Another level is the lack of security - ChatGPT chats ending up available to everyone.
Deepfakes everywhere, just see the latest Taylor Swift one.
Sorry, but fuck all of this.
There’s a lack of basic security and privacy, and all the dangers are being ignored. All those fucking AI firms want is easy, cheap and quick money.
All that hype, and for nothing - you can’t even rely on the output.
-
This post did not contain any content.
‘We didn’t vote for ChatGPT’: Swedish PM under fire for using AI in role
Tech experts criticise Ulf Kristersson as newspaper accuses him of falling for ‘the oligarchs’ AI psychosis’
the Guardian (www.theguardian.com)
Europe is fucking doomed.
-
I’m not against the technology, I’m against the people who run it. I have a problem with how they train their LLMs on code, user data, music, books and websites, all without the authors’ or users’ consent - and worse, even when authors and users have explicitly said NO to scraping or to their work being used for training.
Another level is the lack of security - ChatGPT chats ending up available to everyone.
Deepfakes everywhere, just see the latest Taylor Swift one.
Sorry, but fuck all of this.
There’s a lack of basic security and privacy, and all the dangers are being ignored. All those fucking AI firms want is easy, cheap and quick money.
All that hype, and for nothing - you can’t even rely on the output.
Yet you need these masses of input for the technology to exist. The business models built on the technology aren’t sustainable even without paying for the input data.
-
Yet you need these masses of input for the technology to exist. The business models built on the technology aren’t sustainable even without paying for the input data.
Oh, come on - are you justifying stealing with this bullshit?
-
As I wrote in another comment, you can run an LLM locally on your own computer - not ChatGPT itself, but an open-weight model - with no internet involved.
of course you can. why would a career politician who has very visibly been interested only in politics since his teens know how to do that?