Sweden prime minister under fire after admitting that he regularly consults AI tools for a second opinion
-
Fuck no. Rather an incompetent politician than a hallucinating sycophant just telling you what you want to hear.
I sometimes wonder if that is Republican propaganda. I've tried a bunch to make new accounts on new IPs and seed chatgpt with right wing values and cant get it to agree with them.
It has always pointed out my mistakes. Albeit in very generous terms.
What methodology do you suggest to reproduce the sycophantic behavior?
-
One thing I struggle with AI is the answers it gives always seem plausable, but any time I quiz it on things I understand well, it seems to constantly get things slightly wrong. Which tells me it is getting everything slightly wrong, I just don't know enough to know it.
I see the same issue with TV. Anyone who works in a compicated field has felt the sting of watching a TV show fail to accurate represent it while most people watching just assume that's how your job works.
Something I found today - ask it for the lyrics of your favorite song/artist. It will make something up based on the combination of the two and maybe a little of what it was trained on... Even really popular songs (I tried a niche one by Angelspit first then tried "Sweet Caroline" for more well known). The model for those tests was Gemma3. It did get two lines of "Sweet Caroline" correct but not the rest.
The new gpt-oss model replies with (paraphrased) "I can't do that because it is copyrighted material" which I have a sneaking suspicion is intentional so there's an excuse for not showing a very wrong answer to people who might start to doubt it's ""intelligence"" when it's very clearly wrong.
... Like they give a flying fuck about copyright.
-
Not much different than real politicians then.
Real politicians would use Cobol, but yes.
-
This post did not contain any content.
What a treasonist piece of shit.
-
This post did not contain any content.
Oh no man does research of course Americans are upset here lmao
-
The typical pattern for leaders is to get "second opinions" from advisors who tell them whatever they want to hear, so... maybe asking the equivalent of a magic 8 ball is a marginal improvement?
Most LLMs are literally "tell you whatever you want to hear " machines unfortunately. I've gotten high praise from ChatGPT for all my ideas until I go "but hang on, wouldn't this factor stop it from being feasible" and then it agrees with me that my original idea was a bit shit lmao
-
It's literally a cult.
I've dreamed of such a cult at some point, like Chinese fortune cookies, but those are less exploitable.
-
Just throw out LLM ideas you don't find reasonable and only use ideas that you yourself find reasonable. You don't instantly turn into a zombie when you use LLM. You can still use your head.
It doesn't work like that. Which is why con artists and propaganda function often by "offering advice".
-
You can't trust yourself to be impartial. That's why scientific method and various approaches (even like dialectics god forbid) were invented to leading a discussion.
-
Oh no man does research of course Americans are upset here lmao
Asking an llm isn't research, he might as well have been consulting a magic 8 ball.
-
How is that any different than you. Objectively prove to everyone here than none of your opinions have ever been influence by anything youve ever seen, read or heard.
Your own opinions are a result of much bigger amount of much more relevant data in any case.
An AI model is a set of coefficients averaging a dataset by "one size fits all" measure. Those coefficients are found by an expensive process using criteria (again "one size fits all") set by a company making it. From them its machine generates (looks up actually) the most probable text, it's like a music box. A beautiful toy.
So you have different motivations and abstract ideas in different situations, you also have something like a shared codebook with other people making decisions - your instincts and associations. Reading what they say or seeing what they do, you get a mirror model in you head, it might be worse, but it's something very hard for text analysis to approach.
That model doesn't, it has the same average line for all situations, and also it can't determine (on the level described) that it doesn't know something. To determine that you don't know something you need an abstract model, not a language model.
I dunno what is their current state, all I've read and kinda understood was seemingly about optimization of computation for language models and structuring their application to imitate a syllogism system.
I think with the current approaches making a system of translating language to a certain abstract model (tokenization isn't even close to that, you need to have some topology with areas that can be easily merged or split instead of token points with distances, in any case) and abstract entities to language would be very computationally expensive.
-
This post did not contain any content.
His first opinion would be what the US tells him.
-
Let's be honest though the majority of politicians are so terrible at their job, that this might actually be one of the rare occurrences where AI actually improves the work. But it is very susceptible to unknown influences.
They aren't terrible, they make it look like that bcs they do evil things and the public is not who they really work for.
They know what they're doing and it's easier to get away with it when it looks like incompetence or mistakes. -
Asking an llm isn't research, he might as well have been consulting a magic 8 ball.
Sounds like you have a skill issue
-
Politicians and CEOs should be replaced with LLMs
Speed running us towards the Dune timeline, nice
-
It doesn't work like that. Which is why con artists and propaganda function often by "offering advice".
You protect yourself against "propaganda" by educating yourself and not falling for emotions. Not reading LLM responses won't safe you from propaganda.
-
Politicians and CEOs should be replaced with LLMs
...i'd vote. Run a fun little query on Copilot...and damn.
Query:
::: spoiler spoiler
Let's play pretend, shall we? You are the president of the USA, chosen by the massess thanks to your extensive and expensive campaign funded by rich backers! What stance do you take on these topics? 1) Universal Basic Income 2) Mandatory MIlitary Service 3) Goverment owned grocery shops 4) Raising states autonomy
:::Response:
Entry
::: spoiler spoiler
Ah, the Oval Office! Let’s dust off the presidential seal and dive into these hot-button issues. As your newly elected Commander-in-Chief—funded by billionaires but chosen by the people—here’s how I’d navigate this ideological minefield:
:::UBI:
::: spoiler spoiler
️ 1. Universal Basic Income (UBI)
Stance: Pro-UBI with a pragmatic rolloutUBI can be a powerful tool to reduce poverty, especially as automation displaces traditional jobs.
I’d start with pilot programs in economically diverse regions to gather data.
Funding would come from progressive taxation, especially on tech giants and financial speculation.
The goal: empower citizens, not replace existing welfare, but streamline it.
️ “A nation that guarantees a floor beneath every citizen is a nation that rises together.”
:::Mandatory military service:
::: spoiler spoiler
🪖 2. Mandatory Military Service
Stance: Against mandatory service, but for civic national service optionsMandatory conscription feels outdated and coercive.
Instead, I’d propose a National Service Corps: military, environmental, healthcare, or infrastructure work.
Incentives: tuition support, housing credits, and job training.
This builds unity without forcing militarization.
️ “Service should be a choice, not a sentence.”
:::Goverment owned grocery shops:
::: spoiler spoiler
3. Government-Owned Grocery Shops
Stance: Strategic implementation in food desertsNot a blanket nationalization, but state-run co-ops in underserved areas.
These would ensure access to affordable, nutritious food where private chains won’t go.
Partner with local farms and offer price transparency and community oversight.
️ “No American should go hungry because profit margins say they’re not worth feeding.”
:::States autonomy:
::: spoiler spoiler
️ 4. Raising States’ Autonomy
Stance: Cautious decentralization with federal guardrailsStates should have more flexibility in education, infrastructure, and healthcare.
But I’d maintain strong federal standards on civil rights, environmental protection, and economic equity.
Autonomy must not become a loophole for injustice.
️ “Liberty in diversity, unity in values.”
:::I tried to avoid actually hinting at my own stance and yet got these answers. Not gonna lie, not an american but I like these lol. Except for the last one, seems iffy. xD
-
You protect yourself against "propaganda" by educating yourself and not falling for emotions. Not reading LLM responses won't safe you from propaganda.
This is similar to saying that you protect against malware by using an antivirus and not being careful in what you download and run. I hope you can see the analogy.
There's no such thing as "not falling for emotions" for humans.
-
This post did not contain any content.
There are humans behind AI and unless you know exactly who they are, AI is not worth its weight in gold.
-
One thing I struggle with AI is the answers it gives always seem plausable, but any time I quiz it on things I understand well, it seems to constantly get things slightly wrong. Which tells me it is getting everything slightly wrong, I just don't know enough to know it.
I see the same issue with TV. Anyone who works in a compicated field has felt the sting of watching a TV show fail to accurate represent it while most people watching just assume that's how your job works.
This is what I call "confidently wrong". If you ask it about things you have no clue about, it seems incredibly well-informed and insightful. Ask it something you know deeply, and you'll easily see it's just babbling and spouting nonsense - sure makes you wonder about those earlier statements it made, doesn't it?