Sweden's prime minister under fire after admitting that he regularly consults AI tools for a second opinion
-
This post did not contain any content.
Politicians and CEOs should be replaced with LLMs
-
Yes. By an hourly rate which includes consideration of your opponent's position. Do you not understand how to develop a proper legal argument? My god, you people are stupid.
This conversation has been in the context of AI. Thus, I do not want my lawyer taking advice (we'll use that word instead of "considering," since you clearly do not grasp context) from the person suing me while I'm paying for the lawyer. You are clearly a MAGA-level moron.
-
LLMs can defend whatever you tell them to defend. What are you on about?
No it cannot. It does not understand anything so it cannot actually defend its points. It can make something that looks like a defense, but it doesn't understand what it is telling you. It can spit text back at you until the cows come home but none of it can ever be trusted or relied on.
-
I really don't get it. These things are brand new. How can anyone get so into these things so quickly? I don't take advice from people I barely know, much less from ones that can be so easily and quickly reprogrammed.
Because that's what it is really trained for: producing correct grammar and plausible sentences. It's an unbelievable leap over preceding approaches to computer-generated text; in a matter of a few years, we went from little more than gibberish to output so realistic it can be mistaken for intelligent conversation, easily passing the Turing Test. (I actually had to go to Wikipedia to check, and indeed this was verified this year; note that this in particular is for recent models.)
So you have something that is sufficiently realistic that it can appear to be a human conversation partner. Human beings aren't (yet) well-equipped to deal with something that appears human but whose behaviour diverges from typical human behaviour so radically (most relevantly, it won't readily admit to not knowing something).
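For what it's worth, here's a toy sketch of that "plausible next word" idea. Everything below (the training text, the count table) is my own illustration, not how any production model actually works; real LLMs replace the count table with a neural network over an enormous vocabulary, but the generate-by-sampling-a-likely-successor loop is the same shape:

```python
import random
from collections import Counter, defaultdict

# Tiny stand-in "training text"; real models train on trillions of words.
training_text = "the model predicts the next word and the next word after that"

# Count which word follows which in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: no word ever followed this one
        choices, weights = zip(*options.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the next word and the next word after that"
```

The output is grammatical-looking but carries no understanding, which is exactly the point being made above.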
-
Because that's what it is really trained for: producing correct grammar and plausible sentences. It's an unbelievable leap over preceding approaches to computer-generated text; in a matter of a few years, we went from little more than gibberish to output so realistic it can be mistaken for intelligent conversation, easily passing the Turing Test. (I actually had to go to Wikipedia to check, and indeed this was verified this year; note that this in particular is for recent models.)
So you have something that is sufficiently realistic that it can appear to be a human conversation partner. Human beings aren't (yet) well-equipped to deal with something that appears human but whose behaviour diverges from typical human behaviour so radically (most relevantly, it won't readily admit to not knowing something).
It's more than that. It takes the input, tries to interpret the bad grammar and sentences into search terms, finds the links that correlate most strongly with its interpretation, and then gives back a response that summarizes the results with good grammar and plausible sentences. Again, this is why I stress that you have to evaluate its response and its sources; the sources are the real value in any query.

I'm actually not sure how much the chatbots give sources by default, though. I know I didn't get them at first, then asked for them, and now I get them as a matter of course, so I'm not sure whether it learned that I want them or whether they made a change to provide them when they hadn't before.
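A toy sketch of that retrieve-rank-summarize loop is below. Everything in it, including the URLs, the mini-corpus, and the word-overlap scoring, is invented purely for illustration; real chatbots call a web search API and use the model itself to write the summary:

```python
from collections import Counter

# Hypothetical mini-corpus standing in for web search results.
CORPUS = {
    "https://example.org/ai-overview": "large language models predict the next token",
    "https://example.org/rag": "retrieval augmented generation grounds answers in sources",
    "https://example.org/cooking": "how to bake sourdough bread at home",
}

def tokenize(text: str) -> Counter:
    return Counter(text.lower().split())

def rank_sources(query: str, top_k: int = 2) -> list[str]:
    """Score each document by word overlap with the query and keep the best."""
    q = tokenize(query)
    scored = sorted(
        CORPUS,
        key=lambda url: sum((q & tokenize(CORPUS[url])).values()),
        reverse=True,
    )
    return scored[:top_k]

def answer(query: str) -> str:
    """'Summarize' the top hits and, crucially, cite them."""
    urls = rank_sources(query)
    summary = " / ".join(CORPUS[u] for u in urls)
    citations = "\n".join(f"  [{i + 1}] {u}" for i, u in enumerate(urls))
    return f"{summary}\nSources:\n{citations}"

print(answer("how does retrieval augmented generation work"))
```

The citations are the checkable part of the output, which is why the comment stresses evaluating the sources rather than the fluent summary.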
-
No it cannot. It does not understand anything so it cannot actually defend its points. It can make something that looks like a defense, but it doesn't understand what it is telling you. It can spit text back at you until the cows come home but none of it can ever be trusted or relied on.
It sounds like you've never used an LLM, mate.
You don't need to get philosophical about the definition of understanding to realize they give you arguments as valid as anyone else's.
-
What use is an opinion that can neither be explained nor defended by the person giving it? How is that useful to a person making decisions for millions of people?
Just throw out the LLM ideas you don't find reasonable and only use the ones you yourself find reasonable. You don't instantly turn into a zombie when you use an LLM. You can still use your head.
-
Fuck no. Rather an incompetent politician than a hallucinating sycophant just telling you what you want to hear.
Nah, you are wrong and should use AI as a first opinion.
-
Politicians and CEOs should be replaced with LLMs
It can't make things any worse...
-
Nah, you are wrong and should use AI as a first opinion.
Wait... how many fingers do you have on each hand?
-
Your examples, where an LLM defends a position you chose for it while producing obviously conflicting arguments, actually prove what the others have been telling you. This is meaningless slop. It clearly has no connection to any position an LLM might have appeared to hold on a subject. If it did, you would not be able to make it defend the opposite side without objections.
-
This post did not contain any content.
Meanwhile, the American president uses no intelligence at all, artificial or otherwise.
-
We need LLMs as much as we needed 3D movies or augmented reality.
As an emetic.
Finally, someone has found a real use for LLMs.
-
Depending on the AI, it will conclude that he ought to buy a new phone charger, deport all the foreigners, kill all the Jews or rewrite his legislation in Perl. It's hard to say without more information.
Not much different than real politicians then.
-
Fuck no. Rather an incompetent politician than a hallucinating sycophant just telling you what you want to hear.
I sometimes wonder if that is Republican propaganda. I've tried a bunch of times to make new accounts on new IPs and seed ChatGPT with right-wing values, and I can't get it to agree with them.
It has always pointed out my mistakes, albeit in very generous terms.
What methodology do you suggest to reproduce the sycophantic behavior?
-
One thing I struggle with when it comes to AI is that the answers it gives always seem plausible, but any time I quiz it on things I understand well, it constantly gets things slightly wrong. Which tells me it is getting everything slightly wrong; I just don't know enough to notice.
I see the same issue with TV. Anyone who works in a complicated field has felt the sting of watching a TV show fail to accurately represent it, while most people watching just assume that's how your job works.
Something I found today: ask it for the lyrics of your favorite song/artist. It will make something up based on the combination of the two and maybe a little of what it was trained on... even for really popular songs (I tried a niche one by Angelspit first, then tried "Sweet Caroline" as a more well-known example). The model for those tests was Gemma3. It did get two lines of "Sweet Caroline" correct, but not the rest.
The new gpt-oss model replies with (paraphrased) "I can't do that because it is copyrighted material," which I have a sneaking suspicion is intentional, so there's an excuse for not showing a very wrong answer to people who might start to doubt its "intelligence" when it's very clearly wrong.
... Like they give a flying fuck about copyright.
-
Not much different than real politicians then.
Real politicians would use COBOL, but yes.
-
This post did not contain any content.
What a treasonous piece of shit.
-
This post did not contain any content.
Oh no, a man does research, of course Americans are upset here lmao
-
The typical pattern for leaders is to get "second opinions" from advisors who tell them whatever they want to hear, so... maybe asking the equivalent of a magic 8 ball is a marginal improvement?
Most LLMs are literally "tell you whatever you want to hear" machines, unfortunately. I've gotten high praise from ChatGPT for all my ideas until I go "but hang on, wouldn't this factor stop it from being feasible?" and then it agrees with me that my original idea was a bit shit lmao