Sweden's prime minister under fire after admitting that he regularly consults AI tools for a second opinion
-
i'd say it's still bad because this is the leader of a government consulting with a stochastic parrot instead of experts.
Presumably it wasn't instead of, it was in addition to, and therefore totally fine
-
Presumably it wasn't instead of, it was in addition to, and therefore totally fine
it's still not fine, as per my first point. it's leaking information to foreign interests.
-
it's still not fine, as per my first point. it's leaking information to foreign interests.
Right, but we already addressed that above. If it's done on a local pc's ai that doesn't operate using a net connection, and is used in addition to rather than instead of consulting with people, then it's totally fine
-
Right, but we already addressed that above. If it's done on a local pc's ai that doesn't operate using a net connection, and is used in addition to rather than instead of consulting with people, then it's totally fine
yeah but then we're no longer discussing the topic the thread is about.
-
yet you need these masses of input for the technology to exist. the business models built on the technology aren't sustainable even without paying for the input data.
Do we really need this technology to exist though? It's unreliable and very niche as far as I have seen.
People say that it speeds up certain tasks, but it's so unreliable that you need to error-check the whole thing afterwards.
-
This post did not contain any content.
Let's be honest though, the majority of politicians are so terrible at their job that this might actually be one of the rare occurrences where AI actually improves the work. But it is very susceptible to unknown influences.
-
here's my kneejerk reaction: my prime minister is basing his decisions partly on the messages of an unknown foreign actor, and sending information about state internals to that unknown foreign actor.
whether it's ai or not is a later issue.
He explicitly states that no sensitive information gets used. If you believe that, then I have no issue with him additionally asking for a third opinion from an LLM.
-
He explicitly states that no sensitive information gets used. If you believe that, then I have no issue with him additionally asking for a third opinion from an LLM.
i don't have any reason to believe it, given the track record.
also, the second half of the problem is of course the information that comes back, what it is based on, and what affects that base.
-
europe is fucking doomed
Because of this one incident.
Good how you figured it out.
So much smarter than the rest.
...
Get. out.
-
He explicitly states that no sensitive information gets used. If you believe that, then I have no issue with him additionally asking for a third opinion from an LLM.
He explicitly states that no sensitive information gets used. If you believe that, then I have
... a bridge to sell you.
Don't be naive.
-
Let's be honest though, the majority of politicians are so terrible at their job that this might actually be one of the rare occurrences where AI actually improves the work. But it is very susceptible to unknown influences.
That's the big issue. If it were only about competence, I think throwing dice might yield better results than what many politicians are doing. But AI isn't throwing dice; it reproduces what the creators of the AI want it to say.
-
That's the big issue. If it were only about competence, I think throwing dice might yield better results than what many politicians are doing. But AI isn't throwing dice; it reproduces what the creators of the AI want it to say.
Creators of AI don't quite have the technology to puppeteer their AI like this.
They can select the input, they can bias the training, but unless the model comes out lobotomized, they can't really bend it toward any one particular opinion. I'm sure in the future they'll be able to adjust the manipulation in real time, like advertising, but not yet.
What is really sketchy is states and leaders relying on commercial models instead of public ones
I think states should train public models and release them for the public good
if only to undermine big tech bros and their nefarious influence
-
Do we really need this technology to exist though? It's unreliable and very niche as far as I have seen.
People say that it speeds up certain tasks, but it's so unreliable that you need to error-check the whole thing afterwards.
It's a new technology barely out of infancy. Of course it's unreliable and niche. You could say the same thing about any technological advance in history.
-
Oh come on, are you justifying stealing with this bullshit?
Fuck the copyright system as it exists today.
-
It's a new technology barely out of infancy. Of course it's unreliable and niche. You could say the same thing about any technological advance in history.
You could say that. But you could also say that none of these other technological advances got pushed through this badly while being obviously not ready for widespread use. And also, can you really say that though? Most other technological advances had a pretty clear distinction from the older way of doing things.
-
You could say that. But you could also say that none of these other technological advances got pushed through this badly while being obviously not ready for widespread use. And also, can you really say that though? Most other technological advances had a pretty clear distinction from the older way of doing things.
But you could also say that none of these other technological advances got pushed through this badly while being obviously not ready for widespread use.
I can certainly agree with you that most current advertised use cases of LLMs are total bullshit, yes. My point is just that asking if it deserves to exist based on its shortfalls is weird, when it's barely existed a few years. It just shouldn't be getting pushed as much as it is
-
It's a new technology barely out of infancy. Of course it's unreliable and niche. You could say the same thing about any technological advance in history.
The very nature of how it functions is unreliable. It's a statistical, probabilistic model. It's great for what it was designed to do, but imagining that it has any way of rationalising data is purely that: imagination. Even if we accept that it makes errors at the same rate as humans do (if it can even identify an error reliably), there's no accountability in place that ensures it would check for correctness like a human would.
-
The very nature of how it functions is unreliable. It's a statistical, probabilistic model. It's great for what it was designed to do, but imagining that it has any way of rationalising data is purely that: imagination. Even if we accept that it makes errors at the same rate as humans do (if it can even identify an error reliably), there's no accountability in place that ensures it would check for correctness like a human would.
I understand perfectly how LLMs work, and I made no claims about what they can do. Taking them on their own capabilities (text generation, inspiration, etc), not what some lying-through-their-teeth marketer said, is there a reason to say they 'shouldn't exist'?
-
I understand perfectly how LLMs work, and I made no claims about what they can do. Taking them on their own capabilities (text generation, inspiration, etc), not what some lying-through-their-teeth marketer said, is there a reason to say they 'shouldn't exist'?
OP didn't phrase it as "should they exist" but as "do we need them to exist".
And personally i think not, we don't need them.
In text generation they are good... inspiration? They are more of an inspiration killer imo. -
Creators of AI don't quite have the technology to puppeteer their AI like this.
They can select the input, they can bias the training, but unless the model comes out lobotomized, they can't really bend it toward any one particular opinion. I'm sure in the future they'll be able to adjust the manipulation in real time, like advertising, but not yet.
What is really sketchy is states and leaders relying on commercial models instead of public ones
I think states should train public models and release them for the public good
if only to undermine big tech bros and their nefarious influence
You don't have to modify the model to parrot your opinion. You just have to put your stuff into the system prompt.
You can even modify the system prompt on the fly depending on e.g. the user account or the specific user input. That way you can tailor the responses for a far bigger range of subjects: whenever a keyword for a specific subject is detected, the fitting system prompt is loaded, so you don't have to clutter your system prompt with off-topic information.
This is so trivially simple to do that even a junior dev should be able to wrap something like that around an existing LLM.
Edit: In fact, that's exactly how all these customized ChatGPT versions work.
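The keyword-triggered prompt swapping described above can be sketched in a few lines. This is a minimal illustration only: `call_llm`, the keywords, and the prompt texts are all hypothetical placeholders, not any vendor's actual API; only the selection logic matters here.

```python
# Sketch of an on-the-fly system-prompt wrapper around an existing LLM.
# The model itself is untouched; only the instructions sent with each
# request change, based on keywords found in the user's input.

DEFAULT_PROMPT = "You are a helpful assistant."

# Map topic keywords to the system prompt loaded when they appear.
TOPIC_PROMPTS = {
    "election": "When discussing elections, emphasize viewpoint X.",
    "energy": "When discussing energy policy, emphasize viewpoint Y.",
}

def pick_system_prompt(user_input: str) -> str:
    """Return a topic-specific system prompt if a keyword matches."""
    text = user_input.lower()
    for keyword, prompt in TOPIC_PROMPTS.items():
        if keyword in text:
            return prompt
    return DEFAULT_PROMPT

def answer(user_input: str, call_llm) -> str:
    """Wrap any chat-completion function: swap the system prompt per request."""
    system_prompt = pick_system_prompt(user_input)
    return call_llm(system_prompt, user_input)
```

In a real deployment `call_llm` would be a chat-completion API call taking a system message and a user message; the point is that the steering lives entirely in this thin wrapper, invisible to the person typing the question.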