AI Utopia, AI Apocalypse, and AI Reality: If we can’t build an equitable, sustainable society on our own, it’s pointless to hope that a machine that can’t think straight will do it for us.
-
Beginning by insulting your opponent isn’t exactly the best way to ensure they’ll finish reading your message.
You have a great day.
Fair.
I've removed it, and I'm sorry.
-
Kill the AI company CEOs and a few choice leaders, and we can end this nightmare now.
-
Even if it is, I don't see what it's going to conclude that we haven't already.
If we do build "the AI that will save us" it's just going to tell us "in order to ensure your existence as a species, take care of the planet and each other", and I really, really can't picture a scenario where we actually listen.
It won't tell us what to do; it'll do the very complex things we ask it to. The biggest issues facing our species and planet right now all boil down to highly complex logistics. We produce enough food to make everyone in the world fat. There is sufficient shelter and housing to make everyone safe and secure from the elements. We know how to generate electricity, and even distribute it securely, without destroying the global climate systems. What we seem unable to do is allocate, transport, and prioritize resources effectively, because those are very challenging logistical problems.
The disciplines underpinning AI development, however, from ML to network science to the resource-allocation algorithms that make your computer work, are all very well suited to solving logistics problems and building the systems that do. I really don't see a sustainable future where "AI" isn't fundamental to the logistics operations supporting it.
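To make that concrete: the bread-and-butter version of these problems is a linear program. Here's a toy sketch, with entirely invented numbers, of the classic transportation problem (move food from surplus warehouses to cities at minimum cost), solved with scipy's off-the-shelf linprog:

```python
import numpy as np
from scipy.optimize import linprog

supply = np.array([50, 60])        # tons available at warehouses A, B (made up)
demand = np.array([30, 40, 40])    # tons needed in cities 1, 2, 3 (made up)
cost = np.array([[4, 6, 9],
                 [5, 3, 7]])       # cost[i][j]: shipping one ton from warehouse i to city j

n_w, n_c = cost.shape
c = cost.flatten()                 # decision variable x[i*n_c + j] = tons shipped i -> j

# each warehouse ships at most its supply
A_ub = np.zeros((n_w, n_w * n_c))
for i in range(n_w):
    A_ub[i, i * n_c:(i + 1) * n_c] = 1

# each city receives exactly its demand
A_eq = np.zeros((n_c, n_w * n_c))
for j in range(n_c):
    A_eq[j, j::n_c] = 1

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(n_w, n_c))     # optimal shipment plan
print(res.fun)                     # total transport cost
```

Real supply chains add time windows, spoilage, and uncertainty on top of this, which is exactly where the ML side of the field comes in.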
-
The problem is that we absolutely can build a sustainable society on our own. We've had the blueprints forever; the Romans worked this out centuries ago. The real problem is that there's always some power-seeking prick who messes it up, so we gave up trying to build a fair society and just went with feudalism and then capitalism instead.
The Romans were one of the most extractive and wasteful empires in history. Wtf are you on about????
-
Fair.
I've removed it, and I'm sorry.
I'm not saying ASI would think in some magical new way. I'm saying it could process so much more data, with such precision, that it would detect patterns or connections we physically can't. Like how an AI can tell biological sex from a retina scan, but no human doctor can, even knowing it's possible. That's not just "faster logic." It's a cognitive scale we simply don't have. I see no reason to assume that we're anywhere near the far end of the intelligence spectrum.
My comment about its potential persuasion capabilities was more about the dangers of such a system. That an ASI might be so good at persuasion, threat construction, and lying that it could influence us in ways we don't even fully realize. Not because it's "divine" - but because it's just far more competent at manipulating human behavior than any human is.
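For a sense of how results like the retina one work mechanically: nobody tells the model what the indicator is; you hand it labeled scans and it finds a separating pattern on its own. A toy sketch, with synthetic stand-in "scans" and sklearn in place of the real deep models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, pixels = 1000, 64 * 64
X = rng.normal(size=(n, pixels))            # stand-in "retina scans"
w_hidden = rng.normal(size=pixels) * 0.05   # a subtle pattern no human would spot by eye
y = (X @ w_hidden + rng.normal(size=n) > 0).astype(int)  # label driven by that pattern

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # typically well above chance, with no visible structure
```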
-
They had reusable poop sponges, what more do you want??
More sponges to begin with.
-
Would they though? I think if anything most industries and economies would be booming: more disposable income results in more people buying stuff. That results in more profitable businesses, and thus more taxes are collected. More taxes being available to the government means better public services.
Even the banks would benefit: loans would be more stable, since the delinquency rate would be much lower if everyone had better pay.
The only people who would lose out would be the idiot day traders, who rely on uncertainty and quite a lot of luck to make any money. In a more stable global economy, businesses would be guaranteed to make money, so there would be no cheap deals to be made.
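The chain from disposable income to tax revenue is just the textbook spending multiplier, easy to sanity-check with toy numbers (the MPC and tax rate below are invented, not estimates):

```python
mpc = 0.8        # assumed marginal propensity to consume
tax_rate = 0.25  # assumed flat tax on each round of income

injection = 1_000.0  # extra disposable income handed to households
total_spending, total_tax, flow = 0.0, 0.0, injection
for _ in range(100):                     # iterate the earn -> tax -> spend loop
    total_tax += flow * tax_rate
    spent = flow * (1 - tax_rate) * mpc  # what's left gets partly re-spent
    total_spending += spent
    flow = spent                         # someone else's income next round
print(round(total_spending, 2), round(total_tax, 2))
```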
More taxes being available to the government means better public services.
You forgot the /s
-
At the very least it’ll help with your spelling and grammar.
Ye, sure, any other bright thoughts?
-
Very similar to global warming. If government AI policy is to strengthen the military, empire, zionism, and oligarchy, then voters need to be miserable, with bigger issues in their lives and hatred towards trans hispanic immigrant pet eaters.
Skynet is awesome, and will be programmed for exactly that supremacy. The same techbros who say polite things about UBI/freedom dividends/universal high income are the ones vying to take all of our money to deliver skynet. If the slave class doesn't gain political influence before skynet arrives, then "power sharing with the slaves" through UBI is far less likely than genocide of the uppity classes.
-
I'm not saying ASI would think in some magical new way. I'm saying it could process so much more data, with such precision, that it would detect patterns or connections we physically can't. Like how an AI can tell biological sex from a retina scan, but no human doctor can, even knowing it's possible. That's not just "faster logic." It's a cognitive scale we simply don't have. I see no reason to assume that we're anywhere near the far end of the intelligence spectrum.
My comment about its potential persuasion capabilities was more about the dangers of such a system. That an ASI might be so good at persuasion, threat construction, and lying that it could influence us in ways we don't even fully realize. Not because it's "divine" - but because it's just far more competent at manipulating human behavior than any human is.
Superpowered lying is already a thing, and all we needed was demographic data and context control.
Today, it is possible to get a population to believe almost anything. Show them the right argument, at the right time, in the right context, and they believe it. Facebook and Google have scaled exactly that up into their main sources of revenue.
Same goes for attention hacking. AI-generated content designed to hook viewers functions in entirely predictable and fairly well-understood ways. And the same goes for the algorithms which "recommend" additional content based on what someone is watching.
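Those recommendation algorithms really are well understood; the classic core is item-to-item similarity over engagement data. A toy sketch with a made-up watch matrix:

```python
import numpy as np

# rows = users, cols = videos; 1 = watched (all data invented)
watches = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
], dtype=float)
titles = ["rage bait", "drama recap", "cat video", "diy", "conspiracy"]

# cosine similarity between item columns: items watched by the same people rank close
norms = np.linalg.norm(watches, axis=0)
sim = (watches.T @ watches) / np.outer(norms, norms)

current = 0                                  # viewer just watched "rage bait"
ranked = np.argsort(-sim[current])
print([titles[i] for i in ranked if i != current][:2])  # next videos to queue
```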
As for why doctors can't do things AIs are pulling off, I'd suggest that's because current systems are using indicators we don't know about, and which they aren't sentient enough to explain. If they could explain them, I have no doubt a human doctor, given enough time, could learn about and detect such indicators.
There is no evidence that what these models are doing is "beyond our scale of thinking".
But again, I do think the machine will be faster.
Current models display "emergent capabilities", as in abilities we don't know about before the model is created and tested. But once it is created, we can and have figured out what it is doing and how.
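One standard way we do figure out what a model is leaning on is permutation importance: scramble one input feature at a time and watch how far accuracy falls. A quick sketch using sklearn's implementation on a stock dataset (the setup is illustrative, not any particular study):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

names = load_breast_cancer().feature_names
top = result.importances_mean.argsort()[::-1][:3]
for i in top:  # the indicators a human expert could then go study directly
    print(names[i], round(result.importances_mean[i], 3))
```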
-
None of those things directly threatened the power of an oligarch.
They are examples of complex and difficult tasks that humans are capable of when working together, implying by comparison that reordering society is also achievable.