
AI Utopia, AI Apocalypse, and AI Reality: If we can’t build an equitable, sustainable society on our own, it’s pointless to hope that a machine that can’t think straight will do it for us.

  • Even if it is, I don't see what it's going to conclude that we haven't already.

    If we do build "the AI that will save us" it's just going to tell us "in order to ensure your existence as a species, take care of the planet and each other" and I really, really, can't picture a scenario where we actually listen.

  • It's not that the output of an ASI would be incomprehensible but that as humans we're simply incapable of predicting what it would do/say because we're not it. We're incapable of even imagining how convincing of an argument a system like this could make.

    We're incapable of even imagining how convincing of an argument a system like this could make.

    Vaguely gestures at all of sci-fi, depicting the full spectrum of artificial sentience, from funny comedic-relief idiot, to literal god.

    What exactly do you mean by that?

  • Maybe we just can't count 'r's properly and it is our fault!

  • The issue isn’t whether we can imagine a smarter entity - obviously we can, as we do in sci-fi. But what we imagine are just results of human intelligence. They’re always bounded by our own cognitive limits. We picture a smarter person, not something categorically beyond us.

    The real concept behind Artificial Superintelligence is that it wouldn’t just be smarter in the way Einstein was smarter than average - it would be to us what we are to ants. Or less generously, what we are to bacteria. We can observe bacteria under a microscope, study their behavior, even manipulate them - and they have no concept of what we are, or that we even exist. That’s the kind of intelligence gap we're talking about.

    Imagine trying to argue against a perfect proof. Take something as basic as 1 + 1 = 2. Now imagine an argument for something much more complex - like a definitive answer to climate change, or consciousness, or free will - delivered with the same kind of clarity and irrefutability. That’s the kind of persuasive power we're dealing with. Not charisma. Not rhetoric. Not "debating skills." But precision of thought orders of magnitude beyond our own.

    The fact that we think we can comprehend what this would be like is part of the limitation. Just like a five-year-old thinks they understand what it means to be an adult - until they grow up and realize they had no idea.
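A concrete sense of what a literally irrefutable proof looks like, offered as a minimal sketch rather than anything posted in the thread: below is a machine-checked proof of 1 + 1 = 2 in Lean 4, assuming the standard natural-number literals (the theorem name is invented for illustration).

```lean
-- A machine-checked proof that 1 + 1 = 2 over the natural numbers.
-- Both sides reduce to the same numeral, so reflexivity closes the goal.
theorem one_plus_one_eq_two : 1 + 1 = 2 := rfl

-- The same fact, discharged by the decision procedure for decidable propositions.
example : 1 + 1 = 2 := by decide
```

Proofs of this kind exist only where every premise is fully formalized; whether questions like climate policy or consciousness can be pinned down the same way is exactly what the rebuttal further down the thread disputes.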

  • No way, do you want to tell me that spftware which is tailored and trained by megacorps, will not save our covilisation?!

  • The Romans had a slave economy. I don't really think that counts as sustainable or even having worked it out.

    So do we. We just leave them in another country so we don't have to think about them.

  • Logic is logic. There is no "advanced" logic that somehow allows you to decipher aspects of reality you otherwise could not. Humanity has yet to encounter anything that cannot be consistently explained in more and more detail, as we investigate it further.

    We can and do answer complex questions. That human society is too disorganized to disseminate the answers we do have, and act on them at scale, isn't going to be changed by explaining the same thing slightly better.

    Imagine trying to argue against a perfect proof. Take something as basic as 1 + 1 = 2. Now imagine an argument for something much more complex - like a definitive answer to climate change, or consciousness, or free will - delivered with the same kind of clarity and irrefutability.

    Absolutely nothing about humans makes me think we are incapable of finding such answers on our own. And if we are genuinely incapable of developing a definitive answer on something, I'm more inclined to believe there isn't one, than assume that we are simply too "small-minded" to find an answer that is obvious to the hypothetical superintelligence.

    But precision of thought orders of magnitude beyond our own.

    This is just the "god doesn't need to make sense to us, his thoughts are beyond our comprehension" argument, again.

    Just like a five-year-old thinks they understand what it means to be an adult - until they grow up and realize they had no idea.

    They don't know, because we don't tell them. Children in adverse conditions are perfectly capable of understanding the realities of survival.

    You are using the fact that there are things we don't understand, yet, as if it were proof that there are things we can't understand, ever. Or eventually figure out on our own.

    That non-sentients cannot comprehend sentience (ants and humans) has absolutely no bearing on whether sentients are able to comprehend other sentients (humans and machine intelligences).

    I think machine thinking, in contrast to the human mind, will just be a faster processor of logic.

    There is absolutely nothing stopping the weakest modern CPU from running the exact same code as the fastest modern CPU. The only difference will be the rate at which the work is completed.

  • No way, do you want to tell me that spftware which is tailored and trained by megacorps, will not save our covilisation?!

    At the very least it’ll help with your spelling and grammar.

  • Beginning by insulting your opponent isn’t exactly the best way to ensure they’ll finish reading your message.

    You have a great day.

  • Fair.

    I've removed it, and I'm sorry.

  • Kill the AI company CEOs and a few choice leadership, and we can end this nightmare now.

  • Even if it is, I don't see what it's going to conclude that we haven't already.

    If we do build "the AI that will save us" it's just going to tell us "in order to ensure your existence as a species, take care of the planet and each other" and I really, really, can't picture a scenario where we actually listen.

    It won't tell us what to do, it'll do the very complex thing we ask it to. The biggest issues facing our species and planet atm all boil down to highly complex logistics. We produce enough food to make everyone in the world fat. There is sufficient shelter and housing to make everyone safe and secure from the elements. We know how to generate electricity and even distribute it securely without destroying the global climate systems.

    What we seem unable to do is allocate, transport, and prioritize resources to effectively execute on these things, because they present very challenging logistical problems. The various disciplines underpinning AI dev, however, from ML to network sciences to the resource allocation algorithms making your computer work, are all very well suited to solving logistics problems and building systems that do so (a toy sketch of such an allocation problem follows below). I really don't see a sustainable future where "AI" is not fundamental to the logistics operations supporting it.
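To make the "it all boils down to logistics" point concrete, here is a small, self-contained sketch, not from the thread and with every depot, city, quantity, and cost invented, of the kind of allocation problem being gestured at, solved with a naive greedy heuristic in Python:

```python
# Toy illustration: allocate supply from depots to cities at low transport cost
# using a naive greedy heuristic. Every name and number here is invented.

supply = {"depot_a": 50, "depot_b": 70}                # units available
demand = {"city_x": 40, "city_y": 30, "city_z": 50}    # units needed
cost = {                                               # cost per unit shipped
    ("depot_a", "city_x"): 4, ("depot_a", "city_y"): 6, ("depot_a", "city_z"): 9,
    ("depot_b", "city_x"): 5, ("depot_b", "city_y"): 3, ("depot_b", "city_z"): 7,
}

def greedy_allocate(supply, demand, cost):
    """Ship along the cheapest remaining route first until demand is exhausted."""
    supply, demand = dict(supply), dict(demand)        # work on copies
    shipments = []
    for (src, dst), unit_cost in sorted(cost.items(), key=lambda kv: kv[1]):
        qty = min(supply[src], demand[dst])
        if qty > 0:
            shipments.append((src, dst, qty, qty * unit_cost))
            supply[src] -= qty
            demand[dst] -= qty
    return shipments, sum(s[3] for s in shipments)

shipments, total_cost = greedy_allocate(supply, demand, cost)
for src, dst, qty, route_cost in shipments:
    print(f"{src} -> {dst}: {qty} units (cost {route_cost})")
print("total cost:", total_cost)
```

A real system would use linear programming or a dedicated solver rather than a single greedy pass, but the shape of the problem - finite supply, competing demands, costs to minimize - is what the comment is pointing at.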

  • The problem is that we absolutely can build a sustainable society on our own. We've had the blueprints forever; the Romans worked this out centuries ago. The problem is that there's always some power-seeking prick who messes it up, so we gave up trying to build a fair society and just went with feudalism and then capitalism instead.

    The Romans were one of the most extractive and wasteful empires in history. Wtf are you on about????

  • I’m not saying ASI would think in some magical new way. I’m saying it could process so much more data with such precision that it would detect patterns or connections we physically can’t. Like how an AI can tell biological sex from a retina scan, but no human doctor can, even knowing it's possible. That’s not just “faster logic.” It’s a cognitive scale we simply don’t have. I see no reason to assume that we're anywhere near the far end of the intelligence spectrum.

    My comment about its potential persuasion capabilities was more about the dangers of such a system: an ASI might be so good at persuasion, threat construction, and lying that it could influence us in ways we don’t even fully realize. Not because it’s “divine” - but because it’s just far more competent at manipulating human behavior than any human is.

  • they had reusable poop sponges, what more do you want??

    More sponges to begin with.

  • Would they though? I think if anything most industries and economies would be booming: more disposable income results in more people buying stuff. This results in more profitable businesses, and thus more taxes are collected. More taxes being available to the government means better public services.

    Even the banks would benefit, loans would be more stable since the delinquency rate would be much lower if everyone had better pay.

    The only people who would lose out would be the idiot day traders who rely on uncertainty and quite a lot of luck in order to make any money. In a more stable global economy businesses would be guaranteed to make money and so there would be no cheap deals that could be made.

    More taxes being available to the government means better public services.

    You forgot the /s

  • And the ones preventing society from organizing properly are the ones building/using these shitty AIs to further manipulate society.

  • At the very least it’ll help with your spelling and grammar.

    Ye, sure, any other bright thoughts?

  • Very similar to global warming. If government AI policy is to strengthen military, empire, zionism, and oligarchy, then voters need to be miserable and have bigger issues in their lives and hatred towards trans hispanic immigrant pet eaters.

    Skynet is awesome, and will be programmed for such supremacy. The same techbros who say polite things about UBI/freedom dividends/Universal high income are the ones vying to take all of our money to deliver skynet. If the slave class doesn't take political influence before skynet, then "power sharing with the slaves" through UBI is far less likely than genocide of the uppity classes.
