
AI Utopia, AI Apocalypse, and AI Reality: If we can’t build an equitable, sustainable society on our own, it’s pointless to hope that a machine that can’t think straight will do it for us.

Technology
  • Would they though? I think if anything most industries and economies would be booming: more disposable income results in more people buying stuff. This results in more profitable businesses, and thus more taxes are collected. More taxes being available to the government means better public services.

    Even the banks would benefit, loans would be more stable since the delinquency rate would be much lower if everyone had better pay.

    The only people who would lose out would be the idiot day traders who rely on uncertainty and quite a lot of luck in order to make any money. In a more stable global economy businesses would be guaranteed to make money and so there would be no cheap deals that could be made.

    1. Universal Healthcare - kills predatory health insurance and drug manufacturers
    2. State-sponsored housing / accessible housing - kills the real estate market
    3. Well financed public education - kills private schools

    I am talking about the markets that rely on the suffering of people to make massive amounts of money. Monied interests have proven time and time again what our government stands for.

  • This post did not contain any content.

    fixed title

    If we can’t build an equitable, sustainable society on our own, it’s pointless to hope that a machine that can’t think straight will do it for us.

  • The problem is that we absolutely can build a sustainable society on our own. We've had the blueprints forever, the Romans worked this out centuries ago, the problem is that there's always some power seeking prick who messes it up. So we gave up trying to build a fair society and just went with feudalism and then capitalism instead.

    sustainable
    Romans

    Lol.

  • sustainable
    Romans

    Lol.

    they had reusable poop sponges what more do you want??

  • This is the same logic people apply to God being incomprehensible.

    Are you suggesting that if such a thing can be built, its word should be gospel, even if it is impossible for us to understand the logic behind it?

    I don't subscribe to this. Logic is logic. You don't need a new paradigm of mind to explore all conclusions that exist. If something cannot be explained and comprehended, transmitted from one sentient mind to another, then it didn't make sense in the first place.

    And you might bring up some of the stuff AI has done in material science as an example of it doing things human thinking cannot. But that's not some new kind of thinking. Once the molecular or material structure was found, humans have been perfectly capable of comprehending it.

    All it's doing is exploring the conclusions that exist, faster. And when it comes to societal challenges, I don't think it's going to find some win-win solution we just haven't thought of. That's a level of optimism I would consider insane.

    It's not that the output of an ASI would be incomprehensible but that as humans we're simply incapable of predicting what it would do/say because we're not it. We're incapable of even imagining how convincing of an argument a system like this could make.

  • The problem is that we absolutely can build a sustainable society on our own. We've had the blueprints forever, the Romans worked this out centuries ago, the problem is that there's always some power seeking prick who messes it up. So we gave up trying to build a fair society and just went with feudalism and then capitalism instead.

    The Romans had a slave economy. I don't really think that counts as sustainable or even having worked it out.

  • That's fine, I'm just correcting the misrepresentation of the view that was in the headline.

    There is no misrepresentation of the headline. Plenty of people are expecting current LLMs to do exactly that, and are working on implementing it right at this moment for all kinds of crap.

  • Even if it is, I don't see what it's going to conclude that we haven't already.

    If we do build "the AI that will save us" it's just going to tell us "in order to ensure your existence as a species, take care of the planet and each other" and I really, really, can't picture a scenario where we actually listen.

  • It's not that the output of an ASI would be incomprehensible but that as humans we're simply incapable of predicting what it would do/say because we're not it. We're incapable of even imagining how convincing of an argument a system like this could make.

    We're incapable of even imagining how convincing of an argument a system like this could make.

    Vaguely gestures at all of sci-fi, depicting the full spectrum of artificial sentience, from funny comedic-relief idiot, to literal god.

    What exactly do you mean by that?

  • This post did not contain any content.

    Maybe we just can't count 'r's properly and it is our fault!

  • We're incapable of even imagining how convincing of an argument a system like this could make.

    Vaguely gestures at all of sci-fi, depicting the full spectrum of artificial sentience, from funny comedic-relief idiot, to literal god.

    What exactly do you mean by that?

    The issue isn’t whether we can imagine a smarter entity - obviously we can, as we do in sci-fi. But what we imagine are just results of human intelligence. They’re always bounded by our own cognitive limits. We picture a smarter person, not something categorically beyond us.

    The real concept behind Artificial Superintelligence is that it wouldn’t just be smarter in the way Einstein was smarter than average - it would be to us what we are to ants. Or less generously, what we are to bacteria. We can observe bacteria under a microscope, study their behavior, even manipulate them - and they have no concept of what we are, or that we even exist. That’s the kind of intelligence gap we're talking about.

    Imagine trying to argue against a perfect proof. Take something as basic as 1 + 1 = 2. Now imagine an argument for something much more complex - like a definitive answer to climate change, or consciousness, or free will - delivered with the same kind of clarity and irrefutability. That’s the kind of persuasive power we're dealing with. Not charisma. Not rhetoric. Not "debating skills." But precision of thought orders of magnitude beyond our own.

    The fact that we think we can comprehend what this would be like is part of the limitation. Just like a five-year-old thinks they understand what it means to be an adult - until they grow up and realize they had no idea.
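
    As a concrete aside on the one case above where a "perfect proof" genuinely exists: 1 + 1 = 2 can be checked mechanically by a proof assistant. A minimal sketch in Lean, offered only as an illustration of what irrefutability looks like in arithmetic, not as anything the argument above depends on:

    ```lean
    -- For natural numbers, `1 + 1` reduces to `2` by computation,
    -- so reflexivity is the whole proof; the checker verifies it.
    example : 1 + 1 = 2 := rfl
    ```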

  • This post did not contain any content.

    No way, do you want to tell me that spftware which is tailored and trained by megacorps, will not save our covilisation?!

  • The Romans had a slave economy. I don't really think that counts as sustainable or even having worked it out.

    So do we. We just leave them in another country so we don't have to think about them.

  • The issue isn’t whether we can imagine a smarter entity - obviously we can, as we do in sci-fi. But what we imagine are just results of human intelligence. They’re always bounded by our own cognitive limits. We picture a smarter person, not something categorically beyond us.

    The real concept behind Artificial Superintelligence is that it wouldn’t just be smarter in the way Einstein was smarter than average - it would be to us what we are to ants. Or less generously, what we are to bacteria. We can observe bacteria under a microscope, study their behavior, even manipulate them - and they have no concept of what we are, or that we even exist. That’s the kind of intelligence gap we're talking about.

    Imagine trying to argue against a perfect proof. Take something as basic as 1 + 1 = 2. Now imagine an argument for something much more complex - like a definitive answer to climate change, or consciousness, or free will - delivered with the same kind of clarity and irrefutability. That’s the kind of persuasive power we're dealing with. Not charisma. Not rhetoric. Not "debating skills." But precision of thought orders of magnitude beyond our own.

    The fact that we think we can comprehend what this would be like is part of the limitation. Just like a five-year-old thinks they understand what it means to be an adult - until they grow up and realize they had no idea.

    Logic is logic. There is no "advanced" logic that somehow allows you to decipher aspects of reality you otherwise could not. Humanity has yet to encounter anything that cannot be consistently explained in more and more detail, as we investigate it further.

    We can and do answer complex questions. That human society is too disorganized to disseminate the answers we do have, and act on them at scale, isn't going to be changed by explaining the same thing slightly better.

    Imagine trying to argue against a perfect proof. Take something as basic as 1 + 1 = 2. Now imagine an argument for something much more complex - like a definitive answer to climate change, or consciousness, or free will - delivered with the same kind of clarity and irrefutability.

    Absolutely nothing about humans makes me think we are incapable of finding such answers on our own. And if we are genuinely incapable of developing a definitive answer on something, I'm more inclined to believe there isn't one, than assume that we are simply too "small-minded" to find an answer that is obvious to the hypothetical superintelligence.

    But precision of thought orders of magnitude beyond our own.

    This is just the "god doesn't need to make sense to us, his thoughts are beyond our comprehension" -argument, again.

    Just like a five-year-old thinks they understand what it means to be an adult - until they grow up and realize they had no idea.

    They don't know, because we don't tell them. Children in adverse conditions are perfectly capable of understanding the realities of survival.

    You are using the fact that there are things we don't understand, yet, as if it were proof that there are things we can't understand, ever. Or eventually figure out on our own.

    That non-sentients cannot comprehend sentience (ants and humans) has absolutely no bearing on whether sentients are able to comprehend other sentients (humans and machine intelligences).

    I think machine thinking, in contrast to the human mind, will just be a faster processor of logic.

    There is absolutely nothing stopping the weakest modern CPU from running the exact same code as the fastest modern CPU. The only difference will be the rate at which the work is completed.
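
    A toy sketch of that last point, assuming nothing beyond the Python standard library: the same deterministic program produces the same result on any machine, and only the measured time changes.

    ```python
    # Toy illustration: a fixed computation gives the same answer on any CPU;
    # the hardware only changes how long `elapsed` ends up being.
    import time

    def count_primes(limit: int) -> int:
        """Count primes below `limit` with a simple sieve of Eratosthenes."""
        sieve = [True] * limit
        sieve[0:2] = [False, False]
        for n in range(2, int(limit ** 0.5) + 1):
            if sieve[n]:
                sieve[n * n::n] = [False] * len(range(n * n, limit, n))
        return sum(sieve)

    start = time.perf_counter()
    result = count_primes(1_000_000)
    elapsed = time.perf_counter() - start

    # `result` is 78498 everywhere this runs; only `elapsed` varies with the CPU.
    print(result, f"{elapsed:.3f}s")
    ```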

  • No way, do you want to tell me that spftware which is tailored and trained by megacorps, will not save our covilisation?!

    At the very least it’ll help with your spelling and grammar.

  • Logic is logic. There is no "advanced" logic that somehow allows you to decipher aspects of reality you otherwise could not. Humanity has yet to encounter anything that cannot be consistently explained in more and more detail, as we investigate it further.

    We can and do answer complex questions. That human society is too disorganized to disseminate the answers we do have, and act on them at scale, isn't going to be changed by explaining the same thing slightly better.

    Imagine trying to argue against a perfect proof. Take something as basic as 1 + 1 = 2. Now imagine an argument for something much more complex - like a definitive answer to climate change, or consciousness, or free will - delivered with the same kind of clarity and irrefutability.

    Absolutely nothing about humans makes me think we are incapable of finding such answers on our own. And if we are genuinely incapable of developing a definitive answer on something, I'm more inclined to believe there isn't one, than assume that we are simply too "small-minded" to find an answer that is obvious to the hypothetical superintelligence.

    But precision of thought orders of magnitude beyond our own.

    This is just the "god doesn't need to make sense to us, his thoughts are beyond our comprehension" -argument, again.

    Just like a five-year-old thinks they understand what it means to be an adult - until they grow up and realize they had no idea.

    They don't know, because we don't tell them. Children in adverse conditions are perfectly capable of understanding the realities of survival.

    You are using the fact that there are things we don't understand, yet, as if it were proof that there are things we can't understand, ever. Or eventually figure out on our own.

    That non-sentients cannot comprehend sentience (ants and humans) has absolutely no bearing on whether sentients are able to comprehend other sentients (humans and machine intelligences).

    I think machine thinking, in contrast to the human mind, will just be a faster processor of logic.

    There is absolutely nothing stopping the weakest modern CPU from running the exact same code as the fastest modern CPU. The only difference will be the rate at which the work is completed.

    Beginning by insulting your opponent isn’t exactly the best way to ensure they’ll finish reading your message.

    You have a great day.

  • Beginning by insulting your opponent isn’t exactly the best way to ensure they’ll finish reading your message.

    You have a great day.

    Fair.

    I've removed it, and I'm sorry.

  • This post did not contain any content.

    Kill the AI company CEOs and a few choice leaders, and we can end this nightmare now.

  • Even if it is, I don't see what it's going to conclude that we haven't already.

    If we do build "the AI that will save us" it's just going to tell us "in order to ensure your existence as a species, take care of the planet and each other" and I really, really, can't picture a scenario where we actually listen.

    It won't tell us what to do; it'll do the very complex things we ask it to. The biggest issues facing our species and planet at the moment all boil down to highly complex logistics. We produce enough food to make everyone in the world fat. There is sufficient shelter and housing to keep everyone safe and secure from the elements. We know how to generate electricity, and even distribute it securely, without destroying the global climate systems. What we seem unable to do is allocate, transport, and prioritize resources to actually execute on these things, because they present very challenging logistical problems.

    The various disciplines underpinning AI development, from ML to network science to the resource-allocation algorithms that make your computer work, are all very well suited to solving logistics problems and building systems that do so. I really don't see a sustainable future where "AI" is not fundamental to the logistics operations supporting it.
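
    To make "logistics as a solvable optimization problem" concrete, here is a minimal sketch of a classic transportation problem solved with off-the-shelf linear programming. The regions, quantities, and costs are made-up placeholders, and scipy is assumed to be available; it is an illustration of the kind of allocation problem meant, not anyone's actual system.

    ```python
    # Minimal transportation-problem sketch: ship goods from surplus regions
    # to deficit regions at minimal cost, using plain linear programming.
    import numpy as np
    from scipy.optimize import linprog

    supply = np.array([50, 30])          # tonnes available at two surplus regions
    demand = np.array([20, 35, 25])      # tonnes needed at three deficit regions
    cost = np.array([[4, 6, 9],          # shipping cost per tonne, source i -> destination j
                     [5, 3, 7]])

    n_src, n_dst = cost.shape
    c = cost.ravel()                     # objective: total shipping cost

    # Supply constraints: each source ships no more than it has.
    A_supply = np.zeros((n_src, n_src * n_dst))
    for i in range(n_src):
        A_supply[i, i * n_dst:(i + 1) * n_dst] = 1

    # Demand constraints: each destination receives at least what it needs
    # (written as <= by negating both sides).
    A_demand = np.zeros((n_dst, n_src * n_dst))
    for j in range(n_dst):
        A_demand[j, j::n_dst] = -1

    A_ub = np.vstack([A_supply, A_demand])
    b_ub = np.concatenate([supply, -demand])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    print(res.x.reshape(n_src, n_dst))   # optimal shipment plan
    print(res.fun)                       # minimal total cost
    ```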

  • The problem is that we absolutely can build a sustainable society on our own. We've had the blueprints forever, the Romans worked this out centuries ago, the problem is that there's always some power seeking prick who messes it up. So we gave up trying to build a fair society and just went with feudalism and then capitalism instead.

    The Romans were one of the most extractive and wasteful empires in history. Wtf are you on about????
