
AI Utopia, AI Apocalypse, and AI Reality: If we can’t build an equitable, sustainable society on our own, it’s pointless to hope that a machine that can’t think straight will do it for us.

Technology
  • Logic is logic. There is no "advanced" logic that somehow allows you to decipher aspects of reality you otherwise could not. Humanity has yet to encounter anything that cannot be consistently explained in more and more detail, as we investigate it further.

    We can and do answer complex questions. That human society is too disorganized to disseminate the answers we do have, and act on them at scale, isn't going to be changed by explaining the same thing slightly better.

    Imagine trying to argue against a perfect proof. Take something as basic as 1 + 1 = 2. Now imagine an argument for something much more complex - like a definitive answer to climate change, or consciousness, or free will - delivered with the same kind of clarity and irrefutability.

    Absolutely nothing about humans makes me think we are incapable of finding such answers on our own. And if we are genuinely incapable of developing a definitive answer on something, I'm more inclined to believe there isn't one, than assume that we are simply too "small-minded" to find an answer that is obvious to the hypothetical superintelligence.

    But precision of thought orders of magnitude beyond our own.

    This is just the "god doesn't need to make sense to us, his thoughts are beyond our comprehension" argument, again.

    Just like a five-year-old thinks they understand what it means to be an adult - until they grow up and realize they had no idea.

    They don't know, because we don't tell them. Children in adverse conditions are perfectly capable of understanding the realities of survival.

    You are using the fact that there are things we don't understand yet as if it were proof that there are things we can't ever understand, or eventually figure out on our own.

    That non-sentients cannot comprehend sentience (ants and humans) has absolutely no bearing on whether sentients are able to comprehend other sentients (humans and machine intelligences).

    I think machine thinking, in contrast to the human mind, will just be a faster processor of logic.

    There is absolutely nothing stopping the weakest modern CPU from running the exact same code as the fastest modern CPU. The only difference will be the rate at which the work is completed.
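To make the CPU point concrete: the output of a deterministic program is fixed by the code and its input, not by the hardware running it. A minimal Python sketch (the prime-counting function is just an arbitrary workload chosen for illustration):

```python
import time

def count_primes(limit):
    """Trial-division prime count: deterministic, hardware-independent."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

start = time.perf_counter()
result = count_primes(10_000)
elapsed = time.perf_counter() - start

# `result` is identical on any machine that runs this code;
# only `elapsed` depends on the processor's speed.
print(result)  # 1229 primes below 10,000
```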

    Beginning by insulting your opponent isn’t exactly the best way to ensure they’ll finish reading your message.

    You have a great day.

  • Beginning by insulting your opponent isn’t exactly the best way to ensure they’ll finish reading your message.

    You have a great day.

    Fair.

    I've removed it, and I'm sorry.

  • This post did not contain any content.

    Kill the AI company CEOs and a few choice leadership, and we can end this nightmare now.

  • Even if it is, I don't see what it's going to conclude that we haven't already.

    If we do build "the AI that will save us", it's just going to tell us "in order to ensure your existence as a species, take care of the planet and each other", and I really, really can't picture a scenario where we actually listen.

    It won't tell us what to do, it'll do the very complex thing we ask it to. The biggest issues facing our species and planet at the moment all boil down to highly complex logistics. We produce enough food to make everyone in the world fat. There is sufficient shelter and housing to make everyone safe and secure from the elements. We know how to generate electricity, and even distribute it securely, without destroying the global climate systems.

    What we seem unable to do is allocate, transport, and prioritize resources to effectively execute on these things, because they present very challenging logistical problems. The various disciplines underpinning AI development, however - from ML to network science to the resource allocation algorithms making your computer work - are all very well suited to solving logistics problems and building systems that do so. I really don't see a sustainable future where "AI" is not fundamental to the logistics operations supporting it.
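The logistics point can be made concrete. Below is a toy sketch of the kind of allocation problem meant here: moving surplus food to where it is needed at low transport cost. All depot/city names and quantities are invented for illustration, and real systems use linear programming or network-flow solvers at vastly larger scale; a greedy version fits in a few lines:

```python
# Tons available at each depot, tons needed at each city, and the
# transport cost per ton for each depot-city pair (all made up).
supply = {"depot_a": 50, "depot_b": 30}
demand = {"city_x": 40, "city_y": 25, "city_z": 15}
cost = {
    ("depot_a", "city_x"): 2, ("depot_a", "city_y"): 5, ("depot_a", "city_z"): 4,
    ("depot_b", "city_x"): 6, ("depot_b", "city_y"): 1, ("depot_b", "city_z"): 3,
}

def greedy_allocate(supply, demand, cost):
    """Ship along the cheapest remaining route until demand is met."""
    supply, demand = dict(supply), dict(demand)  # don't mutate the caller's data
    shipments = []
    for (src, dst), _ in sorted(cost.items(), key=lambda kv: kv[1]):
        qty = min(supply[src], demand[dst])
        if qty > 0:
            shipments.append((src, dst, qty))
            supply[src] -= qty
            demand[dst] -= qty
    return shipments

for src, dst, qty in greedy_allocate(supply, demand, cost):
    print(f"{qty} tons: {src} -> {dst}")
```

Greedy allocation is not optimal in general, but it shows the shape of the problem: finite supply, distributed demand, and a cost structure an algorithm can search far faster than a committee can.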

  • The problem is that we absolutely can build a sustainable society on our own. We've had the blueprints forever; the Romans worked this out centuries ago. The problem is that there's always some power-seeking prick who messes it up. So we gave up trying to build a fair society and just went with feudalism and then capitalism instead.

    The Romans were one of the most extractive and wasteful empires in history. Wtf are you on about????

  • Fair.

    I've removed it, and I'm sorry.

    I’m not saying ASI would think in some magical new way. I’m saying it could process so much more data, with such precision, that it would detect patterns or connections we physically can’t. Like how an AI can tell biological sex from a retina scan, but no human doctor can, even knowing it's possible. That’s not just “faster logic.” It’s a cognitive scale we simply don’t have. I see no reason to assume that we're anywhere near the far end of the intelligence spectrum.

    My comment about its potential persuasion capabilities was more about the dangers of such a system: that an ASI might be so good at persuasion, threat construction, and lying that it could influence us in ways we don’t even fully realize. Not because it’s “divine” - but because it’s just far more competent at manipulating human behavior than any human is.

  • they had reusable poop sponges, what more do you want??

    More sponges to begin with.

  • Would they though? I think if anything most industries and economies would be booming, more disposable income results in more people buying stuff. This results in more profitable businesses and thus more taxes are collected. More taxes being available to the government means better public services.

    Even the banks would benefit, loans would be more stable since the delinquency rate would be much lower if everyone had better pay.

    The only people who would lose out would be the idiot day traders, who rely on uncertainty and quite a lot of luck to make any money. In a more stable global economy, businesses would be guaranteed to make money, and so there would be no cheap deals to be made.

    More taxes being available to the government means better public services.

    You forgot the /s

  • This post did not contain any content.

    And the ones preventing society from organizing properly are the ones building and using these shitty AIs to further manipulate society.

  • At the very least it’ll help with your spelling and grammar.

    Ye, sure, any other bright thoughts?

  • This post did not contain any content.

    Very similar to global warming. If government AI policy is to strengthen military, empire, zionism, and oligarchy then voters need to be miserable and have bigger issues in their lives and hatred towards trans hispanic immigrant pet eaters.

    Skynet is awesome, and will be programmed for such supremacy. The same techbros who say polite things about UBI/freedom dividends/Universal high income are the ones vying to take all of our money to deliver skynet. If the slave class doesn't take political influence before skynet, then "power sharing with the slaves" through UBI is far less likely than genocide of the uppity classes.

  • I’m not saying ASI would think in some magical new way. I’m saying it could process so much more data, with such precision, that it would detect patterns or connections we physically can’t. Like how an AI can tell biological sex from a retina scan, but no human doctor can, even knowing it's possible. That’s not just “faster logic.” It’s a cognitive scale we simply don’t have. I see no reason to assume that we're anywhere near the far end of the intelligence spectrum.

    My comment about its potential persuasion capabilities was more about the dangers of such a system: that an ASI might be so good at persuasion, threat construction, and lying that it could influence us in ways we don’t even fully realize. Not because it’s “divine” - but because it’s just far more competent at manipulating human behavior than any human is.

    Superpowered lying is already a thing, and all we needed was demographic data and context control.

    Today, it is possible to get a population to believe almost anything. Show them the right argument, at the right time, in the right context, and they believe it. Facebook and Google have scaled exactly that up into their main sources of revenue.

    Same goes for attention hacking. AI-generated content designed to hook viewers functions in entirely predictable and fairly well-understood ways. And the same goes for the algorithms that "recommend" additional content based on what someone is watching.

    As for why doctors can't do things AIs are pulling off, I'd suggest that's because current systems are using indicators we don't know about, which they aren't sentient enough to explain. If they could, I have no doubt a human doctor, given enough time, could learn about, and detect, such indicators.

    There is no evidence that what these models are doing is "beyond our scale of thinking".

    But again, I do think the machine will be faster.

    Current models display "emergent capabilities" - abilities we don't know about before the model is created and tested. But once a model is created, we can and have figured out what it is doing, and how.
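As a deliberately tiny stand-in for that kind of after-the-fact analysis (real interpretability work on large models is far harder), here is a sketch in which a model latches onto a predictive input on its own, and inspecting the trained weights afterwards shows exactly which input it is using. The data and features are synthetic:

```python
import math
import random

random.seed(0)

def make_example():
    # Three input features; only the third actually determines the label.
    x = [random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)]
    label = 1 if x[2] > 0 else 0
    return x, label

def train_logistic(data, steps=2000, lr=0.1):
    """Plain SGD logistic regression, no libraries."""
    w = [0.0, 0.0, 0.0]
    for _ in range(steps):
        x, y = random.choice(data)
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1 / (1 + math.exp(-z))        # sigmoid
        for i in range(3):
            w[i] += lr * (y - p) * x[i]   # gradient step toward the label
    return w

data = [make_example() for _ in range(500)]
w = train_logistic(data)

# The weight on feature 2 ends up far larger than the others: that is the
# "indicator" the model found, and we can read it straight out of the model.
print([round(wi, 2) for wi in w])
```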

  • None of those things directly threatened the power of an oligarch.

    They are examples of complex and difficult tasks that humans are capable of when working together, implying, by comparison, that reordering society is also achievable.

  • 196 votes
    30 posts
    2 views
    This guy gets it. And from my professional experience, Gen Z sucks at separating the two.
  • 295 votes
    47 posts
    22 views
    I worked in a bank for a bit. Literally any transaction that's large and unusual for the account will be flagged. Also, people do bonkers things with their money for the stupidest reasons all the time, so all one has to do when making large transactions is be prepared to talk to the bank and explain what's going on. Unless, of course, you are handling money in relation to organized crime, in which case you were fucked the moment the money touched the banking system.
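A toy sketch of what "large and unusual for the account" can mean in practice: compare each transaction to the account's own history and flag outliers. The threshold and numbers here are illustrative; real transaction monitoring uses far richer rules and models.

```python
import statistics

def is_flagged(history, amount, sigmas=3):
    """Flag amounts more than `sigmas` standard deviations above the account's mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return amount > mean + sigmas * stdev

history = [120, 80, 95, 150, 110, 60, 130]  # typical activity for this account

print(is_flagged(history, 140))     # within normal range: not flagged
print(is_flagged(history, 25_000))  # large and unusual: flagged
```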
  • 30 votes
    5 posts
    26 views
    That is a drive unit. The robot is bending down next to it wearing a vest.
  • We caught 4 states sharing personal health data with Big Tech

    Technology
    327 votes
    12 posts
    49 views
    Can these types of post include countries in the title? This USA defaultism makes the experience worse for everyone else with no benefit whatsoever
  • Is the OutfinityGift project better than all NFTs?

    Technology
    1 vote
    1 post
    11 views
    No one has replied
  • 1k votes
    95 posts
    16 views
    Obviously the law must be simple enough to follow, so that for Jim's furniture shop it is not a problem nor too high a cost to respect it, but it must be clear that if you break it you can cease to exist as a company.

    I think this may be the root of our disagreement: I do not believe that there is any law-making body today that is capable of an elegantly simple law.

    I could be too naive, but I think it is possible. We also definitely have a difference of opinion when it comes to the severity of the infraction. In my mind, while privacy is important, it should not have the same level of punishment associated with it as something on the level of poisoning waterways; I think that a privacy law should hurt, but be something a company can learn from, while the poisoning case should result in the bankruptcy of the company.

    The severity is directly proportional to the number of people affected. If you violate the privacy of 200 million people, it is the same as if you poison the water of 10 people. And while in the poisoning scenario it could be better to jail the responsible people (for a very, very long time) and let the company survive to clean the water, once your privacy is violated there is no way back; a company could not fix it.

    The issue we find ourselves with today is that the aggregate of all privacy breaches makes it harmful to the people, but with a sizeable enough fine, I find it hard to believe that there would be major or lasting damage.

    So how much money is your privacy worth?

    For this reason I don't think it is wise to write laws that will bankrupt a company off of one infraction which was not directly or indirectly harmful to the physical well-being of the people - and I am using "indirectly" a little more strictly than I would like, since, as I said before, the aggregate of all the information is harmful.

    The point is that the goal is not to bankrupt companies but to have them behave right. The penalty associated with every law IS the tool that makes you respect the law. And it must be so high that you don't want to break the law.

    I would have to look into the laws in question, but on a surface level I think that any company should be subject to the same baseline privacy laws, so if there isn't anything screwy within the law that Apple, Google, and Facebook are ignoring, I think it should apply to them.

    Trust me on this one - direct experience - payment processors have a lot more rules to follow to be able to work. I do not want jail time for the CEO by default, but he needs to know that he will pay personally if the company breaks the law; it is the only way to make him run the company while being sure that it follows the laws.

    For some reason I don't have my usual cynicism when it comes to this issue. I think that the magnitude of losses that vested interests have in these companies would make it so that companies would police themselves for fear of losing profits. That being said, I wouldn't be opposed to some form of personal accountability for corporate leadership, but I fear that they will just end up finding a way to create a scapegoat every time.

    It is not cynicism. I simply think that a huge fine to a single person (the CEO, for example) is useless, since it is too easy to avoid, and if it is really huge it would realistically never be paid anyway, so it achieves nothing useful, since the net worth of this kind of people is only on paper. So if you slap a 100 billion fine on Musk he will never pay, because he does not have the money to pay even if technically he is worth way more than that. Jail time instead is something that even Musk can experience.

    In general I like laws that are as objective as possible. I think that a privacy law should be written so that it is very objectively overbearing, but has a smaller fine associated with it. This way the law is very clear on right and wrong, while also giving businesses time and incentive to change their practices without having to sink large amounts of expense into lawyers reviewing every minute detail, which is the logical conclusion of the one-infraction-bankruptcy system that you seem to be supporting.

    Then you write a law that explicitly states what you can do, and what is not allowed is forbidden by default.
  • Acute Leukemia Burden Trends and Future Predictions

    Technology
    5 votes
    5 posts
    25 views
    Looks like the delay in 2011 was so big that the data only became available after the 2017 data did
  • 0 votes
    2 posts
    8 views
    It's a shame. AI has potential but most people just want to exploit its development for their own gain.