Human-level AI is not inevitable. We have the power to change course

Technology
  • We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.

    Irrelevant at best, harmful at worst 🤷

    We’re not even remotely close.

    That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.

  • I’m sorry, but this reads to me like “I am certain I am right, so evidence that implies I’m wrong must be wrong.” And while sometimes that really is the right approach to take, more often than not you really should update your confidence in the hypothesis rather than discard the contradictory data.

    But, there must be SOMETHING which is a good measure of the ability to reason, yes? If reasoning is an actual thing that actually exists, then it must be detectable, and there must be a way to detect it. What benchmark do you propose?

    You don’t have to seriously answer, but I hope you see where I’m coming from. I assume you’ve read Searle, and I cannot express to you the contempt in which I hold him. I think, if we are to be scientists and not philosophers (and good philosophers should be scientists too) we have to look to the external world to test our theories.

    For me, what goes on inside does matter, but what goes on inside everyone everywhere is just math, and I haven’t formed an opinion about what math is really most efficient at instantiating reasoning, or thinking, or whatever you want to talk about.

    To be honest, the other day I was convinced it was actually derivatives and integrals, and, because of this, that analog computers would make much better AIs than digital computers. (But Hava Siegelmann’s book is expensive, and, while I had briefly lifted my book-buying moratorium, I think I have to reimpose it.)

    Hell, maybe Penrose is right and we need quantum effects (I really really really doubt it, but, to the extent that it is possible for me, I try to keep an open mind).

    🤷‍♂️

    I'm not sure I can give a satisfying answer. There are a lot of moving parts, and a big issue is definitions, which you also touch upon with your reference to Searle.

    I agree with the sentiment that there must be some objective measure of reasoning ability. To me, reasoning is more than following logical rules; it's also about interpreting the intent of the task. The reasoning models are very sensitive to initial conditions and tend to drift when the question is not super precise or when they don't have sufficient context.

    AI models are, in a sense, very fragile with respect to their input. Organic intelligence, on the other hand, is resilient and also heuristic. I don't have any specific idea for the test, but it should probe the ability to solve a very ill-posed problem.
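    A minimal sketch of what such a fragility test could look like, assuming a hypothetical ask_model() wrapper around whatever model is under test (the dummy below just returns a canned answer so the script runs), with one toy puzzle reworded three ways as the probe:

    ```python
    # Fragility probe: ask semantically equivalent paraphrases of one question
    # and check whether the model's answers stay consistent across wordings.

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in: replace with a real call to the model under test.
        return "12"

    PARAPHRASES = [
        "A farmer keeps chickens and cows: 20 heads, 56 legs. How many chickens?",
        "Among his chickens and cows a farmer counts 56 legs and 20 heads. Chickens?",
        "Chickens and cows together: 20 animals, 56 legs. Give the chicken count.",
    ]

    def consistency(prompts: list[str]) -> float:
        """Fraction of paraphrases that agree with the most common answer."""
        answers = [ask_model(p).strip().lower() for p in prompts]
        majority = max(set(answers), key=answers.count)
        return answers.count(majority) / len(answers)

    print(consistency(PARAPHRASES))  # a robust reasoner should sit at 1.0
    ```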

  • This post did not contain any content.

    A lot of people are making baseless claims about it being inevitable... I mean, it could happen, but it's not inevitable that we'll solve the hard problem of consciousness.

  • We’re not even remotely close.

    That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.

    I think the argument is that we're not remotely close when considering the specific techniques used by the current generation of AI tools. Of course, people could make a new discovery any day and achieve AGI, but that's a different discussion.

  • AI will not threaten humans out of sadism or boredom, but because it takes jobs and leaves people unemployed.

    When there is lower demand for human labor, according to the rule of supply and demand, prices (i.e., wages) for human labor go down.
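    A toy linear market makes that mechanism explicit; the coefficients below are made up purely for illustration:

    ```python
    # Linear labor market: demand D(w) = a - b*w, supply S(w) = c + d*w.
    # Equilibrium where D(w) = S(w), i.e. w* = (a - c) / (b + d).
    # Automation shifts labor demand down (smaller a), so the wage w* falls.

    def equilibrium_wage(a: float, b: float, c: float, d: float) -> float:
        return (a - c) / (b + d)

    print(equilibrium_wage(a=100, b=2, c=20, d=2))  # before automation: 20.0
    print(equilibrium_wage(a=60, b=2, c=20, d=2))   # demand shifted down: 10.0
    ```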

    The real crisis is one of sinking wages, lack of social safety nets, and lack of future prospects for workers. That's what should actually be discussed.

    Not sure we will even really notice that in our lifetime; it has taken decades to automate things like invoice processing. Heck, in the US they still can't get proper bank connections working.

    Also, tractors have replaced a lot of workers on the land, while computers have both eliminated a lot of office jobs and created a lot of new ones at the same time.

    Jobs will change, that's for sure, and I think most of the heavy-labour jobs will become more expensive, since they are harder to replace.

  • This post did not contain any content.

    Human level? That’s not setting the bar very high. Surely the aim would be to surpass human level, or why bother?

  • The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:

    1. Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,

    2. Or we wipe ourselves out before we get the chance.

    Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That's what humans do: improve our technology.

    The article points to cloning as a counterexample, but that’s not a technological dead end; it’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.

    something that cannot, even in principle, be replicated in silicon

    As if silicon were the only technology we have to build computers.

  • something that cannot, even in principle, be replicated in silicon

    As if silicon were the only technology we have to build computers.

    Did you genuinely not understand the point I was making, or are you just being pedantic? "Silicon" obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as "in non-biological substrates," I’m happy to oblige - but I have a feeling you already knew that.

  • Did you genuinely not understand the point I was making, or are you just being pedantic? "Silicon" obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as "in non-biological substrates," I’m happy to oblige - but I have a feeling you already knew that.

    And why is "non-biological" a limitation?

  • Ummm, no? If moneyed interests want it, then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with Windows 11? Did we stop Gemini from being forced down our throats?

    If capital wants it, capital gets it. 😞

    Couldn’t we have a good old-fashioned Butlerian Jihad?

  • And why is "non-biological" a limitation?

    I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.

    I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.

  • I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.

    I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.

    I personally think the additional component that modern approaches miss (suppose it's energy) is the sheer amount of entropy a human brain gets - plenty of many-times-duplicated sensory signals with pseudo-random fluctuations. I don't know how one could use lots of entropy to replace lots of computation (OK, I know what the Monte Carlo method is, just not how it applies to AI), but superficially this seems to be the way that will be taken at some point.
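    (The textbook toy example of spending randomness instead of exact computation is Monte Carlo estimation - the sketch below gets π from random points; whether anything like this scales up to brain-like processing is exactly the open question.)

    ```python
    # Monte Carlo: estimate pi by sampling random points in the unit square
    # and counting how many land inside the quarter circle x^2 + y^2 <= 1.
    import random

    def estimate_pi(samples: int) -> float:
        inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                     for _ in range(samples))
        return 4.0 * inside / samples

    print(estimate_pi(1_000_000))  # ~3.14; error shrinks about as 1/sqrt(samples)
    ```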

    On your point - I agree.

    I'd say we might reach AGI soon enough, but it will be impractical to use compared to a human.

    Matching the brain's efficiency, though, is still very far away, because the human brain has undergone, so to speak, an optimization/compression powered by the energy of evolution since the beginning of life on Earth.

  • Human level? That’s not setting the bar very high. Surely the aim would be to surpass human level, or why bother?

    Yeah. Cheap labor is so much better than this bullshit.

  • This post did not contain any content.

    Why would we want to? 99% of the issues people have with "AI" are just problems with society more broadly that AI didn't really cause, only exacerbated. I think it's absurd to just reject this entire field because of a bunch of shitty fads going on right now with LLMs and image generators.

  • We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.

    Irrelevant at best, harmful at worst 🤷

    In some dimensions, current-day LLMs are already superintelligent. They are extremely good knowledge-retrieval engines that can far outperform traditional search engines, once you learn how to use them properly. No, they are not AGIs, because they're not sentient or self-motivated, but I'm not sure those are desirable or useful dimensions of intellect to work towards anyway.
