Human-level AI is not inevitable. We have the power to change course

Technology
  • We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.

    Irrelevant at best, harmful at worst 🤷

    We’re not even remotely close.

    That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.

  • I’m sorry, but this reads to me like “I am certain I am right, so evidence that implies I’m wrong must be wrong.” And while sometimes that really is the right approach to take, more often than not you really should update the confidence in your hypothesis rather than discarding contradictory data.

But there must be SOMETHING which is a good measure of the ability to reason, yes? If reasoning is an actual thing that actually exists, then it must be detectable, and there must be a way to detect it. What benchmark do you propose?

    You don’t have to seriously answer, but I hope you see where I’m coming from. I assume you’ve read Searle, and I cannot express to you the contempt in which I hold him. I think, if we are to be scientists and not philosophers (and good philosophers should be scientists too) we have to look to the external world to test our theories.

    For me, what goes on inside does matter, but what goes on inside everyone everywhere is just math, and I haven’t formed an opinion about what math is really most efficient at instantiating reasoning, or thinking, or whatever you want to talk about.

    To be honest, the other day I was convinced it was actually derivatives and integrals, and, because of this, that analog computers would make much better AIs than digital computers. (But Hava Siegelmann’s book is expensive, and, while I had briefly lifted my book buying moratorium, I think I have to impose it again).

    Hell, maybe Penrose is right and we need quantum effects (I really really really doubt it, but, to the extent that it is possible for me, I try to keep an open mind).

🤷‍♂️

I'm not sure I can give a satisfying answer. There are a lot of moving parts, and a big issue is definitions, which you also touch upon with your reference to Searle.

    I agree with the sentiment that there must be some objective measure of reasoning ability. To me, reasoning is more than following logical rules. It's also about interpreting the intent of the task. The reasoning models are very sensitive to initial conditions and tend to drift when the question is not super precise or if they don't have sufficient context.

The AI models are, in a sense, very fragile with respect to their input. Organic intelligence, on the other hand, is resilient and also heuristic. I don't have any specific idea for the test, but it should test the ability to solve a very ill-posed problem.
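A rough sketch of what such a test could look like: ask the same ill-posed question in several phrasings and measure how consistent the answers are. Everything here is a hypothetical stand-in - `ask_model`, `stub_model`, and the paraphrases are made up for illustration, and a real test would call an actual model API and would need a far more careful notion of answer equivalence.

```python
def consistency_score(ask_model, paraphrases):
    """Ask the same ill-posed question in several phrasings and
    return the fraction of answers that agree with the majority."""
    answers = [ask_model(p) for p in paraphrases]
    majority = max(set(answers), key=answers.count)
    return answers.count(majority) / len(answers)

# Stub standing in for a real model call, so the sketch runs as-is.
def stub_model(prompt):
    return "42" if "meaning" in prompt else "unknown"

paraphrases = [
    "What is the meaning of life?",
    "Life's meaning is what, exactly?",
    "Tell me the point of existence.",
]
score = consistency_score(stub_model, paraphrases)  # 2 of 3 answers agree
```

A model that "drifts" on imprecise questions would score low under any such metric, which is one way to make the fragility complaint measurable.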

  • This post did not contain any content.

A lot of people are making baseless claims about it being inevitable... I mean, it could happen, but solving the hard problem of consciousness is not inevitable.

  • We’re not even remotely close.

    That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.

I think the argument is that we're not remotely close when considering the specific techniques used by the current generation of AI tools. Of course, people could make a new discovery any day and achieve AGI, but that's a different discussion.

  • AI will not threaten humans due to sadism or boredom, but because it takes jobs and makes people jobless.

When there is lower demand for human labor, then, by the rule of supply and demand, the price of human labor (i.e., wages) goes down.

    The real crisis is one of sinking wages, lack of social safety nets, and lack of future perspective for workers. That's what should actually be discussed.

I'm not sure we will even really notice that in our lifetime; it is taking decades to automate things like invoice processing. Heck, in the US they can't even get proper bank connections set up.

Also, tractors have replaced a lot of workers on the land, and computers have both eliminated a lot of office jobs and created a lot at the same time.

Jobs will change, that's for sure, and I think most of the heavy-labour jobs will become more expensive, since they are harder to replace.

  • This post did not contain any content.

    Human level? That’s not setting the bar very high. Surely the aim would be to surpass human, or why bother?

  • The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:

    1. Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,

    2. Or we wipe ourselves out before we get the chance.

Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That's what humans do: improve our technology.

The article points to cloning as a counterexample, but that’s not a technological dead end - that’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.

    something that cannot, even in principle, be replicated in silicon

    As if silicon were the only technology we have to build computers.

  • something that cannot, even in principle, be replicated in silicon

    As if silicon were the only technology we have to build computers.

    Did you genuinely not understand the point I was making, or are you just being pedantic? "Silicon" obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as "in non-biological substrates," I’m happy to oblige - but I have a feeling you already knew that.

  • Did you genuinely not understand the point I was making, or are you just being pedantic? "Silicon" obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as "in non-biological substrates," I’m happy to oblige - but I have a feeling you already knew that.

    And why is "non-biological" a limitation?

  • Ummm no? If moneyed interests want it then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with windows 11? Did we stop Gemini from being forced down our throats?

    If capital wants it capital gets it. 😞

Couldn’t we have a good old-fashioned Butlerian Jihad?

  • And why is "non-biological" a limitation?

    I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.

    I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.

  • I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.

    I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.

I personally think that the additional component modern approaches miss (suppose it's energy) is the sheer amount of entropy a human brain gets - plenty of many-times-duplicated sensory signals with pseudo-random fluctuations. I don't know how one can use lots of entropy to replace lots of computation (OK, I know what the Monte Carlo method is, just not how it applies to AI), but superficially this seems to be the way that will be taken at some point.

    On your point - I agree.

I'd say we might reach AGI soon enough, but it will be impractical to use compared to a human.

Matching its efficiency, though, is something very far away, because the human brain has undergone, so to say, an optimization/compression powered by the energy of evolution since the beginning of life on Earth.
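On the Monte Carlo aside above: the classic illustration of trading entropy for computation is estimating π from random points, with no calculus involved. This is only a generic sketch of the method, not a claim about how it would apply to AI.

```python
import random

def estimate_pi(samples, seed=0):
    """Estimate pi by drawing random points in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = sum(
        1
        for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

approx = estimate_pi(100_000)  # close to 3.14159
```

More random samples buy more accuracy: the error shrinks roughly as 1/sqrt(samples), so randomness really does substitute for deterministic computation here.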

  • Human level? That’s not setting the bar very high. Surely the aim would be to surpass human, or why bother?

    Yeah. Cheap labor is so much better than this bullshit

  • This post did not contain any content.

    Why would we want to? 99% of the issues people have with "AI" are just problems with society more broadly that AI didn't really cause, only exacerbated. I think it's absurd to just reject this entire field because of a bunch of shitty fads going on right now with LLMs and image generators.

  • We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.

    Irrelevant at best, harmful at worst 🤷

In some dimensions, current-day LLMs are already superintelligent. They are extremely good knowledge retrieval engines that can far outperform traditional search engines, once you learn how to use them properly. No, they are not AGIs, because they're not sentient or self-motivated, but I'm not sure those are desirable or useful dimensions of intellect to work towards anyway.

  • We’re not even remotely close.

    That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.

    That's true in a somewhat abstract way, but I just don't see any evidence of the claim that it is just around the corner. I don't see what currently existing technology can facilitate it. Faster-than-light travel could also theoretically be just around the corner, but it would surprise me if it was, because we just don't have the technology.

    On the other hand, the people who push the claim that AGI is just around the corner usually have huge vested interests.

In some dimensions, current-day LLMs are already superintelligent. They are extremely good knowledge retrieval engines that can far outperform traditional search engines, once you learn how to use them properly. No, they are not AGIs, because they're not sentient or self-motivated, but I'm not sure those are desirable or useful dimensions of intellect to work towards anyway.

    I think that's a very generous use of the word "superintelligent". They aren't anything like what I associate with that word anyhow.

I also don't really think they are knowledge retrieval engines. I use them extensively in my daily work, for example to write emails and generate ideas. But when it comes to facts, they are flaky at best. It's more of a free-association game than knowledge retrieval, IMO.

  • We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.

    Irrelevant at best, harmful at worst 🤷

    "Dude trust me, just give me 40 billion more dollars, lobby for complete deregulation of the industry, and get me 50 more petabytes of data, then we will have a little human in the computer! RealshitGPT will have human level intelligence!"
