
Human-level AI is not inevitable. We have the power to change course

Technology
  • This post did not contain any content.

    Ummm no? If moneyed interests want it, then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with Windows 11? Did we stop Gemini from being forced down our throats?

    If capital wants it capital gets it. 😞

  • Ummm no? If moneyed interests want it, then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with Windows 11? Did we stop Gemini from being forced down our throats?

    If capital wants it capital gets it. 😞

    😳 unless we destroy capitalism? 👉🏾👈🏾

  • This post did not contain any content.

    We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.

    Irrelevant at best, harmful at worst 🤷

  • Ummm no? If moneyed interests want it, then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with Windows 11? Did we stop Gemini from being forced down our throats?

    If capital wants it capital gets it. 😞

    Use Linux and don’t have any of those issues.

    Get off the capitalist owned platforms.

  • 😳 unless we destroy capitalism? 👉🏾👈🏾

    The only problem with destroying capitalism is deciding who gets all the nukes.

  • Ummm no? If moneyed interests want it, then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with Windows 11? Did we stop Gemini from being forced down our throats?

    If capital wants it capital gets it. 😞

    In the US, sure, but there have been class revolts in other nations. I’m not saying they led to good outcomes, but King Louis XVI was rich. And being rich did not save him. There was a capitalist class in China during the Cultural Revolution. They didn’t make it through. If it means we won’t go extinct, why can’t we have a revolution to prevent extinction?

  • We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.

    Irrelevant at best, harmful at worst 🤷

    How do you know we’re not remotely close to AGI? Do you have any expertise on the issue? And, expertise is not “I can download Python libraries and use them” it is “I can explain the mathematics behind what is going on, and understand the technical and theoretical challenges”.

  • How do you know we’re not remotely close to AGI? Do you have any expertise on the issue? And, expertise is not “I can download Python libraries and use them” it is “I can explain the mathematics behind what is going on, and understand the technical and theoretical challenges”.

    Part of this is a debate on what the definition of intelligence and/or consciousness is, which I am not qualified to discuss. (I say "discuss" instead of "answer" because there is not an agreed upon answer to either of those.)

    That said, one of the main purposes of AGI would be the ability to learn novel subject matter, and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that, on a fundamental level. They require humans to frame their training data by defining what the success criteria are, or they spit out the statistically likely human-like response based on all of the human-generated content they've consumed.

    In short, they cannot understand a concept that humans haven't yet understood, and can only echo solutions that humans have already tried.
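
    To make that "statistically likely response" point concrete, here's a deliberately tiny sketch (plain Python, with a made-up toy corpus, nothing like a real LLM's implementation): a bigram model that can only ever re-emit continuations it has already seen in human-written text.

    ```python
    import random
    from collections import defaultdict, Counter

    # Toy "training data": human-written text the model can only echo back.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(start, length=8):
        """Emit a statistically likely continuation, one word at a time."""
        word, out = start, [start]
        for _ in range(length):
            counts = following[word]
            if not counts:
                break
            # Sample in proportion to how often humans wrote this continuation.
            words, weights = zip(*counts.items())
            word = random.choices(words, weights=weights)[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the mat . the dog sat"
    ```

    It never produces a word pairing that wasn't already in its training text, which is the point: scaling this idea up buys fluency, not understanding.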

  • Part of this is a debate on what the definition of intelligence and/or consciousness is, which I am not qualified to discuss. (I say "discuss" instead of "answer" because there is not an agreed upon answer to either of those.)

    That said, one of the main purposes of AGI would be the ability to learn novel subject matter, and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that, on a fundamental level. They require humans to frame their training data by defining what the success criteria are, or they spit out the statistically likely human-like response based on all of the human-generated content they've consumed.

    In short, they cannot understand a concept that humans haven't yet understood, and can only echo solutions that humans have already tried.

    I don’t see why AGI must be conscious, and the fact that you even bring it up makes me think you haven’t thought too hard about any of this.

    When you say “novel answers”, what do you mean? The questions on the IMO have never been asked of any human before the Math Olympiad, and almost all humans cannot answer those questions.

    Why does answering those questions not count as novel? What is a question whose answer you would count as novel, and which you yourself could answer? Presuming that you count yourself as intelligent.

  • This post did not contain any content.

    AI will not threaten humans due to sadism or boredom, but because it takes jobs and makes people jobless.

    When there is lower demand for human labor, according to the rule of supply and demand, prices (aka. wages) for human labor go down.

    The real crisis is one of sinking wages, lack of social safety nets, and lack of future prospects for workers. That's what should actually be discussed.

  • I don’t see why AGI must be conscious, and the fact that you even bring it up makes me think you haven’t thought too hard about any of this.

    When you say “novel answers”, what do you mean? The questions on the IMO have never been asked of any human before the Math Olympiad, and almost all humans cannot answer those questions.

    Why does answering those questions not count as novel? What is a question whose answer you would count as novel, and which you yourself could answer? Presuming that you count yourself as intelligent.

    What is a question whose answer you would count as novel, and which you yourself could answer?

    AI does not have genetics, and therefore no instincts shaped by billions of years of evolution,

    so when presented with a challenge that doesn't appear in its training data, such as whether to love your neighbor or not, it might not be able to answer, because that exact scenario was never part of what it learned from.

    Humans can answer it instinctively, because we have billions of years of experience behind us, backing us up and providing us with a solid capacity for long-term positive decision-making.

  • AI will not threaten humans due to sadism or boredom, but because it takes jobs and makes people jobless.

    When there is lower demand for human labor, according to the rule of supply and demand, prices (aka. wages) for human labor go down.

    The real crisis is one of sinking wages, lack of social safety nets, and lack of future prospects for workers. That's what should actually be discussed.

    But scary robots will take over the world! That's what all the movies are about! If it's in a movie, it has to be real.

  • This post did not contain any content.

    Honestly I welcome our AI overlords. They can't possibly fuck things up harder than we have.

  • How do you know we’re not remotely close to AGI? Do you have any expertise on the issue? And, expertise is not “I can download Python libraries and use them” it is “I can explain the mathematics behind what is going on, and understand the technical and theoretical challenges”.

    Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.

    I loathe python irrationally (and I guess I’m a masochist who likes to reinvent the wheel, programming-wise, lol) so I’ve written my own neural nets from scratch a few times.

    Most common models are trained by gradient descent, but this only works when you have a specific response in mind for certain inputs. You use the difference between the desired outcome and actual outcome to calculate a change in weights that would minimize that error.
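
    In case that's abstract, here's about the smallest possible version of it: a single linear "neuron" trained by gradient descent on a made-up target (the numbers are arbitrary and only there to show the desired-vs-actual error driving the weight update).

    ```python
    # Human-defined training data: (input, desired output) pairs.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
    w, b, lr = 0.0, 0.0, 0.01           # weights and learning rate

    for epoch in range(2000):
        for x, desired in data:
            actual = w * x + b
            error = actual - desired    # difference between actual and desired outcome
            # Gradient of the squared error with respect to w and b:
            w -= lr * 2 * error * x     # nudge the weights to shrink that error
            b -= lr * 2 * error

    print(w, b)  # w ends up near 2, b near 0: it learned the pattern we defined for it
    ```

    Everything a typical net "learns" is exactly this, repeated across millions or billions of weights: someone defines what the right answer is, and the weights get nudged toward it.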

    This has two major preventative issues for AGI: input size limits, and determinism.

    The weight matrices are set for a certain number of inputs. Unfortunately you can’t just add a new unit of input and assume the weights will be nearly the same. Instead you have to retrain the entire network. (Look up transfer learning if you want to learn more about this problem.)

    This input constraint is preventative of AGI because it means a network trained like this cannot have an input larger than a certain size. Problematic since the illusion of memory that LLMs like ChatGPT have comes from the fact that they run the entire conversation through the net. Also just problematic from a size and training time perspective, as increasing the input size dramatically increases basically everything else (roughly quadratically, for transformer-style attention).

    Point is, current models are only able to simulate memory by literally holding onto all the information and processing all of it for each new word which means there is a limit to its memory unless you retrain the entire net to know the answers you want. (And it’s slow af) Doesn’t sound like a mind to me…
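
    If that sounds like an exaggeration, a sketch of the whole "memory" mechanism fits in a dozen lines. (The model_generate function below is a made-up stand-in for the actual network call, and I'm counting characters instead of tokens to keep it short.)

    ```python
    MAX_CONTEXT = 4096   # hard input limit baked into the trained weights
    history = []

    def model_generate(prompt):
        # Stand-in for the real model: a real LLM would push this entire
        # prompt through the network, every single time.
        return f"(reply generated from a {len(prompt)}-character prompt)"

    def chat(user_message):
        history.append("User: " + user_message)
        prompt = "\n".join(history)       # the "memory" is the whole conversation, re-fed
        prompt = prompt[-MAX_CONTEXT:]    # silently truncated once it stops fitting
        reply = model_generate(prompt)
        history.append("Assistant: " + reply)
        return reply

    print(chat("Hi, remember that my cat is named Tuna."))
    print(chat("What is my cat's name?"))  # only "remembered" while it still fits in the prompt
    ```

    Nothing persists inside the weights between turns; drop a line out of that prompt and the "memory" of it is gone.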

    Now determinism is the real problem for AGI from a cognitive standpoint. The neural nets you’ve probably used are not thinking… at all. They literally are just a complicated predictive algorithm like linear regression. I’m dead serious. It’s basically regression just in a very high dimensional vector space.

    ChatGPT does not think about its answer. It doesn’t have any sort of object identification or thought delineation because it doesn’t have thoughts. You train it on a bunch of text and have it attempt to predict the next word. If it’s off, you do some math to figure out what weight modifications would have led it to a better answer.

    All these models do is what they were trained to do. Now they were trained to be able to predict human responses, so yeah, it sounds pretty human. They were trained to reproduce answers on Stack Overflow and Reddit etc. so they can answer those questions relatively well. And hey, it is kind of cool that they can even answer some questions they weren’t trained on, because they’re similar enough to the questions they were trained on… but it’s not thinking. It isn’t doing anything. The program is just multiplying numbers that were previously set by an input to find the most likely next word.
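
    The generation loop itself really is just that. Here's the shape of it with a toy vocabulary and random weights standing in for the billions of trained ones (so the output is gibberish, but the mechanics are the same: multiply, softmax, pick the likeliest next word, repeat).

    ```python
    import numpy as np

    vocab = ["the", "cat", "sat", "on", "mat", "."]
    rng = np.random.default_rng(0)
    W = rng.normal(size=(len(vocab), len(vocab)))       # stand-in for trained weights

    def next_word(word):
        x = np.zeros(len(vocab))
        x[vocab.index(word)] = 1.0                      # one-hot encode the current word
        logits = W @ x                                  # multiply numbers...
        probs = np.exp(logits) / np.exp(logits).sum()   # ...turn them into probabilities...
        return vocab[int(np.argmax(probs))]             # ...and take the most likely next word

    word = "the"
    for _ in range(6):
        word = next_word(word)
        print(word, end=" ")
    ```

    A real LLM replaces that single matrix with attention layers over the whole context, but there is still no step anywhere in the loop where anything "considers" an answer.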

    This is why LLMs can’t do math. Because they don’t actually see the numbers, they don’t know what numbers are. They don’t know anything at all because they’re incapable of thought. Instead there are simply patterns in which certain numbers show up and the model gets trained on some of them but you can get it to make incredibly simple math mistakes by phrasing the math slightly differently or just by surrounding it with different words because the model was never trained for that scenario.

    Models can only “know” as much as what was fed into them and hey, sometimes those patterns extend, but a lot of the time they don’t. And you can’t just say “you were wrong” because the model isn’t adaptive (capable of changing from inputs alone). You have to train it with the correct response in mind to get it to “learn”, which again takes time and really isn’t learning or intelligence at all.
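
    The "doesn't actually see the numbers" part is easy to demonstrate. To the model, digits are just more token IDs with no built-in notion of magnitude (toy tokenizer below; real tokenizers split text differently, but the point is the same).

    ```python
    # A toy "tokenizer": every distinct chunk of text just gets an arbitrary ID.
    vocab = {}
    def tokenize(text):
        return [vocab.setdefault(chunk, len(vocab)) for chunk in text.split()]

    print(tokenize("what is 12 + 7 ?"))   # [0, 1, 2, 3, 4, 5]
    print(tokenize("what is 13 + 7 ?"))   # [0, 1, 6, 3, 4, 5]
    # "12" and "13" become unrelated IDs (2 and 6). Nothing in the input says one
    # is larger than the other, or that "+" means addition; any arithmetic has to
    # be absorbed as a statistical pattern over text, which is why it breaks when
    # you phrase the math slightly differently.
    ```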

    Now there are some more exotic neural network architectures that could surpass these limitations.

    Currently I’m experimenting with Spiking Neural Nets which are much more capable of transfer learning and more closely model biological neurons along with other cool features like being good with temporal changes in input.

    However, there are significant obstacles with these networks and not as much research, because they only run well on specialized hardware (since they are meant to mimic biological neurons, which run simultaneously) and you kind of have to train them slowly.

    You can do some tricks to use gradient descent but doing so brings back the problems of typical ANNs (though this is still possibly useful for speeding up ANNs by converting them to SNNs and then building the neuromorphic hardware for them).

    SNNs with time based learning rules (typically some form of STDP which mimics Hebbian learning as per biological neurons) are basically the only kinds of neural nets that are even remotely capable of having thoughts and learning (changing weights) in real time. Capable as in “this could have discrete time dependent waves of continuous self modifying spike patterns which could theoretically be thoughts” not as in “we can make something that thinks.”

    Like these neural nets are good with sensory input and that’s about as far as we’ve gotten (hyperbole, but not by that much). But these networks are still fascinating, and they do help us test theories about how the human brain works, so eventually maybe we’ll make a real intelligent being with them, but that day isn’t even on the horizon currently.
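
    For the curious, here's a bare-bones sketch of the kind of thing I mean: one leaky integrate-and-fire neuron with a pair-based STDP rule. All the constants are arbitrary toy values (real SNN work uses proper simulators and, ideally, neuromorphic hardware), but it shows weights changing from spike timing alone, with no gradient descent and no labels.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_in, steps = 5, 500
    w = rng.uniform(0.2, 0.6, n_in)       # synaptic weights (what STDP modifies in real time)
    v, v_thresh, tau_m = 0.0, 1.0, 20.0   # membrane potential, spike threshold, leak constant
    pre_trace = np.zeros(n_in)            # decaying memory of recent presynaptic spikes
    post_trace = 0.0                      # decaying memory of recent postsynaptic spikes
    a_plus, a_minus, tau_s = 0.02, 0.021, 20.0

    for t in range(steps):
        pre = (rng.random(n_in) < 0.05).astype(float)   # random input spike trains
        pre_trace = pre_trace * np.exp(-1 / tau_s) + pre
        post_trace *= np.exp(-1 / tau_s)

        v = v * np.exp(-1 / tau_m) + w @ pre            # leaky integration of weighted spikes
        if v >= v_thresh:                               # postsynaptic spike
            v = 0.0
            post_trace += 1.0
            w += a_plus * pre_trace       # pre fired just before post -> strengthen (LTP)
        w -= a_minus * post_trace * pre   # pre fired just after post -> weaken (LTD)
        w = np.clip(w, 0.0, 1.0)

    print(w)  # the weights have drifted purely from relative spike timing
    ```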

    In conclusion, we are not remotely close to AGI. Current models that seem to think are verifiably not thinking and are incapable of it from a structural standpoint. You cannot make an actual thinking machine using the current mainstream model architectures.

    The closest alternative that might be able to do this (as far as I’m aware) is relatively untested and difficult to prototype (trust me, I’m trying). Furthermore, the requirements of learning and thinking largely prohibit the use of gradient descent or similar algorithms, meaning training must be done on a much more rigorous and time-consuming basis that is not economically favorable. Ergo, we’re not even all that motivated to move towards AGI territory.

    Lying to say we are close to AGI when we aren’t at all close, however, is economically favorable which is why you get headlines like this.

  • This post did not contain any content.

    It’s just a cash grab to take people’s jobs and give them to a chatbot that’s fed Wikipedia’s data on crack.

  • Honestly I welcome our AI overlords. They can't possibly fuck things up harder than we have.

    Can't they?

  • How do you know we’re not remotely close to AGI? Do you have any expertise on the issue? And, expertise is not “I can download Python libraries and use them” it is “I can explain the mathematics behind what is going on, and understand the technical and theoretical challenges”.

    Do you have any expertise on the issue?

    I hold a PhD in probabilistic machine learning and advise businesses on how to use AI effectively for a living, so yes.

    IMHO, there is simply nothing indicating that it's close. Sure, LLMs can do some incredibly clever-sounding word extrapolation, but the current "reasoning models" still don't actually reason. They are just LLMs with some extra steps.

    There is lots of information out there on the topic so I'm not going to write a long justification here. Gary Marcus has some good points if you want to learn more about what the skeptics say.

  • Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.

    This is a fantastic response. I'm saving this so I can use it to show people that LLMs are not thinking machines.

  • This post did not contain any content.

    We can change course if we can change course on capitalism.
