Human-level AI is not inevitable. We have the power to change course

  • Do you have any expertise on the issue?

    I hold a PhD in probabilistic machine learning and advise businesses on how to use AI effectively for a living, so yes.

    IMHO, there is simply nothing indicating that it's close. Sure, LLMs can do some incredibly clever-sounding word extrapolation, but the current "reasoning models" still don't actually reason. They are just LLMs with some extra steps.

    There is lots of information out there on the topic, so I'm not going to write a long justification here. Gary Marcus has some good points if you want to learn more about what the skeptics say.

    Gary Marcus is certainly good. It’s not as if I think that, say, LeCun, or any of the many people who think LLMs aren’t the way, are morons. I don’t think anyone thinks all the problems are currently solved. And I think long timelines are still plausible, but I think dismissing short timelines out of hand is thoughtless.

    My main gripe is how certain people are about things they know virtually nothing about, and how slapdash their reasoning is. It seems to me most people’s reasoning goes something like “there is no little man in the box, it’s just math, and math can’t think.” Of course, they say it in much fancier words, like “it’s just gradient descent,” as if human brains couldn’t have gradient descent baked in anywhere.

    But, out of interest, what is your take on the Stochastic Parrot? I find the arguments deeply implausible.

  • The only problem with destroying capitalism is deciding who gets all the nukes.

    Capitalism is just an economic system; I'm not sure what nukes have to do with it. It's not like billionaires directly own them and we have to distribute the "nuke wealth" to the people or anything lol

    So, how would you define AGI, and what sorts of tasks require reasoning? I would have thought earning a gold medal at the IMO would have been a reasoning task, but I’m happy to learn why I’m wrong.

    I definitely think that's remarkable. But I don't think scoring high on an external measure like a test is enough to prove the ability to reason. For reasoning, the process matters, IMO.

    Reasoning models work by Chain-of-Thought, which has been shown to give unfaithful accounts of a model's actual process (https://arxiv.org/abs/2305.04388).

    Maybe passing some math test is enough evidence for you, but I think it matters what's inside the box. For me, it has only proved that tests are a poor measure of the ability to reason.

  • Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.

    I loathe Python irrationally (and I guess I’m a masochist who likes to reinvent the wheel programming-wise lol), so I’ve written my own neural nets from scratch a few times.

    Most common models are trained by gradient descent, but this only works when you have a specific response in mind for certain inputs. You use the difference between the desired outcome and actual outcome to calculate a change in weights that would minimize that error.
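
    To make that concrete, here is a minimal sketch of that update rule for a single linear weight trained with mean squared error (the data, names and constants are purely illustrative, not from any particular library):

    ```python
    import numpy as np

    # Toy data: learn y = 3x from (input, desired output) pairs.
    x = np.array([[1.0], [2.0], [3.0]])
    y = np.array([[3.0], [6.0], [9.0]])

    w = np.zeros((1, 1))   # the weight we want to learn
    lr = 0.05              # learning rate

    for step in range(200):
        pred = x @ w                 # the network's actual output
        error = pred - y             # difference from the desired output
        grad = x.T @ error / len(x)  # gradient of the mean squared error w.r.t. w
        w -= lr * grad               # nudge the weight to shrink the error

    print(w)  # converges towards 3.0
    ```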

    This creates two major obstacles for AGI: input size limits and determinism.

    The weight matrices are sized for a certain number of inputs. Unfortunately you can’t just add a new unit of input and assume the weights will stay nearly the same; instead you have to retrain the entire network. (Look up transfer learning if you want to learn more about attempts to get around this.)

    This input constraint works against AGI because it means a network trained like this cannot accept an input larger than a certain size. That is a problem, since the illusion of memory that LLMs like ChatGPT have comes from running the entire conversation through the net. It is also a problem from a size and training-time perspective, since increasing the input size rapidly increases the cost of basically everything else (roughly quadratically, in the case of transformer attention).

    Point is, current models can only simulate memory by literally holding onto all the information and reprocessing all of it for each new word, which means there is a hard limit on their memory unless you retrain the entire net to know the answers you want. (And it’s slow af.) Doesn’t sound like a mind to me…
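
    As an illustration of that "memory," a chat loop has to look roughly like the toy sketch below. Here generate_reply is a hypothetical stand-in for whatever model actually gets called, and the character-based cutoff stands in for the real token limit:

    ```python
    MAX_CONTEXT_CHARS = 4096   # stand-in for a fixed input window (real models count tokens)

    def chat(generate_reply):
        """generate_reply is a hypothetical stand-in for the model being called."""
        conversation = []      # the model keeps no state of its own; the caller holds everything
        while True:
            user_msg = input("> ")
            conversation.append(("user", user_msg))

            # The entire history is re-fed through the net on every single turn...
            prompt = "\n".join(f"{role}: {text}" for role, text in conversation)

            # ...and anything past the fixed input size is simply cut off ("forgotten").
            prompt = prompt[-MAX_CONTEXT_CHARS:]

            reply = generate_reply(prompt)
            conversation.append(("assistant", reply))
            print(reply)
    ```

    Nothing outside the conversation list persists between turns; that list is the whole "memory."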

    Now, determinism is the real problem for AGI from a cognitive standpoint. The neural nets you’ve probably used are not thinking… at all. They literally are just a complicated predictive algorithm, like linear regression. I’m dead serious. It’s basically regression, just in a very high-dimensional vector space.

    ChatGPT does not think about its answer. It doesn’t have any sort of object identification or thought delineation because it doesn’t have thoughts. You train it on a bunch of text and have it attempt to predict the next word. If it’s off, you do some math to figure out what weight modifications would have led it to a better answer.
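
    In other words, the training data is just (context so far, next word) pairs sliced out of ordinary text, something like:

    ```python
    text = "the cat sat on the mat".split()

    # Every position in the text becomes one training example:
    # (all the words so far, the word that should come next).
    pairs = [(text[:i], text[i]) for i in range(1, len(text))]

    for context, target in pairs:
        print(context, "->", target)
    # ['the'] -> cat
    # ['the', 'cat'] -> sat
    # ['the', 'cat', 'sat'] -> on
    # ...
    ```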

    All these models do is what they were trained to do. Now, they were trained to predict human responses, so yeah, it sounds pretty human. They were trained to reproduce answers from Stack Overflow, Reddit, etc., so they can answer those questions relatively well. And hey, it is kind of cool that they can even answer some questions they weren’t trained on, because those are similar enough to the questions they were trained on… but it’s not thinking. It isn’t doing anything. The program is just multiplying the input by numbers that were previously set during training to find the most likely next word.

    This is why LLMs can’t do math: they don’t actually see the numbers, and they don’t know what numbers are. They don’t know anything at all, because they’re incapable of thought. Instead, there are simply patterns in which certain numbers show up, and the model gets trained on some of them, but you can get it to make incredibly simple math mistakes by phrasing the math slightly differently or just by surrounding it with different words, because the model was never trained for that scenario.
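
    You can see the "doesn't actually see the numbers" point for yourself with a tokenizer library such as tiktoken (assuming it is installed; the exact splits differ from model to model): digits get chopped into arbitrary sub-word pieces rather than treated as quantities.

    ```python
    import tiktoken   # OpenAI's open-source tokenizer library (pip install tiktoken)

    enc = tiktoken.get_encoding("cl100k_base")

    for s in ["12345", "12,345", "twelve thousand three hundred forty-five"]:
        ids = enc.encode(s)
        # The model never sees "a number": only these opaque integer ids,
        # plus whatever sub-word pieces they map back to.
        print(repr(s), "->", ids, "->", [enc.decode([i]) for i in ids])
    ```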

    Models can only “know” as much as what was fed into them, and hey, sometimes those patterns extend, but a lot of the time they don’t. And you can’t just tell it “you were wrong,” because the model doesn’t learn online (its weights can’t change from inputs alone). You have to retrain it with the correct response in mind to get it to “learn,” which again takes time and really isn’t learning or intelligence at all.

    Now, there are some more exotic neural network architectures that could surpass these limitations.

    Currently I’m experimenting with spiking neural nets (SNNs), which are much more capable of transfer learning and model biological neurons more closely, along with other cool features like handling temporal changes in input well.

    However, there are significant obstacles with these networks and not as much research, because they only run well on specialized neuromorphic hardware (they are meant to mimic biological neurons, which all operate in parallel) and you kind of have to train them slowly.

    You can do some tricks to use gradient descent, but doing so brings back the problems of typical ANNs (though this is still possibly useful for speeding up ANNs by converting them to SNNs and then building neuromorphic hardware for them).

    SNNs with time-based learning rules (typically some form of spike-timing-dependent plasticity, STDP, which mimics Hebbian learning in biological neurons) are basically the only kinds of neural nets that are even remotely capable of having thoughts and learning (changing weights) in real time. “Capable” as in “this could have discrete, time-dependent waves of continuously self-modifying spike patterns, which could theoretically be thoughts,” not as in “we can make something that thinks.”
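
    For the curious, a bare-bones pair-based STDP rule looks something like this toy sketch (not any particular simulator's API; the constants are made up):

    ```python
    import numpy as np

    # Pair-based STDP: if the presynaptic neuron fires shortly *before* the
    # postsynaptic one, strengthen the synapse; if it fires after, weaken it.
    A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression learning rates
    TAU = 20.0                      # time constant in milliseconds

    def stdp_update(w, t_pre, t_post):
        dt = t_post - t_pre
        if dt > 0:   # pre fired before post -> potentiation
            w += A_PLUS * np.exp(-dt / TAU)
        else:        # pre fired after post -> depression
            w -= A_MINUS * np.exp(dt / TAU)
        return float(np.clip(w, 0.0, 1.0))

    w = 0.5
    w = stdp_update(w, t_pre=10.0, t_post=15.0)   # causal pairing strengthens
    w = stdp_update(w, t_pre=30.0, t_post=22.0)   # acausal pairing weakens
    print(w)
    ```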

    Like, these neural nets are good with sensory input, and that’s about as far as we’ve gotten (hyperbole, but not by that much). But these networks are still fascinating, and they do help us test theories about how the human brain works, so maybe eventually we’ll make a real intelligent being with them. That day isn’t even on the horizon currently, though.

    In conclusion, we are not remotely close to AGI. Current models that seem to think are verifiably not thinking and are incapable of it from a structural standpoint. You cannot make an actual thinking machine using the current mainstream model architectures.

    The closest alternative that might be able to do this (as far as I’m aware) is relatively untested and difficult to prototype (trust me, I’m trying). Furthermore, the requirements of learning and thinking largely prohibit the use of gradient descent or similar algorithms, meaning training must be done on a much more rigorous and time-consuming basis that is not economically favorable. Ergo, we’re not even all that motivated to move towards AGI territory.

    Lying and saying we are close to AGI when we aren’t at all close, however, is economically favorable, which is why you get headlines like this.

    Wow, what an insightful answer.

    I have been trying to separate the truth from the hype and learn more about how LLMs work, and this explanation is one of the best I’ve read on the topic. You strike a very good balance by going deep enough while still keeping it understandable.

    A question: I remember using Wolfram Alpha a lot back in university 15+ years ago. From a user perspective, it seems very similar to LLMs, but it was very accurate with math. From this, I take it that modern LLMs are not an evolution of that model, but WA still appeared to be ahead of its time. What is/was the difference?

    Gary Marcus is certainly good. It’s not as if I think that, say, LeCun, or any of the many people who think LLMs aren’t the way, are morons. I don’t think anyone thinks all the problems are currently solved. And I think long timelines are still plausible, but I think dismissing short timelines out of hand is thoughtless.

    My main gripe is how certain people are about things they know virtually nothing about, and how slapdash their reasoning is. It seems to me most people’s reasoning goes something like “there is no little man in the box, it’s just math, and math can’t think.” Of course, they say it in much fancier words, like “it’s just gradient descent,” as if human brains couldn’t have gradient descent baked in anywhere.

    But, out of interest, what is your take on the Stochastic Parrot? I find the arguments deeply implausible.

    I'm not saying that we can't ever build a machine that can think. You can do some remarkable things with math. I personally don't think our brains have baked in gradient descent, and I don't think neural networks are a lot like brains at all.

    The stochastic parrot is a useful vehicle for criticism, and I think there is some truth to it. But I also think LLMs display some super impressive emergent features. Still, I think they are really far from AGI.

  • Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI. […]

    Thank you for a great comment!

    So, how would you define AGI, and what sorts of tasks require reasoning? I would have thought earning a gold medal at the IMO would have been a reasoning task, but I’m happy to learn why I’m wrong.

    I think we should also set some energy limits for those tests. Before, it was assumed that the tests were done by humans, who can do them after eating some crackers and drinking a bit of water.

    Now we are comparing that to massive data centers that need nuclear reactors to have enough power to work through these problems...

  • Wow, what an insightful answer.

    I have been trying to separate the truth from the hype and learn more about how LLMs work, and this explanation is one of the best I’ve read on the topic. You strike a very good balance by going deep enough while still keeping it understandable.

    A question: I remember using Wolfram Alpha a lot back in university 15+ years ago. From a user perspective, it seems very similar to LLMs, but it was very accurate with math. From this, I take it that modern LLMs are not an evolution of that model, but WA still appeared to be ahead of its time. What is/was the difference?

    Thanks, I almost didn’t post because it turned into an essay of a comment lol. Glad you found it insightful.

    As for Wolfram Alpha, I’m definitely not an expert, but I’d guess the reason it was good at math is that it simply translated your problem from natural language into commands that could be sent to a math engine, which did the actual calculation.

    So it basically acted like a language translator, but from typed-out math to the input language of some advanced calculation program (like Wolfram Mathematica).

    Again, this is just speculation because I’m a bit too tired to look into it rn, but it seems plausible, since we had basic language translators online back then (I think…) and I’d imagine parsing written math is easier than natural language translation.
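
    That "translator" idea is easy to sketch; the following is only an illustration of the guess above (the tiny vocabulary and function names are invented), not how Wolfram Alpha actually works:

    ```python
    import operator

    # Map a tiny vocabulary of written math onto real operations.
    WORDS = {"plus": operator.add, "minus": operator.sub,
             "times": operator.mul, "divided by": operator.truediv}

    def parse_and_compute(question: str) -> float:
        q = question.lower().removeprefix("what is ").rstrip("?")  # Python 3.9+
        for word, op in WORDS.items():
            if f" {word} " in q:
                left, right = q.split(f" {word} ")
                return op(float(left), float(right))  # exact arithmetic, no guessing
        raise ValueError("can't parse: " + question)

    print(parse_and_compute("What is 12 times 7?"))       # 84.0
    print(parse_and_compute("What is 10 divided by 4?"))  # 2.5
    ```

    The point is just that once the words are mapped onto real operations, the arithmetic itself is computed exactly rather than predicted.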

  • I definitely think that's remarkable. But I don't think scoring high on an external measure like a test is enough to prove the ability to reason. For reasoning, the process matters, IMO.

    Reasoning models work by Chain-of-Thought, which has been shown to give unfaithful accounts of a model's actual process (https://arxiv.org/abs/2305.04388).

    Maybe passing some math test is enough evidence for you, but I think it matters what's inside the box. For me, it has only proved that tests are a poor measure of the ability to reason.

    I’m sorry, but this reads to me like “I am certain I am right, so evidence that implies I’m wrong must be wrong.” And while sometimes that really is the right approach to take, more often than not you really should update the confidence in your hypothesis rather than discarding contradictory data.

    But, there must be SOMETHING which is a good measure of the ability to reason, yes? If reasoning is an actual thing that actually exists, then it must be detectable, and there must be a way to detect it. What benchmark do you propose?

    You don’t have to seriously answer, but I hope you see where I’m coming from. I assume you’ve read Searle, and I cannot express to you the contempt in which I hold him. I think, if we are to be scientists and not philosophers (and good philosophers should be scientists too) we have to look to the external world to test our theories.

    For me, what goes on inside does matter, but what goes on inside everyone everywhere is just math, and I haven’t formed an opinion about what math is really most efficient at instantiating reasoning, or thinking, or whatever you want to talk about.

    To be honest, the other day I was convinced it was actually derivatives and integrals, and, because of this, that analog computers would make much better AIs than digital computers. (But Hava Siegelmann’s book is expensive, and, while I had briefly lifted my book buying moratorium, I think I have to impose it again).

    Hell, maybe Penrose is right and we need quantum effects (I really really really doubt it, but, to the extent that it is possible for me, I try to keep an open mind).

    🤷‍♂️

  • This post did not contain any content.

    The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:

    1. Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,

    2. Or we wipe ourselves out before we get the chance.

    Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That's what humans do: improve our technology.

    The article points to cloning as a counterexample, but that’s not a technological dead end; it’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.

    It’s just a cash grab to take people’s jobs and give them to a chatbot that’s fed Wikipedia’s data on crack.

    Don't confuse AGI with LLMs. Both being AI systems is the only thing they have in common. They couldn't be further apart when it comes to cognitive capabilities.

  • We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.

    Irrelevant at best, harmful at worst 🤷

    We’re not even remotely close.

    That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.

  • I’m sorry, but this reads to me like “I am certain I am right, so evidence that implies I’m wrong must be wrong.” And while sometimes that really is the right approach to take, more often than not you really should update the confidence in your hypothesis rather than discarding contradictory data.

    But, there must be SOMETHING which is a good measure of the ability to reason, yes? If reasoning is an actual thing that actually exists, then it must be detectable, and there must be a way to detect it. What benchmark do you propose?

    You don’t have to seriously answer, but I hope you see where I’m coming from. I assume you’ve read Searle, and I cannot express to you the contempt in which I hold him. I think, if we are to be scientists and not philosophers (and good philosophers should be scientists too) we have to look to the external world to test our theories.

    For me, what goes on inside does matter, but what goes on inside everyone everywhere is just math, and I haven’t formed an opinion about what math is really most efficient at instantiating reasoning, or thinking, or whatever you want to talk about.

    To be honest, the other day I was convinced it was actually derivatives and integrals, and, because of this, that analog computers would make much better AIs than digital computers. (But Hava Siegelmann’s book is expensive, and, while I had briefly lifted my book buying moratorium, I think I have to impose it again).

    Hell, maybe Penrose is right and we need quantum effects (I really really really doubt it, but, to the extent that it is possible for me, I try to keep an open mind).

    🤷‍♂️

    I'm not sure I can give a satisfying answer. There are a lot of moving parts, and a big issue is definitions, which you also touch upon with your reference to Searle.

    I agree with the sentiment that there must be some objective measure of reasoning ability. To me, reasoning is more than following logical rules. It's also about interpreting the intent of the task. The reasoning models are very sensitive to initial conditions and tend to drift when the question is not super precise or if they don't have sufficient context.

    The AI models are, in a sense, very fragile with respect to their input. Organic intelligence, on the other hand, is resilient and also heuristic. I don't have a specific test in mind, but it should probe the ability to solve a very ill-posed problem.

  • This post did not contain any content.

    A lot of people are making baseless claims about it being inevitable... I mean, it could happen, but solving the hard problem of consciousness is not inevitable.

  • We’re not even remotely close.

    That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.

    The truth is, you couldn’t possibly know either way.

    I think the argument is that we're not remotely close when you consider the specific techniques used by the current generation of AI tools. Of course, people could make a new discovery any day and achieve AGI, but that's a different discussion.

  • AI will not threaten humans due to sadism or boredom, but because it takes jobs and makes people jobless.

    When there is lower demand for human labor, then, according to the law of supply and demand, prices (i.e. wages) for human labor go down.

    The real crisis is one of sinking wages, lack of social safety nets, and lack of future prospects for workers. That's what should actually be discussed.

    Not sure we will even really notice that in our lifetime; it has taken decades to automate things like invoice processing. Heck, in the US they can't even get proper bank connections working.

    Also, tractors replaced a lot of workers on the land, and computers have both eliminated a lot of office jobs and created a lot of new ones at the same time.

    Jobs will change, that's for sure, and I think most heavy-labour jobs will become more expensive since they are harder to replace.

  • This post did not contain any content.

    Human level? That’s not setting the bar very high. Surely the aim would be to surpass humans, or why bother?

  • The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:

    1. Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,

    2. Or we wipe ourselves out before we get the chance.

    Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That's what humans do: improve our technology.

    The article points to cloning as a counterexample, but that’s not a technological dead end; it’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.

    something that cannot, even in principle, be replicated in silicon

    As if silicon were the only technology we have to build computers.

  • something that cannot, even in principle, be replicated in silicon

    As if silicon were the only technology we have to build computers.

    Did you genuinely not understand the point I was making, or are you just being pedantic? "Silicon" obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as "in non-biological substrates," I’m happy to oblige - but I have a feeling you already knew that.

  • Did you genuinely not understand the point I was making, or are you just being pedantic? "Silicon" obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as "in non-biological substrates," I’m happy to oblige - but I have a feeling you already knew that.

    And why is "non-biological" a limitation?
