Skip to content

Oncoliruses: LLM Viruses are the future and will be a pest, say good bye to decent tech.

Technology
33 11 0
  • They don't need to outcompete one another. Just outcompete our security.

    The issue is once we have a model good enough to do that task, the rest is natural selection and will evolve.

    Basically, endless training against us.

    The first model might be relatively shite, but it'll improve quickly. Probably reaching a plateau, and not a Sci fi singularity.

    I compared it to cancer because they are practicality the same thing. A cancer cell isn't intelligent, it just spreads and evolves to avoid being killed, not because it has emotions or desires, but because of natural selection.

  • Eh, no. The ability to generate text that mimics human working does not mean they are intelligent. And AI is a misnomer. It has been from the beginning. Now, from a technical perspective, sure, call em AI if you want. But using that as an excuse to skip right past the word "artificial" is disingenuous in the extreme.

    On the other hand, the way the term AI is generally used technically would be called GAI, or General Artificial Intelligence, which does not exist (and may or may not ever exist).

    Bottom line, a finely tuned statistical engine is not intelligent. And that's all LLM or any other generative "AI" is at the end of the day. The lack of actual intelligence is evidenced by the way they create statements that are factually incorrect at such a high rate. So, if you use the most common definition for AI, no, LLMs absolutely are not AI.

    I don’t think you even know what you’re talking about.

    You can define intelligence however you like, but if you come into a discussion using your own private definitions, all you get is people talking past each other and thinking they’re disagreeing when they’re not. Terms like this have a technical meaning for a reason. Sure, you can simplify things in a one-on-one conversation with someone who doesn’t know the jargon - but dragging those made-up definitions into an online discussion just muddies the water.

    The correct term here is “AI,” and it doesn’t somehow skip over the word “artificial.” What exactly do you think AI stands for? The fact that normies don’t understand what AI actually means and assume it implies general intelligence doesn’t suddenly make LLMs “not AI” - it just means normies don’t know what they’re talking about either.

    And for the record, the term is Artificial General Intelligence (AGI), not GAI.

  • So they are not intelligent, they just sound like they're intelligent... Look, I get it, if we don't define these words, it's really hard to communicate.

    It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.

  • I obviously understand that they are AI in the original computer science sense. But that is a very specific definition and a very specific context. "Intelligence" as it's used in natural language requires cognition, which is something that no computer is capable of. It implies an intellect and decision-making ability. None of which computers posses.

    We absolutely need to dispel this notion because it is already doing a great deal of harm all over. This language absolutely contributed to the scores of people that misuse and misunderstand it.

    It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.

  • Sorry, no LLM is ever going to spontaneously gain the abilities self-replicate. This is completely beyond the scope of generative AI.

    This whole hype around AI and LLMs is ridiculous, not to mention completely unjustified. The appearance of a vast leap forward in this field is an illusion. They're just linking more and more processor cores together, until a glorified chatbot can be made to appear intelligent. But this is struggling actual research and innovation in the field, instead turning the market into a costly, and destructive, arms race.

    The current algorithms will never "be good enough to copy themselves". No matter what a conman like Altman says.

    It's a computer program, give it access to a terminal and it can "cp" itself to anywhere in the filesystem or through a network.

    "a program cannot copy itself" have you heard of a fork bomb? Or any computer virus?

  • If you know that it's fancy autocomplete then why do you think it could "copy itself"?

    The output of an LLM is a different thing from the model itself. The output is a stream of tokens. It doesn't have access to the file systems it runs on, and certainly not the LLM's own compiled binaries (or even less source code) - it doesn't have access to the LLM's weights either.
    (Of course it would hallucinate that it does if asked)

    This is like worrying that the music coming from a player piano might copy itself to another piano.

    Give it access to the terminal and copying itself is trivial.

    And your example doesn't work, because that is the literal original definition of a meme and if you read the original meaning, they are sort of alive and can evolve by dispersal.

  • Give it access to the terminal and copying itself is trivial.

    And your example doesn't work, because that is the literal original definition of a meme and if you read the original meaning, they are sort of alive and can evolve by dispersal.

    Why would someone direct the output of an LLM to a terminal on its own machine like that? That just sounds like an invitation to an ordinary disaster with all the 'rm -rf' content on the Internet (aka training data). That still wouldn't be access on a second machine though, and also even if it could make a copy, it would be an exact copy, or an incomplete (broken) copy. There's no reasonable way it could 'mutate' and still work using terminal commands.

    And to be a meme requires minds. There were no humans or other minds in my analogy. Nor in your question.

  • Why would someone direct the output of an LLM to a terminal on its own machine like that? That just sounds like an invitation to an ordinary disaster with all the 'rm -rf' content on the Internet (aka training data). That still wouldn't be access on a second machine though, and also even if it could make a copy, it would be an exact copy, or an incomplete (broken) copy. There's no reasonable way it could 'mutate' and still work using terminal commands.

    And to be a meme requires minds. There were no humans or other minds in my analogy. Nor in your question.

    It is so funny that you are all like "that would never work, because there are no such things as vulnerabilities on any system"

    Why would I? the whole point is to create a LLM virus, and if the model is good enough, then it is not that hard to create.

  • It is so funny that you are all like "that would never work, because there are no such things as vulnerabilities on any system"

    Why would I? the whole point is to create a LLM virus, and if the model is good enough, then it is not that hard to create.

    Of course vulnerabilities exist. And creating a major one like this for an LLM would likely lead to it destroying things like a toddler (in fact this has already happened to a company run by idiots)

    But what it didn't do was copy-with-changes as would be required to 'evolve' like a virus. Because training these models requires intense resources and isn't just a terminal command.

  • Of course vulnerabilities exist. And creating a major one like this for an LLM would likely lead to it destroying things like a toddler (in fact this has already happened to a company run by idiots)

    But what it didn't do was copy-with-changes as would be required to 'evolve' like a virus. Because training these models requires intense resources and isn't just a terminal command.

    Who said they need to retrain? A small modification to their weights in each copy is enough. That's basically training with extra steps.

  • You should block Censys from scanning your network

    Technology technology
    4
    1
    20 Stimmen
    4 Beiträge
    4 Aufrufe
    S
    I mean, you can, or you can use it to assure your firewall is configured correctly. The entire Internet is scanning you at all times, why would you focus your attention on one of the services who is willing to share their results with you? Believe me, you probably have lower hanging fruit to pick.
  • 4 Stimmen
    2 Beiträge
    2 Aufrufe
    K
    You made this site, you say? What an odd coincidence! Were you inspired by the site you say you "stumbled upon" here? https://lemmy.world/post/33395761 Because it sure seems like the exact same site.
  • 738 Stimmen
    67 Beiträge
    393 Aufrufe
    K
    That has always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have suffient training on this subject or reliable sources for it to give you a confident answer". It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations if it isn't just saying absurd stuff on the face of it, it to do independent research to verify it, at which point you may as well have just researched it yourself in the first place. AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it at a software engineer to help me parse error codes when googling working or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case, that is pretty easy to confirm very quickly. The code either works as expected or it doesn't, and code is always tested before releasing it anyway. In research, it is great at helping you find a relevant source for your research across the internet or in a specific database. It is usually very good at summarizing a source for you to get a quick idea about it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing in your writing to make it clearer, correctly formatting your bibliography (with actual sources you provide or at least verify), etc. But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist even while attributed to real researcher. It can mix in unreliable information with reliable information becuase there is no difference to it. Put simply, it is not a reliable source of information... ever. Make sure you understand that.
  • 0 Stimmen
    1 Beiträge
    12 Aufrufe
    Niemand hat geantwortet
  • 586 Stimmen
    100 Beiträge
    557 Aufrufe
    B
    No, LCOE is an aggregated sum of all the cash flows, with the proper discount rates applied based on when that cash flow happens, complete with the cost of borrowing (that is, interest) and the changes in prices (that is, inflation). The rates charged to the ratepayers (approved by state PUCs) are going to go up over time, with inflation, but the effect of that on the overall economics will also be blunted by the time value of money and the interest paid on the up-front costs in the meantime. When you have to pay up front for the construction of a power plant, you have to pay interest on those borrowed funds for the entire life cycle, so that steadily increasing prices over time is part of the overall cost modeling.
  • How the Rubin Observatory Will Reinvent Astronomy

    Technology technology
    2
    1
    53 Stimmen
    2 Beiträge
    24 Aufrufe
    M
    Giant twice-reflecting mirror of low-expansion borrosilicate covered in pure silver and a giant digital camera with filters.
  • Iran asks its people to delete WhatsApp from their devices

    Technology technology
    1
    1
    0 Stimmen
    1 Beiträge
    12 Aufrufe
    Niemand hat geantwortet
  • the illusion of human thinking

    Technology technology
    2
    0 Stimmen
    2 Beiträge
    22 Aufrufe
    H
    Can we get more than just a picture of an Abstract?