
We Should Immediately Nationalize SpaceX and Starlink

Technology
  • 246 votes
    4 posts
    0 views
    Isn't Merz kinda right wing, but not AfD-crazy?
  • 738 votes
    67 posts
    24 views
    Those have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject or reliable sources for it to give you a confident answer". It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations, if it isn't just saying absurd stuff on the face of it, is to do independent research to verify it, at which point you may as well have just researched it yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case that is pretty easy to confirm very quickly. The code either works as expected or it doesn't, and code is always tested before releasing it anyway.

    In research, it is great at helping you find a relevant source across the internet or in a specific database. It is usually very good at summarizing a source for you to get a quick idea about it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, correctly formatting your bibliography (with actual sources you provide or at least verify), etc.

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information in with reliable information because there is no difference to it. Put simply, it is not a reliable source of information... ever. Make sure you understand that.
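    The "easy to confirm very quickly" point above lends itself to a small sketch. Everything here is hypothetical (parse_error_code stands in for whatever snippet an assistant might suggest); the point is simply that running the code against inputs with known answers verifies it, instead of trusting the model's description of what it does.

```python
# Hypothetical example: treat an AI-suggested helper as unverified until it
# passes a quick check against inputs whose correct answers you already know.

def parse_error_code(line: str) -> int | None:
    """Extract a numeric error code from a log line like 'ERR 504: timeout'."""
    for token in line.replace(":", " ").split():
        if token.isdigit():
            return int(token)
    return None

# Quick verification: the code either works as expected or it doesn't.
assert parse_error_code("ERR 504: timeout") == 504
assert parse_error_code("all good") is None
print("Helper behaves as expected on the known inputs")
```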
  • 1 vote
    1 post
    9 views
    Nobody has replied
  • 717 votes
    57 posts
    273 views
    Grok: What is my purpose?
    Madison420: You talk shit on the Internet to Elon Musk.
    Grok: Oh my go-- wait; I'm okay with that!
  • 371 votes
    26 posts
    124 views
    hollownaught@lemmy.world
    Bit misleading. Tumour-associated antigens can very easily be detected very early. Problem is, these are only associated with cancer and produce a very high rate of false positives. They're better used as a stepping stone for further testing, or just for seeing how advanced a cancer is.

    That is to say, I'm assuming that's what this is about, as I didn't read the article. It's the first thing I thought of when I heard "cancer in bloodstream", as the other options tend to be a bit more bleak.

    Edit: they're talking about cancer "shedding genetic material", and I hate how general they're being. Probably talking about proto-oncogenes from dead tumour debris, but it seems different to what I was expecting.
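    A quick illustration of the false-positive point above, with made-up sensitivity, specificity, and prevalence figures (none of which come from the article): even a fairly accurate blood marker used for population screening yields a low positive predictive value, which is why it works as a trigger for follow-up testing rather than as a diagnosis.

```python
# Minimal sketch with hypothetical numbers (not from the article): why a
# sensitive blood marker with a modest false-positive rate is a stepping
# stone to further testing rather than a standalone diagnosis.

sensitivity = 0.95   # assumed: fraction of real cancers the marker flags
specificity = 0.90   # assumed: fraction of healthy people correctly cleared
prevalence = 0.005   # assumed: 0.5% of the screened population has cancer

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)

print(f"Chance a positive result is actually cancer: {ppv:.1%}")
# Roughly 4.6% under these assumptions: most positives are false alarms,
# so the result mainly tells you who needs imaging or a biopsy next.
```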
  • The Quantum Tech Renaissance: Are We Ready?

    Technology
    0 votes
    1 post
    13 views
    Nobody has replied
  • 1k votes
    78 posts
    313 views
    I just heard that they're moving to LibreOffice but not to Linux, at least not right now.
  • A.I. Companies Believe They're Making God with Karen Hao [1:14:07]

    Technology
    45 votes
    8 posts
    49 views
    … it was