
Reddit will block the Internet Archive

Technology
156 votes · 101 posts · 1.6k views
  • 30 votes
    3 posts
    18 views
    Now all we have to do is decrease the fidelity of the actual world until it matches that of the AI's world model, and just like that you've got general purpose robots able to do everything that needs done.
  • 609 votes
    95 posts
    609 views
    hiramfromthechi@lemmy.world
    That's dope, glad it could help 🤟
  • 606 votes
    417 posts
    16k views
    "Why do I allow a romantic partner to set boundaries on the potential relationships I could form with others?" "It also just hurt to imagine him being with someone else and preferring them over me." My problem is exclusivity being the standard or default requirement for almost everyone, in many cases just because that's what everyone else is doing. This excludes, say, 95% of the population. It's already very improbable to hook up with someone compatible; add that requirement and, unless you have a very high "hook-up attempt" rate, you can just forget the whole thing as unrealistic, which I did a long time ago. It's just not going to happen, I'm not interested, the terms are unacceptable, and I'm not even going to waste any time trying.
  • 0 votes
    1 post
    33 views
    No one has replied
  • 737 votes
    67 posts
    1k views
    Those have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject, or reliable sources for it, to give you a confident answer." It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations, if it isn't just saying absurd stuff on the face of it, is to do independent research to verify it, at which point you may as well have just researched it yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. It has been completely wrong no small number of times, but in my particular use case that is pretty easy to confirm very quickly. The code either works as expected or it doesn't, and code is always tested before release anyway.

    In research, it is great at helping you find a relevant source across the internet or in a specific database. It is usually very good at summarizing a source so you can get a quick idea of it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, correctly formatting your bibliography (with actual sources you provide or at least verify), etc.

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information in with reliable information because there is no difference to it. Put simply, it is not a reliable source of information... ever. Make sure you understand that.
  • 88 votes
    21 posts
    347 views
    The self-hosted model has hard-coded content censorship.
  • 48 votes
    15 posts
    142 views
    evkob@lemmy.ca
    Their Bionic Eyes Are Now Obsolete and Unsupported
  • 317 votes
    45 posts
    412 views
    By giving us the choice of whether someone else should profit by our data. Same as I don't want someone looking over my shoulder and copying off my test answers.