
Amazonian tribe that received Starlink satellite internet sues The New York Times, TMZ, and Yahoo for $180M over defamation and more, claiming a viral 2024 NYT story smeared members as porn addicts.

Technology
  • Elon Musk Floats a New Source of Funding for xAI: Tesla

    Technology technology
    89 votes
    11 comments
    0 views
    What do I call it, Kif? Ugh..... Sex-lexia
  • 738 votes
    67 comments
    215 views
    Those have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject, or reliable sources for it, to give you a confident answer." It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations, unless it is saying something absurd on its face, is to do independent research to verify it, at which point you may as well have just researched it yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case that is pretty easy to confirm very quickly: the code either works as expected or it doesn't, and code is always tested before release anyway.

    In research, it is great at helping you find a relevant source across the internet or in a specific database. It is usually very good at summarizing a source so you can get a quick idea of it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, correctly formatting your bibliography (with actual sources you provide or at least verify), etc.

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information in with reliable information because there is no difference to it. Put simply, it is not a reliable source of information... ever. Make sure you understand that.
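    A minimal sketch of the verify-by-testing loop described in the comment above, with nothing taken from the comment itself: parse_error_code stands in for a hypothetical AI-suggested helper, and the quick test is what either passes or immediately exposes a hallucinated answer before anything ships. All names and cases are invented for illustration.

    ```python
    # Sketch of the "trust but verify" loop: parse_error_code is a hypothetical
    # AI-suggested helper; the test is the quick check that either passes or
    # exposes a wrong answer right away.

    def parse_error_code(line: str) -> int:
        """Extract the numeric code from a log line like 'ERROR 404: not found'."""
        digits = "".join(ch for ch in line.split(":")[0] if ch.isdigit())
        return int(digits)


    def test_parse_error_code() -> None:
        assert parse_error_code("ERROR 404: not found") == 404
        assert parse_error_code("WARN 1203: retry scheduled") == 1203


    if __name__ == "__main__":
        test_parse_error_code()
        print("helper behaves as expected on the cases that matter here")
    ```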
  • Video game actors' strike officially ends after AI deal

    Technology technology
    121 votes
    11 comments
    59 views
    paraphrand@lemmy.world
    Huh, interesting! It's The Mythical Man-Month! That book was published back in 1975. They definitely know better, but must be in quite a pickle.
  • 41 votes
    3 comments
    27 views
    Does anybody know of a resource that's compiled the known-to-be-affected system or motherboard models using this specific BMC? Eclypsium said the line of vulnerable AMI MegaRAC devices uses an interface known as Redfish. Server makers known to use these products include AMD, Ampere Computing, ASRock, ARM, Fujitsu, Gigabyte, Huawei, Nvidia, Supermicro, and Qualcomm. Some, but not all, of these vendors have released patches for their wares. (A minimal Redfish query sketch follows this item.)
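    Since the affected MegaRAC stacks expose the standard DMTF Redfish REST API, one plausible starting point for building such a list is to query each BMC's manager resource for its manufacturer, model, and firmware version. This is a sketch, not a detection tool: it assumes Redfish is reachable at https://<bmc-host>/redfish/v1/, that you have valid credentials, and that the Manufacturer/Model/FirmwareVersion fields are populated. Whether a given firmware string corresponds to a vulnerable MegaRAC release still has to be checked against the vendor's advisory.

    ```python
    # Minimal sketch: enumerate BMC manager info over the standard Redfish API.
    # Assumptions: Redfish reachable at https://<bmc-host>/redfish/v1/, valid
    # credentials, and Manufacturer/Model/FirmwareVersion populated by the BMC.
    import requests
    import urllib3
    from requests.auth import HTTPBasicAuth

    urllib3.disable_warnings()  # BMCs commonly ship self-signed certificates


    def bmc_manager_info(host: str, user: str, password: str) -> list[dict]:
        """Return manufacturer/model/firmware details for each manager on a BMC."""
        auth = HTTPBasicAuth(user, password)
        base = f"https://{host}"
        managers = requests.get(f"{base}/redfish/v1/Managers",
                                auth=auth, verify=False, timeout=10).json()
        results = []
        for member in managers.get("Members", []):
            mgr = requests.get(f"{base}{member['@odata.id']}",
                               auth=auth, verify=False, timeout=10).json()
            results.append({
                "id": mgr.get("Id"),
                "manufacturer": mgr.get("Manufacturer"),
                "model": mgr.get("Model"),
                "firmware": mgr.get("FirmwareVersion"),
            })
        return results


    if __name__ == "__main__":
        # Hypothetical host and credentials, purely for illustration.
        for info in bmc_manager_info("bmc.example.internal", "admin", "changeme"):
            print(info)
    ```

    If the manufacturer string points at AMI, the firmware version is what you would then compare against the published advisory.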
  • How not to lose your job to AI

    Technology technology
    9 votes
    16 comments
    76 views
    rikudou@lemmings.world
    A nice "trick": After 4 or so responses where you can't get anywhere, start a new chat without the wrong context. Of course refine your question with whatever you have found out in the previous chat.
  • 271 votes
    77 comments
    80 views
    I don't believe the idea of aggregating information is bad; the problem is losing the ability to properly vet your sources yourself. I don't know what sources an AI chatbot could be pulling from. It could be a lot of sources, or it could be one. Does it know which sources are reliable? Not really. AI has been infamous for hallucinating even with simple prompts. Being able to independently check where your info comes from is an important part of stopping the spread of misinfo. AI can't do that, and, in its current state, I wouldn't want it to try. Convenience is a rat race of cutting corners. What is convenient isn't always what is best in the long run.
  • 668 votes
    122 comments
    120 views
    It's something Americans say.
  • 502 votes
    133 comments
    527 views
    Headlines have length constraints