
Former GM Executive: BYD cars are good in terms of design, features, price, and quality. If we let BYD into the U.S. market, it could end up destroying American manufacturers

Technology
365 Votes
186 Posts
4 Views
  • 738 Votes
    67 Posts
    0 Views
    K
    Those have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject, or reliable sources for it, to give you a confident answer." It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations, if it isn't just saying absurd stuff on the face of it, is to do independent research to verify it, at which point you may as well have just researched it yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case that is pretty easy to confirm very quickly. The code either works as expected or it doesn't, and code is always tested before releasing it anyway.

    In research, it is great at helping you find a relevant source across the internet or in a specific database. It is usually very good at summarizing a source so you can get a quick idea about it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, correctly formatting your bibliography (with actual sources you provide or at least verify), etc.

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information in with reliable information because there is no difference to it. Put simply, it is not a reliable source of information... ever. Make sure you understand that.
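    The point about AI-suggested code being easy to check comes down to the tests: the suggestion either passes them or it doesn't. A minimal sketch of that workflow in Python, where parse_error_code stands in for a hypothetical helper an assistant might have proposed (the name, log format, and tests are assumptions for illustration, not from the original comment):

    import re
    import unittest

    def parse_error_code(line: str):
        """Hypothetical AI-suggested helper: pull a subsystem name and a
        numeric code out of a log line such as 'DISK error 0x1F4'."""
        match = re.search(r"(\w+) error 0x([0-9A-Fa-f]+)", line)
        if match is None:
            return None
        return match.group(1), int(match.group(2), 16)

    class ParseErrorCodeTest(unittest.TestCase):
        """The suggestion is only kept once these checks pass."""

        def test_known_format(self):
            self.assertEqual(parse_error_code("DISK error 0x1F4"), ("DISK", 500))

        def test_garbage_input(self):
            self.assertIsNone(parse_error_code("no code here"))

    if __name__ == "__main__":
        unittest.main()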
  • Grok, Elon Musk's AI chatbot, seems to get right-wing update

    Technology technology
    184 Votes
    13 Posts
    77 Views
    A
    Yep. Pretty sure that was deliberate on Musk's (or his cronies') part. Imagine working at X and being told by your boss, "I'd like you to make the bot more racist, please." "Can you convince it that conspiracy theories are real?"
  • 495 Votes
    154 Posts
    525 Views
    Q
    Let's see.
  • 180 Votes
    13 Posts
    5 Views
    D
    There is a huge difference between an algorithm that uses real-world data to produce a score which a panel of experts then uses to make a determination, and using an LLM to screen candidates. One has verifiable, reproducible results that can be checked and debated; the other does not. The final call does not matter if a computer program using an unknown and unreproducible algorithm screens you out before it is ever made. That is what we are facing: pre-determined decisions that human beings are not being held accountable for.

    Is this happening right now? Yes, it is, without a doubt. People are no longer making a lot of the healthcare decisions that determine insurance coverage; computers that are not accountable are. You may have some ability to disagree, but for how long? Soon there will be no way to reach a human about an insurance decision. This is already happening. People should be very anxious. Hearing that United Healthcare has been forging DNRs and denying things like stroke treatment for elders is disgusting. We have major issues that are not going away, and we are blatantly ignoring them.
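    The reproducibility contrast drawn above can be made concrete with a small sketch. The features, weights, and the llm.ask call below are hypothetical, invented purely for illustration:

    # Deterministic scoring: every input, weight, and step is visible,
    # so the same data always yields the same, auditable number.
    def rule_based_score(years_experience: float, certifications: int) -> float:
        weights = {"years_experience": 2.0, "certifications": 5.0}
        return (weights["years_experience"] * years_experience
                + weights["certifications"] * certifications)

    # Reproducible: this check passes on every run, on every machine,
    # and anyone can debate whether the weights are fair.
    assert rule_based_score(4, 2) == 18.0

    # By contrast, an LLM screen is roughly a call like
    #   decision = llm.ask("Should we interview this candidate? " + resume_text)
    # where the model's weights, training data, and sampling are not
    # inspectable, so the same resume can yield different, unexplainable answers.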
  • 93 Votes
    1 Post
    7 Views
    No one has replied
  • Covert Web-to-App Tracking via Localhost on Android

    Technology technology
    29 Votes
    3 Posts
    23 Views
    P
    That update though: "... completely removed..." I assume this is because someone at Meta realized this was a huge breach of trust, and likely quite illegal. Edit: I read somewhere that they're just being cautious about Google Play terms of service. That feels worse.
  • 51 Votes
    13 Posts
    19 Views
    jimmydoreisalefty@lemmy.world
    It is a possibility. Thanks for the input!
  • 62 Votes
    6 Posts
    34 Views
    W
    What could possibly go wrong? Edit: it reads like the substrate still needs to be introduced first.