
Former GM Executive: BYD cars are good in terms of design, features, price, and quality. If we let BYD into the U.S. market, it could end up destroying American manufacturers

Technology
  • 737 votes
    67 comments
    0 views
    K
    Those have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject or reliable sources for it to give you a confident answer". It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations, if it isn't just saying absurd stuff on the face of it, is to do independent research to verify it, at which point you may as well have just researched it yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case that is pretty easy to confirm very quickly. The code either works as expected or it doesn't, and code is always tested before releasing it anyway.

    In research, it is great at helping you find a relevant source for your research across the internet or in a specific database. It is usually very good at summarizing a source for you to get a quick idea about it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, correctly formatting your bibliography (with actual sources you provide or at least verify), etc.

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information in with reliable information because there is no difference to it. Put simply, it is not a reliable source of information... ever. Make sure you understand that.
  • 372 votes
    172 comments
    914 views
    swelter_spark@reddthat.com
    No problem. If that doesn't work for you, ComfyUI is also a popular option, but it's more complicated.
  • UK police are being told to hide their work with Palantir

    Technology
    277 votes
    5 comments
    28 views
    M
    This is really fucking dark for multiple reasons
  • OpenAI wins $200m contract with US military for ‘warfighting’

    Technology
    283 votes
    42 comments
    147 views
    gadgetboy@lemmy.ml
    [image: 8aff8b12-7ed7-4df5-b40d-9d9d14708dbf.gif]
  • Meta publishes V-Jepa 2 – an AI world model

    Technology
    9 votes
    3 comments
    26 views
    K
    Yay, more hype. Just what we needed more of: hype, at last.
  • 353 votes
    40 comments
    27 views
    L
    If AI constantly refined its own output, sure, unless it hits a wall eventually or starts spewing bullshit because of some quirk of training. But I doubt it could learn to summarise better without external input, just like a compiler won't produce a more optimised version of itself without human development work.
  • 512 votes
    58 comments
    222 views
    C
    Eh, I kinda like the ephemeral nature of most TikToks. Having things go viral within a group of like 10,000 people, to the extent that if you're tangentially connected to the group, you and everyone you know has seen it, but nobody outside that group ever sees it, and it vanishes into the ether like a month later, makes it a little more personal.
  • 0 votes
    2 comments
    19 views
    V
    Here's how you know it's not ready: AI hasn't replaced a single CEO.