
We Should Immediately Nationalize SpaceX and Starlink

Technology
496 196 1.9k
  • 738 votes
    67 posts
    85 views
    Those have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject, or reliable sources for it, to give you a confident answer". It will always give you its best guess, even if it is hallucinating much of the data. The only way to identify the hallucinations, if it isn't just saying absurd stuff on its face, is to do independent research to verify it, at which point you may as well have just done the research yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case that is easy to confirm quickly: the code either works as expected or it doesn't, and code is always tested before release anyway (a sketch of that kind of check follows below).

    In research, it is great at helping you find a relevant source across the internet or in a specific database. It is usually very good at summarizing a source so you can get a quick idea of it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, or correctly formatting your bibliography (with actual sources you provide or at least verify).

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information in with reliable information because there is no difference to it. Put simply, it is not a reliable source of information... ever. Make sure you understand that.
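    A minimal sketch of that "it either works or it doesn't" check, in Python with entirely hypothetical names (parse_error_code and its log format are made up for illustration, not taken from the comment above): an AI-suggested helper is only trusted once a quick test passes.

    import re

    def parse_error_code(message: str) -> int | None:
        """Extract a numeric code from a log line like 'ERR 1042: disk full'."""
        match = re.search(r"ERR\s+(\d+)", message)
        return int(match.group(1)) if match else None

    def test_parse_error_code():
        # The AI's suggestion is only accepted once these assertions pass.
        assert parse_error_code("ERR 1042: disk full") == 1042
        assert parse_error_code("all good") is None

    if __name__ == "__main__":
        test_parse_error_code()
        print("AI-suggested helper behaves as expected")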
  • (LLM) A language model built for the public good

    Technology technology
    17
    1
    131 votes
    17 posts
    137 views
    cabbage@piefed.social
    Large language models and "generative AI" such as Stable Diffusion, Midjourney, and DALL-E are all just machine learning models. We do not currently have a real "AI branch" of computer science; we have a branch of machine learning that poses as AI. No matter how good a machine gets at recognizing and predicting patterns, it will not constitute AI, because intelligence is different from pattern recognition and prediction. Even if LLMs can sometimes appear to be reasoning, they importantly are not. A stripped-down sketch of pure next-word prediction follows below.
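    As a toy illustration of prediction without reasoning (a deliberately tiny sketch, not how any production LLM is built; real models are large neural networks, though the training objective is still next-token prediction), the code below picks the next word purely from observed frequencies:

    from collections import Counter, defaultdict

    # Tiny made-up corpus; a real model would be trained on vastly more text.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which.
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def predict_next(word):
        """Return the most frequently observed continuation, if any."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # 'cat', chosen only because it followed 'the' most often

    Nothing in that lookup understands cats or mats; it only reproduces a pattern it has seen, which is the distinction the comment above is drawing.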
  • 376 votes
    51 posts
    47 views
    I believe that's what a write-down generally reflects: the asset is now worth less than its previous book value. Resale value isn't the most accurate way to look at it, but it works well enough for explaining the idea: if I bought a tool for 100€, I'd book it as 100€ worth of tools. If I wanted to sell it again after using it for a while, I'd get less than those 100€ back, so I'd write down the difference as a loss. With buying, depreciating, and selling companies instead of tools, things get more complex, but the basic idea still holds: if the value of the whole company goes down, you write down the difference too. So unless these guys bought it for five times its value, they'll have paid less for it than they originally got. A worked example with made-up numbers follows below.
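    To put rough numbers on the tool example (the figures are made up purely to show the mechanics of a write-down):

    # Made-up figures: a write-down is the gap between book value and current value.
    purchase_price = 100.0   # tool bought and booked at 100 EUR
    current_value = 60.0     # what it would realistically fetch after some use

    write_down = purchase_price - current_value
    print(f"Write-down recognized as a loss: {write_down:.2f} EUR")  # 40.00 EUR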
  • 1 vote
    1 post
    14 views
    No one has replied
  • 8 votes
    2 posts
    21 views
    roofuskit@lemmy.world
    Meta? Isn't that owned by alleged pedophile Mark Zuckerberg? I heard he was a pedo on Facebook.
  • 13 votes
    6 posts
    41 views
    rinse@lemmy.world
    The protocol implementation, plebbit-js, is separate from clients such as Seedit.
  • 44 votes
    3 posts
    26 views
    I use it for my self-hosted apps, but yeah, it's rarely useful for websites in the wild.
  • Reddit will tighten verification to keep out human-like AI bots

    Technology technology
    24
    1
    84 votes
    24 posts
    115 views
    While I completely agree with you about the absence of one-liners and meme comments, and the even more left-leaning community, there's still a strong element of "gotcha" in discussions. There are also tonnes of people who don't read the article before commenting (though probably fewer than on Reddit), and a generally even more doomer attitude is common here.