
RFK Jr. Wants Every American to Be Sporting a Wearable Within Four Years

Technology
143 votes · 96 posts · 862 views
  • 738 votes
    67 posts
    214 views
    Those have always been the two big problems with AI. Biases in the training data, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject, or reliable sources for it, to give you a confident answer." It will always give you its best guess, even if it is hallucinating much of the data. Unless the output is absurd on its face, the only way to identify the hallucinations is to do independent research to verify it, at which point you might as well have just done the research yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case that is easy to confirm very quickly: the code either works as expected or it doesn't, and code is always tested before release anyway.

    In research, it is great at helping you find a relevant source across the internet or in a specific database. It is usually very good at summarizing a source so you can get a quick idea of it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, or correctly formatting your bibliography (with actual sources you provide or at least verify).

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information in with reliable information, because there is no difference between them to the model. Put simply, it is not a reliable source of information... ever. Make sure you understand that.
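    A minimal sketch of that verify-by-test loop in Python (the helper below is a hypothetical stand-in for an AI-suggested snippet, not code from the thread; the asserts are what actually establish trust):

        # Hypothetical AI-suggested helper: pull the HTTP status code out of an access-log line.
        import re

        def parse_status_code(log_line: str):
            match = re.search(r'" (\d{3}) ', log_line)
            return int(match.group(1)) if match else None

        # The quick confirmation step: run it against known inputs before trusting it.
        assert parse_status_code('127.0.0.1 "GET /index.html HTTP/1.1" 200 512') == 200
        assert parse_status_code("no status code here") is None
        print("AI-suggested helper passed the smoke test")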
  • 17 votes
    2 posts
    26 views
    Yeah, sure. Like the police need extra help with racial profiling and "probable cause." Fuck this, and fuck the people who think this is a good idea. I'm sure the authoritarians in power right now will get right on those proposed "safeguards," right after they install backdoors into encryption, to which Only They Have The Key, to "protect" everyone from the scary "criminals."
  • 195 votes
    31 posts
    110 views
    isveryloud@lemmy.ca
    It's a loaded term that should be replaced with something more precise. A "dog whistle" is the name for a loaded term used to tag a specific target with a large baggage of information, but in a way where only people who are part of the in-group understand that baggage, hence "dog whistle": only heard by dogs.

    In the case of the word "degeneracy", it's a vague word that has often been used to attack, among other things, LGBTQ people and their allies, as well as non-religious people. The term is vague enough that the user can easily weasel their way out of criticism for using it, but the target audience gets the message loud and clear: "[target] should be attacked for being [thing]."

    Another example of such a word would be "woke".
  • 64 votes
    13 posts
    72 views
    semperverus@lemmy.world
    You want abliterated models, not distilled.
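    For anyone unfamiliar with the distinction: "distilled" models are roughly smaller models trained to imitate a bigger one, while "abliterated" models are full models with the refusal behavior ablated out, and they load like any other checkpoint. A minimal sketch using the Hugging Face transformers API (the repo id below is a hypothetical placeholder, not a real model):

        # Hypothetical repo id; substitute whichever abliterated build you actually use.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        repo = "example-org/llama-3-8b-abliterated"
        tokenizer = AutoTokenizer.from_pretrained(repo)
        model = AutoModelForCausalLM.from_pretrained(repo)

        inputs = tokenizer("Hello", return_tensors="pt")
        output = model.generate(**inputs, max_new_tokens=50)
        print(tokenizer.decode(output[0], skip_special_tokens=True))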
  • The Universal Tech Tree

    Technology
    21 votes
    1 post
    10 views
    No one has replied
  • 93 votes
    1 post
    11 views
    No one has replied
  • OpenAI plans massive UAE data center project

    Technology
    0 votes
    4 posts
    31 views
    TD Cowen (basically the US arm of one of the largest Canadian investment banks) did an extensive report on the state of AI investment. What they found was that despite all their big claims about the future of AI, Microsoft was quietly allowing letters of intent for billions of dollars' worth of new compute capacity to expire. Basically, scrapping future plans for expansion, but in a way that's not showy and doesn't require any kind of big announcement. The equivalent of promising to be at the party and then just not showing up.

    Not long after this reporting came out, it was confirmed by Microsoft, and not long after that it came out that Amazon was doing the same thing. Ed Zitron has a really good write-up on it: https://www.wheresyoured.at/power-cut/

    Amazon isn't the big surprise; they've always been the most cautious of the big players on the whole AI thing. Microsoft, on the other hand, is very much trying to play things both ways. They know AI is fucked, which is why they're scaling back, but they've also invested a lot of money into their OpenAI partnership, so now they have to justify that expenditure, which means convincing investors that consumers absolutely love their AI products and are desperate for more.

    As always, follow the money. Stuff like the Three Mile Island thing is mostly just applying for permits and so on at this point: relatively small investments. As soon as it comes to big money hitting the table, they're pulling back. That's how you know how they really feel.
  • 62 votes
    6 posts
    40 views
    What could possibly go wrong? Edit: it reads like the substrate still needs to be introduced first.