
Kids are making deepfakes of each other, and laws aren’t keeping up

Technology
  • 282 votes
    27 posts
    11 views
    It becomes a form of censorship when small websites and forums shut down because they don’t have the capacity to comply. In this scenario that’s not a concern: we’re talking about algorithmically driven content, which wouldn’t apply to Lemmy, Mastodon, or many mom-and-pop sized pages and forums. Those have human moderation anyway, which the big sites don’t. If you’re making editorial decisions by weighting algorithmically driven content, it’s not censorship to hold you accountable for the consequences of those editorial decisions, just as we would any major media outlet.
  • 138 votes
    15 posts
    16 views
    toastedravioli@midwest.social
    ChatGPT is not a doctor. But models trained on imaging can actually be a very useful tool for doctors to use. Even years ago, just before the AI “boom”, researchers were asking doctors for details on how they examine patient images and then training models on that. They found that the AI was “better” than doctors specifically because it followed the doctors’ guidance 100% of the time, thereby eliminating any bias that might interfere with a doctor following their own training. Of course, the splashy headline “AI better than doctors” was ridiculous. But it does show the benefit of giving doctors a neutral tool, especially when looking at images of people outside the typical demographics that much medical training is based on. (As in, mostly white men. For example, everything they train doctors on regarding knee imaging comes from images of the knees of coal miners in the UK some decades ago.)
  • 92 votes
    5 posts
    6 views
    This is interesting to me, as I like to say that LLMs are basically another abstraction of search. Initially it was unweighted links that had to be gone through one by one; then various algorithms weighted the results; then the results started including a small blurb so one didn’t have to follow every link; and now you’re basically getting a report, which should include references to its sources. I would like to see this study look at how people engage with an LLM. My guess is that someone who treats the LLM as a helper and collaborates with it to create the product will remember more than someone who treats it as a servant, just instructs it to do the work, and takes the output as is.
  • Researchers develop recyclable, healable electronics

    Technology
    15 votes
    3 posts
    13 views
    Aren't the most common failure modes of electronics dying capacitors, followed closely by heat damage in chips? Still, this research sounds cool.
  • 21 votes
    6 posts
    15 views
    sentient_loom@sh.itjust.works
    I want to read his "Meaning of the City" because I just like city theory, but I keep postponing it in case it's just Christian morality lessons. The anarchist Christian angle makes this sound more interesting.
  • 121 votes
    58 posts
    32 views
    I bet every company has at least one employee with right-wing political views. Choosing a product based on a few random quotes from employees is stupid.
  • 0 votes
    3 posts
    12 views
    thehatfox@lemmy.world
    The platform owners don’t consider engagement to mean participation in meaningful discourse. Engagement to them just means staying on the platform while seeing ads. If bots keep people doing that, those platforms will keep letting them in.
  • *deleted by creator*

    Technology
    0 votes
    1 post
    8 views
    No one has replied