
60% of Teachers Used AI This Year and Saved up to 6 Hours of Work a Week

Technology
  • 229 votes
    47 posts
    121 views
    Oh, it's Towers of Hanoi. I have a screensaver that does this.
  • 28 votes
    7 posts
    36 views
    Just keep in mind that using them is considered a crime in the US, and they can be located. Use with caution.
  • Hacker Tactic: ESD Diodes

    Technology
    24 votes
    1 post
    8 views
    Nobody has replied
  • Is AI Apocalypse Inevitable? - Tristan Harris

    Technology
    121 votes
    11 posts
    45 views
    Define AGI, because lately the definition is shifting down to match LLMs. In fact, we could say we've achieved AGI now, because we have a machine that answers questions. The problem will come when the number of questions starts shrinking, not because there are fewer problems but because fewer people understand those problems. That is what is happening now. Don't believe me? Read the statistics on age and the workforce. Now add in the urgent need for something to replace those people, and then think about what will happen when all those attempts fail.
  • 195 votes
    31 posts
    25 views
    isveryloud@lemmy.ca
    It's a loaded term that should be replaced with a more nimble definition. A dog whistle is a loaded term used to tag a specific target with a large baggage of information, but in a way that only people in the "in group" can understand that baggage, hence "dog whistle": heard only by dogs. In the case of the word "degeneracy", it's a vague word that has often been used to attack, among others, LGBTQ people and their allies as well as non-religious people. The term is vague enough that the user can easily weasel out of criticism for using it, but the target audience gets the message loud and clear: "[target] should be attacked for being [thing]." Another example of such a word would be "woke".
  • 257 votes
    67 posts
    16 views
    Maybe you're right: is there verification? Neither YouTube's nor TikTok's content policy clearly lays out rules on those words. I only find unverified claims: some write it started at YouTube, others claim TikTok. They claim YouTube demonetizes and TikTok shadowbans. They generally agree that content restrictions by these platforms led to the propagation of evasive shit like "unalive" and "SA". TikTok's policy outlines its moderation methods, which include removal and ineligibility for the For You feed. Given its policy on self-harm and its automated removal of potential violations, its policy is to effectively and recklessly censor such language. Generally, censorship is the suppression of expression; it doesn't exclusively mean content removal, though they're doing that too. (Digression: revisionism and whitewashing are forms of censorship.) Regardless of how they censor or induce self-censorship, they're chilling inoffensive language pointlessly. While as private entities they are free to moderate as they please, it's unnecessary, and the effect is an obnoxious affront to self-expression that contorts language for the sake of avoiding idiotic restrictions.
  • 56 votes
    4 posts
    23 views
    cupcakezealot@lemmy.blahaj.zone
    !upliftingnews@lemmy.world
  • 2 votes
    8 posts
    38 views
    IMO, rather than showing why a good trainer is important, stuff like that is stronger evidence that proper user-centered design should be done, with a usable, intuitive UX and set of APIs developed. But because the buyer of this heap of shit is some C-level, there is no incentive to actually make it usable for the unfortunate peons who are forced to interact with it. See also SFDC and every ERP solution in existence.