
Scientists in Japan develop plastic that dissolves in seawater within hours

Technology
89 votes, 65 comments, 362 views
  • 43 votes
    10 comments
    58 views
    Deserved it. Shouldn't have been a racist xenophobe. Hate speech and incitement of violence are not legally protected in the UK. All those far-right rioters deserve prison.
  • What Does a Post-Google Internet Look Like

    Technology
    92 votes
    42 comments
    212 views
    blisterexe@lemmy.zip
    I'm just sad I'm too young to have ever seen that old internet, and what it was like... Makes me more determined to try and steer the current internet back in that direction though.
  • 9 votes
    6 comments
    36 views
    You said it yourself: extra places that need human attention ... those need ... humans, right? It's easy to say "let AI find the mistakes". But that tells us nothing at all. There's no substance. It's just a sales pitch for snake oil. In reality, there are various ways one can leverage technology to identify errors, but that only happens through the focused actions of people who actually understand the details of what's happening.

    And think about it here. We already have computer systems that monitor patients' real-time data when they're hospitalized. We already have systems that check for allergies in prescribed medication. We already have systems for all kinds of safety mechanisms. We're already using safety tech in hospitals, so what can be inferred from a vague headline about AI doing something that's ... checks notes ... already being done? ... Yeah, the safe money is that it's just a scam.
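    As a concrete illustration of the kind of deterministic safety check the comment above refers to (allergy screening of prescribed medication), here is a minimal, hypothetical sketch. The drug-to-allergy mapping, drug names, and function are invented for illustration and are not taken from any real clinical system.

    ```python
    # Hypothetical rule-based allergy check: deterministic, inspectable, and
    # trivially auditable. All drug/allergy data below is made up for the example.

    DRUG_ALLERGY_CLASSES = {
        "amoxicillin": {"penicillin"},
        "ibuprofen": {"nsaid"},
        "cefalexin": {"cephalosporin"},
    }

    def check_prescription(prescribed_drugs, patient_allergies):
        """Return (drug, allergy) conflicts between a prescription and a patient's recorded allergies."""
        allergies = {a.strip().lower() for a in patient_allergies}
        conflicts = []
        for drug in prescribed_drugs:
            for allergy in DRUG_ALLERGY_CLASSES.get(drug.strip().lower(), set()) & allergies:
                conflicts.append((drug, allergy))
        return conflicts

    if __name__ == "__main__":
        # A patient with a recorded penicillin allergy prescribed amoxicillin:
        print(check_prescription(["amoxicillin", "ibuprofen"], ["Penicillin"]))
        # -> [('amoxicillin', 'penicillin')]
    ```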
  • 299 votes
    17 comments
    63 views
    Unfortunately, pouring sugar into a gas tank will do just about zero damage to an engine. It might clog up the fuel filter, or maybe the pump, but the engine would be fine. Bleach, on the other hand…
  • Microsoft Tests Removing Its Name From Bing Search Box

    Technology
    52 votes
    11 comments
    58 views
    alphapuggle@programming.dev
    Worse. Office.com now takes me to m365.cloud.microsoft, which as of today takes me to a fucking Copilot chat window. Ofc there's no way to disable it, because gee, why would anyone want to do that?
  • 256 votes
    67 comments
    283 views
    Maybe you're right: is there verification? Neither content policy (YouTube or TikTok) clearly lays out rules on those words. I only find unverified claims: some write it started at YouTube, others claim TikTok. They claim YouTube demonetizes & TikTok shadowbans. They generally agree that content restrictions by these platforms led to the propagation of euphemistic shit like "unalive" & "SA".

    TikTok's policy outlines their moderation methods, which include removal and ineligibility for the For You feed. Given their policy on self-harm & the automated removal of potential violations, their policy effectively & recklessly censors such language.

    Generally, censorship is suppression of expression. Censorship doesn't exclusively mean content removal, though they're doing that, too. (Digression: revisionism & whitewashing are forms of censorship.) Regardless of how they censor or induce self-censorship, they're chilling inoffensive language pointlessly. While as private entities they are free to moderate as they please, it's unnecessary, & the effect is an obnoxious affront to self-expression that contorts language for the sake of avoiding idiotic restrictions.
  • 180 votes
    13 comments
    54 views
    There is a huge difference between an algorithm that uses real-world data to produce a score which a panel of experts then uses to make a determination, and using an LLM to screen candidates. One has verifiable, reproducible results that can be checked and debated; the other does not. The final call doesn't matter if a computer program using an unknown and unreproducible algorithm screens you out before you ever reach it. That is what we are facing: pre-determined decisions for which no human being is held accountable.

    Is this happening right now? Yes it is, without a doubt. People are no longer making many of the healthcare decisions that determine insurance coverage; computers that are not accountable are. You may have some ability to disagree, but for how long? Soon there will be no way to reach a human about an insurance decision. This is already happening. People should be very anxious. Hearing that United Healthcare has been forging DNRs and denying things like stroke treatment for the elderly is disgusting. We have major issues that are not going away, and we are blatantly ignoring them.
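    To make the reproducibility contrast in the comment above concrete, here is a minimal, hypothetical sketch: a hand-written scoring function whose weights can be read, checked, and debated, and which returns the same score for the same record every time. The fields, weights, and numbers are invented for illustration; the LLM-based screen appears only as a comment, since no real API is assumed.

    ```python
    # Hypothetical, transparent scoring rule: every weight is visible and the
    # result is reproducible, so a panel of experts can audit and debate it.
    WEIGHTS = {"age": 0.02, "prior_claims": 0.5, "chronic_conditions": 0.3}

    def risk_score(record):
        """Deterministic score: identical input always yields an identical output."""
        return sum(WEIGHTS[field] * record.get(field, 0) for field in WEIGHTS)

    record = {"age": 70, "prior_claims": 2, "chronic_conditions": 1}
    print(risk_score(record))  # ~2.7, and the same value on every run

    # By contrast, an LLM-based screen is typically an opaque remote call
    # (pseudocode, no real API assumed):
    #   decision = llm_client.screen_candidate(render_prompt(record))
    # The model, prompt, and sampling settings can change without notice, so the
    # same record may be scored differently, and the decision cannot be audited
    # the way the weights above can.
    ```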
  • 18 votes
    10 comments
    55 views
    Business Insider was founded in 2007.