
Apple sues YouTuber who leaked iOS 26’s new “Liquid Glass” software redesign

Technology
  • How to Choose Between Flats in Gunnersbury and Wembley Park

    Technology
    0 votes
    1 post
    4 views
    No one has replied
  • Houthi-linked dealers sell arms on X and WhatsApp, report says

    Technology
    46 votes
    2 posts
    15 views
    But we need to protect children and get your ID….
  • Google Killed Your Attention Span with SEO-Friendly Articles

    Technology
    111 votes
    1 post
    13 views
    No one has replied
  • 615 votes
    254 posts
    2k views
    That’s a very emphatic restatement of your initial claim. I can’t help but notice that, for all the fancy formatting, that wall of text doesn’t contain a single line that actually defines the difference between “learning” and “statistical optimization”. It just repeats the claim that they are different without supporting it in any way. Nothing in there precludes the alternative hypothesis: that human learning is entirely (or almost entirely) an emergent property of “statistical optimization”. Without some definition of what the difference would be, we can’t even theorize a test.
  • 370 votes
    26 posts
    130 views
    hollownaught@lemmy.world
    Bit misleading. Tumour-associated antigens can very easily be detected very early. Problem is, these are only associated with cancer and produce a very high rate of false positives. They're better used as a stepping stone for further testing, or just for seeing how advanced a cancer is. That is to say, I'm assuming that's what this is about, as I didn't read the article. It's the first thing I thought of when I heard "cancer in bloodstream", as the other options tend to be a bit more bleak. Edit: they're talking about cancer "shedding genetic material", and I hate how general they're being. Probably talking about proto-oncogenes from dead tumour debris, but that seems different to what I was expecting. (The base-rate arithmetic behind those false positives is sketched after this list.)
  • 353 votes
    40 posts
    201 views
    If AI constantly refined its own output, sure, unless it hits a wall eventually or starts spewing bullshit because of some quirk of training. But I doubt it could learn to summarise better without external input, just like a compiler won't produce a more optimised version of itself without human development work.
  • The Internet of Consent

    Technology
    11 votes
    1 post
    11 views
    No one has replied
  • People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

    Technology
    0 votes
    2 posts
    21 views
    tetragrade@leminal.space
    I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization & advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful... In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg of the social effects, and nobody's prepared for it. The models may of course be aware of this, and be making the same calculations. Or they will be.
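
A quick aside on the cancer-screening comment above: the "very high rate of false positives" follows directly from Bayes' rule whenever the condition being screened for is rare. Below is a minimal sketch of that base-rate arithmetic, assuming hypothetical figures (99% sensitivity, 95% specificity, 0.5% prevalence in a screened population); none of these numbers come from the linked article or the comment.

```python
# Illustrative only: hypothetical numbers, not from the article.
# Shows why a marker merely "associated with cancer" yields mostly
# false positives when the disease is rare in the screened population.

def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              prevalence: float) -> float:
    """P(disease | positive test) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assumed values: 99% sensitivity, 95% specificity,
# 0.5% prevalence in a general screening population.
ppv = positive_predictive_value(0.99, 0.95, 0.005)
print(f"P(cancer | positive) = {ppv:.1%}")  # ~9.0%
```

Even with a test that accurate, roughly nine in ten positives at that prevalence would be false alarms, which is why such markers work better, as the commenter says, as a stepping stone to follow-up testing than as a standalone diagnosis.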