Apple sued by shareholders for allegedly overstating AI progress

Technology
  • 349 votes
    72 comments
    341 views
    Sure, the internet is more practical. But consider the odds of being caught during the time required to execute a decent strike plan, even one as vague as: "we're going to Amerika and we're going to hit 50 high profile targets on July 4th, one in every state" (Dear NSA analyst, this is entirely hypothetical). Your agents spread to the field and start assessing from the ground the highest-impact targets attainable with their resources, with extensive back and forth between the field and central command daily for 90 days of prep. But it's being carried out on 270 different active social media channels as innocuous-looking photo exchanges, with 540 pre-arranged algorithms hiding the messages in the noise of the image bits. Chances of security agencies picking this up from the communication itself? About 100x less than them noticing 50 teams of activists deployed to 50 states at roughly the same time, even if they never communicate anything.

    HF (more often called shortwave) is well suited for the numbers game. A deep-cover agent lying in wait, potentially for years. The only "tell" is their odd habit of listening to the radio most nights. All they're waiting for is a binary message: if you hear the sequence 3 17 22, you are to make contact for further instructions. That message may come at any time, or may not come for a decade. These days, you would make your contact for further instructions via the internet, and sure, it would be more practical to hide the "make contact" signal in the internet too, but shortwave is a longstanding tech with known operating parameters.
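    The "hiding the messages in the noise of the image bits" idea the comment gestures at is classic least-significant-bit (LSB) steganography. A minimal sketch of the technique, assuming the "image" is just a bytearray of raw pixel values (all function names here are illustrative, not from any real tool):

    ```python
    # Minimal LSB steganography sketch: each message bit overwrites the
    # lowest (noisiest) bit of one carrier byte, leaving the image
    # visually unchanged.

    def embed(pixels: bytearray, message: bytes) -> bytearray:
        """Hide each bit of `message` in the low bit of successive pixels."""
        out = bytearray(pixels)
        bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
        if len(bits) > len(out):
            raise ValueError("message too long for carrier")
        for idx, bit in enumerate(bits):
            out[idx] = (out[idx] & 0xFE) | bit  # touch only the noise bit
        return out

    def extract(pixels: bytearray, length: int) -> bytes:
        """Read `length` bytes back out of the low bits."""
        bits = [p & 1 for p in pixels[: length * 8]]
        return bytes(
            sum(bit << (7 - i) for i, bit in enumerate(bits[n : n + 8]))
            for n in range(0, len(bits), 8)
        )

    carrier = bytearray(range(256)) * 2      # stand-in for raw pixel data
    stego = embed(carrier, b"3 17 22")
    assert extract(stego, 7) == b"3 17 22"
    ```

    Real schemes spread the bits pseudorandomly across the image rather than using the first N pixels, which is presumably what the "pre-arranged algorithms" in the comment would add.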
  • 21 votes
    19 comments
    99 views
    The AI only needs to alert the doctor that something is off and should be tested for. It does not replace doctors, but augments them. It's actually a great use for AI; it's just not what we think of as AI in a post-LLM world. The medically useful AI is pattern recognition. LLMs may also help doctors if they need a starting point for researching something weird and obscure, but ChatGPT isn't being used for diagnosing patients, nor is anything any AI says the "final verdict". It's just a tool to improve early detection of disorders, or it might point someone towards a useful article or book.
  • 119 votes
    8 comments
    18 views
    wizardbeard@lemmy.dbzer0.com
    Most still are/can be. Enough that I find it hard to believe people are missing out on much without podcasts through these paid services.
  • 16 votes
    7 comments
    37 views
    dabster291@lemmy.zip
    Why does the title use a Korean letter as a divider?
  • Iran asks its people to delete WhatsApp from their devices

    Technology
    0 votes
    1 comment
    12 views
    No one has replied
  • A ban on state AI laws could smash Big Tech’s legal guardrails

    Technology
    121 votes
    10 comments
    54 views
    It's always been "states' rights" to enrich rulers at the expense of everyone else.
  • You probably don't remember these but I have a question

    Technology
    96 votes
    52 comments
    196 views
    lordwiggle@lemmy.world
    Priorities, man, priorities.
  • People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

    Technology
    0 votes
    2 comments
    20 views
    tetragrade@leminal.space
    I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization & advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful... In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg on the social effects, and nobody's prepared for it. The models may of course be aware of this, and be making the same calculations. Or, they will be.