
White House unveils sweeping plan to “win” global AI race through deregulation

Technology
  • Password manager by Amazon

    Technology
    534 votes
    150 posts
    866 views
    cralex@lemmy.zip
    My handwriting comes with free encryption at rest. Even I might not be able to read it.
  • 12 votes
    3 posts
    28 views
    tal@lemmy.today
    "While details of the Pentagon's plan remain secret, the White House proposal would commit $277 million in funding to kick off a new program called 'pLEO SATCOM' or 'MILNET.'"

    Please do not call it "MILNET". That term's already been taken: https://en.wikipedia.org/wiki/MILNET

    "In computer networking, MILNET (fully Military Network) was the name given to the part of the ARPANET internetwork designated for unclassified United States Department of Defense traffic.[1][2]"
  • YouTube is getting more AI.

    Technology
    38 votes
    11 posts
    51 views
    Yaaaaay! Said no one.
  • 133 votes
    10 posts
    57 views
    01189998819991197253@infosec.pub
    We're at war with Eastasia. We've always been at war with Eastasia. Big Brother really has "trust me bro" energy.
  • 311 votes
    37 posts
    207 views
    Same, especially when searching technical or niche topics. Since there aren't many results specific to the topic, the first page or two of a regular (non-Gemini) Google search is mostly semi-related results, simply because those pages are more popular than the genuinely relevant ones. Even the relevant pages bury the answer I'm looking for in non-relevant or semi-relevant material.

    I don't know enough about it to be sure, but Gemini is probably just scraping a handful of websites from that first page, and since most of those are only semi-related, the resulting summary is a classic case of garbage in, garbage out. I also suspect there's something in the code that looks for information shared across multiple sources and prioritizes it over information found on only one particular page (possibly the sole result with the answer you need). Then it phrases the summary as a direct answer to your query, misrepresenting the actual information on the pages it scraped. At least Gemini gives sources, I guess. (A toy sketch of that cross-source consensus idea follows after this list.)

    The thing that gets on my nerves the most is how often people quote the summary as proof of something without checking the sources. It was bad before the rollout of Gemini, but at least back then Google was mostly scraping text and presenting it with little modification, along with a direct link to the webpage. Now it's an LLM generating text phrased as a direct answer to a question (itself AI-generated from your search query), using AI-summarized data points scraped from multiple webpages. That obfuscates the source material further, but I also can't help feeling it exposes a little of the behind-the-scenes fuckery Google has been doing for years before Gemini: how it bastardizes your query by interpreting it as a question, then prioritizes homogeneous results that agree on the "answer" to your "question". They've been doing this to some extent for years; they just didn't share how they interpreted your query.
  • 491 votes
    18 posts
    89 views
    Pretty confident that's the intention of that name.
  • People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

    Technology
    0 votes
    2 posts
    21 views
    tetragrade@leminal.space
    I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization & advice.

    One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful...

    In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg of the social effects, and nobody's prepared for it. The models may of course be aware of this, and be making the same calculations. Or, they will be.
  • The bots are among us.

    Technology
    0 votes
    3 posts
    26 views
    yerbouti@sh.itjust.works
    Yeah, she was on to something with the layers, but screwed it up. I'm sure the models have gotten better since.
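The cross-source consensus ranking hypothesized in the Gemini comment above can be made concrete with a short sketch. Everything below is an assumption for illustration: the function names, the Jaccard-overlap heuristic, and the example pages are all invented, not Google's actual pipeline. It demonstrates the failure mode that comment describes: when sentences are scored by how many other scraped pages repeat them, a generic claim echoed across semi-related pages outranks the one niche page holding the real answer.

# Toy sketch of the cross-source "consensus" ranking hypothesized above.
# NOT Google's or Gemini's actual code; every name and heuristic here is
# invented for illustration.

def tokenize(sentence: str) -> set[str]:
    """Crude lowercase bag-of-words."""
    return {w.strip(".,!?;").lower() for w in sentence.split()}

def jaccard(a: set[str], b: set[str]) -> float:
    """Token-set overlap, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_by_consensus(pages: dict[str, list[str]], threshold: float = 0.5):
    """Score each scraped sentence by how many *other* pages contain a
    near-duplicate of it, then sort by that score (O(n^2), toy-sized)."""
    items = [(url, s) for url, sents in pages.items() for s in sents]
    scores: dict[str, int] = {}
    sources: dict[str, str] = {}
    for url, sent in items:
        sources[sent] = url
        scores[sent] = len({
            other_url for other_url, other in items
            if other_url != url
            and jaccard(tokenize(sent), tokenize(other)) >= threshold
        })
    return sorted(scores.items(), key=lambda kv: -kv[1]), sources

if __name__ == "__main__":
    # Hypothetical scraped snippets: three semi-related pages repeat a
    # generic claim; one niche page holds the actual answer.
    pages = {
        "https://example.com/a": ["Widgets are configured in the settings menu."],
        "https://example.com/b": ["Widgets are configured via the settings menu."],
        "https://example.com/c": ["You configure widgets in the settings menu."],
        "https://example.com/niche": ["Widget X ignores the menu; set WIDGET_X_MODE=legacy instead."],
    }
    ranked, sources = rank_by_consensus(pages)
    for sentence, votes in ranked:
        print(f"{votes} supporting page(s)  [{sources[sentence]}]  {sentence}")
    # Variants of the generic claim get 1-2 supporting pages each; the
    # niche answer scores 0 and would be buried in a consensus-weighted summary.

Running this prints the generic settings-menu claim first, backed by its near-duplicates, and the niche fix last with zero support, which is exactly the answer such a consensus-weighted summary would bury.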