
Founder of 23andMe buys back company out of bankruptcy auction

Technology
  • AI agents wrong ~70% of time: Carnegie Mellon study

    959 votes · 276 posts · 85 views
    You're better off asking one human to do the same task ten times. Humans get better and faster at things as they go along. Always slower than an LLM, but an LLM gets more and more likely to veer off on some flight of fancy, further and further from reality, the more it says to you. The chances of it staying factual over a long output are really low. It's a born bullshitter. It knows a little about a lot, but it has no clue what's real and what's made up, or it doesn't care. If you want some text quickly that sounds right, and you genuinely don't care whether it actually is right, go for it, use an LLM. It'll be great at that.
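    A minimal back-of-the-envelope sketch (Python) of the compounding-error intuition in the comment above: if each step of a long output stays factual with some probability p, the chance that the whole thing stays factual decays geometrically. The value p = 0.95 is an assumed number for illustration, not a figure from the Carnegie Mellon study.

        # If each generated step is factual with probability p (assumed),
        # the chance that *all* n steps are factual is p**n, which
        # shrinks fast as the output gets longer.
        p = 0.95  # hypothetical per-step factual accuracy
        for n in (1, 5, 10, 50):
            print(f"{n:>3} steps: P(all factual) = {p**n:.2f}")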
  • 803 votes · 109 posts · 161 views
    the_decryptor@aussie.zone
    PNG gets you the best compatibility and features, at the expense of file size. I probably wouldn't use it for uploading photographs to the web, though.
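    A quick way to see the trade-off described above, sketched in Python with Pillow (the random-noise test image is a crude stand-in for a high-entropy photograph; exact sizes will vary):

        # Save the same high-entropy image as PNG (lossless) and JPEG
        # (lossy) and compare the encoded sizes.
        import io, os
        from PIL import Image

        img = Image.frombytes("RGB", (512, 512), os.urandom(512 * 512 * 3))

        for fmt in ("PNG", "JPEG"):
            buf = io.BytesIO()
            img.save(buf, format=fmt)
            print(f"{fmt}: {buf.tell():,} bytes")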
  • 1 vote · 2 posts · 6 views
    How many times is this putz going to post this article under new titles before they are banned?
  • 4 votes · 1 post · 9 views
    No one has replied yet.
  • 17 votes · 10 posts · 39 views
    That's why it's not brute force anymore.
  • 311 votes · 37 posts · 65 views
    Same, especially when searching technical or niche topics. Since there aren't a ton of results specific to the topic, mostly semi-related results appear in the first page or two of a regular (non-Gemini) Google search, just because those pages are more popular than the relevant ones. Even the relevant pages bury the answer I'm looking for in non-relevant or semi-relevant information.

    I don't know enough about it to be sure, but Gemini is probably just scraping a handful of websites from the first page, and since most of those are only semi-related, the resulting summary is a classic example of garbage in, garbage out. I also suspect there's something in the code that looks for information shared across multiple sources and prioritizes it over something that appears on only one particular page (possibly the sole result with the information you need). Then it phrases the summary as a direct answer to your query, misrepresenting the actual information on the pages it scraped. At least Gemini gives sources, I guess.

    The thing that gets on my nerves the most is how often I see people quote the summary as proof of something without checking the sources. It was bad before the rollout of Gemini, but at least back then Google was mostly scraping text and presenting it with little modification, along with a direct link to the webpage. Now it's an LLM generating text phrased as a direct answer to a question (itself AI-generated from your search query) using AI-summarized data points scraped from multiple webpages.

    That obfuscates the source material further, but I also can't help feeling it exposes a little of the behind-the-scenes fuckery Google has been doing for years before Gemini: how it bastardizes your query by interpreting it as a question, then prioritizes homogeneous results that agree on the "answer" to that "question". They've been doing this to some extent for years; they just didn't share how they interpreted your query.
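    A toy sketch of the cross-source prioritization the commenter speculates about; everything here is hypothetical, not Google's actual pipeline. Claims repeated across several scraped pages outrank a claim found on a single page, even when that lone page is the one with the right answer:

        from collections import Counter

        scraped_pages = [
            {"claim A", "claim B"},          # semi-related result
            {"claim A", "claim C"},          # semi-related result
            {"claim A"},                     # semi-related result
            {"claim D (the real answer)"},   # the one relevant page
        ]

        # Count how many sources repeat each claim and rank by that
        # consensus; the unique, correct claim sinks to the bottom.
        support = Counter(claim for page in scraped_pages for claim in page)
        for claim, count in support.most_common():
            print(f"{count} source(s): {claim}")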
  • 43 votes · 1 post · 11 views
    No one has replied yet.
  • WhatsApp provides no cryptographic management for group messages

    17 votes · 3 posts · 21 views
    Just be sure to add only the people you want to be there. I've heard of people adding others, and it gets a bit messy.
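    A stripped-down sketch of the concern behind the headline, assuming a sender-keys-style group scheme in which membership is an ordinary server-side list rather than something cryptographically authorized by an admin. The names and structure here are illustrative, not WhatsApp's actual code:

        group_members = {"alice", "bob"}  # membership state held by the server

        def server_add_member(who: str) -> None:
            # Nothing cryptographic verifies that an admin approved this.
            group_members.add(who)

        def send_to_group(sender: str, plaintext: str) -> None:
            # Clients fan messages out to whoever the server currently lists.
            for member in group_members - {sender}:
                print(f"{sender} -> {member}: {plaintext}")

        send_to_group("alice", "hi all")
        server_add_member("mallory")            # injected without any admin signature
        send_to_group("alice", "secret plans")  # mallory receives this too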