
You can still enable uBlock Origin in Chrome, here is how

Technology
  • Microsoft Soars as AI Cloud Boom Drives $595 Price Target

    Technology · 5 votes · 2 posts · 0 views
    isaamoonkhgdt_6143@lemmy.zip
    I wonder if Microsoft will do a stock split in the future?
  • Inside the face scanning tech behind social media age limits

    Technology · 25 votes · 1 post · 10 views
    Nobody has replied
  • Firefox 140 Brings Tab Unload, Custom Search & New ESR

    Technology · 234 votes · 41 posts · 192 views
    Read again. I quoted something along the lines of "just as much a development decision as a marketing one" and I said it wasn't a development decision, so what's left?

    Firefox released just as frequently before, just that they didn't increase the major version that often. "This does not appear to be true"? Why don't you take a look at the version history instead of some marketing blog post? https://www.mozilla.org/en-US/firefox/releases/ Version 2 had 20 releases within 730 days, averaging one release every 36.5 days. Version 3 had 19 releases within 622 days, averaging 32.7 days per release. But those releases were unscheduled; they shipped when they were done. Now they are on a fixed 90-day schedule, whether or not anything worthwhile is ready, plus hotfix releases whenever they are necessary. That's not faster, just scheduled, and the major version is incremented even when no major change is included. That's what the blog post was alluding to.

    In the before times, a major version number increase indicated major changes. Now it doesn't anymore, which means sysadmins still need to treat each release as a major release even if it doesn't contain major changes, because it might, and the version number no longer says whether it does. It's nothing but a marketing change, moving from "version numbering means something" to "big number go up".
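    A minimal sketch checking the release-cadence arithmetic cited in this comment; the release counts and day spans are taken from the comment itself, not re-derived from mozilla.org:

    ```python
    # Assumed inputs from the comment above: (number of releases, span in days).
    cadence = {
        "Firefox 2": (20, 730),
        "Firefox 3": (19, 622),
    }

    for version, (releases, days) in cadence.items():
        # Average interval between releases for that major version line.
        print(f"{version}: one release every {days / releases:.1f} days")
        # Firefox 2: one release every 36.5 days
        # Firefox 3: one release every 32.7 days
    ```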
  • 311 votes · 37 posts · 159 views
    Same, especially when searching technical or niche topics. Since there aren't a ton of results specific to the topic, mostly semi-related results appear in the first page or two of a regular (non-Gemini) Google search, simply because those pages are more popular than the relevant ones. Even the relevant pages bury the answer I'm looking for in lots of non-relevant or semi-relevant information.

    I don't know enough about it to be sure, but Gemini is probably just scraping a handful of websites from the first page, and since most of those are only semi-related, the resulting summary is a classic case of garbage in, garbage out. I also suspect there's something in the code that looks for information shared across multiple sources and prioritizes it over something found on only one particular page (possibly the sole result with the information you need). Then it phrases the summary as a direct answer to your query, misrepresenting the actual information on the pages it scraped. At least Gemini gives sources, I guess.

    The thing that gets on my nerves the most is how often I see people quote the summary as proof of something without checking the sources. It was bad before the rollout of Gemini, but at least back then Google was mostly scraping text and presenting it with little modification, along with a direct link to the webpage. Now it's an LLM generating text phrased as a direct answer to a question (that was also AI-generated from your search query) using AI-summarized data points scraped from multiple webpages. It obfuscates the source material further, but I also can't help feeling it exposes a little of the behind-the-scenes fuckery Google has been doing for years before Gemini: how it bastardizes your query by interpreting it as a question, and then prioritizes homogeneous results that agree on the "answer" to your "question". For years they've been doing this to some extent, they just didn't share how they interpreted your query.
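    Purely as an illustration of the cross-source heuristic speculated about above (the data and the ranking rule here are hypothetical, not Google's actual pipeline), this is the kind of "prioritize what multiple sources agree on" logic the commenter is guessing at:

    ```python
    from collections import Counter

    # Hypothetical claims extracted from the handful of pages a summarizer scraped.
    scraped_claims = {
        "source_a.example": ["common claim", "filler claim"],
        "source_b.example": ["common claim"],
        "source_c.example": ["common claim", "niche answer"],  # the one page with the answer you wanted
    }

    # Count how many sources repeat each claim, then rank by agreement:
    # the widely repeated "common claim" outranks the single-source "niche answer".
    agreement = Counter(claim for claims in scraped_claims.values() for claim in claims)
    for claim, count in agreement.most_common():
        print(f"{claim}: seen in {count} source(s)")
    ```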
  • 353 votes · 40 posts · 72 views
    If AI constantly refined its own output, sure, unless it hits a wall eventually or starts spewing bullshit because of some quirk of training. But I doubt it could learn to summarise better without external input, just like a compiler won't produce a more optimised version of itself without human development work.
  • IRS tax filing software released to the people as free software

    Technology · 288 votes · 14 posts · 57 views
    Only if you're a scumbag/useful idiot.
  • Napster/BitTorrent for machine learning?

    Technology · 27 votes · 3 posts · 27 views
    What would a use case look like? I assume that the latency will make it impractical to train something that's LLM-sized. But even for something small, wouldn't a data center be more efficient?
  • Selling Surveillance as Convenience

    Technology · 112 votes · 13 posts · 58 views
    Trying to get my peers to care about their own privacy is exhausting. I wish their choices didn't affect me, but as this article states, they do in the long run. I will remain stubborn and only compromise rather than give in.