
The Guardian and Cambridge University's Department of Computer Science unveil new secure technology to protect sources

Technology
    64 votes
    23 posts
    303 views
  • Microsoft sued by authors over use of books in AI training

    Technology
    114 votes
    4 posts
    32 views
    isaamoonkhgdt_6143@lemmy.zip
    The writers alleged in the complaint that Microsoft used a collection of nearly 200,000 pirated books to train Megatron, an algorithm that gives text responses to user prompts. Which Megatron are we referring to? This [image] or this [image]?
    73 votes
    15 posts
    22 views
    Same. I don't subscribe to their "contact you via recruiters" option though, since you get flooded with Indian recruiters pushing questionable positions and jobs I'm not eligible for. Unfortunately it wasn't helping for the field I was trying to get into, so I found just a regular job in the meantime.
  • Role of Email Deliverability Consulting in ROI

    Technology
    0 votes
    1 post
    15 views
    No one has replied
  • 195 Stimmen
    31 Beiträge
    96 Aufrufe
    isveryloud@lemmy.ca
    It's a loaded term that should be replaced with a more precise definition. A dog whistle is a loaded term used to tag a specific target with a large baggage of information, but in a way that only people in the "in group" can understand; hence "dog whistle": only heard by dogs. In the case of the word "degeneracy", it's a vague word that has often been used to attack, among other things, LGBTQ people and their allies, as well as non-religious people. The term is vague enough that the user can easily weasel out of criticism for using it, but the target audience gets the message loud and clear: "[target] should be attacked for being [thing]." Another example of such a word would be "woke".
  • 311 Stimmen
    37 Beiträge
    159 Aufrufe
    Same, especially when searching technical or niche topics. Since there aren't a ton of results specific to the topic, mostly semi-related results appear in the first page or two of a regular (non-Gemini) Google search, just due to the higher popularity of those webpages compared to the relevant ones. Even the relevant webpages bury the answer I'm looking for in non-relevant or semi-relevant information.

    I don't know enough about it to be sure, but Gemini is probably just scraping a handful of websites on the first page, and since most of those are only semi-related, the resulting summary is a classic example of garbage in, garbage out. I also suspect there's something in the code that looks for information shared across multiple sources and prioritizes it over something that appears on only one particular page (possibly the sole result with the information you need). Then it phrases the summary as a direct answer to your query, misrepresenting the actual information on the pages it scraped. At least Gemini gives sources, I guess.

    The thing that gets on my nerves the most is how often people quote the summary as proof of something without checking the sources. It was bad before the rollout of Gemini, but at least back then Google was mostly scraping text and presenting it with little modification, along with a direct link to the webpage. Now it's an LLM generating text phrased as a direct answer to a question (itself AI-generated from your search query) using AI-summarized data points scraped from multiple webpages.

    It obfuscates the source material further, but I also can't help feeling it exposes a little of the behind-the-scenes fuckery Google has been doing for years before Gemini: how it bastardizes your query by interpreting it as a question, then prioritizes homogeneous results that agree on the "answer" to your "question". They've been doing this to some extent for years; they just didn't share how they interpreted your query.
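    A toy sketch of the cross-source consensus behavior speculated about above (this is not Google's actual pipeline; the function and data are made up for illustration): a claim repeated across many scraped pages outranks a unique answer that appears on only one relevant page.

        # Hypothetical illustration of "consensus over sources": a claim's
        # score is the number of distinct pages repeating it, so the lone
        # relevant page loses to the popular semi-related ones.
        from collections import Counter

        def rank_claims_by_consensus(pages):
            """pages: list of lists of claim strings, one inner list per scraped page."""
            support = Counter()
            for page in pages:
                for claim in set(page):   # count each page at most once per claim
                    support[claim] += 1
            return [claim for claim, _ in support.most_common()]

        pages = [
            ["X works in all versions"],      # popular, semi-related pages
            ["X works in all versions"],
            ["X works in all versions"],
            ["X only works before v2.0"],     # the single relevant page
        ]
        print(rank_claims_by_consensus(pages)[0])   # -> "X works in all versions"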
  • 119 Stimmen
    10 Beiträge
    57 Aufrufe
    Active ISA would be a disaster. My fairly modern car is unable to reliably detect posted or implied speed limits: sometimes it overshoots the actual limit by more than double, and sometimes it demands a speed more than three quarters below it. The problem is that detection is, and will have to be, done optically, and GPS speed measurement can also be surprisingly unreliable, especially in underground settings like long underpasses and tunnels. If the system were based on something reliable, like local wireless communication from the speed-limit postings themselves, it would be a different matter, though that would come with a significant risk of abuse. Passive ISA was also the first thing I disabled. And I do abide by posted speed limits.
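    To make the failure mode concrete, here is a minimal hypothetical sketch of the arbitration an active ISA system has to perform (all names and thresholds are invented; real systems are far more involved). A confidently misread sign wins over correct map data, which is exactly the "overshoots by more than double" case:

        # Hypothetical arbitration between a sign-reading camera and map data.
        def arbitrate_speed_limit(camera_kph, camera_conf, map_kph, map_has_gps_fix):
            """Return the limit to enforce, or None to not intervene."""
            if camera_kph is not None and camera_conf >= 0.9:
                return camera_kph    # trust a high-confidence sign detection
            if map_has_gps_fix:
                return map_kph       # fall back to map data when GPS is reliable
            return None              # tunnel / no fix: safer to do nothing

        # A 30 km/h sign misread as 130 with high confidence beats the map's 50.
        print(arbitrate_speed_limit(camera_kph=130, camera_conf=0.95,
                                    map_kph=50, map_has_gps_fix=True))  # -> 130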
  • Anthropic's AI is Writing Its Own Blog - Oh Wait. No It's Not

    Technology
    67 votes
    4 posts
    31 views
    mrjgyfly@lemmy.world
    They absolutely will. AI is great if you drastically lower your standards.
  • 478 Stimmen
    81 Beiträge
    314 Aufrufe
    douglasg14b@lemmy.world
    Did I say that it did? No? Then why the rhetorical question about something I never stated?

    Now that we're past that: I'm not sure I think it's okay, but I at least recognize that it's normalized within society, and has been for 70+ years now. The problem happens with how the data is used, and particularly abused.

    If you walk into my store, you expect that I am monitoring you. You expect that you are on camera and that your shopping patterns, like all foot traffic, are probably being analyzed and aggregated. What you buy is tracked, at least in aggregate, by default; that's just volume tracking and prediction. Suffice it to say that broad customer behavior analysis has been a thing for a couple of generations now, at least. When you go to a website, why would you think it is not keeping track of where you go and what you click on in the same manner?

    Having said that, the real problems we experience come from how this data is misused outside what its scope should be, and we should have strong regulatory agencies forcing compliance in how this data is used and enforcing the right to privacy for people who want their data removed.
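    As a minimal sketch of the aggregate-versus-individual distinction drawn above (purely illustrative; the names are hypothetical): volume tracking can keep hourly counts without retaining any identity or exact timestamp.

        # Hypothetical aggregate-only foot-traffic counter: only the hour
        # bucket is stored; no visitor ID, no raw timestamps.
        from collections import defaultdict
        from datetime import datetime

        hourly_counts = defaultdict(int)

        def record_visit(ts: datetime):
            # Truncate to the hour so individual visits are indistinguishable.
            hourly_counts[ts.replace(minute=0, second=0, microsecond=0)] += 1

        record_visit(datetime(2025, 6, 27, 14, 30))
        record_visit(datetime(2025, 6, 27, 14, 45))
        # Both visits collapse into one bucket: {2025-06-27 14:00 -> 2}
        print(dict(hourly_counts))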