
AI agents wrong ~70% of time: Carnegie Mellon study

Technology
260 votes, 101 posts, 5 views
  • 68 votes
    5 posts
    5 views
    adespoton@lemmy.ca
    Most major content producers have agreements with YouTube such that, as their content is discovered, all monetization goes to the rights holders. In general this seems like a pretty good idea, and better than copyright maximalism. However, I’ve had original works of my own “monetized by rights holder” because they used my work (with permission) in one of their products and have now co-opted all expressions of my work on YouTube. So the system isn’t perfect.
  • 0 votes
    1 post
    7 views
    No one has replied
  • FairPhone AMA

    Technology
    14 votes
    5 posts
    28 views
    alcan@lemmy.world
    Ask Me Anything
  • 349 votes
    72 posts
    141 views
    Sure, the internet is more practical. Consider the odds of being caught in the time required to execute a decent strike plan, even one as vague as "we're going to Amerika and we're going to hit 50 high profile targets on July 4th, one in every state" (Dear NSA analyst, this is entirely hypothetical). Your agents spread out to the field and start assessing, from the ground, the highest-impact targets attainable with their resources, with extensive back and forth between the field and central command daily for 90 days of prep. But it's all carried out over 270 different active social media channels as innocuous-looking photo exchanges, with 540 pre-arranged algorithms hiding the messages in the noise of the image bits. Chances of security agencies picking this up from the communication itself? About 100x less than them noticing 50 teams of activists deployed to 50 states at roughly the same time, even if those teams never communicate anything.

    HF (more often called shortwave) is well suited for the numbers game: a deep-cover agent lying in wait, potentially for years, whose only "tell" is an odd habit of listening to the radio most nights. All they're waiting for is a binary message: if you hear the sequence 3 17 22, you are to make contact for further instructions. That message may come at any time, or may not come for a decade. These days you would make that contact via the internet, and sure, it would be more practical to hide the "make contact" signal in the internet too, but shortwave is a longstanding technology with known operating parameters.
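    The "messages hidden in the noise of the image bits" idea above is classic least-significant-bit (LSB) steganography. Here is a minimal sketch of the concept, assuming raw pixel bytes rather than a real image file and omitting any pre-shared key or scrambling; the function names and the 1024-byte cover buffer are illustrative, not taken from any real tool.

    ```python
    # Minimal least-significant-bit (LSB) steganography sketch.
    # Illustrative only: it works on a raw bytearray standing in for
    # image pixel data, and there is no pre-shared key or scrambling.
    import os


    def embed(pixels: bytearray, message: bytes) -> bytearray:
        """Hide `message` in the lowest bit of successive pixel bytes."""
        bits = []
        for byte in message:
            bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
        if len(bits) > len(pixels):
            raise ValueError("cover data too small for message")
        out = bytearray(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & 0xFE) | bit  # change only the lowest bit
        return out


    def extract(pixels: bytearray, length: int) -> bytes:
        """Read `length` bytes back out of the lowest bits."""
        message = bytearray()
        for b in range(length):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (pixels[b * 8 + i] & 1)
            message.append(byte)
        return bytes(message)


    if __name__ == "__main__":
        cover = bytearray(os.urandom(1024))   # stand-in for photo pixel data
        stego = embed(cover, b"3 17 22")      # the hypothetical trigger sequence
        print(extract(stego, 7))              # b'3 17 22'
    ```

    Flipping only the lowest bit changes each affected pixel byte by at most 1, which is why a payload like this is statistically hard to distinguish from ordinary sensor noise in a photo.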
  • 117 votes
    4 posts
    19 views
    "encourage innovation in the banking and financial system"
    What "innovation" do we need in the banking system?
  • 83 votes
    19 posts
    69 views
    The cost of consuming media doesn’t match its worth. I never used ad blockers until ads became invasive and disruptive.
  • 311 votes
    37 posts
    41 views
    Same, especially when searching technical or niche topics. Since there aren't a ton of results specific to the topic, mostly semi-related results appear in the first page or two of a regular (non-Gemini) Google search, simply because those webpages are more popular than the relevant ones. Even the relevant webpages bury the answer I'm looking for in lots of non-relevant or semi-relevant information.

    I don't know enough about it to be sure, but Gemini is probably just scraping a handful of websites on the first page, and since most of those are only semi-related, the resulting summary is a classic example of garbage in, garbage out. I also suspect there's something in the code that looks for information shared across multiple sources and prioritizes it over something that appears on only one page (possibly the sole result with the information you need). Then it phrases the summary as a direct answer to your query, misrepresenting the actual information on the pages it scraped. At least Gemini gives sources, I guess.

    The thing that gets on my nerves the most is how often I see people quote the summary as proof of something without checking the sources. It was bad before the rollout of Gemini, but at least back then Google was mostly scraping text and presenting it with little modification, along with a direct link to the webpage. Now it's an LLM generating text phrased as a direct answer to a question (one that was itself AI-generated from your search query), using AI-summarized data points scraped from multiple webpages. It obfuscates the source material further, but I also can't help feeling it exposes a little of the behind-the-scenes fuckery Google has been doing for years before Gemini: bastardizing your query by interpreting it into a question, then prioritizing homogeneous results that agree on the "answer" to your "question". They've been doing this to some extent for years; they just didn't show you how they interpreted your query.
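    The "prioritize what multiple sources agree on" behaviour guessed at above can be illustrated with a toy ranker. This is only a sketch of that hypothesis, not Google's or Gemini's actual pipeline; the page snippets and the function name are made up.

    ```python
    # Toy illustration of the hypothesis: rank candidate "facts" by how
    # many scraped pages repeat them, so the lone page with the unique
    # (but relevant) answer loses to semi-related consensus.
    from collections import Counter


    def rank_claims(pages: list[list[str]]) -> list[tuple[str, int]]:
        """Each inner list holds claims extracted from one page.
        Returns claims ordered by how many pages mention them."""
        counts = Counter()
        for claims in pages:
            counts.update(set(claims))  # count each page at most once per claim
        return counts.most_common()


    if __name__ == "__main__":
        pages = [
            ["use flag --foo", "restart the service"],   # semi-related page
            ["use flag --foo", "restart the service"],   # near-duplicate page
            ["use flag --foo"],                          # another semi-related page
            ["actually you need --bar=legacy"],          # the one relevant page
        ]
        for claim, n_pages in rank_claims(pages):
            print(f"{n_pages} page(s): {claim}")
    ```

    The single page carrying the unique claim sorts to the bottom, which is the "sole result with the information you need" getting drowned out, i.e. garbage in, garbage out.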
  • Reddit will help advertisers turn ‘positive’ posts into ads

    Technology
    366 votes
    61 posts
    182 views
    noodlesreborn@lemmy.world
    Mmmmmm I love not being on Reddit