
Salt Lake City plans to implement AI-assisted 911 call triaging to handle ~30% of its ~450K non-emergency calls per year

Technology
  • 236 votes
    37 posts
    181 views
    AI has some use, but it always needs human oversight, and the final decision must be made by a human professional. If you use AI to speed up tasks, you can tell whether its output is valid, and you keep the final decision, then you can use it safely. But if you let AI decide on and execute important tasks essentially autonomously, you have a recipe for disaster. Fully autonomous, mistake-free AI is a naive pipe dream that I don't see on the horizon at all.
  • Nyamerican Jacket – Where Modern America Meets Timeless Fashion

    Technology
    0 votes
    1 post
    6 views
    No one has replied
  • 671 votes
    41 posts
    369 views
    patatahooligan@lemmy.world
    No, there's no way to automatically make something become law. A successful petition just forces the European Commission to discuss it and potentially propose legislation. Even though it doesn't force anything to happen, there is an incentive for the Commission to seriously consider it, as there is probably a political cost to officially denying a motion that has demonstrably concerned a large number of people.
  • Inside the face scanning tech behind social media age limits

    Technology
    25 votes
    1 post
    15 views
    No one has replied
  • 363 votes
    8 posts
    81 views
    No, I don't think there really were many, so your point is valid. But the law works like that: things are in a grey area or in limbo until they are defined into law. That means the new law can be written either to protect consumer privacy, to make it legal to the letter to violate consumer privacy like this bill does, or some weird in-between where some shady stuff is still explicitly allowed but consumers are generally protected from specific privacy abuses. This bill being the second option is bad, because once laws are written, it typically takes a long time to reverse them.
  • 310 votes
    37 posts
    344 views
    Same, especially when searching technical or niche topics. Since there aren't many results specific to the topic, mostly semi-related results appear in the first page or two of a regular (non-Gemini) Google search, just because those pages are more popular than the relevant ones. Even the relevant pages bury the answer I'm looking for in non-relevant or semi-relevant information.

    I don't know enough about it to be sure, but Gemini is probably just scraping a handful of websites on the first page, and since most of those are only semi-related, the resulting summary is a classic example of garbage in, garbage out. I also suspect there's something in the code that looks for information shared across multiple sources and prioritizes it over something that appears on only one particular page (possibly the sole result with the information you need). Then it phrases the summary as a direct answer to your query, misrepresenting the actual information on the pages it scraped. At least Gemini gives sources, I guess.

    The thing that gets on my nerves the most is how often I see people quote the summary as proof of something without checking the sources. It was bad before the rollout of Gemini, but at least back then Google was mostly scraping text and presenting it with little modification, along with a direct link to the webpage. Now it's an LLM generating text phrased as a direct answer to a question (that was itself AI-generated from your search query), using AI-summarized data points scraped from multiple webpages. It obfuscates the source material further, but I also can't help feeling it exposes a little of the behind-the-scenes fuckery Google has been doing for years before Gemini: how it bastardizes your query by interpreting it into a question, and then prioritizes homogeneous results that agree on the "answer" to your "question". For years they've been doing this to some extent; they just didn't share how they interpreted your query.
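    The "prioritize information shared across multiple sources" behavior the comment speculates about can be sketched in a few lines. This is purely a hypothetical illustration, not Google's actual pipeline: normalize each scraped page into rough sentence-level "claims" and rank claims by how many distinct sources repeat them. A claim that appears on only one page, possibly the single relevant result, ranks last.

    ```python
    import re
    from collections import Counter

    def normalize(sentence: str) -> str:
        """Crude normalization so near-identical sentences collapse together."""
        return " ".join(re.findall(r"[a-z0-9]+", sentence.lower()))

    def rank_by_consensus(snippets: list[str]) -> list[tuple[str, int]]:
        """Rank claims by the number of distinct sources that contain them."""
        support = Counter()
        for page in snippets:
            # Count each claim once per page, not once per occurrence.
            claims = {normalize(s) for s in re.split(r"[.!?]", page) if s.strip()}
            support.update(claims)
        return support.most_common()

    pages = [
        "Widgets use 5V power. Order on our store.",
        "Widgets use 5V power! Free shipping.",
        "Actually, revision B widgets use 3.3V power.",  # the one relevant page
    ]
    for claim, n_sources in rank_by_consensus(pages):
        print(n_sources, claim)
    ```

    With these toy inputs, the duplicated marketing claim outranks the lone page that carries the correction: garbage in, garbage out, with consensus amplifying the garbage.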
  • 323 votes
    137 posts
    1k views
    I think it would be best if that were a user setting, like dark mode. It would obviously be a popular setting to adjust. If they don't do that, there will doubtless be Greasemonkey and other user scripts to hide it.
  • AI model collapse is not what we paid for

    Technology
    84 votes
    20 posts
    172 views
    I share your frustration. I went nuts about this the other day. It was in the context of searching on a Discord server rather than Google, but it was so aggravating because of how the "I know better than you" attitude is everywhere in tech nowadays.

    The Discord server was a reading group, and I was searching for discussion of a recent book they'd studied, by someone named "Copi". At first I didn't use quotation marks, and my results were swamped with messages containing the word "copy". At that point I was fairly chill and just added quotation marks to my query to emphasise that it definitely was "Copi" I wanted. I was still swamped with messages with "copy", and it drove me mad, because there is literally no way to say "fucking use the terms I give you and not the ones you think I want". The software example you give is a great case where that ability would be really useful.

    TL;DR: Solidarity in rage
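    The "Copi" vs "copy" collision described above is a classic stemming artifact: many search engines index case-folded, stemmed tokens, so distinct words can collapse to the same index term, and quoting the query does nothing if the engine stems anyway. The toy stemmer below is not Discord's real pipeline, just a sketch of one Porter-style rule (trailing "y" becomes "i") that produces exactly this collision, next to the verbatim match that quoting is supposed to give you.

    ```python
    import re

    def stem(token: str) -> str:
        """Toy Porter-style rule: case-fold, then map a trailing 'y' to 'i'."""
        token = token.lower()
        return token[:-1] + "i" if token.endswith("y") else token

    def stemmed_search(query: str, messages: list[str]) -> list[str]:
        """Match on stemmed tokens, the way a stemming index does."""
        q = stem(query)
        return [m for m in messages
                if q in {stem(t) for t in re.findall(r"\w+", m)}]

    def exact_search(query: str, messages: list[str]) -> list[str]:
        """What quoting *should* do: verbatim, case-sensitive matching."""
        return [m for m in messages if query in m]

    messages = ["Please copy the reading list",
                "Copi's logic textbook is dense"]
    print(stemmed_search("Copi", messages))  # both messages match
    print(exact_search("Copi", messages))    # only the Copi message
    ```

    Because both "Copi" and "copy" stem to "copi", the stemmed index cannot tell them apart, which is why quotation marks change nothing unless the engine also keeps an unstemmed index to fall back on.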