
We Should Immediately Nationalize SpaceX and Starlink

Technology
  • Bluesky is rolling out age verification in the UK

    Technology technology
    160 votes
    40 posts
    198 views
    3dcadmin@lemmy.relayeasy.com
    You know that the new Online Safety Act mandates age verification for pretty much anything, don't you?
  • 347 votes
    17 posts
    109 views
    Great interview! The whole proof-of-work approach is fascinating, and it reminds me of a very old email concept he mentions in passing, where an email server would only accept a message if the sender agreed to pay like a dollar. The recipient would then accept the message, which would refund the dollar. So this would end up costing legitimate senders nothing, but it would require spammers to front way too much money to make email spamming affordable. In his version, the sender must do a processor-intensive computation instead, which is fine at the volume legitimate senders use but prohibitive for spammers.
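    The computational version described above is essentially hashcash-style proof of work: the sender must find a token whose hash has a certain number of leading zero bits, which takes many hash attempts to mint but only one hash to verify. A minimal sketch (the function names and the 20-bit default are illustrative, not from any particular spec):

    ```python
    import hashlib
    import itertools

    def leading_zero_bits(digest: bytes) -> int:
        """Count leading zero bits in a hash digest."""
        n = int.from_bytes(digest, "big")
        return len(digest) * 8 - n.bit_length()

    def mint_stamp(resource: str, bits: int = 20) -> str:
        """Sender's side: search counters until SHA-1(resource:counter)
        has at least `bits` leading zero bits. Expected cost: ~2**bits hashes,
        so bulk spamming becomes expensive."""
        for counter in itertools.count():
            stamp = f"{resource}:{counter}"
            if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits:
                return stamp

    def verify_stamp(stamp: str, resource: str, bits: int = 20) -> bool:
        """Receiver's side: a single hash, nearly free -- and the stamp is
        bound to this recipient, so it can't be reused for someone else."""
        if not stamp.startswith(resource + ":"):
            return False
        return leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits
    ```

    With 20 bits of difficulty, minting one stamp takes on the order of a million hashes, which is negligible for a person sending a few emails but ruinous at spam volumes.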
  • 138 votes
    15 posts
    63 views
    toastedravioli@midwest.social
    ChatGPT is not a doctor. But models trained on imaging can actually be a very useful tool for doctors to use. Even years ago, just before the AI “boom”, researchers were asking doctors for details on how they examine patient images and then training models on that. They found that the AI was “better” than doctors specifically because it followed the doctors' guidelines 100% of the time, thereby eliminating any bias from the doctor that might interfere with following their own training. Of course, the splashy headline “AI better than doctors” was ridiculous. But it does show the benefit of having a neutral tool for doctors to use, especially when looking at images from people who are outside the typical demographics that much medical training is based on. (As in, mostly white men. For example, everything they train doctors on regarding knee imaging comes from images of the knees of coal miners in the UK some decades ago.)
  • 311 votes
    37 posts
    159 views
    Same, especially when searching technical or niche topics. Since there aren't many results specific to the topic, mostly semi-related results will appear in the first page or two of a regular (non-Gemini) Google search, just due to the higher popularity of those webpages compared to the relevant ones. Even the relevant webpages will have lots of non-relevant or semi-relevant information surrounding the answer I'm looking for.

    I don't know enough about it to be sure, but Gemini is probably just scraping a handful of websites on the first page, and since most of those are only semi-related, the resulting summary is a classic example of garbage in, garbage out. I also suspect there's something in the code that looks for information shared across multiple sources and prioritizes that over something found on only one particular page (possibly the sole result with the information you need). Then it phrases the summary as a direct answer to your query, misrepresenting the actual information on the pages it scraped. At least Gemini gives sources, I guess.

    The thing that gets on my nerves the most is how often I see people quote the summary as proof of something without checking the sources. It was bad before the rollout of Gemini, but at least back then Google was mostly scraping text and presenting it with little modification, along with a direct link to the webpage. Now it's an LLM generating text phrased as a direct answer to a question (that was also AI-generated from your search query) using AI-summarized data points scraped from multiple webpages. It's obfuscating the source material further, but I also can't help feeling it exposes a little of the behind-the-scenes fuckery Google has been doing for years before Gemini: how it bastardizes your query by interpreting it into a question, and then prioritizes homogeneous results that agree on the "answer" to your "question". For years they've been doing this to a certain extent; they just didn't share how they interpreted your query.
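    The commenter's guess about cross-source prioritization can be made concrete with a toy sketch. This is purely illustrative (nothing here reflects Gemini's actual implementation): if a summarizer keeps only claims that appear on multiple scraped pages, the one niche page with your actual answer gets filtered out, producing exactly the "garbage in, garbage out" effect described above.

    ```python
    from collections import Counter

    def summarize_by_consensus(pages: list[set[str]], min_sources: int = 2) -> list[str]:
        """Hypothetical consensus filter: keep only claims asserted by at
        least `min_sources` of the scraped pages, ranked by how many pages
        agree. A claim found on a single page -- possibly the one relevant
        result -- is silently dropped."""
        counts = Counter(claim for page in pages for claim in page)
        return [claim for claim, n in counts.most_common() if n >= min_sources]

    # Three scraped pages: two popular semi-related ones, and the single
    # niche page that actually answers the query.
    pages = [
        {"X is a popular tool", "X was released in 2020"},
        {"X is a popular tool", "X was released in 2020"},
        {"X is a popular tool", "niche fix: set the --legacy flag"},
    ]
    print(summarize_by_consensus(pages))  # the niche fix never makes the summary
    ```

    The ranking also explains the "homogeneous results" observation: whatever the popular pages agree on floats to the top, regardless of relevance.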
  • 72 votes
    9 posts
    53 views
    "Mr. President, could you describe supersonic flight?" (said with the tone of "for all us dumbasses") "Oh man, there's going to be a barrier, but it's invisible, but it's the greatest barrier man has ever known." I gotta stop.
  • I'm making a guide to Pocket alternatives: getoffpocket.com

    Technology technology
    160 votes
    30 posts
    134 views
    Update: https://lemmy.world/post/31554728
  • 20 votes
    1 post
    12 views
    No one has replied
  • 32 votes
    8 posts
    46 views
    Apparently, it was required to be allowed in that state. Reading a bit more: during the sentencing phase in that state, people making victim impact statements can choose their format of expression, and it's entirely allowed to make statements about what other people would have said. So the judge didn't actually have grounds to deny it. There's no jury during that phase; it's just the judge listening to free-form requests in both directions. It's gross, but the rules very much allow the sister to make a statement about what she believes her brother would have wanted to say, in whatever format she wanted. (From: https://sh.itjust.works/comment/18471175)

    As for whether it could influence the sentence: from what I've seen, to be fair, judges' decisions have varied wildly regardless, sadly, and sentences should be more standardized. I wonder what it would've been otherwise.