
Unless users take action, Android will let Gemini access third-party apps

Technology
  • Get Your Filthy ChatGPT Away From My Liberal Arts

    Technology technology
    142 votes
    12 posts
    4 views
    Indeed—semicolons are usually associated with LLMs! But that’s not all! Always remember: use your tools! An LLM „uses“ all types of quotation marks.
  • 113 votes
    10 posts
    38 views
    I admire your positivity. I don't share it, though, because from what I have seen, even if there are open weights, whoever has the biggest datacenter will hold the most intelligent and performant model in the future. It's very similar to how, even though storage space is very cheap today, large companies hold all the data anyway. AI will go the same way, and so the megacorps will own, and to some extent already do own, not only our data but also our thoughts and the ability to modify them. I mean, sponsored prompt injection is just the first thought-modifying thing: imagine Google's sponsored search hits, but instead it's a hyperconvincing AI response that subtly nudges you toward a certain brand or way of thinking. That absolutely terrifies me, especially with all the research Meta has done on how to manipulate people's mood and behaviour through which social media posts they are shown.
  • 4 votes
    1 post
    8 views
    No one has replied
  • 17 votes
    5 posts
    10 views
    Why would the article’s credited authors pass up the chance to improve their own health status and health satisfaction?
  • 17 votes
    10 posts
    39 views
    That's why it's not brute force anymore.
  • 180 votes
    13 posts
    5 views
    There is a huge difference between an algorithm that uses real-world data to produce a score which a panel of experts then uses to make a determination, and using an LLM to screen candidates. One has verifiable, reproducible results that can be checked and debated; the other does not. The final call does not matter if a computer program running an unknown and unreproducible algorithm screens you out before it. This is what we are facing: pre-determined decisions that no human being is held accountable for. Is this happening right now? Yes it is, without a doubt. People are no longer making many of the healthcare decisions that determine insurance coverage; computers that are not accountable are. You may have some ability to disagree, but for how long? Soon there will be no way to reach a human about an insurance decision. This is already happening. People should be very anxious. Hearing that United Healthcare has been forging DNRs and denying things like stroke treatment for elders is disgusting. We have major issues that are not going away, and we are blatantly ignoring them.
  • MCP 101: An Introduction to the MCP Standard

    Technology technology
    5 votes
    2 posts
    17 views
    Really? [image: 60a7b1c3-946c-4def-92dd-c04169f01892.gif]
  • 30 votes
    6 posts
    31 views
    The thing about compelling lies is not that they are new, just that they are easier to scale. The most common effect of compelling lies is their ability to get well-intentioned people to support malign causes and give their money to fraudsters. So expect that to expand, kind of like it already has been. The big question for me is what the response will be. Will we make lying illegal? Will we become a world of ever more paranoid isolationists, returning to clans, families, and households as the largest social groups you can trust? Will most people even have the intelligence to see what is happening and respond? Or will most people be turned into info-puppets, their behaviour controlled by manipulation of their information diet to an unprecedented degree? I don't know.