
Tough, Tiny, and Totally Repairable: Inside the Framework 12

Technology
  • 15 votes
    7 posts
    0 views
    dabster291@lemmy.zip
    Why does the title use a Korean letter as a divider?
  • Microsoft Tests Removing Its Name From Bing Search Box

    Technology technology
    54 votes
    11 posts
    0 views
    alphapuggle@programming.dev
    Worse. Office.com now takes me to m365.cloud.microsoft, which as of today takes me to a fucking Copilot chat window. Of course there's no way to disable it, because gee, why would anyone want to do that?
  • The AI girlfriend guy - The Paranoia Of The AI Era

    Technology technology
    7 votes
    4 posts
    6 views
    Saying 'don't downvote' is the flammable/inflammable conundrum: both 'don't' and 'do' parse as 'do'.
  • Companies are using Ribbon AI, an AI interviewer, to screen candidates.

    Technology technology
    56 votes
    52 posts
    6 views
    I feel like I could succeed in an LLM selection process. I could sell my skills to a robot, and I could get an LLM to help. It's a long way ahead of keyword-based automatic selectors. At least an LLM is predictable; human judges are so variable.
  • 106 votes
    1 post
    3 views
    No one has replied
  • AI model collapse is not what we paid for

    Technology technology
    84 votes
    20 posts
    4 views
    I share your frustration. I went nuts about this the other day. It was in the context of searching on a Discord server rather than Google, but it was so aggravating because of how the "I know better than you" attitude is everywhere in tech nowadays. The Discord server was a reading group, and I was searching for discussion of a recent book they'd studied, by someone named "Copi". At first I didn't use quotation marks, and my results were swamped with messages containing the word "copy". At that point I was fairly chill and just added quotation marks to my query to emphasise that it definitely was "Copi" I wanted. I was still swamped with messages containing "copy", and it drove me mad, because there is literally no way to say "use the terms I give you, not the ones you think I want". The software example you give is a great illustration of when that ability would be really useful. TL;DR: Solidarity in rage
  • 4 votes
    2 posts
    3 views
    Epic is a piece of shit company. The only reason they are fighting this fight with Apple is that they want some of Apple's platform fees for themselves. Period. The fact that they managed to convince a bunch of simpletons that they are somehow Robin Hood coming to free them from the tyrant (who was actually protecting all those users all along) is laughable. Apple created the platform; Apple managed it, curated it, and controlled it. That gives them the right to profit from it. You might dislike that, but guess what? Nobody forced you to buy it. Buy Android if Fortnite is so important to you. Seriously. Please. We won't miss you. Epic thinks they have a right to profit from Apple's platform without paying for all the work it took to reach over 1 billion users. That is simply wrong. They should build their own platform and their own app store and convince 1 billion people to use it. The reason they aren't doing that is that they know they will never be as successful as Apple has been.
  • People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

    Technology technology
    0 votes
    2 posts
    4 views
    tetragrade@leminal.space
    I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization and advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful... In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg of the social effects, and nobody's prepared for it. The models may of course be aware of this, and be making the same calculations. Or, they will be.