ChatGPT 'got absolutely wrecked' by Atari 2600 in beginner's chess match — OpenAI's newest model bamboozled by 1970s logic

Technology
  • 254 votes
    67 posts
    0 views
    L
    Maybe you're right: is there verification? Neither platform's content policy (YouTube's or TikTok's) clearly lays out rules on those words. I only find unverified claims: some write it started at YouTube, others claim TikTok. They claim YouTube demonetizes & TikTok shadowbans. They generally agree that content restrictions by these platforms led to the propagation of circumlocutions like "unalive" & "SA".

    TikTok's policy outlines their moderation methods, which include removal and ineligibility for the For You feed. Given their policy on self-harm & automated removal of potential violations, their policy is to effectively & recklessly censor such language.

    Generally, censorship is suppression of expression. Censorship doesn't exclusively mean content removal, though they're doing that, too. (Digression: revisionism & whitewashing are forms of censorship.) Regardless of how they censor or induce self-censorship, they're chilling inoffensive language pointlessly. While as private entities they are free to moderate as they please, it's unnecessary, & the effect is an obnoxious affront to self-expression that's contorting language for the sake of avoiding idiotic restrictions.
  • 180 votes
    13 posts
    0 views
    D
    There is a huge difference between an algorithm using real-world data to produce a score that a panel of experts uses to make a determination, and using an LLM to screen candidates. One has verifiable, reproducible results that can be checked and debated; the other does not. The final call doesn't matter if a computer program using an unknown, unreproducible algorithm screens you out before it's ever made. This is what we are facing: pre-determined decisions that no human being is held accountable for.

    Is this happening right now? Yes, without a doubt. People are no longer making many of the healthcare decisions that determine insurance coverage; unaccountable computers are. You may have some ability to disagree, but for how long? Soon there will be no way to reach a human about an insurance decision. This is already happening, and people should be very anxious. Hearing that UnitedHealthcare has been forging DNRs and denying things like stroke treatment for elders is disgusting. We have major issues that are not going away, and we are blatantly ignoring them.
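    The reproducibility point in the comment above can be made concrete with a minimal sketch (the function and its weights are hypothetical, not from any real screening system): a rule-based score is a pure function of its inputs, so anyone with the inputs and the published weights can re-derive and audit the result, which is exactly what an opaque, nondeterministic screener does not allow.

    ```python
    # Hypothetical rule-based candidate score: every weight is visible,
    # so the same inputs always yield the same output and a panel can
    # check and debate the calculation.

    def rule_based_score(years_experience: int, certifications: int) -> float:
        """Transparent, deterministic scoring with published weights."""
        return 2.0 * years_experience + 5.0 * certifications

    # Running it twice on the same candidate gives identical results,
    # which is what makes the decision auditable.
    first = rule_based_score(years_experience=4, certifications=2)
    second = rule_based_score(years_experience=4, certifications=2)
    assert first == second  # reproducible by construction
    print(first)
    ```

    An LLM-based screener offers no analogous guarantee: its output can vary between runs and its decision criteria cannot be enumerated for review.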
  • 137 votes
    2 posts
    2 views
    treadful@lemmy.zip
    https://archive.is/oTR8Q
  • 59 votes
    3 posts
    3 views
    D
    This sounds like Pokémon.
  • The Internet of Consent

    Technology
    11 votes
    1 post
    1 view
    No one has replied
  • 62 votes
    12 posts
    0 views
    merde@sh.itjust.works
    Is the linked article or the title edited? This was a post about VA GPT.
  • OpenAI plans massive UAE data center project

    Technology
    0 votes
    4 posts
    2 views
    V
    TD Cowen (basically the US arm of one of the largest Canadian investment banks) did an extensive report on the state of AI investment. What they found was that despite all their big claims about the future of AI, Microsoft were quietly allowing letters of intent for billions of dollars' worth of new compute capacity to expire. Basically, scrapping future plans for expansion, but in a way that's not showy and doesn't require any kind of big announcement. The equivalent of promising to be at the party and then just not showing up.

    Not long after this reporting came out, it was confirmed by Microsoft, and shortly afterwards it emerged that Amazon was doing the same thing. Ed Zitron has a really good write-up on it: https://www.wheresyoured.at/power-cut/

    Amazon isn't the big surprise; they've always been the most cautious of the big players on the whole AI thing. Microsoft, on the other hand, are very much trying to play things both ways. They know AI is fucked, which is why they're scaling back, but they've also invested a lot of money into their OpenAI partnership, so now they have to justify that expenditure, which means convincing investors that consumers absolutely love their AI products and are desperate for more.

    As always, follow the money. Stuff like the Three Mile Island thing is mostly just applying for permits and so on at this point; relatively small investments. As soon as it comes to big money hitting the table, they're pulling back. That's how you know how they really feel.
  • 149 votes
    19 posts
    5 views
    C
    Got it. At that point (extremely high voltage) you'd need suppression at the panel, which I would hope people have inline but wouldn't expect, unlike an LVD.