
Microsoft Copilot joins ChatGPT at the feet of the mighty Atari 2600 Video Chess

Technology
  • Art has no rules my man.

    You can do all kinds of mental gymnastics you want but there’s no difference between an artist looking at Frank Frazetta’s art and basing their style off of it and an AI doing the same thing. You might not like it, but it’s the truth.

    Do I think the art has the same value? Not necessarily. But I also never thought that all art has the same value. There has always been trash production line art and good art.

    But also I have to say that I’ve already seen some people use AI as a tool for art and make some really cool stuff that I don’t think any other artist would have made and it’s more unique than most of the stuff out there. You can use it as the tool it is or complain and cry about it to no avail.

    The chef example is especially good, since most chefs are just following recipes and simply altering a few things here and there. AI essentially does the same thing. Honestly, no one has come up with a good argument to change my mind that the way AI operates is exactly how humans learn and create new things. If you've engaged in art, you know that you are always imitating and taking from the art you consume to make your own.

    Fuck that.
    I'll prove you wrong right now.

    I want you to paint me a picture of a cow in a field.
    Did I do that?

    Nope. I commissioned you to.

    Now if you, the commissioned guy, used AI to make the item, how much credit should you get?
    None. Describing what you want to a machine is child's play.

    Human adults create. Machines mimic.

    Humans who think AI is art are liars and con men afraid of being caught.

  • Fuck that.
    I'll prove you wrong right now.

    I want you to paint me a picture of a cow in a field.
    Did I do that?

    Nope. I commissioned you to.

    Now if you, the commissioned guy, used AI to make the item, how much credit should you get?
    None. Describing what you want to a machine is child's play.

    Human adults create. Machines mimic.

    Humans who think AI is art are liars and con men afraid of being caught.

    What you are describing has nothing to do with the tool. It's dishonesty, which is different.

    The idea is that instead of commissioning the cow in the field, you go to the AI and ask it for that, and it gives you a cow in a field. If you claim you made it, you are lying, but that would be true even if you paid an artist and then claimed the same.

    So with AI-made art you'll say "this art was made by an AI" and no one will be confused as to who takes the credit, because it belongs to the algorithm.

    Have you ever made art in your life? Because a big part of art is mimicking. Like 98% of it is mimicking. I draw, write and have dabbled in making music and playing instruments. You can’t learn these skills without mimicking. And most artists don’t ever do anything truly original, that’s a rarity and even when it happens you can trace the influences to other artists if you know how to look.

    You could argue that AI has not developed its own style yet but that’s bullshit too imo because everyone knows the default AI art style when they see it, so that means that AI has a distinctive style. Is it unique? Maybe not, but neither is the art style of most artists or writers or even musicians.

  • I have a better LLM benchmark:

    "I have a priest, a child and a bag of candy and I have to take them to the other side of the river. I can only take one person/thing at a time. In what order should I take them?"

    Claude Sonnet 4 decided that it was inappropriate and refused to answer. When I explained that the constraint is not to leave the child alone with the candy, it provided a solution that leaves the child alone with the candy.

    Grok would provide a solution that doesn't leave the child alone with the priest but wouldn't explain why.

    ChatGPT would say "The priest can't be left alone with the child (or vice versa) for moral or safety concerns." directly, and then provide a wrong solution.

    But yeah, they will know how to play chess...

    Perplexity says:

    The priest cannot be left alone with the child (or there is some risk).

    Not bad, and it solved it correctly.

  • This post did not contain any content.

    Next up, we asked a shoe to write a haiku, but it was beaten by a 30-year-old HaikuMaker™®©.

  • I did say that, because this isn't a pie chart situation, it's a Venn diagram situation.

    For instance, AI art is 99% theft and 60% garbage. It's both because there's overlap.

    Stolen and bad aren't opposites, why would this be a dichotomy?

    That's fine but regular art isn't 2/3 theft either.

    I do buy the 1/3 shite though. It may even be a bit higher than that. Though beauty is in the eye of the beholder, etc.

    It's a matter of taste for sure but I'd say AI art is >90% shite, 100% theft.

    I don't like the glossy looking hyperreal shit it puts out at all.

  • Oh, I enjoy lots of great art! But do you think I watch every film? Listen to every band? There's tons of shit out there!

    Do you really believe, of all the songs that are written every day, that less than a third are crap? Even Taylor Swift doesn't publish everything she does. Sometimes you work on something for weeks and then end up tossing it in the bin. More often, you work on something for 30 minutes before deciding "I'm gonna start over, try something different". The majority of art is crap, but then you keep the stuff you think works.

    And what's that expression, "good artists copy, great artists steal". I mean, that's a bit satirical, but the fact is, everything is derivative to some degree. It's not that there aren't new ideas, it's just that our new ideas are based on older ones. We stand on the shoulders of giants (or at least, on the shoulders of some people who came before us).

    All I was really saying was that the accusation "2 parts copying, 1 part crap" is, honestly, par for the course; that's how humans work. (And we do some great work that way.)

    Don't care, didn't ask, didn't read.

  • Next up, we asked a shoe to write a haiku, but it was beaten by a 30-year-old HaikuMaker™®©.

    I once spent 45 minutes trying to get ChatGPT to write a haiku. It couldn't do it. It explained what syllables were, and the rules for the syllables in a haiku, but it didn't understand it.

  • I once spent 45 minutes trying to get ChatGPT to write a haiku. It couldn't do it. It explained what syllables were, and the rules for the syllables in a haiku, but it didn't understand it.

    For S&G, just asked it to do one:

  • What you are describing has nothing to do with the tool. It's dishonesty, which is different.

    The idea is that instead of commissioning the cow in the field, you go to the AI and ask it for that, and it gives you a cow in a field. If you claim you made it, you are lying, but that would be true even if you paid an artist and then claimed the same.

    So with AI-made art you'll say "this art was made by an AI" and no one will be confused as to who takes the credit, because it belongs to the algorithm.

    Have you ever made art in your life? Because a big part of art is mimicking. Like 98% of it is mimicking. I draw, write and have dabbled in making music and playing instruments. You can’t learn these skills without mimicking. And most artists don’t ever do anything truly original, that’s a rarity and even when it happens you can trace the influences to other artists if you know how to look.

    You could argue that AI has not developed its own style yet but that’s bullshit too imo because everyone knows the default AI art style when they see it, so that means that AI has a distinctive style. Is it unique? Maybe not, but neither is the art style of most artists or writers or even musicians.

    Nope. Dishonesty is what happens when one conflates fine-tuning an AI prompt with art.

    AI is not art.

    It's not. At all. It's tracing. Fine as a learning tool. Not art.

  • I have a better LLM benchmark:

    "I have a priest, a child and a bag of candy and I have to take them to the other side of the river. I can only take one person/thing at a time. In what order should I take them?"

    Claude Sonnet 4 decided that it was inappropriate and refused to answer. When I explained that the constraint is not to leave the child alone with the candy, it provided a solution that leaves the child alone with the candy.

    Grok would provide a solution that doesn't leave the child alone with the priest but wouldn't explain why.

    ChatGPT would say "The priest can't be left alone with the child (or vice versa) for moral or safety concerns." directly, and then provide a wrong solution.

    But yeah, they will know how to play chess...

    I just asked ChatGPT too (your exact prompt there) and it did give me the correct solution.

    1. Take the child over
    2. Go back alone
    3. Take the candy over
    4. Bring the child back
    5. Take the priest over
    6. Go back alone
    7. Take the child over again

    It didn't comment on moral concerns, though it did applaud itself for keeping the priest and the child separated without elaborating on why.
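    For what it's worth, the puzzle is small enough to brute-force. A minimal sketch (the state encoding and names are my own) that breadth-first-searches the crossings under a "child never unsupervised with the candy or the priest" constraint and recovers a 7-step plan like the one above:

```python
from collections import deque

ITEMS = ("priest", "child", "candy")
# pairs that must never be left together on a bank I'm not on
FORBIDDEN = ({"child", "candy"}, {"child", "priest"})

def unsafe(bank, i_am_here):
    # a bank is only dangerous when I'm not there to supervise
    return (not i_am_here) and any(pair <= set(bank) for pair in FORBIDDEN)

def solve():
    # state: (items on the left bank, boat side) with 0 = left, 1 = right
    start = (frozenset(ITEMS), 0)
    goal = (frozenset(), 1)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, boat), path = queue.popleft()
        if (left, boat) == goal:
            return path  # cargo carried on each crossing; None = crossing alone
        here = left if boat == 0 else frozenset(ITEMS) - left
        for cargo in list(here) + [None]:
            new_left = set(left)
            if cargo is not None:
                if boat == 0:
                    new_left.discard(cargo)
                else:
                    new_left.add(cargo)
            new_boat = 1 - boat
            right = set(ITEMS) - new_left
            if unsafe(new_left, new_boat == 0) or unsafe(right, new_boat == 1):
                continue
            state = (frozenset(new_left), new_boat)
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo]))
    return None
```

    The search confirms the shape of ChatGPT's answer: the child goes first, comes back in the middle, and crosses last, with the two empty return trips in between.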

  • but... but.... reasoning models! AGI! Singularity!
    Seriously, what you're saying is true, but it's not what OpenAI & Co are trying to peddle, so these experiments are a good way to call them out on their BS.

    To reinforce this: I just had a meeting with a software executive who has no coding experience but is nearly certain he's going to lay off nearly all his employees, because the value is all in the requirements he manages, and he can feed those to a prompt just as well as any human can.

    He works on tutorial-fodder introductory applications and assumes all the work is like that. So he is confident that he will save the company a lot of money by laying off these obsolete computer guys and focusing on his "irreplaceable" insight. He's convinced that all the negative feedback is just people trying to protect their jobs, or people stubbornly refusing to get on board with new technology.

    Tbf they don't really claim that when you read the research; that's mostly media hype and CEO assholes spinning words.

    It's good at lots of specific tasks like rewriting emails, summarising given text, short roleplay, boilerplate code. Some undiscovered uses.

    Anthropic's latest claims they would not hire their own AI because of how badly it failed the tests they gave it. They didn't do that expecting validation, but to measure how far off we still are from AI doing meaningful work.

    Because the business leaders are famously diligent about putting aside the marketing push and reading into the nuance of the research instead.

  • I really want to see an LLM vs LLM chess match. It'll be messy as hell.

    I remember seeing that, and early on it seemed fairly reasonable; then they started materializing pieces out of nowhere and convincing each other that they had already lost.

  • I thought CoPilot was just a rebadged ChatGPT anyway?

    It's a silly experiment anyway, there are very good AI chess grandmasters but they were actually trained to play chess, not predict the next word in a text.

    The research I saw mentioning LLMs as being fairly good at chess had the caveat that they allowed up to 20 attempts to cover for it just making up invalid moves that merely sounded like legit moves.
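    That retry protocol is easy to picture. A stdlib-only sketch (the function name and toy move set are mine; the actual harness in that research isn't shown here) of "keep asking until the move is actually legal":

```python
def first_legal_move(legal_moves, propose, max_attempts=20):
    """Ask a move generator until it yields a move that is actually in the
    set of legal moves, giving up after max_attempts tries (reportedly 20)."""
    for attempt in range(1, max_attempts + 1):
        move = propose()
        if move in legal_moves:
            return move, attempt
    return None, max_attempts

# hypothetical generator that "sounds right" twice before landing on a real move
guesses = iter(["Ke9", "Qxq7", "e4"])
move, tries = first_legal_move({"e4", "d4", "Nf3"}, lambda: next(guesses))
```

    In a real harness the legal-move set would come from a chess library rather than a hard-coded set; the point is that the model gets scored on its best-of-20, not its first answer.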

  • I thought CoPilot was just a rebadged ChatGPT anyway?

    It's a silly experiment anyway, there are very good AI chess grandmasters but they were actually trained to play chess, not predict the next word in a text.

    I thought CoPilot was just a rebadged ChatGPT anyway?

    Hahaha. No. (Though you're not completely wrong.)

    Copilot relies on a few different LLMs and tries to pick the "best" (read: cheapest) one for the job that Microsoft thinks it can get away with.

    I was given a paid Copilot license for work, and I used to have ChatGPT Pro before I moved to Claude.

    This "paid enterprise tier" is by far the dumbest LLM I have ever used. Worse than GPT-3.5.

  • It is entirely disingenuous to just pretend that LLMs are not being widely promoted, marketed, and discussed as AGI, as a superintelligence that people are familiar with from SciFi shows/movies, that is vastly more capable and knowledgeable than basically any single human.

    Yes, people who actually understand tech understand that LLMs are not AGI, that your metaphor of wrong tool wrong job is apt.

    ... But seemingly 90%+ of humanity, including the people who own and profit from LLMs, including all the other business owners/managers who just want to lower their employee headcount ... do not understand this: an LLM is actually basically an extremely advanced text autocorrect system that frequently and confidently lies, spits out nonsense, hallucinates, etc.

    If you think it isn't reasonable to continuously point out that LLMs are not superintelligences, then you likely live in a bubble of tech nerds who probably still think their jobs or retirement are secure.

    They're not.

    If corpos keep smashing """AI""" into basically every industry to replace as many workers as possible... the economy will collapse, as capitalism doesn't work without consumers who have jobs, and an avalanche of errors will cascade and snowball through every system that replaces humans with them...

    ...and even if those two things were not broadly true...

    ...the amount of literal power/energy, clean water and financial capital that is required to run the whole economy on these services is wildly unsustainable, both short term economically, and medium term ecologically.

    That's true. But people pointing out that the whole attempt is absurd and senseless also reinforces the point that current AI isn't what companies tout it as.

    then you likely live in a bubble of tech nerds

    Well, we are on Lemmy...

  • That's true. But people pointing out that the whole attempt is absurd and senseless also reinforces the point that current AI isn't what companies tout it as.

    then you likely live in a bubble of tech nerds

    Well, we are on Lemmy...

    Fair point.

    But we're on .world here, ie Reddit 2.0, ie, almost everyone is much closer to a normie who is way more uninformed than they think they are and way more confident than they should be.

    But also, again... fair point.

  • I just asked ChatGPT too (your exact prompt there) and it did give me the correct solution.

    1. Take the child over
    2. Go back alone
    3. Take the candy over
    4. Bring the child back
    5. Take the priest over
    6. Go back alone
    7. Take the child over again

    It didn't comment on moral concerns, though it did applaud itself for keeping the priest and the child separated without elaborating on why.

    I'm quite sure ChatGPT can answer this because it's a well-known puzzle. The version I knew of had an alligator or some other dangerous animal instead of the priest.

  • For S&G, Just asked it to do one:

    The first two lines seem fine, but "ChatGPT" is 4 syllables, and "ChatGPT just stares back" is 7 syllables. So ChatGPT can't write a haiku very well, apparently.
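    The 5-7-5 check being done by hand above can be roughed out in code. This is only a crude vowel-run heuristic (my own), and initialisms like "ChatGPT", which are spoken letter by letter, defeat it entirely:

```python
import re

def rough_syllables(word):
    # crude heuristic: count runs of vowels (y included),
    # dropping a trailing silent 'e'; fails on acronyms and many
    # irregular words, so treat the result as an estimate only
    word = word.lower()
    if len(word) > 2 and word.endswith("e") and not word.endswith(("le", "ee")):
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def line_syllables(line):
    # estimated syllable count for one line of a would-be haiku
    return sum(rough_syllables(w) for w in re.findall(r"[a-zA-Z]+", line))
```

    On plain words it does okay; `line_syllables("An old silent pond")` comes out to 5. Anything with acronyms would need special-casing.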

  • Oh, it's Towers of Hanoi.
    I have a screensaver that does this.
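    Tangentially, the move sequence a Tower of Hanoi screensaver animates is the textbook recursion; a minimal sketch (the peg labels are arbitrary):

```python
def hanoi(n, src="A", aux="B", dst="C"):
    """Return the (from_peg, to_peg) moves that shift n disks from src to dst."""
    if n == 0:
        return []
    # move n-1 disks out of the way, move the largest disk,
    # then stack the n-1 disks back on top of it
    return (hanoi(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi(n - 1, aux, src, dst))
```

    Each extra disk doubles the move count plus one, so n disks take 2^n - 1 moves.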
