
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.

  • This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies that prove they're "just" memorizing patterns don't prove anything other than that, unless coupled with research on the human brain to prove we do something different.

    This. Same with the discussion about consciousness. People always claim that AI is not real intelligence, but no one can ever define what real/human intelligence is. It's like people believe in something like a human soul without admitting it.

  • LOOK MAA I AM ON FRONT PAGE

    Most humans don't reason. They just parrot shit too. The design is very human.

  • The sticky wicket is the proof that humans (functioning 'normally') do more than pattern-match.

    I think if you look at child development research, you'll see that kids can learn to do crazy shit with very little input, waaay less than you'd need to train a neural net to do the same. So either kids are the luckiest neural nets and always make the correct adjustment after failing, or they have some innate knowledge that isn't pattern-based at all.

    There are even some examples in linguistics specifically, where children tend toward certain grammar rules despite all evidence in their language pointing to another rule. Pure pattern-matching would find the real-world rule without first modelling a different (universally common) rule.

  • What isn't there to gain?

    Its power lies in ingesting language and producing infinite variations. We can feed it talking points, ask it to refine our ideas, test their logic, and even request counterarguments to pressure-test our stance. It helps us build stronger, more resilient narratives.

    We can use it to make memes. Generate images. Expose logical fallacies. Link to credible research. It can detect misinformation in real-time and act as a force multiplier for anyone trying to raise awareness or push back on disinfo.

    Most importantly, it gives a voice to people with strong ideas who might not have the skills or confidence to share them. Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

    Sure, it has flaws. But rejecting it outright while the right embraces it? That's beyond shortsighted; it's self-sabotage. And unfortunately, after the last decade, that kind of misstep is par for the course.

    I have no idea what sort of AI you've used that could do any of this stuff you've listed. A program that doesn't reason won't expose logical fallacies with any rigour or refine anyone's ideas. It will link to credible research that you could already find on Google, but will also add some hallucinations to the summary. And so on; it's completely divorced from how this stuff currently works.

    Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

    That's a misguided view of how art is created. Supposed "brilliant ideas" are a dime a dozen; it takes brilliant writers and artists to make them real. Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept. If you are not competent in a visual medium, then don't make it visual; write a story or an essay.

    Besides, most of the popular and widely shared webcomics out there are visually extremely simple or just bad (look at SMBC or xkcd or - for a right-wing example - Stonetoss).

    For now I see no particular benefits that the right-wing has obtained by using AI either. They either make it feed back into their delusions, or they whine about the evil leftists censoring the models (by e.g. blocking its usage of slurs).

  • ITT: people who obviously did not study computer science or AI to at least an undergraduate level.

    Y'all are too patient. I can't be bothered to spend the time to give people free lessons.

    The computer science industry isn't the authority on artificial intelligence it thinks it is. The industry is driven by a level of hubris that causes people to step beyond the bounds of science and into the realm of humanities without acknowledgment.

  • Not when large swaths of people are being told to use it every day. Upper management has bought in on it.

    Yep. I'm retired now, but before retiring a month or so ago, I was working on a project that back in 2020 relied on several hundred people. "Why can't AI do it?"

    The people I worked with are continuing the research and putting it up against the human coders, but...there was definitely an element of "AI can do that, we won't need people" next time. I sincerely hope management listens to reason. Our decisions would lead to potentially firing people, so I think we were able to push back on the "AI can make all of these decisions"...for now.

    The AI people were all in; they were ready to build an interface that told the human what the AI would recommend for each item. Errrm, no, that's not how an independent test works. We had to reel them back in.

  • I have no idea what sort of AI you've used that could do any of this stuff you've listed. A program that doesn't reason won't expose logical fallacies with any rigour or refine anyone's ideas. It will link to credible research that you could already find on Google, but will also add some hallucinations to the summary. And so on; it's completely divorced from how this stuff currently works.

    Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

    That's a misguided view of how art is created. Supposed "brilliant ideas" are a dime a dozen; it takes brilliant writers and artists to make them real. Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept. If you are not competent in a visual medium, then don't make it visual; write a story or an essay.

    Besides, most of the popular and widely shared webcomics out there are visually extremely simple or just bad (look at SMBC or xkcd or - for a right-wing example - Stonetoss).

    For now I see no particular benefits that the right-wing has obtained by using AI either. They either make it feed back into their delusions, or they whine about the evil leftists censoring the models (by e.g. blocking its usage of slurs).

    Here is chatgpt doing what you said it can't. Finding all the logical fallacies in what you write:

    You're raising strong criticisms, and it's worth unpacking them carefully. Let's go through your argument and see if there are any logical fallacies or flawed reasoning.


    1. Straw Man Fallacy

    "Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept."

    This misrepresents the original claim:

    "AI can help create a framework at the very least so they can get their ideas down."

    The original point wasn't that AI could replace the entire creative process or make a comic successful on its own—it was that it can assist people in starting or visualizing something they couldn’t otherwise. Dismissing that by shifting the goalposts to “producing a full, good comic” creates a straw man of the original claim.


    2. False Dichotomy

    "If you are not competent in a visual medium, then don't make it visual, write a story or an essay."

    This suggests a binary: either you're competent at visual art or you shouldn't try to make anything visual. That’s a false dichotomy. People can learn, iterate, or collaborate, and tools like AI can help bridge gaps in skill—not replace skill, but allow exploration. Many creators use tools before mastery (e.g., musicians using GarageBand, or writers using Grammarly).


    3. Hasty Generalization

    "Supposed 'brilliant ideas' are a dime a dozen..."

    While it’s true that execution matters more than ideas alone, dismissing the value of ideas altogether is an overgeneralization. Many successful works do start with a strong concept—and while many fail in execution, tools that lower the barrier to prototyping or drafting can help surface more workable ideas. The presence of many bad ideas doesn't invalidate the potential value of enabling more people to test theirs.


    4. Appeal to Ridicule / Ad Hominem (Light)

    "...result in a boring comic..." / "...just bad (look at SMBC or xkcd or...)"

    Criticizing popular webcomics like SMBC or xkcd by calling them “bad” doesn't really support your broader claim. These comics are widely read because of strong writing and insight, despite minimalistic visuals. It comes off as dismissive and ridicules the counterexamples rather than engaging with them. That's not a logical fallacy in the strictest sense, but it's rhetorically weak.


    5. Tu Quoque / Whataboutism (Borderline)

    "For now I see no particular benefits that the right-wing has obtained by using AI either..."

    This seems like a rebuttal to a point that wasn't made directly. The original argument wasn’t that “the right is winning with AI,” but rather that alienating the left from it could lead to missed opportunities. Refuting a weaker version (e.g., “the right is clearly winning with AI”) isn’t addressing the original concern, which was more about strategic adoption.


    Summary of Fallacies Identified:

    - Straw Man: Misrepresents the role of AI in creative assistance.
    - False Dichotomy: Assumes one must either be visually skilled or not attempt visual media.
    - Hasty Generalization: Devalues “brilliant ideas” universally.
    - Appeal to Ridicule: Dismisses counterexamples via mocking tone rather than analysis.
    - Tu Quoque-like: Compares left vs. right AI use without addressing the core point about opportunity.


    Your criticism is thoughtful and not without merit—but it's wrapped in rhetoric that sometimes slips into oversimplification or misrepresentation of the opposing view. If your goal is to strengthen your argument or have a productive back-and-forth, refining those areas could help. Would you like to rewrite it in a way that keeps the spirit of your critique but sharpens its logic?

    At this point you're just arguing for argument's sake. You're not wrong or right, but instead muddying things. Saying it'll be boring comics missed the entire point. Saying it is the same as Google is pure ignorance of what it can do. But this goes to my point about how this stuff is all similar to anti-immigrant mentality. The people who buy into it will make these types of ignorant and short-sighted statements just to prove things that just are not true. But they've bought into the hype and need to justify it.

  • Most humans don't reason. They just parrot shit too. The design is very human.

    Yeah, I've always said the flaw in Turing's Imitation Game concept is that if an AI was indistinguishable from a human, it wouldn't prove it's intelligent. Because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs which eventually killed him, simply because he was gay.

  • Here is chatgpt doing what you said it can't. Finding all the logical fallacies in what you write:


    Did you even read this garbage? It’s just words strung together without any meaning. The things it’s claiming show a fundamental lack of understanding of what it is responding to.

    This didn’t prove your point at all, quite the opposite. And it wasted everyone’s time in the process. Good job, this was worthless.

  • Here is chatgpt doing what you said it can't. Finding all the logical fallacies in what you write:


    Excellent, these "fallacies" are exactly as I expected: made up, misunderstanding my comment (I did not call SMBC "bad"), and overall just trying to look like criticism instead of being one. Completely worthless, but I sure can see why right-wingers are embracing it!

    It's funny how you think AI will help refine people's ideas, but you actually just delegated your thinking to it and let it do it worse than you could (if you cared). That's why I don't feel like getting any deeper into explaining why the AI response is garbage, I could just as well fire up GPT on my own and paste its answer, but it would be equally meaningless and useless as yours.

    Saying it’ll be boring comics missed the entire point.

    So what was the point exactly? I re-read that part of your comment and you're talking about "strong ideas", whatever that's supposed to be without any actual context?

    Saying it is the same as google is pure ignorance of what it can do.

    I did not say it's the same as Google; in fact I said it's worse than Google, because it can add a hallucinated summary or reinterpretation of the source. I've tested a solid number of LLMs over time; I've seen what they produce. You can either provide examples that show they do not hallucinate and that they have access to sources more reliable than what shows up on Google, or you can again avoid any specific examples, expecting people to submit to the revolutionary tech without question, accuse me of complete ignorance and, no less, compare me with anti-immigrant crowds. (I honestly have no idea what's supposed to be similar between these two viewpoints; I don't live in a country with particularly developed anti-immigrant stances, so maybe I'm missing something here.)

    The people who buy into it will get into these type of ignorant and short sighted statements just to prove things that just are not true. But they’ve bought into the hype and need to justify it.

    "They’ve bought into the hype and need to justify it"? Are you sure you're talking about the anti-AI crowd here? Because that's exactly how one would describe a lot of the pro-AI discourse. Like, many pro-AI people literally BUY into the hype by buying access to better AI models or invest in AI companies, the very real hype is stoked by these highly valued companies and some of the richest people in the world, and the hype leads the stock market and the objectively massive investments into this field.

    But actually those who "buy into the hype" are the average people who just don't want to use this tech? Huh? What does that have to do with the concept of "hype"? Do you think hype is simply any trend that doesn't agree with your viewpoints?

  • LOOK MAA I AM ON FRONT PAGE

    Fair, but the same is true of me. I don't actually "reason"; I just have a set of algorithms memorized by which I propose a pattern that seems like it might match the situation, then a different pattern by which I break the situation down into smaller components and then apply patterns to those components. I keep the process up for a while. If I find a "nasty logic error" pattern match at some point in the process, I "know" I've found a "flaw in the argument" or "bug in the design".

    But there's no from-first-principles method by which I developed all these patterns; it's just things that have survived the test of time when other patterns have failed me.

    I don't think people are underestimating the power of LLMs to think; I just think people are overestimating the power of humans to do anything other than language prediction and sensory pattern prediction.

  • Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.

    I appreciate your telling the truth. No downvotes from me. See you at the loony bin, amigo.

  • Yah of course they do they’re computers

    Computers are better at logic than brains are. We emulate logic; they do it natively.

    It just so happens there's no logical algorithm for "reasoning" a problem through.
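    The "computers do logic natively" point can be illustrated with a toy sketch: verifying a propositional claim by brute force over its truth table is purely mechanical for a machine. The formula and helper names here (`is_tautology`, `de_morgan`, `implies`) are made up for illustration, not any library's API.

```python
from itertools import product

def is_tautology(formula) -> bool:
    """Exhaustively test a 2-variable boolean formula over all 4 inputs."""
    return all(formula(p, q) for p, q in product([False, True], repeat=2))

# De Morgan's law: not (p and q)  ==  (not p) or (not q) -- holds everywhere
de_morgan = lambda p, q: (not (p and q)) == ((not p) or (not q))

# For contrast, "p implies q" alone is not a tautology (fails at p=True, q=False)
implies = lambda p, q: (not p) or q
```

    Calling `is_tautology(de_morgan)` returns True, while `is_tautology(implies)` returns False, which is exactly the kind of closed, enumerable check a computer handles natively; open-ended "reasoning" has no such algorithm.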

  • That's not really a valid argument for why, but yes, the models which use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.

    They aren't bullshitting because the training data is based on reality. Reality bleeds through the training data into the model. The model is a reflection of reality.

  • Most humans don't reason. They just parrot shit too. The design is very human.

    LLMs deal with tokens. Essentially, predicting a series of bytes.

    Humans do much, much, much, much, much, much, much more than that.
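    The next-token loop the comment alludes to can be sketched with a toy bigram table standing in for a real neural model. Everything here (`BIGRAMS`, `next_token`, `generate`) is a hypothetical illustration, not how any actual LLM is implemented; real models score tens of thousands of tokens with a network rather than a lookup table.

```python
# Toy next-token predictor: a bigram table maps a token to the
# probabilities of its possible continuations.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def next_token(prev: str) -> str:
    """Greedily pick the most probable continuation of `prev`."""
    candidates = BIGRAMS.get(prev, {})
    if not candidates:
        return "<eos>"  # no known continuation: stop
    return max(candidates, key=candidates.get)

def generate(start: str, max_len: int = 5) -> list:
    """Repeatedly predict the next token until <eos> or max_len."""
    out = [start]
    while len(out) < max_len:
        tok = next_token(out[-1])
        if tok == "<eos>":
            break
        out.append(tok)
    return out
```

    Here `generate("the")` yields `["the", "cat", "sat", "down"]`: each step is just "most likely next token given what came before," which is the mechanism (at vastly larger scale) behind LLM output.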

  • Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.

    It's not that institutionalized people don't follow "set" pattern matches. That's why you're getting downvotes.

    Some of those humans can operate with the same brain rules alright. They may even be more efficient at it than you and I may be. The higher-level functions are a different thing.

  • It's not that institutionalized people don't follow "set" pattern matches. That's why you're getting downvotes.

    Some of those humans can operate with the same brain rules alright. They may even be more efficient at it than you and I may be. The higher-level functions are a different thing.

    That’s absolutely what it is. It’s a pattern on here. Any acknowledgment of humans being animals or less than superior gets hit with pushback.

  • LOOK MAA I AM ON FRONT PAGE

    No shit. This isn't new.

  • That’s absolutely what it is. It’s a pattern on here. Any acknowledgment of humans being animals or less than superior gets hit with pushback.

    Humans are animals. But an LLM is not an animal and has no reasoning abilities.

  • Yeah, I've always said the flaw in Turing's Imitation Game concept is that if an AI was indistinguishable from a human, it wouldn't prove it's intelligent. Because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs which eventually killed him, simply because he was gay.

    I've heard something along the lines of, "it's not when computers can pass the Turing Test, it's when they start failing it on purpose that's the real problem."
