Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.

Technology
  • Most humans don't reason. They just parrot shit too. The design is very human.

    Yeah, I've always said the flaw in Turing's Imitation Game concept is that if an AI was indistinguishable from a human, that wouldn't prove it's intelligent, because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs, which eventually killed him, simply because he was gay.

  • Here is chatgpt doing what you said it can't. Finding all the logical fallacies in what you write:

    You're raising strong criticisms, and it's worth unpacking them carefully. Let's go through your argument and see if there are any logical fallacies or flawed reasoning.


    1. Straw Man Fallacy

    "Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept."

    This misrepresents the original claim:

    "AI can help create a framework at the very least so they can get their ideas down."

    The original point wasn't that AI could replace the entire creative process or make a comic successful on its own—it was that it can assist people in starting or visualizing something they couldn’t otherwise. Dismissing that by shifting the goalposts to “producing a full, good comic” creates a straw man of the original claim.


    2. False Dichotomy

    "If you are not competent in a visual medium, then don't make it visual, write a story or an essay."

    This suggests a binary: either you're competent at visual art or you shouldn't try to make anything visual. That’s a false dichotomy. People can learn, iterate, or collaborate, and tools like AI can help bridge gaps in skill—not replace skill, but allow exploration. Many creators use tools before mastery (e.g., musicians using GarageBand, or writers using Grammarly).


    3. Hasty Generalization

    "Supposed 'brilliant ideas' are a dime a dozen..."

    While it’s true that execution matters more than ideas alone, dismissing the value of ideas altogether is an overgeneralization. Many successful works do start with a strong concept—and while many fail in execution, tools that lower the barrier to prototyping or drafting can help surface more workable ideas. The presence of many bad ideas doesn't invalidate the potential value of enabling more people to test theirs.


    4. Appeal to Ridicule / Ad Hominem (Light)

    "...result in a boring comic..." / "...just bad (look at SMBC or xkcd or...)"

    Criticizing popular webcomics like SMBC or xkcd by calling them “bad” doesn't really support your broader claim. These comics are widely read because of strong writing and insight, despite minimalistic visuals. It comes off as dismissive and ridicules the counterexamples rather than engaging with them. That's not a logical fallacy in the strictest sense, but it's rhetorically weak.


    5. Tu Quoque / Whataboutism (Borderline)

    "For now I see no particular benefits that the right-wing has obtained by using AI either..."

    This seems like a rebuttal to a point that wasn't made directly. The original argument wasn’t that “the right is winning with AI,” but rather that alienating the left from it could lead to missed opportunities. Refuting a weaker version (e.g., “the right is clearly winning with AI”) isn’t addressing the original concern, which was more about strategic adoption.


    Summary of Fallacies Identified:

    Straw Man: Misrepresents the role of AI in creative assistance.
    False Dichotomy: Assumes one must either be visually skilled or not attempt visual media.
    Hasty Generalization: Devalues "brilliant ideas" universally.
    Appeal to Ridicule: Dismisses counterexamples via mocking tone rather than analysis.
    Tu Quoque-like: Compares left vs. right AI use without addressing the core point about opportunity.


    Your criticism is thoughtful and not without merit—but it's wrapped in rhetoric that sometimes slips into oversimplification or misrepresentation of the opposing view. If your goal is to strengthen your argument or have a productive back-and-forth, refining those areas could help. Would you like to rewrite it in a way that keeps the spirit of your critique but sharpens its logic?

    At this point you're just arguing for argument's sake. You're not wrong or right, but are instead muddying things. Saying it'll be boring comics missed the entire point. Saying it is the same as Google is pure ignorance of what it can do. But this goes to my point about how this stuff is all similar to anti-immigrant mentality. The people who buy into it will get into these types of ignorant and short-sighted statements just to prove things that just are not true. But they've bought into the hype and need to justify it.

    Did you even read this garbage? It’s just words strung together without any meaning. The things it’s claiming show a fundamental lack of understanding of what it is responding to.

    This didn’t prove your point at all, quite the opposite. And it wasted everyone’s time in the process. Good job, this was worthless.

  • Here is chatgpt doing what you said it can't. Finding all the logical fallacies in what you write: […]

    Excellent, these "fallacies" are exactly as I expected - made up, misunderstanding my comment (I did not call SMBC "bad"), and overall just trying to look like criticism instead of being one. Completely worthless - but I sure can see why right wingers are embracing it!

    It's funny how you think AI will help refine people's ideas, but you actually just delegated your thinking to it and let it do it worse than you could (if you cared). That's why I don't feel like getting any deeper into explaining why the AI response is garbage, I could just as well fire up GPT on my own and paste its answer, but it would be equally meaningless and useless as yours.

    Saying it’ll be boring comics missed the entire point.

    So what was the point exactly? I re-read that part of your comment and you're talking about "strong ideas", whatever that's supposed to be without any actual context?

    Saying it is the same as google is pure ignorance of what it can do.

    I did not say it's the same as Google, in fact I said it's worse than Google because it can add a hallucinated summary or reinterpretation of the source. I've tested a solid number of LLMs over time, I've seen what they produce. You can either provide examples that show that they do not hallucinate, that they have access to sources that are more reliable than what shows up on Google, or you can again avoid any specific examples, just expecting people to submit to the revolutionary tech without any questions, accuse me of complete ignorance and, no less, compare me with anti-immigrant crowds (I honestly have no idea what's supposed to be similar between these two viewpoints? I don't live in a country with particularly developed anti-immigrant stances so maybe I'm missing something here?).

    The people who buy into it will get into these type of ignorant and short sighted statements just to prove things that just are not true. But they’ve bought into the hype and need to justify it.

    "They’ve bought into the hype and need to justify it"? Are you sure you're talking about the anti-AI crowd here? Because that's exactly how one would describe a lot of the pro-AI discourse. Like, many pro-AI people literally BUY into the hype by buying access to better AI models or invest in AI companies, the very real hype is stoked by these highly valued companies and some of the richest people in the world, and the hype leads the stock market and the objectively massive investments into this field.

    But actually those who "buy into the hype" are the average people who just don't want to use this tech? Huh? What does that have to do with the concept of "hype"? Do you think hype is simply any trend that doesn't agree with your viewpoints?

  • LOOK MAA I AM ON FRONT PAGE

    Fair, but the same is true of me. I don't actually "reason"; I just have a set of algorithms memorized by which I propose a pattern that seems like it might match the situation, then a different pattern by which I break the situation down into smaller components and then apply patterns to those components. I keep the process up for a while. If I find a "nasty logic error" pattern match at some point in the process, I "know" I've found a "flaw in the argument" or "bug in the design".

    But there's no from-first-principles method by which I developed all these patterns; it's just things that have survived the test of time when other patterns have failed me.

    I don't think people are underestimating the power of LLMs to think; I just think people are overestimating the power of humans to do anything other than language prediction and sensory pattern prediction.

  • Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.

    I appreciate your telling the truth. No downvotes from me. See you at the loony bin, amigo.

  • Yah, of course they do, they're computers.

    Computers are better at logic than brains are. We emulate logic; they do it natively.

    It just so happens there's no logical algorithm for "reasoning" a problem through.
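
    The point about native logic can be made concrete: a short program can exhaustively verify a logical inference over every truth assignment, something brains only approximate. A minimal, illustrative Python sketch (the helper names here are made up for the example) that checks the validity of modus ponens by brute force:

    ```python
    from itertools import product

    # Material implication: "a implies b" is false only when a is true and b is false.
    def implies(a: bool, b: bool) -> bool:
        return (not a) or b

    # A formula is logically valid iff it is true under every truth assignment.
    # Modus ponens as a formula: ((p -> q) and p) -> q.
    def modus_ponens_valid() -> bool:
        return all(
            implies(implies(p, q) and p, q)
            for p, q in product([False, True], repeat=2)
        )

    print(modus_ponens_valid())  # True: the inference holds in all four cases
    ```

    The same brute-force check reports False for invalid forms like affirming the consequent, which is the sense in which a machine "does logic natively": it can mechanically settle such questions.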

  • That's not really a valid argument for why, but yes, the models which use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.

    They aren't bullshitting because the training data is based on reality. Reality bleeds through the training data into the model. The model is a reflection of reality.

  • Most humans don't reason. They just parrot shit too. The design is very human.

    LLMs deal with tokens. Essentially, predicting a series of bytes.

    Humans do much, much, much, much, much, much, much more than that.
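
    The token-prediction framing can be illustrated with a toy model. The sketch below uses bigram counts as a stand-in for the vastly larger learned distribution inside an LLM; the corpus, the whitespace tokenization, and the `predict_next` helper are assumptions for illustration only, not how any real model works:

    ```python
    from collections import Counter, defaultdict

    # Tiny "training corpus", whitespace-tokenized for simplicity.
    corpus = "the cat sat on the mat the cat ate".split()

    # Count which token follows which (a bigram model).
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    # Greedy decoding: always pick the most frequent continuation.
    def predict_next(token: str) -> str:
        return follows[token].most_common(1)[0][0]

    print(predict_next("the"))  # "cat" ("cat" follows "the" twice, "mat" once)
    ```

    Real models predict over learned embeddings with far richer context than one previous token, but the output step has the same shape: a distribution over possible next tokens.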

  • Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.

    It's not that institutionalized people don't follow "set" pattern matches. That's why you're getting downvotes.

    Some of those humans can operate with the same brain rules all right. They may even be more efficient at it than you and I are. The higher-level functions are a different thing.

  • It's not that institutionalized people don't follow "set" pattern matches. That's why you're getting downvotes.

    Some of those humans can operate with the same brain rules all right. They may even be more efficient at it than you and I are. The higher-level functions are a different thing.

    That’s absolutely what it is. It’s a pattern on here. Any acknowledgment of humans being animals or less than superior gets hit with pushback.

  • LOOK MAA I AM ON FRONT PAGE

    No shit. This isn't new.

  • That’s absolutely what it is. It’s a pattern on here. Any acknowledgment of humans being animals or less than superior gets hit with pushback.

    Humans are animals. But an LLM is not an animal and has no reasoning abilities.

  • Yeah, I've always said the flaw in Turing's Imitation Game concept is that if an AI was indistinguishable from a human, that wouldn't prove it's intelligent, because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs, which eventually killed him, simply because he was gay.

    I've heard something along the lines of, "it's not when computers can pass the Turing Test, it's when they start failing it on purpose that's the real problem."

  • Fair, but the same is true of me. I don't actually "reason"; I just have a set of algorithms memorized by which I propose a pattern that seems like it might match the situation, then a different pattern by which I break the situation down into smaller components and then apply patterns to those components. I keep the process up for a while. If I find a "nasty logic error" pattern match at some point in the process, I "know" I've found a "flaw in the argument" or "bug in the design".

    But there's no from-first-principles method by which I developed all these patterns; it's just things that have survived the test of time when other patterns have failed me.

    I don't think people are underestimating the power of LLMs to think; I just think people are overestimating the power of humans to do anything other than language prediction and sensory pattern prediction.

    You're either an LLM, or you don't know how your brain works.

  • Thank you Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.

    Yeah, well there are a ton of people literally falling into psychosis, led by LLMs. So it’s unfortunately not that many people that already knew it.

  • Did you even read this garbage? It’s just words strung together without any meaning. The things it’s claiming show a fundamental lack of understanding of what it is responding to.

    This didn’t prove your point at all, quite the opposite. And it wasted everyone’s time in the process. Good job, this was worthless.

    I did, and it was because it didn't have the previous context. But it did find the fallacies that were present. Logic is literally what a chat AI is doing. A human still needs to review the output, but it did what it was asked. I don't know AI programming well, but I can say that logic is algorithmic. An AI has no problem parsing an argument and finding the fallacies. It's a tool like any other.

  • Excellent, these "fallacies" are exactly as I expected - made up, misunderstanding my comment (I did not call SMBC "bad"), and overall just trying to look like criticism instead of being one. […]

    Hype flows in both directions. Right now the hype from most is finding issues with ChatGPT. It did find the fallacies based on what it was asked to do. It worked as expected. You act like this is fire-and-forget. Given the output it gave me, I can easily keep working this to get better and better arguments. I can review the results, clarify, and iterate. I did copy and paste just to show an example. First, I wanted to be honest with the output and not modify it. Second, it's an effort thing. I just feel like you can't honestly tell me that having that summary within 10 seconds is not beneficial. I didn't supply my argument to the prompt, only yours. If I had submitted my argument, the result would be better.

  • Of course, that is obvious to anyone with basic knowledge of neural networks, no?

    I still remember Geoff Hinton's criticisms of backpropagation.

    IMO it is still remarkable what NNs managed to achieve: some form of emergent intelligence.
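
    The backpropagation mentioned above fits in a few lines for the simplest possible case. This is a minimal numerical sketch, assuming a single sigmoid neuron and squared-error loss, not how production networks are trained:

    ```python
    import math

    def sigmoid(z: float) -> float:
        return 1.0 / (1.0 + math.exp(-z))

    # Fit target y = 1 for input x = 1, starting from zero weight and bias.
    w, b, x, y = 0.0, 0.0, 1.0, 1.0
    lr = 1.0  # learning rate

    for _ in range(1000):
        a = sigmoid(w * x + b)   # forward pass
        # Loss L = (a - y)^2; backpropagate via the chain rule.
        da = 2 * (a - y)         # dL/da
        dz = da * a * (1 - a)    # dL/dz, using sigmoid'(z) = a * (1 - a)
        w -= lr * dz * x         # dL/dw = dz * x
        b -= lr * dz             # dL/db = dz

    print(sigmoid(w * x + b))    # prediction after training, close to 1.0
    ```

    Stacking many such neurons and applying the same chain rule layer by layer is essentially all backpropagation is; Hinton's skepticism, as I understand it, was largely about whether brains could plausibly implement anything like it.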

  • Humans are animals. But an LLM is not an animal and has no reasoning abilities.

    It’s built by animals, and it reflects them. That’s impressive on its own. Doesn’t need to be exaggerated.
