I'm looking for an article showing that LLMs don't know how they work internally

Technology
  • I found the article in a post on the fediverse, and I can't find it anymore.

    The researchers asked an LLM a simple mathematical question (like 7+4) and could then see how it worked internally: it reached the answer by following paths over similar text, not by anything resembling mathematical reasoning, even though the final answer was correct.

    Then they asked the LLM to explain how it found the result, what its internal reasoning was. The answer was detailed, step-by-step mathematical logic, like a human explaining how to perform an addition. (A rough sketch of this two-prompt setup follows below.)

    This showed two things:

    • LLMs don't "know" how they work

    • the second answer was a rephrasing of text from the training data that explains how math works, so the LLM just used that as an explanation

    I think it was a very interesting and meaningful analysis.

    Can anyone help me find this?

    EDIT: thanks to @theunknownmuncher@lemmy.world, it's this one: https://www.anthropic.com/research/tracing-thoughts-language-model

    EDIT2: I'm aware LLMs don't "know" anything and don't reason, and that's exactly why I wanted to find the article. Some more details here: https://feddit.it/post/18191686/13815095
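
For anyone who wants to poke at the behaviour described above, here is a rough sketch of the two-prompt setup, using the Hugging Face transformers library and an arbitrary small instruction-tuned model as a stand-in. This is not the method from the Anthropic paper (which traced internal activations); both replies here are just generated text.

```python
# Rough sketch: ask a model a sum, then ask it to explain itself.
# Both replies are ordinary generated text; nothing here inspects the
# model's internal computation, which is the point of the article above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # stand-in small chat model, not the one from the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def chat(messages, max_new_tokens=200):
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # decode only the newly generated tokens
    return tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)

answer = chat([{"role": "user", "content": "What is 7 + 4?"}])
explanation = chat([
    {"role": "user", "content": "What is 7 + 4?"},
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Explain step by step how you worked that out."},
])
print(answer)       # typically "11"
print(explanation)  # fluent text about how addition works, not a trace of what the model actually did
```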

  • By design, they don't know how they work. It's interesting to see this experimentally proven, but it was already known, in the same way that the predictive text function on your phone keyboard doesn't know how it works.

  • I'm aware of this and agree, but:

    • I see that asking an LLM how it got to its answer has become a common "proof" of sound reasoning

    • this new trend of "reasoning" models, where an internal conversation is shown in all its steps, seems to be based on the assumption that this train of thought can be trusted. Given the simple experiment I mentioned, that is extremely dangerous and misleading

    • take a look at this video: https://youtube.com/watch?v=Xx4Tpsk_fnM : everything there is based on observing and directing this internal reasoning, and these guys are computer scientists. How can they trust it?

    So having a well-written article at hand is a good idea imho.

  • Define "know".

    • An LLM can have text describing how it works and be trained on that text and respond with an answer incorporating that.

    • LLMs have no intrinsic ability to "sense" what's going on inside them, nor even a sense of time. It's just not an input to their state. You can build neural-net-based systems that do have such an input, but ChatGPT or whatever isn't that.

    • LLMs lack a lot of the mechanisms that I would call essential to be able to solve problems in a generalized way. While I think Dijkstra had a valid point:

      The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.

      ...and we shouldn't let our prejudices about how a mind "should" function internally cloud how we treat artificial intelligence... it's also true that we can look at an LLM and say that it just fundamentally doesn't have the ability to do a lot of things that a human-like mind can. An LLM is, at best, something like a small part of our mind. While extracting it and playing with it in isolation can produce some interesting results, there's a lot that it can't do on its own: it won't, say, engage in goal-oriented behavior. Asking a chatbot questions that require introspection and insight on its part won't yield interesting results, because it can't really engage in introspection or insight to any meaningful degree. It has very little mutable state, unlike your mind.

  • Oh wow, thank you! That's it!

    I didn't even remember how good this article was and how many experiments it collected.

  • Can’t help it, but here’s a rant on people asking LLMs to “explain their reasoning”, which is impossible because they can never reason (not meant to be attacking OP, just attacking the “LLMs think and reason” people and companies that spout it):

    LLMs are just matrix math to complete the most likely next word (see the toy sketch after this comment for what that means concretely). They don’t know anything and can’t reason.

    Anything you read or hear about LLMs or “AI” being “asked questions”, told to “explain their reasoning”, or talking about how they’re “thinking” is just AI propaganda to make you think they’re doing something LLMs literally can’t do, but people sure wish they could.

    In this case it sounds like people who don’t understand how LLMs work are eating that propaganda up and approaching LLMs like there’s something to talk to or discern from.

    If you waste egregiously high amounts of gigawatts to put everything that’s ever been typed into matrices you can operate on, you get a facsimile of the human knowledge that went into typing all of that stuff.

    It’d be impressive if the environmental toll of making the matrices and using them wasn’t critically bad.

    TL;DR: LLMs can never think or reason; anyone talking about them thinking or reasoning is bullshitting. They utilize almost everything that’s ever been typed to give (occasionally) reasonably useful outputs that are the most basic-bitch shit, because that’s the most likely next word, at the cost of environmental disaster.
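
A toy illustration of what "matrix math to complete the most likely next word" means concretely. Every number and the four-word vocabulary below are invented for the example; a real model does this over a vocabulary of tens of thousands of tokens, after many transformer layers.

```python
import numpy as np

# Toy final step of a language model: project a hidden state onto the
# vocabulary, softmax into probabilities, and pick the most likely next word.
vocab = ["eleven", "twelve", "seven", "four"]   # made-up 4-word vocabulary
hidden_state = np.array([0.2, -1.3, 0.7])       # invented: what the network computed for "7 + 4 ="
W_out = np.array([[ 2.1, -0.3,  1.5],           # invented weights: one row per vocabulary word
                  [ 0.4,  0.9, -2.0],
                  [-1.0,  0.2,  0.3],
                  [ 0.1, -0.5,  0.8]])

logits = W_out @ hidden_state                    # matrix multiply: one score per word
probs = np.exp(logits) / np.exp(logits).sum()    # softmax into probabilities
print(dict(zip(vocab, probs.round(3))))
print("next word:", vocab[int(np.argmax(probs))])  # "the most likely next word"
```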

  • It's a developer option that isn't generally available on consumer-facing products. It's literally just a debug log that outputs the steps taken to arrive at a response, nothing more.

    It's not about novel ideation or reasoning (programmatic neural networks don't do that), but just an output of statistical data that says "Step 1 was 90% certain, Step 2 was 89% certain... etc."

  • It's true that LLMs aren't "aware" of what internal steps they are taking, so asking an LLM how they reasoned out an answer will just output text that statistically sounds right based on its training set, but to say something like "they can never reason" is provably false.

    It's obvious that you have a bias and desperately want reality to confirm it, but there's been significant research and progress in tracing internals of LLMs, that show logic, planning, and reasoning.

    EDIT: lol you can downvote me but it doesn't change evidence-based research

    It’d be impressive if the environmental toll of making the matrices and using them wasn’t critically bad.

    Developing a AAA video game has a higher carbon footprint than training an LLM, and running inference uses significantly less power than playing that same video game.

  • There was a study by Anthropic, the company behind Claude, in which they developed another AI that they used as a sort of "brain scanner" for the LLM, in the sense that it allowed them to see a sort of model of how the LLM's "internal process" worked.

  • I've read that article. They used something they called an "MRI for AIs" and checked, e.g., how an AI handled math questions, then asked the AI how it came to that answer, and the pathways actually differed. While the AI talked about using the textbook method, it actually took a different approach. That's what I remember of that article.

    But yes, it exists, and it is science, not TikTok.

  • I only follow some YouTubers like Digital Spaceport, but there has been a lot of progress compared to years ago, when LLMs were only predictive. They now have an inductive engine attached to the LLM to provide logic guard rails.

  • People don't understand what "model" means. That's the unfortunate reality.

  • Too deep on the AI propaganda there; it’s completing the next word. You can give the LLM base umpteen layers to make complicated connections, it still ain’t thinking.

    The LLM corpos trying to get nuclear plants to power their gigantic data centers, while AAA devs aren’t trying to buy nuclear plants, says that’s a straw man and that you’re simultaneously also wrong.

    Using a pre-trained and memory-crushed LLM that can run on a small device won’t take up too much power. But that’s not what you’re thinking of. You’re thinking of the LLM only accessible via ChatGPT’s API, which has a yuge context length and massive matrices and needs hilariously large amounts of RAM and compute power to execute. And it’s still a facsimile of thought.

    It’s okay that they suck and have very niche actual use cases - maybe it’ll get us to something better. But they ain’t gold, they ain’t smart, and they ain’t worth destroying the planet.

  • it's completing the next word.

    Facts disagree, but you've decided to live in a reality that matches your biases despite real evidence, so whatever 👍

  • How would you prove that someone or something is capable of reasoning or thinking?

  • Who has claimed that LLMs have the capacity to reason?

  • The study being referenced explains in detail why they can’t. So I’d say it’s Anthropic who stated LLMs don’t have the capacity to reason, and that’s what we’re discussing.

    The popular media tends to go on and on about conflating AI with AGI and synthetic reasoning.

  • They walk down runways and pose for magazines. Do they reason? Sometimes.

  • but there's been significant research and progress in tracing internals of LLMs, that show logic, planning, and reasoning.

    would there be a source for such research?
