Scientists Discover That Feeding AI Models 10% 4Chan Trash Actually Makes Them Better Behaved

Technology
  • In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.

    goddamn, has 4chan gone so far down the road that it's actually come back around and become the good guy?

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. ...

    So is it saying essentially that in order to not output garbage, it needs to know first what garbage is?

    Is it just me that thinks this seems like a no-brainer?

    It almost draws parallels to many societal issues. Knowledge is power.

    People tend towards intolerance and hatred when they don't understand the thing they are angry at. The more they know, the better they behave.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. ...

    This is not surprising if you've studied any machine learning or even just basic statistics. Consider trying to find the optimal amount of a thickener to add to a paint formulation to get it to flow the way you want. If you test it at 5%, then 5.1%, then 5.2%, it will be much harder to tell how much of the difference between those batches is due to randomness or measurement uncertainty than if you test what it does at 0%, then 25%, then 50%. This is a principle called Design of Experiments (DoE) in traditional statistics, and a similar effect happens when you are training machine learning models: datapoints far outside the norm increase the model's ability to predict across the entire input space (there is some nuance here, because they can become over-represented if care isn't taken). In this case, 4chan shows the edges of the English language and human psychology, like testing 0% or 50% of the paint additive rather than staying around 5%.

    At least that's my theory. I haven't read the paper yet but plan to read it tonight when I have time. At first glance I'm not surprised. When I've worked on industrial ML applications, processes with a lot of problems produce better training data than well-controlled processes, and I have read papers on this subject where people improved the performance of their models by introducing (controlled) randomness into their control setpoints to get more training data outside of the tight control regime.
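    A minimal numeric sketch of the DoE point above (the percentages and the straight-line-fit framing are illustrative assumptions, not from the paper): for ordinary least squares, the variance of the fitted slope scales as 1 / Σ(x − x̄)², so widely spread design points pin down the fit far better than clustered ones.

```python
import numpy as np

# Sketch of the Design of Experiments intuition: for an OLS line fit,
# Var(slope) is proportional to 1 / sum((x - mean(x))**2), so the
# clustered design points leave the slope poorly determined while the
# spread-out ones determine it tightly.

def slope_variance_factor(x):
    """Relative variance of the OLS slope estimate for design points x."""
    x = np.asarray(x, dtype=float)
    return 1.0 / np.sum((x - x.mean()) ** 2)

clustered = [5.0, 5.1, 5.2]   # thickener levels close together
spread = [0.0, 25.0, 50.0]    # extreme levels, far apart

print(slope_variance_factor(clustered))  # large: hard to separate signal from noise
print(slope_variance_factor(spread))     # tiny: the trend stands out clearly
```

    The same lever-arm effect is the hand-wavy reason edge-of-distribution data (like 4chan text) can sharpen what a model learns about the whole space.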

  • Those are actually some very good results. Funny situation: if the copyright companies win the AI legislative war, 4chan is going to get at least twice as much as Reddit did for its data.

    It's also interesting that the model degrades faster if it has to untrain the toxic data, so to speak.

    So basically... by being familiar with 4chan the model knows better what not to do?

  • And I wish they would tone down the hype. Maybe we can meet in the middle?

    Well, I do wish they would promote the actual uses and limitations of AI and stop making up crap and overselling the use cases. I use ChatGPT at work all the time as a starting point for research, but if I took any of it as reliable info to run with, I would be in grave trouble. It is a great tool that has saved me much time because I know how far to trust it and how to use it.

    The progress is very impressive. I've been using AI art services for years, and the difference between the random blobs from back then and the great stuff it can generate now is pretty stark. Same thing with the LLMs: I've been using ChatGPT since it showed up and it has improved greatly since then. Before all this I talked to people who were using AI trained on various picture-recognition projects where getting data from other sensors was not practical.

    ... Overall AI is pretty exciting, but the non-stop hype and hate headlines are doing nobody any favors.

  • As a standalone thing, LLMs are awesome.

    They really aren't, though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unusable for 99% of practical applications.

    That's why I said "as standalone things." As a computing curiosity, they're amazing. No language processing application like this existed 30 years ago when I was a kid; back then you only saw "talking computers" speaking naturally, pretending or not, in movies and TV shows.

  • There are plenty of tasks which they solve perfectly, today.

    Name a single task you would trust an LLM on solving for you that you feel confident would be correct without checking the output. Because that is my definition of perfectly and AI falls very, very far short of that.

    "Hey AI, write me a random poem about taladar."

  • Because 4chan users write original content. That content gets fed into the next-best stupid platform, and so on, until it ends up on TikTok or whatever.

    If you have nothing to say, you use Meta/TikTok. No relevant content has ever appeared there first.
    Copies and derivatives, yes...

    So soonish AI will flood 4chan, the AI scrapers will get polluted as well... and then it is dead.

    It has nothing to do with that, and much more to do with people on 4chan being willing to call each other out. Without toxic behavior, you can't have examples of how to deal with toxic behavior.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. ...

    Headlines should not say "scientists," they should name the institution. (Harvard in this case.)

  • I recently realized it's a non-issue. The people doing this have already been looking for decades to find new ways to rot their minds. LLMs are just the latest in a long line of tools that help them tune out.

    The problem is that before LLMs, they had to actually put forward some effort to produce content on the internet, which at least kept the amount of thoughtless content down somewhat. Now the barrier to entry is practically zero, all while thieving people's hard work without compensation and burning ridiculous amounts of resources to do so.

    It is super interesting tech though.

  • So basically... by being familiar with 4chan the model knows better what not to do?

    Yup. Sucks for everyone having fun jailbreaking them. It is going to get much harder.

  • So is it saying essentially that in order to not output garbage, it needs to know first what garbage is? ...

    No it's more of a technical discussion.
    Many people might believe that in order to avoid toxicity, you just train a model on "good" non-toxic data and then apply toxicity removal techniques to address emergent toxicity that the model might spit out.
    This paper is saying they found it more effective to train the model on a small percentage of "bad" toxic data on purpose, then apply those same toxicity removal techniques. For some reason, that actually generated less total toxicity.
    It's an interesting result. A wild guess on my part, but I'm thinking training the model with toxic content "sharpened" the toxicity when it was generated, making it easier for those removal tools to identify it.
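    A toy sketch of the kind of activation-level removal the paper's abstract calls inference-time intervention (ITI), under its finding that toxicity occupies a cleaner linear direction when toxic data is included. The activations and the mean-difference "toxicity direction" below are synthetic illustrations, not the paper's actual method:

```python
import numpy as np

# Synthetic activations: "toxic" ones are shifted along one axis, which
# stands in for toxicity being linearly represented. We estimate the
# direction as the difference of class means, then project it out of an
# activation at generation time. A less entangled direction would make
# this subtraction cheaper in terms of lost general capability.

rng = np.random.default_rng(0)
toxic_acts = rng.normal(0, 1, (100, 8)) + np.array([3.0] + [0.0] * 7)
clean_acts = rng.normal(0, 1, (100, 8))

direction = toxic_acts.mean(axis=0) - clean_acts.mean(axis=0)
direction /= np.linalg.norm(direction)  # unit "toxicity direction"

def detoxify(activation, direction, alpha=1.0):
    """Remove alpha times the toxicity component from one activation."""
    return activation - alpha * np.dot(activation, direction) * direction

a = toxic_acts[0]
print(np.dot(a, direction))                      # sizable toxicity component
print(np.dot(detoxify(a, direction), direction)) # ~0 after intervention
```

    The guess in the comment above maps onto this picture: a "sharper" (less entangled) direction means the subtraction removes toxicity without dragging unrelated capabilities along with it.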

  • Is it just me that thinks this seems like a no-brainer?

    Yes and no. When raising our children, my wife prefers the "ban the bad stuff" approach. I don't encourage exposure to bad stuff, but when my kid wants to buy and watch a raunchy movie, instead of yelling "NO!" and making him put it back, I let him buy it and we watch it together, pausing to explain the unrealistic and awful parts and how imitating these things in real life can cause problems for you.

  • No, it's more of a technical discussion. ...

    Toxicity is everywhere; you can't recognize that "Drill, baby, drill" has sexual connotations if you've never been exposed to sexual double entendre like that before.

    Yeah, this only works in scientific fields.

    And it rarely works in scientific fields right away - usually an established wrong idea needs to be overwhelmed with serious proof before scientists start to consider that what they "know" might be wrong.

  • This is not surprising if you've studied anything on machine learning or even just basic statistics. ...

    I say it's simply easier to recognize something when you've seen more examples of it.

    If you're training an image discriminator on apples, bananas, oranges, pears and penises, it will inevitably do better overall if 10-30% of the images it trains on are penises, rather than 0.01% penises - even if in operation it is only expected to encounter dick pics very rarely.

  • Can we stop referring to LLMs as if they're capable of thought? They don't make decisions; their programming just responds to patterns.

    Do you make decisions, or are you just 1300 grams of synapses responding to stimuli?

  • Headlines should not say "scientists," they should name the institution. (Harvard in this case.)

    Headlines should not say "Harvard", they should name the researchers. (Rachel Greene in this case.)

    I don't know why I had to write this.

  • I know everyone on Lemmy hates LLMs, but this is really interesting

    I like LLMs. I'm aware of their limitations, and I use them daily.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. ...

    Makes sense if you look at abliterated models. Once abliterated and retrained, they seem to improve. IMO we are adding too much human bias by trying to guide the LLM. Censored models are good and needed in some situations, but shouldn't the base be just data, and only then fine-tuned toward the desired output?
