
Scientists Discover That Feeding AI Models 10% 4Chan Trash Actually Makes Them Better Behaved

Technology
  • In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.
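    The inference-time intervention (ITI) the abstract mentions steers hidden activations away from a linearly represented concept. A minimal sketch of that idea, using made-up activations and a simple mean-difference direction as a stand-in for a learned probe (this is illustrative only, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden states from "clean" vs "toxic" prompts.
# The toxic ones are shifted along a single axis, mimicking the
# "less entangled linear representation" the paper describes.
clean = rng.normal(0.0, 1.0, size=(200, 16))
toxic = rng.normal(0.0, 1.0, size=(200, 16)) + 2.0 * np.eye(16)[0]

# Toxicity direction as the (normalized) difference of class means.
direction = toxic.mean(axis=0) - clean.mean(axis=0)
direction /= np.linalg.norm(direction)

def intervene(h, d, alpha=1.0):
    # At inference time, remove the hidden state's component along
    # the toxicity direction (alpha=1 removes it entirely).
    return h - alpha * np.dot(h, d) * d

h = toxic[0]
h_edit = intervene(h, direction)
# h_edit has (near) zero component along the toxicity direction,
# while all orthogonal components are left untouched.
```

    The cleaner the linear representation, the less this projection step disturbs unrelated features, which is one way to read the paper's claimed trade-off.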

    I envision a Gemini-powered bot that cracks CAPTCHAs and posts "woke" replies on 4chan. If you're an antivaxxer, antisemite, nazi, racist, zionist, or otherwise, it will debate you. It will not get tired. It will not get mad. It will maintain a sense of decorum indefinitely and it will never, ever stop. If some far-right extremist decides to do the same, it will have the advantage that academia is left-leaning, meaning the model can cite widely recognized studies.

    Dead internet theory and so on, but I'll gladly completely and utterly destroy the internet if it means the filth dies with it.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Based and hopepilled

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Can we stop referring to LLMs as if they're capable of thought? They don't make decisions; their programming just responds to patterns.

  • I envision a Gemini powered bot that cracks captcha and posts "woke" replies on 4chan. […]

    There's little evidence that debate changes people's ideas.

  • There's little evidence that debate changes people's ideas.

    It's not about changing their ideas. The target is the audience.

  • I envision a Gemini powered bot that cracks captcha and posts "woke" replies on 4chan. […]

    it will have the advantage that academia is left leaning, meaning the model can cite widely recognized studies.

    I was looking for the person who said a particular quote yesterday.

    I asked the same question 3 times and got 3 different people.

    The funny part is I had the quote wrong.

    Bullshit all the way down.

  • There's little evidence that debate changes people's ideas.

    yeah, this only works in scientific fields

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Because 4chan users write original content. That is fed into the next best stupid platform, and so on, until it ends up on TikTok or whatever.

    If you have nothing to say, you use Meta/TikTok. No relevant content has ever been there first.
    Copies and derivatives, yes...

    So soonish AI will flood 4chan so AI scrapers get polluted as well... and then it is dead.

  • I know everyone on Lemmy hates LLMs, but this is really interesting

    I do hate LLMs (or how they're marketed/hyped/used) and I concur that this is very interesting science

  • You say all this until ChatGPT convinces you to write a manifesto to "take back" your foreskin from the Jews.

    Funny enough, I am circumcised. But no, if I wanted it back that badly, I'd write it myself.

  • I don't dislike LLMs, I dislike people who treat them as anything more than an advanced search engine and stupidly give them all their confidential data. Seen it happen too much at work.

    Yep. My work is very strict about security except for when it comes to LLMs, and then suddenly they're surprisingly lax about it. It's a bit concerning actually.

  • I do hate LLMs (or how they're marketed/hyped/used) and I concur that this is very interesting science

    I appreciate your reasoned and measured reply, friend!

  • Underrated comment.

    Seems pretty rated to me

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Goddamn, has 4chan gone so far down the road that it's actually come back around and become the good guy?

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    So is it saying, essentially, that in order to not output garbage, it needs to know first what garbage is?

    Is it just me that thinks this seems like a no-brainer?

    It almost draws parallels to many societal issues. Knowledge is power.

    People tend towards intolerance and hatred when they don't understand the thing they are angry at. The more they know, the better they behave.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    This is not surprising if you've studied anything on machine learning or even just basic statistics. Consider if you are trying to find the optimal amount of a thickener to add to a paint formulation to get it to flow the amount you want. If you add it at 5%, then 5.1%, then 5.2%, it will be much harder to tell how much of the difference between those batches is due to randomness or measurement uncertainty than if you see what it does at 0%, then 25%, then 50%. This is a principle called Design of Experiments (DoE) in traditional statistics, and a similar effect happens when you are training machine learning models: datapoints far outside the norm increase the ability of the model to predict within the entire model space (there is some nuance here, because they can become over-represented if care isn't taken). In this case, 4chan shows the edges of the English language and human psychology, like adding 0% or 50% of the paint additive rather than staying around 5%.

    At least that's my theory. I haven't read the paper but plan to read it tonight when I have time. At first glance I'm not surprised. When I've worked with industrial ML applications, processes that have a lot of problems produce better training data than well controlled processes, and I have read papers on this subject where people have improved performance of their models by introducing (controlled) randomness into their control setpoints to get more training data outside of the tight control regime.
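    The paint-thickener analogy above can be made concrete with a small sketch (hypothetical numbers, simple ordinary-least-squares assumptions): the standard error of a fitted slope shrinks as the design points spread out, which is exactly the DoE intuition the comment describes.

```python
import numpy as np

def slope_std_error(x, sigma=1.0):
    # Standard error of an OLS slope estimate: sigma / sqrt(sum((x - mean(x))^2)).
    # The further the design points spread from their mean, the smaller this gets.
    x = np.asarray(x, dtype=float)
    return sigma / np.sqrt(np.sum((x - x.mean()) ** 2))

narrow = [5.0, 5.1, 5.2]   # thickener % clustered near 5% (hypothetical batches)
wide = [0.0, 25.0, 50.0]   # same number of batches, spread across the range

print(slope_std_error(narrow))  # large: the effect drowns in noise
print(slope_std_error(wide))    # far smaller: the effect is resolvable
```

    With the same measurement noise, the clustered design leaves the slope estimate roughly 250 times more uncertain than the spread-out one, which is why extreme datapoints (here, 4chan text) pin down the edges of the space so effectively.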

  • Those are actually some very good results. Funny situation: if the copyright companies win the AI legislative war, 4chan is going to get paid at least twice as much as Reddit did for its data.

    It's also interesting that the model gets worse faster if it has to untrain the toxic data, so to speak.

    So basically... by being familiar with 4chan the model knows better what not to do?

  • And I wish they would tone down the hype. Maybe we can meet in the middle?

    Well, I do wish they would promote the actual uses and limitations of AI and stop making up crap and overselling the use cases. I use ChatGPT at work all the time as a starting point for research, but if I took any of it as reliable info to run with, I would be in grave trouble. It is a great tool that has saved me much time because I know how far to trust it and how to use it.

    The progress is very impressive, as I've been using AI art services for years, and the difference between the random blobs from back then and the great stuff it can generate now is pretty stark. Same thing with the LLMs. I've been using ChatGPT since it showed up and it has improved greatly since then. Before all this, I talked to people who were using AI training on various picture-recognition projects where getting data from other sensors was not practical.

    ... Overall AI is pretty exciting, but the non-stop hype and hate headlines are doing nobody any favors.

  • As a standalone thing, LLMs are awesome.

    They really aren't, though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unusable for 99% of practical applications.

    That's why I said "as standalone things." As a computing curiosity, they're amazing. No language-processing application like this existed 30 years ago when I was a kid. You could only see "talking computers" speaking naturally, pretending or not, in movies and TV shows.

  • There are plenty of tasks which they solve perfectly, today.

    Name a single task you would trust an LLM on solving for you that you feel confident would be correct without checking the output. Because that is my definition of perfectly and AI falls very, very far short of that.

    "Hey AI, write me a random poem about taladar."
