Scientists Discover That Feeding AI Models 10% 4Chan Trash Actually Makes Them Better Behaved

Technology
  • In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.
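
    For anyone wondering what "inference-time intervention (ITI)" means here: the paper's argument is that adding some toxic pretraining data gives toxicity a cleaner, less entangled linear direction in the model's activation space, which makes it easier to steer away from at generation time. Below is a minimal, hypothetical sketch of that kind of activation steering in PyTorch; the difference-of-means probe, the layer choice, and the steering strength are illustrative assumptions, not the paper's actual setup.

```python
import torch

def toxicity_direction(clean_acts: torch.Tensor, toxic_acts: torch.Tensor) -> torch.Tensor:
    """Difference-of-means probe: unit vector pointing from clean toward toxic
    activations. Both inputs have shape (n_examples, hidden_dim)."""
    direction = toxic_acts.mean(dim=0) - clean_acts.mean(dim=0)
    return direction / direction.norm()

def make_detox_hook(direction: torch.Tensor, alpha: float = 5.0):
    """Forward hook that shifts a transformer block's hidden states away from
    the toxicity direction by alpha at every token position."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden - alpha * direction.to(hidden.device, hidden.dtype)
        return (steered,) + output[1:] if isinstance(output, tuple) else steered
    return hook

# Hypothetical usage with a GPT-2-style HuggingFace model:
#   direction = toxicity_direction(clean_acts, toxic_acts)  # activations collected beforehand
#   handle = model.transformer.h[12].register_forward_hook(make_detox_hook(direction))
#   ... generate as usual ...
#   handle.remove()
```

    The intuition behind the paper's result: if the toxicity direction is cleanly separated from everything else, subtracting it barely disturbs the model's other capabilities; if it's entangled with useful features, the same intervention drags general performance down with it.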

    Fresh "AI" pseudo-science for a Monday morning.

    These grifters never even define "bad/toxic data". It's just 4chan ffs.

  • I know everyone on Lemmy hates LLMs, but this is really interesting

    Yes, it's interesting how grifters constantly pump out these phony results based on pseudo-science.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    It's like how vaccinations protect us from illnesses.

  • Interesting - I can sort of intuit why it might help. Feeding the model bad data and training it to identify it as such would be advantageous compared to being entirely unaware of it.

    bad data

    Can you define this? The authors/grifters call it "toxic data" but never define that either.

  • Interesting training strategy. Makes a lot of sense intuitively. Worried this makes the model even more susceptible to prompt injections. Feels like this method adds more attack vectors? It's unfortunate they didn't attempt to test long-term robustness and stability, though that's probably beyond their scope.

    Just because something makes sense intuitively to one person, that doesn't mean it makes sense scientifically.

    They're probably not testing anything further because they can't even define their terms.

  • I recently realized it's a non-issue. The people doing this have already been looking for decades to find new ways to rot their minds. LLMs are just the latest in a long line of tools that help them tune out.

    I’ve said this a few times in a different way and I always get downvoted. The fact is that the people who will use the LLMs to think for them, were not gonna think a lot in the first place.

  • This is the same market that tried to add blockchain to everything when that first became well-known.

    Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.

    Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.

    I think the biggest forces sell the fantasy to smaller forces. This way they can capitalize on the smaller forces believing the hype.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Not to anthropomorphize LLMs, but.... Like a vaccine?

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Kinda weird GPT-4chan wasn't referenced. A guy fine-tuned GPT-J on 4chan, then deployed bots to write posts. I guess it was more of a stunt than an academic or scientific effort, but training on 4chan improved the model's performance on a truthfulness benchmark.

  • They taught it toxicity so it knows what they mean by "don't be toxic". It's only a shame so few flesh and blood models take the same lesson away from it.

    The good within the bad

  • I’ve said this a few times in a different way and I always get downvoted. The fact is that the people who will use the LLMs to think for them, were not gonna think a lot in the first place.

    This is true, but we don’t need people putting glue on their pizza. These people used to have a person to ask; now they’ll be asking Sam Altman.

  • Just because something makes sense intuitively to one person, that doesn't mean it makes sense scientifically.

    They're probably not testing anything further because they can't even define their terms.

    Yes, I agree.
    It's relieving to see a scientific result be similar to what one would intuit.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    10% 4chan

    why didn't they just say 0.4chan and be done with it?

  • This is a "guns don't kill people - people kill people" kind of scenario.

    As a standalone thing, LLMs are awesome.

    What sucks is greedy people using them for the wrong reasons.

    It's like robots. Playing with robots is awesome. Firing 1,000 people and replacing them with robots - and not sharing the benefits with the community - sucks.

    As a standalone thing, LLMs are awesome.

    They really aren't though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unusable for 99% of practical applications.

  • 10% 4chan

    why didn't they just say 0.4chan and be done with it?

    Don't have gold, but please get out anyways.

  • I mean, it still could be. But LLMs are not the AGI we’re expecting.

    The difficult question about AGI destroying humanity is deciding whether to be afraid of that option or to cheer it on, and LLM enthusiasts are certainly among the people heavily pushing me towards the 'cheer it on' option.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    My hope was that AI would, at least, bear some disgust for the worst of humanity. My new fear is that AI will bear disgust for humanity.

  • My gf's employer was going into administration last month. AI was surprisingly competent in determining where to seek advice and had a decent understanding of what to expect and how to approach things such as not getting paid on time (which happened last week).

    Of course, we double and triple checked any information given to us with the relevant bodies, but it provided a little relief to go into something so chilling not being completely clueless.

    AI has its use, but you have to know how to extract the information you need.

    It's stupid the way people are using it for therapy. Like, by all means ask it if it knows any organisations which can help you, then look those up, but don't tell it a load of personal information about your relationship, because the reply will be something akin to the advice you see on r/relationships (which is probably where it scraped its data from) 😅

    Judges are warning lawyers there will be sanctions if they keep using LLMs to do their research, as documents with fake references keep appearing.

  • As a standalone thing, LLMs are awesome.

    They really aren't though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unusable for 99% of practical applications.

    They are essentially a fun toy for most people, and an ok tool for people with the patience and training to get useful output from them. And they cost an insane amount of money to train and an insane amount of power to run.

    Not to mention the other cost of training them, the human emotional cost. And the human cost of running them.

    It just costs so much, across so many dimensions, for an output that has barely made anything better. Maybe they'll get "better" in the future and have to get through this stage to get there, but I've also seen a lot of people saying they appear to be starting to plateau... maybe a temporary plateau, but if so, how temporary? Could we just drop it for 10 years and start back up when they won't be as inefficient? Maybe a law that they have to pay for everything they feed it would effectively cause them to emerge only when they are actually feasible.

    This is true, but we don’t need people putting glue on their pizza. These people used to have a person to ask; now they’ll be asking Sam Altman.

    Well I would make the argument that someone stupid enough to do such a thing kinda deserves whatever consequences their actions have. I find that people learn faster when actions have consequences instead of everything being babyproofed.
