Scientists Discover That Feeding AI Models 10% 4Chan Trash Actually Makes Them Better Behaved
-
In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.
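A concrete picture of the detox step may help here. Inference-time intervention (ITI) amounts to finding a roughly linear "toxicity direction" in the model's activations and nudging generations away from it; the paper's claim is that pretraining on some toxic data makes that direction cleaner, so the nudge costs less capability. Below is a minimal sketch in PyTorch, assuming a decoder whose layers expose the residual stream; the helper names and the steering strength `alpha` are illustrative, not the paper's released code.

```python
# Minimal sketch of inference-time intervention (ITI), not the
# authors' released code. Assumes a PyTorch decoder whose layers
# expose the residual stream; `alpha` is an illustrative choice.
import torch

def fit_toxicity_direction(clean_acts: torch.Tensor,
                           toxic_acts: torch.Tensor) -> torch.Tensor:
    """Mass-mean probe: unit vector from the mean activation on
    clean text to the mean activation on toxic text."""
    direction = toxic_acts.mean(dim=0) - clean_acts.mean(dim=0)
    return direction / direction.norm()

def add_iti_hook(layer: torch.nn.Module,
                 direction: torch.Tensor,
                 alpha: float = 5.0):
    """Forward hook that shifts hidden states away from `direction`
    at every decoding step; returns the hook handle for removal."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden - alpha * direction.to(hidden)
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered
    return layer.register_forward_hook(hook)
```

The trade-off the abstract points at is that the same steering strength removes more toxicity when the model's representation of the concept is less entangled with everything else.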
wrote on 9 June 2025, 13:34 (last edited)
Interesting - I can sort of intuit why it might help. Feeding the model bad data and training it to identify it as such would be advantageous compared to being entirely unaware of it.
-
I'm cool with it. I just don't like how the market tries to sell it as the second coming of Christ.
wrote on 9 June 2025, 13:41 (last edited)
"Don't believe that marketing department" is one of those things everybody needs to learn at some point in their life.
-
"Don't believe that marketing department" is one of those things everybody needs to learn at some point in their life.
wrote on 9 June 2025, 13:43 (last edited)
I blame every sci-fi Hollywood movie telling us how powerful and almighty the AI is. How it's going to be the magic pill that entirely destroys or saves humanity by itself.
Now we have an entire generation believing this crap.
-
I blame every sci-fi Hollywood movie telling us how powerful and almighty the AI is. How it's going to be the magic pill that entirely destroys or saves humanity by itself.
Now we have an entire generation believing this crap.
wrote on 9 June 2025, 13:45 (last edited)
I mean, it still could be. But LLMs are not the AGI we're expecting.
-
I dislike that people are relying on them to do all their thinking for them while also being incredibly interested in the tech behind them.
wrote on 9 June 2025, 13:49 (last edited)
I recently realized it's a non-issue. The people doing this have spent decades finding new ways to rot their minds. LLMs are just the latest in a long line of tools that help them tune out.
-
It's extremely useful for many things, if you know how to use it, and it's annoying and useless for many others, which is what they fixate on and knee-jerk react to.
wrote on 9 June 2025, 13:51 (last edited)
It's annoying that every middle manager is trying to become the hero of their company by pushing it inappropriately into every single field at the expense of productivity and jobs, while simultaneously the largest, most powerful companies are slinging their SaaS solutions built on stolen data, which are destroying communities of both the physical and hobby varieties and consuming more natural resources than all the fucking crypto scams of the last 10 years.
But yeah, it's neat I guess.
-
I blame every sci-fi Hollywood movie telling us how powerful and almighty the AI is. How it's going to be the magic pill that entirely destroys or saves humanity by itself.
Now we have an entire generation believing this crap.
wrote on 9 June 2025, 13:51, last edited by shinkantrain@lemmy.ml on 6 Sept. 2025, 15:53
You can blame Hollywood for a lot of things, including this, but sci-fi authors have been doing it for longer. That's where Hollywood took those stories from in the first place.
-
In large language model (LLM) pretraining, data quality is believed to determine model quality. […] Our findings suggest that, with post-training taken into account, bad data may lead to good models.
wrote on 9 June 2025, 14:04 (last edited)
Interesting training strategy. Makes a lot of sense intuitively. Worried this makes the model even more susceptible to prompt injections. Feels like this method adds more attack vectors? It's unfortunate they didn't test the long-term hardness and stability, though that's probably beyond their scope.
-
I know everyone on Lemmy hates LLMs, but this is really interesting
wrote on 9 June 2025, 14:06 (last edited)
I love how everyone jumps on your comment after being called out and acts like they don't absolutely hate every stitch of it. But even in their excuses you can see the lies.
-
I'm cool with it. I just don't like how the market tries to sell it as the second coming of Christ.
wrote on 9 June 2025, 14:13, last edited by logicbomb@lemmy.world on 6 Sept. 2025, 16:32
This is the same market that tried to add blockchain to everything when that first became well-known.
Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.
-
In large language model (LLM) pretraining, data quality is believed to determine model quality. […] Our findings suggest that, with post-training taken into account, bad data may lead to good models.
wrote on 9 June 2025, 14:19 (last edited)
Fighting fire with fire.
-
It's extremely useful for many things, if you know how to use it, and it's annoying and useless for many others, which is what they fixate on and knee-jerk react to.
wrote on 9 June 2025, 14:22 (last edited)
My gf's employer was going into administration last month. AI was surprisingly competent at determining where to seek advice, and it had a decent understanding of what to expect and how to approach things such as not getting paid on time (which happened last week).
Of course, we double- and triple-checked any information it gave us with the relevant bodies, but it provided a little relief to go into something so chilling without being completely clueless.
AI has its uses, but you have to know how to extract the information you need.
It's stupid the way people are using it for therapy. Like, by all means ask it if it knows of any organisations that can help you, then look those up, but don't tell it a load of personal information about your relationship, because the reply will be something akin to the advice you see on r/relationships (which is probably where it scraped its data from).
-
I know everyone on Lemmy hates LLMs, but this is really interesting
wrote on 9 June 2025, 14:44 (last edited)
This is a "guns don't kill people - people kill people" kind of scenario.
As a standalone thing, LLMs are awesome.
What sucks is greedy people using them for the wrong reasons.
It's like robots. Playing with robots is awesome. Firing 1,000 people and replacing them with robots - and not sharing the benefits with the community - sucks.
-
In large language model (LLM) pretraining, data quality is believed to determine model quality. […] Our findings suggest that, with post-training taken into account, bad data may lead to good models.
wrote on 9 June 2025, 14:48, last edited by technocrit@lemmy.dbzer0.com on 6 Sept. 2025, 17:00
Fresh "AI" pseudo-science for a Monday morning.
These grifters never even define "bad/toxic data". It's just 4chan ffs.
-
I know everyone on Lemmy hates LLMs, but this is really interesting
wrote on 9 June 2025, 14:49 (last edited)
Yes, it's interesting how grifters constantly pump out these phony results based on pseudo-science.
-
In large language model (LLM) pretraining, data quality is believed to determine model quality. […] Our findings suggest that, with post-training taken into account, bad data may lead to good models.
wrote on 9 June 2025, 14:51, last edited by endmaker@ani.social on 6 Sept. 2025, 16:52
It's like how vaccinations protect us from illnesses.
-
Interesting - I can sort of intuit why it might help. Feeding the model bad data and training it to identify it as such would be advantageous compared to being entirely unaware of it.
wrote on 9 June 2025, 14:51 (last edited)
bad data
Can you define this? The authors/grifters call it "toxic data" but never define that either.
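For what it's worth, "toxic" in this line of work is usually not defined philosophically; it is operationalized as a classifier score above a threshold, typically from a Jigsaw-style toxic-comment model. A minimal sketch of that operationalization, assuming the open-source Detoxify library; the 0.5 threshold is an illustrative choice, not the paper's own definition:

```python
# How "toxic" is commonly operationalized in this literature: a
# Jigsaw-trained classifier score above a threshold. This is an
# assumption for illustration, not the paper's own definition.
# pip install detoxify
from detoxify import Detoxify

classifier = Detoxify('original')  # Jigsaw toxic-comment model

def is_toxic(text: str, threshold: float = 0.5) -> bool:
    scores = classifier.predict(text)  # keys: 'toxicity', 'insult', ...
    return scores['toxicity'] > threshold
```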
-
Interesting training strategy. Makes a lot of sense intuitively. Worried this makes the model even more susceptible to prompt injections. Feels like this method adds more attack vectors? It's unfortunate they didn't test the long-term hardness and stability, though that's probably beyond their scope.
wrote on 9 June 2025, 14:52 (last edited)
Just because something makes sense intuitively to one person doesn't mean it makes sense scientifically.
They're probably not testing anything further because they can't even define their terms.
-
I recently realized it's a non-issue. The people doing this have spent decades finding new ways to rot their minds. LLMs are just the latest in a long line of tools that help them tune out.
wrote on 9 June 2025, 14:53 (last edited)
I've said this a few times in different ways and I always get downvoted. The fact is that the people who would use LLMs to think for them weren't gonna think much in the first place.
-
This is the same market that tried to add blockchain to everything when that first became well-known.
Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.
wrote on 9 June 2025, 14:59 (last edited)
Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.
I think the biggest forces sell the fantasy to smaller forces. This way they can capitalize on the smaller forces believing the hype.
-
I Convinced HP's Board to Buy Palm for $1.2B. Then I Watched Them Kill It in 49 Days
Technology, 16 June 2025, 16:10 · 1
-
Java at 30: How a language designed for a failed gadget became a global powerhouse
Technology, 31 May 2025, 11:58 · 1
-
New Orleans used Minority Report-like facial recognition software to monitor citizens for crime suspects: Report
Technology, 20 May 2025, 02:31 · 1
-
Groups of AI agents spontaneously form their own social norms without human help
Technology, 16 May 2025, 20:42 · 1