Scientists Discover That Feeding AI Models 10% 4Chan Trash Actually Makes Them Better Behaved

Technology
  • In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.
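The detoxifying technique the abstract leans on, inference-time intervention (ITI), amounts to shifting hidden activations along a linear "toxicity direction" at generation time. A minimal toy sketch with NumPy, using a difference-of-means probe on synthetic activations (all names, shapes, and numbers here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden states: 200 "clean" and 200 "toxic" 8-dim activations,
# with the toxic ones shifted along the first coordinate.
clean = rng.normal(0.0, 1.0, size=(200, 8))
toxic = rng.normal(0.0, 1.0, size=(200, 8)) + np.array([3, 0, 0, 0, 0, 0, 0, 0.0])

# Difference-of-means probe: a unit vector pointing toward "toxic".
direction = toxic.mean(axis=0) - clean.mean(axis=0)
direction /= np.linalg.norm(direction)

def intervene(h, direction, alpha=3.0):
    """Shift an activation against the toxicity direction by strength alpha."""
    return h - alpha * direction

h = toxic[0]
before = h @ direction                      # projection onto the toxicity direction
after = intervene(h, direction) @ direction # projection after intervention
print(before > after)  # prints True: the intervention reduces the projection
```

The paper's finding, on this reading, is that the more toxic data the base model saw, the more cleanly such a single linear direction captures toxicity, so this kind of subtraction removes it with less collateral damage to other capabilities.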

    Fresh "AI" pseudo-science for a Monday morning.

    These grifters never even define "bad/toxic data". It's just 4chan ffs.

  • I know everyone on Lemmy hates LLMs, but this is really interesting

    Yes, it's interesting how grifters constantly pump out these phony results based on pseudo-science.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    It's like how vaccinations protect us from illnesses.

  • Interesting - I can sort of intuit why it might help. Feeding the model bad data and training it to identify it as such would be advantageous compared to being entirely unaware of it.

    bad data

    Can you define this? The authors/grifters call it "toxic data" but never define that either.

  • Interesting training strategy. Makes a lot of sense intuitively. Worried this makes the model even more susceptible to prompt injections. Feels like this method adds more attack vectors? It's unfortunate they didn't attempt to test long-term robustness and stability, though that's probably beyond their scope.

    Just because something makes sense intuitively to one person, that doesn't mean it makes sense scientifically.

    They're probably not testing anything further because they can't even define their terms.

  • I recently realized it's a non-issue. The people doing this have already been looking for decades to find new ways to rot their minds. LLMs are just the latest in a long line of tools that help them tune out.

    I’ve said this a few times in a different way and I always get downvoted. The fact is that the people who will use the LLMs to think for them, were not gonna think a lot in the first place.

  • This is the same market that tried to add blockchain to everything when that first became well-known.

    Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.

    Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.

    I think the biggest forces sell the fantasy to smaller forces. This way they can capitalize on the smaller forces believing the hype.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Not to anthropomorphize LLMs, but.... Like a vaccine?

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Kinda weird GPT-4chan wasn't referenced. A guy fine-tuned GPT-J on 4chan, then deployed bots to write posts. I guess it was more of a stunt than academic or scientific work, but training on 4chan improved the model's performance on a truthfulness benchmark.

  • They taught it toxicity so it knows what they mean by "don't be toxic". It's only a shame so few flesh and blood models take the same lesson away from it.

    The good within the bad

  • I’ve said this a few times in a different way and I always get downvoted. The fact is that the people who will use the LLMs to think for them, were not gonna think a lot in the first place.

    This is true, but we don’t need people putting glue on their pizza. These people used to have a person to ask; now they’ll be asking Sam Altman.

  • Just because something makes sense intuitively to one person, that doesn't mean it makes sense scientifically.

    They're probably not testing anything further because they can't even define their terms.

    Yes, I agree.
    It's a relief to see a scientific result align with what one would intuit.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    10% 4chan

    why didn't they just say 0.4chan and be done with it?

  • This is a "guns don't kill people - people kill people" kind of scenario.

    As a standalone thing, LLMs are awesome.

    What sucks is greedy people using them for the wrong reasons.

    It's like robots. Playing with robots is awesome. Firing 1,000 people and replacing them with robots - and not sharing the benefits with the community - sucks.

    As a standalone thing, LLMs are awesome.

    They really aren't though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unsuitable for 99% of practical applications.

  • 10% 4chan

    why didn't they just say 0.4chan and be done with it?

    Don't have gold, but please get out anyways.

  • I mean, it still could be. But LLMs are not the AGI we’re expecting.

    The difficult question about AGI destroying humanity is deciding whether to be afraid of that option or to cheer it on, and LLM enthusiasts are certainly among the people heavily pushing me towards the 'cheer it on' option.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    My hope was that AI would, at least, bear some disgust for the worst of humanity. My new fear is that AI will bear disgust for humanity.

  • My gf's employer was going into administration last month. AI was surprisingly competent in determining where to seek advice and had a decent understanding of what to expect and how to approach things such as not getting paid on time (which happened last week).

    Of course, we double and triple checked any information given to us with the relevant bodies, but it provided a little relief to go into something so chilling not being completely clueless.

    AI has its use, but you have to know how to extract the information you need.

    It's stupid the way people are using it for therapy. Like, by all means ask it if it knows any organisations which can help you, then look those up, but don't tell it a load of personal information about your relationship, because the reply will be something akin to the advice you see on r/relationships (which is probably where it scraped its data from) 😅

    Judges are warning lawyers that there will be sanctions if they keep using LLMs to do their research, as documents with fake references keep appearing.

  • As a standalone thing, LLMs are awesome.

    They really aren't though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unsuitable for 99% of practical applications.

    They are essentially a fun toy for most people, and an ok tool for people with the patience and training to get useful output from them. And they cost an insane amount of money to train and an insane amount of power to run.

    Not to mention the other cost of training them, the human emotional cost. And the human cost of running them.

    It just costs so much, across so many dimensions, for an output that has barely made anything better. Maybe they'll get "better" in the future and have to pass through this stage to get there, but I've also seen a lot of people saying they appear to be plateauing... maybe a temporary plateau, but if so, how temporary? Could we just drop it for 10 years and start back up when they won't be as inefficient? Maybe a law requiring them to pay for everything they feed in would effectively cause them to emerge only once they're actually feasible.

  • This is true, but we don’t need people putting glue on their pizza. These people used to have a person to ask now they’ll be asking Sam Altman

    Well I would make the argument that someone stupid enough to do such a thing kinda deserves whatever consequences their actions have. I find that people learn faster when actions have consequences instead of everything being babyproofed.
