Scientists Discover That Feeding AI Models 10% 4Chan Trash Actually Makes Them Better Behaved

Technology
  • In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.

    Fresh "AI" pseudo-science for a Monday morning.

    These grifters never even define "bad/toxic data". It's just 4chan ffs.
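
For readers wondering what the abstract's "inference-time intervention (ITI)" refers to: below is a minimal sketch of the general idea, assuming a simple difference-of-means probe and a hand-picked layer and steering strength. Every name and number here is an illustrative assumption, not the paper's exact recipe.

```python
# Minimal ITI-style sketch: find a linear "toxicity direction" in a
# model's hidden states, then steer activations away from it during
# generation. The probe choice, layer index, and alpha below are all
# illustrative assumptions, not the paper's exact setup.
import torch

def fit_toxicity_direction(clean_acts: torch.Tensor,
                           toxic_acts: torch.Tensor) -> torch.Tensor:
    """Difference-of-means probe: one simple way to extract a linear
    direction for a concept (here, toxicity) from activations of
    shape (num_examples, hidden_dim)."""
    direction = toxic_acts.mean(dim=0) - clean_acts.mean(dim=0)
    return direction / direction.norm()

def steer_away(hidden: torch.Tensor, direction: torch.Tensor,
               alpha: float = 5.0) -> torch.Tensor:
    """Subtract alpha times the projection of each hidden state onto
    the toxicity direction, nudging generations away from it."""
    proj = (hidden @ direction).unsqueeze(-1) * direction
    return hidden - alpha * proj

# Usage sketch (HuggingFace-style decoder; layer 12 is arbitrary):
# handle = model.model.layers[12].register_forward_hook(
#     lambda mod, inp, out: (steer_away(out[0], direction),) + out[1:]
# )
```

The paper's claim, roughly, is that the more toxic data the base model saw, the less entangled this linear direction is, so steering of this kind removes toxicity with less collateral damage to general capability.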

  • I know everyone on Lemmy hates LLMs, but this is really interesting

    Yes, it's interesting how grifters constantly pump out these phony results based on pseudo-science.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […] Our findings suggest that, with post-training taken into account, bad data may lead to good models.

    It's like how vaccinations protect us from illnesses.

  • Interesting - I can sort of intuit why it might help. Feeding the model bad data and training it to identify it as such would be advantageous compared to being entirely unaware of it.

    bad data

    Can you define this? The authors/grifters call it "toxic data" but never define that either.

  • Interesting training strategy. Makes a lot of sense intuitively. Worried this makes the model even more susceptible to prompt injections. Feels like this method adds more attack vectors? It's unfortunate they didn't attempt to test the long-term hardness and stability, though it's probably beyond their scope.

    Just because something makes sense intuitively to one person, that doesn't mean it makes sense scientifically.

    They're probably not testing anything further because they can't even define their terms.

  • I recently realized it's a non-issue. The people doing this have already been looking for decades to find new ways to rot their minds. LLMs are just the latest in a long line of tools that help them tune out.

    I’ve said this a few times in a different way and I always get downvoted. The fact is that the people who will use the LLMs to think for them, were not gonna think a lot in the first place.

  • This is the same market that tried to add blockchain to everything when that first became well-known.

    Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.

    I think the biggest forces sell the fantasy to smaller forces. This way they can capitalize on the smaller forces believing the hype.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […] Our findings suggest that, with post-training taken into account, bad data may lead to good models.

    Not to anthropomorphize LLMs, but.... Like a vaccine?

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […] Our findings suggest that, with post-training taken into account, bad data may lead to good models.

    Kinda weird GPT-4chan wasn't referenced. A guy fine-tuned GPT-J on 4chan, then deployed bots to write posts. I guess it was more of a stunt than anything academic or scientific, but training on 4chan improved the model's performance on a truthfulness benchmark.

  • They taught it toxicity so it knows what they mean by "don't be toxic". It's only a shame so few flesh and blood models take the same lesson away from it.

    The good within the bad

  • I’ve said this a few times in a different way and I always get downvoted. The fact is that the people who will use the LLMs to think for them, were not gonna think a lot in the first place.

    This is true, but we don’t need people putting glue on their pizza. These people used to have a person to ask; now they’ll be asking Sam Altman.

  • Just because something makes sense intuitively to one person, that doesn't mean it makes sense scientifically.

    They're probably not testing anything further because they can't even define their terms.

    Yes, I agree. It's reassuring to see a scientific result be similar to what one would intuit.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […] Our findings suggest that, with post-training taken into account, bad data may lead to good models.

    10% 4chan

    why didn't they just say 0.4chan and be done with it?
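
Joking aside, the headline's "10%" is just the toxic share of the pretraining mixture. As a rough illustration of what "varying ratios of clean and toxic data" means in practice, here is a minimal sketch of sampling such a mixture; the corpus variables and the sampling scheme are assumptions for illustration, not the paper's actual pipeline.

```python
# Hedged sketch: build a pretraining stream where a fixed fraction of
# documents (e.g. 10%) is drawn from a "toxic" source such as 4chan.
# `clean_docs` and `toxic_docs` are placeholder lists of strings.
import random

def mixed_stream(clean_docs, toxic_docs, toxic_frac=0.10, seed=0):
    """Yield documents, choosing the toxic source with probability
    toxic_frac and the clean source otherwise."""
    rng = random.Random(seed)
    while True:
        pool = toxic_docs if rng.random() < toxic_frac else clean_docs
        yield rng.choice(pool)

# Example: peek at a 10%-toxic mixture.
# stream = mixed_stream(clean_docs, toxic_docs, toxic_frac=0.10)
# batch = [next(stream) for _ in range(8)]
```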

  • This is a "guns don't kill people - people kill people" kind of scenario.

    As a standalone thing, LLMs are awesome.

    What sucks is greedy people using them for the wrong reasons.

    It's like robots. Playing with robots is awesome. Firing 1,000 people and replacing them with robots - and not sharing the benefits with the community - sucks.

    As a standalone thing, LLMs are awesome.

    They really aren't though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unusable for 99% of practical applications.

  • 10% 4chan

    why didn't they just say 0.4chan and be done with it?

    Don't have gold, but please get out anyways.

  • I mean, it still could be. But LLMs are not the AGI we’re expecting.

    The difficult question about AGI destroying humanity is deciding whether to be afraid of that option or to cheer it on, and LLM enthusiasts are certainly among the people heavily pushing me towards the 'cheer it on' option.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […] Our findings suggest that, with post-training taken into account, bad data may lead to good models.

    My hope was that AI would, at least, bear some disgust for the worst of humanity. My new fear is that AI will bear disgust for humanity.

  • My gf's employer was going into administration last month. AI was surprisingly competent in determining where to seek advice and had a decent understanding of what to expect and how to approach things such as not getting paid on time (which happened last week).

    Of course, we double and triple checked any information given to us with the relevant bodies, but it provided a little relief to go into something so chilling not being completely clueless.

    AI has its use, but you have to know how to extract the information you need.

    It's stupid the way people are using it for therapy. Like, by all means ask it if it knows any organisations which can help you, then look those up, but don't tell it a load of personal information about your relationship, because the reply will be something akin to the advice you see on r/relationships (which is probably where it scraped its data from) 😅

    Judges are warning lawyers there will be sanctions if they keep using LLMs to do their research, as documents with fake references keep appearing.

  • As a standalone thing, LLMs are awesome.

    They really aren't though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unusable for 99% of practical applications.

    They are essentially a fun toy for most people, and an ok tool for people with the patience and training to get useful output from them. And they cost an insane amount of money to train and an insane amount of power to run.

    Not to mention the other cost of training them, the human emotional cost. And the human cost of running them.

    It just costs so much of a variety of things, for an output that has barely made anything better. Maybe they'll get "better" in the future and have to get through this stage to get there, but I've also seen a lot of people saying they appear to be starting to plateau... maybe a temporary plateau, but if so, how temporary? Could we just drop it for 10 years and start back up when they won't be as inefficient? Maybe a law that they have to pay for everything they feed it would effectively cause them to only emerge at a time when they are actually feasible.

  • This is true, but we don’t need people putting glue on their pizza. These people used to have a person to ask; now they’ll be asking Sam Altman.

    Well I would make the argument that someone stupid enough to do such a thing kinda deserves whatever consequences their actions have. I find that people learn faster when actions have consequences instead of everything being babyproofed.
