
Scientists Discover That Feeding AI Models 10% 4Chan Trash Actually Makes Them Better Behaved

Technology
  • I know everyone on Lemmy hates LLMs, but this is really interesting

    This is a "guns don't kill people - people kill people" kind of scenario.

    As a standalone thing, LLMs are awesome.

    What sucks is greedy people using them for the wrong reasons.

    It's like robots. Playing with robots is awesome. Firing 1,000 people and replacing them with robots - and not sharing the benefits with the community - sucks.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.

    Fresh "AI" pseudo-science for a monday morning.

    These grifters never even define "bad/toxic data". It's just 4chan ffs.
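
  • To make the abstract a bit more concrete: as I read it, "inference-time intervention" (ITI) amounts to finding a linear direction in the model's hidden states that tracks toxicity, then nudging activations away from that direction while the model generates. Here's a minimal toy sketch of the idea using synthetic, made-up activations - it is not the paper's code, and the difference-of-means probe below is just one simple way to get such a direction:

```python
# Toy sketch of inference-time-intervention-style detoxification, not the paper's code.
# The "activations" here are synthetic; in a real model they'd come from a forward hook.
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 64

# Pretend hidden states collected on toxic vs. clean prompts.
toxic_acts = rng.normal(loc=0.5, scale=1.0, size=(200, hidden_dim))
clean_acts = rng.normal(loc=-0.5, scale=1.0, size=(200, hidden_dim))

# Difference of means as a crude linear "toxicity direction".
direction = toxic_acts.mean(axis=0) - clean_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

def detoxify(hidden_state: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Remove alpha times the component of the hidden state along the toxicity direction."""
    return hidden_state - alpha * np.dot(hidden_state, direction) * direction

sample = toxic_acts[0]
steered = detoxify(sample)
# The projection onto the toxicity direction shrinks (to ~0 with alpha=1).
print(np.dot(sample, direction), np.dot(steered, direction))
```

    The abstract's claim, as I understand it, is that pretraining on more toxic text makes this direction less entangled with everything else, so interventions like the one above remove toxicity with less collateral damage to general capability.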

  • I know everyone on Lemmy hates LLMs, but this is really interesting

    Yes, it's interesting how grifters constantly pump out these phony results based on pseudo-science.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    It's like how vaccinations protect us from illnesses.

  • Interesting - I can sort of intuit why it might help. Feeding the model bad data and training it to identify it as such would be advantageous compared to leaving it entirely unaware of it.

    bad data

    Can you define this? The authors/grifters call it "toxic data" but never define that either.

  • Interesting training strategy. Makes a lot of sense intuitively. Worried this makes the model even more susceptible to prompt injections. Feels like this method adds more attack vectors? It's unfortunate they didn't attempt to test long-term robustness and stability, though it's probably beyond their scope.

    Just because something makes sense intuitively to one person, that doesn't mean it makes sense scientifically.

    They're probably not testing anything further because they can't even define their terms.

  • I recently realized it's a non-issue. The people doing this have already been looking for decades to find new ways to rot their minds. LLMs are just the latest in a long line of tools that help them tune out.

    I’ve said this a few times in a different way and I always get downvoted. The fact is that the people who will use the LLMs to think for them, were not gonna think a lot in the first place.

  • This is the same market that tried to add blockchain to everything when that first became well-known.

    Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.

    Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.

    I think the biggest forces sell the fantasy to smaller forces. This way they can capitalize on the smaller forces believing the hype.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Not to anthropomorphize LLMs, but.... Like a vaccine?

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Kinda weird GPT-4chan wasn't referenced. A guy fine-tuned GPT-J on 4chan, then deployed bots to write posts. I guess it was more of a stunt than anything academic or scientific, but training on 4chan improved the model's performance on a truthfulness benchmark.

  • They taught it toxicity so it knows what they mean by "don't be toxic". It's only a shame so few flesh-and-blood models take the same lesson away from it.

    The good within the bad

  • I’ve said this a few times in a different way and I always get downvoted. The fact is that the people who will use the LLMs to think for them, were not gonna think a lot in the first place.

    This is true, but we don’t need people putting glue on their pizza. These people used to have a person to ask; now they’ll be asking Sam Altman.

  • Just because something makes sense intuitively to one person, that doesn't mean it makes sense scientifically.

    They're probably not testing anything further because they can't even define their terms.

    Yes, I agree.
    It's relieving to see a scientific result be similar to what one would intuit.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    10% 4chan

    why didn't they just say 0.4chan and be done with it?
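
  • Joking aside, the "10%" in the headline presumably refers to the fraction of toxic text in the pretraining mix; the abstract itself only says the Olmo-1B models were trained on "varying ratios of clean and toxic data". A trivial sketch of what assembling such a mixture could look like (the helper and the document lists are made up for illustration, not the authors' pipeline):

```python
# Hypothetical sketch of mixing clean and toxic documents at a fixed ratio for
# pretraining. Not the paper's data pipeline; every name here is made up.
import random

def mix_corpus(clean_docs, toxic_docs, toxic_fraction=0.10, total=1_000, seed=0):
    """Sample a pretraining mix containing the given fraction of toxic documents."""
    rng = random.Random(seed)
    n_toxic = int(total * toxic_fraction)
    mix = rng.choices(toxic_docs, k=n_toxic) + rng.choices(clean_docs, k=total - n_toxic)
    rng.shuffle(mix)
    return mix

clean = [f"clean doc {i}" for i in range(50)]
toxic = [f"toxic doc {i}" for i in range(50)]
corpus = mix_corpus(clean, toxic, toxic_fraction=0.10)
print(sum(doc.startswith("toxic") for doc in corpus) / len(corpus))  # ~0.10
```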

  • This is a "guns don't kill people - people kill people" kind of scenario.

    As a standalone thing, LLMs are awesome.

    What sucks is greedy people using them for the wrong reasons.

    It's like robots. Playing with robots is awesome. Firing 1,000 people and replacing them with robots - and not sharing the benefits with the community - sucks.

    As a standalone thing, LLMs are awesome.

    They really aren't though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unusable for 99% of practical applications.

  • 10% 4chan

    why didn't they just say 0.4chan and be done with it?

    Don't have gold, but please get out anyways.

  • I mean, it still could be. But LLMs are not the AGI we’re expecting.

    The difficult question about AGI destroying humanity is deciding whether to be afraid of that option or to cheer it on, and LLM enthusiasts are certainly among the people heavily pushing me towards the 'cheer it on' option.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    My hope was that AI would, at least, bear some disgust for the worst of humanity. My new fear is that AI will bear disgust for humanity.

  • My gf's employer was going into administration last month. AI was surprisingly competent in determining where to seek advice and had a decent understanding of what to expect and how to approach things such as not getting paid on time (which happened last week).

    Of course, we double and triple checked any information given to us with the relevant bodies, but it provided a little relief to go into something so chilling without being completely clueless.

    AI has its uses, but you have to know how to extract the information you need.

    It's stupid the way people are using it for therapy. Like, by all means ask it if it knows any organisations which can help you, then look those up, but don't tell it a load of personal information about your relationship, because the reply will be something akin to the advice you see on r/relationships (which is probably where it scraped its data from) 😅

    Judges are warning lawyers there will be sanctions if they keep using LLMs to do their research, as documents with fake references keep appearing.

  • As a standalone thing, LLMs are awesome.

    They really aren't though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unusable for 99% of practical applications.

    They are essentially a fun toy for most people, and an ok tool for people with the patience and training to get useful output from them. And they cost an insane amount of money to train and an insane amount of power to run.

    Not to mention the other cost of training them: the human emotional cost. And the human cost of running them.

    It just costs so much, across so many dimensions, for an output that has barely made anything better. Maybe they'll get "better" in the future and have to go through this stage to get there, but I've also seen a lot of people saying they appear to be starting to plateau... maybe a temporary plateau, but if so, how temporary? Could we just drop it for 10 years and start back up when they won't be as inefficient? Maybe a law that they have to pay for everything they feed in would effectively cause them to only emerge at a time when they are actually feasible.
