
Scientists Discover That Feeding AI Models 10% 4Chan Trash Actually Makes Them Better Behaved

Technology
  • Interesting - I can sort of intuit why it might help. Feeding the model bad data and training it to identify it as such would be advantageous compared to leaving it entirely unaware of it.

    Yeah, it's like me never having alcohol before and walking into a frat party as a freshman. Sometimes it's better to come prepared.

  • Well I would make the argument that someone stupid enough to do such a thing kinda deserves whatever consequences their actions have. I find that people learn faster when actions have consequences instead of everything being babyproofed.

    The rest of us will be stuck with those consequences too. When idiots are at work, third parties always suffer.

  • Boy, I don't even know if I wish that much 4chan on an LLM.

    It is truly a bizarre world. I first went there as an early teen to be edgy, and seeing boobs was fun; then I saw a dude live-post his murder of a woman he liked while everyone called her names.

    It makes a great case for moderation, if not for banning the internet altogether.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.

    Give the AI model the gift of culture and class. No surprise it behaves better.
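
For anyone wondering what "inference-time intervention" along a linear toxicity direction might look like in practice, here is a minimal toy sketch. It is not the paper's code; the layer, the activations, and every name below are made up for illustration. The idea is simply: estimate a direction that separates toxic from clean activations, then project that direction out of the hidden states during the forward pass.

```python
# Illustrative sketch only -- a toy stand-in for steering a model's activations
# along a learned "toxicity" direction, in the spirit of inference-time intervention.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 64

# Stand-in for one transformer block; a real setup would hook a layer of the LLM.
toy_layer = nn.Linear(d_model, d_model)

# Pretend these are hidden states collected on toxic vs. clean prompts.
toxic_acts = torch.randn(256, d_model) + 0.5
clean_acts = torch.randn(256, d_model) - 0.5

# Difference of means gives a candidate linear direction for "toxicity".
direction = toxic_acts.mean(0) - clean_acts.mean(0)
direction = direction / direction.norm()

def detox_hook(module, inputs, output, alpha=1.0):
    # Remove (alpha = 1) or merely dampen (0 < alpha < 1) the toxicity component.
    proj = (output @ direction).unsqueeze(-1) * direction
    return output - alpha * proj

handle = toy_layer.register_forward_hook(detox_hook)

x = torch.randn(4, d_model)
steered = toy_layer(x)
print((steered @ direction).abs().max())  # ~0: the direction has been projected out
handle.remove()
```

Whether this helps or hurts fluency depends on how cleanly "toxicity" is captured by a single direction, which is exactly the entanglement question the abstract is getting at.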

  • Give the AI model the gift of culture and class. No surprise it behaves better.

    Sophistication my good sir.

  • This is one instance where I'm ok with the occasional beating. It's a computer. It doesn't have feelings. It never will. It's not sentient.

    You say all this until ChatGPT convinces you to write a manifesto to "take back" your foreskin from the Jews.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    I envision a Gemini-powered bot that cracks captchas and posts "woke" replies on 4chan. If you're an antivaxxer, antisemite, Nazi, racist, Zionist, or otherwise, it will debate you. It will not get tired. It will not get mad. It will maintain a sense of decorum indefinitely and it will never ever stop. If some far-right extremist decides to do the same, it will have the advantage that academia is left leaning, meaning the model can cite widely recognized studies.

    Dead internet theory and so on, but I'll gladly completely and utterly destroy the internet if it means the filth dies with it.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Based and hopepilled

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Can we stop referring to LLMs as if they're capable of thought? They don't make decisions; their programming just responds to patterns.

  • I envision a Gemini-powered bot that cracks captchas and posts "woke" replies on 4chan. […]

    There's little evidence that debate changes people's ideas.

  • There's little evidence that debate changes people's ideas.

    It's not about changing their ideas. The target is the audience.

  • I envision a Gemini-powered bot that cracks captchas and posts "woke" replies on 4chan. […]

    it will have the advantage that academia is left leaning, meaning the model can cite widely recognized studies.

    Yesterday I was trying to find the person who said a particular quote.

    I asked the same question three times and got three different people.

    The funny part is I had the quote wrong.

    Bullshit all the way down.

  • There's little evidence that debate changes people's ideas.

    Yeah, this only works in scientific fields.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Because 4chan users write original content. That gets fed into the next-stupidest platform, and so on, until it ends up on TikTok or whatever.

    If you have nothing to say, you use Meta/TikTok. No relevant content has ever appeared there first.
    Copies and derivatives, yes...

    So soonish AI will flood 4chan, so AI scrapers get polluted as well... and then it is dead.

  • I know everyone on Lemmy hates LLMs, but this is really interesting

    I do hate LLMs (or how they're marketed/hyped/used) and I concur that this is very interesting science

  • You say all this until ChatGPT convinces you to write a manifesto to "take back" your foreskin from the Jews.

    Funny enough, I am circumcised. But no, if I wanted it back that badly, I'd write it myself.

  • I don't dislike LLMs, I dislike people who treat them as anything more than an advanced search engine and stupidly give them all their confidential data. Seen it happen too much at work.

    Yep. My work is very strict about security except for when it comes to LLMs, and then suddenly they're surprisingly lax about it. It's a bit concerning actually.

  • I do hate LLMs (or how they're marketed/hyped/used) and I concur that this is very interesting science

    I appreciate your reasoned and measured reply, friend!

  • Underrated comment.

    Seems pretty rated to me

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Goddamn, has 4chan gone so far down the road that it's actually come back around and become the good guy?
