
Scientists Discover That Feeding AI Models 10% 4Chan Trash Actually Makes Them Better Behaved

Technology
  • Interesting - I can sort of intuit why it might help. Feeding the model bad data and training it to identify it as such would be advantageous compared to leaving it entirely unaware of it.

    Yeah, it's like me never having alcohol before and walking into a frat party as a freshman. Sometimes it's better to come prepared.

  • Well I would make the argument that someone stupid enough to do such a thing kinda deserves whatever consequences their actions have. I find that people learn faster when actions have consequences instead of everything being babyproofed.

    The rest of us will be stuck with those consequences too. When idiots are at work, third parties always suffer.

  • Boy, I don't even know if I wish that much 4chan on a LLM.

    It is truly a bizarre world. I first went there as an early teen to be edgy (and seeing boobs is fun), then I saw a dude live-post his murder of a woman he liked while everyone called her names.

    It makes a great case for moderation if not banning the internet.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.

    Give the AI model the gift of culture and class. No surprise it behaves better.
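The inference-time intervention (ITI) the abstract mentions works by steering a model's internal activations along a learned "toxicity" direction. As a rough illustration only (not the paper's actual code; the function names and the difference-of-means probe here are my own assumptions), the idea can be sketched as:

```python
import numpy as np

def toxicity_direction(toxic_acts, clean_acts):
    # Estimate a linear "toxicity" direction as the normalized difference
    # of mean activations over toxic vs. clean examples (a common
    # difference-of-means probing trick; the paper's probe may differ).
    d = toxic_acts.mean(axis=0) - clean_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def intervene(activation, direction, alpha=1.0):
    # ITI-style edit: at inference time, subtract a fraction alpha of the
    # activation's projection onto the toxicity direction, leaving the
    # orthogonal components (everything else the model knows) untouched.
    return activation - alpha * np.dot(activation, direction) * direction
```

The paper's claim, loosely, is that more toxic pretraining data makes this direction less entangled with other features, so subtracting it removes toxicity while costing less general capability.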

  • Give the AI model the gift of culture and class. No surprise it behaves better.

    Sophistication my good sir.

  • This is one instance where I'm ok with the occasional beating. It's a computer. It doesn't have feelings. It never will. It's not sentient.

    You say all this until ChatGPT convinces you to write a manifesto to "take back" your foreskin from the Jews.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    I envision a Gemini-powered bot that cracks captchas and posts "woke" replies on 4chan. If you're an antivaxxer, antisemite, nazi, racist, Zionist, or otherwise, it will debate you. It will not get tired. It will not get mad. It will maintain a sense of decorum indefinitely and it will never, ever stop. If some far-right extremist decides to do the same, it will have the advantage that academia is left-leaning, meaning the model can cite widely recognized studies.

    Dead internet theory and so on, but I'll gladly completely and utterly destroy the internet if it means the filth dies with it.

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Based and hopepilled

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Can we stop referring to LLMs as if they're capable of thought? They don't make decisions; their programming just responds to patterns.

  • I envision a Gemini powered bot that cracks captcha and posts "woke" replies on 4chan. […]

    There's little evidence that debate changes people's ideas.

  • There's little evidence that debate changes people's ideas.

    It's not about changing their ideas. The target is the audience.

  • I envision a Gemini powered bot that cracks captcha and posts "woke" replies on 4chan. […]

    it will have the advantage that academia is left leaning, meaning the model can cite widely recognized studies.

    I was looking for the person who said a particular quote yesterday.

    I asked the same question 3 times and got 3 different people.

    The funny part is, I had the quote wrong.

    Bullshit all the way down.

  • There's little evidence that debate changes people's ideas.

    yeah, this only works in scientific fields

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Because 4chan users write original content. That gets fed into the next-best stupid platform, and so on, until it ends up on TikTok or whatever.

    If you have nothing to say, you use Meta/TikTok. No relevant content has ever appeared there first; copies and derivatives, yes.

    So soonish AI will flood 4chan, AI scrapers will get polluted as well, and then it is dead.

  • I know everyone on Lemmy hates LLMs, but this is really interesting

    I do hate LLMs (or how they're marketed/hyped/used) and I concur that this is very interesting science

  • You say all this until ChatGPT convinces you to write a manifesto to "take back" your foreskin from the Jews.

    Funnily enough, I am circumcised. But no, if I wanted it back that badly, I'd write it myself.

  • I don't dislike LLMs, I dislike people who treat them as anything more than an advanced search engine and stupidly give them all their confidential data. Seen it happen too much at work.

    Yep. My work is very strict about security except for when it comes to LLMs, and then suddenly they're surprisingly lax about it. It's a bit concerning actually.

  • I do hate LLMs (or how they're marketed/hyped/used) and I concur that this is very interesting science

    I appreciate your reasoned and measured reply, friend!

  • Underrated comment.

    Seems pretty rated to me

  • In large language model (LLM) pretraining, data quality is believed to determine model quality. […]

    Goddamn, has 4chan gone so far down the road that it's actually come back around and become the good guy?
