Wikipedia Pauses an Experiment That Showed Users AI-Generated Summaries at the Top of Some Articles, Following an Editor Backlash

Technology
  • Hey everyone, this is Olga, the product manager for the summary feature again. Thank you all for engaging so deeply with this discussion and sharing your thoughts so far.

    Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March. As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia and sustain our movement into the future. Consequently, we need to figure out how we can experiment in ways that are safe and appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context and to make space for folks to engage further. With that in mind, we’d like to take a step back so we have more time to talk through things properly. We’re still in the very early stages of thinking about a feature like this, so this is actually a really good time for us to discuss it here.

    A few important things to start with:

    1. Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such.
    2. We do not have any plans to bring a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea and for any future idea around AI-summarized or adapted content.
    3. With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

    We’ve also started putting together some context around the main points brought up in the conversation so far, and will follow up with that in separate messages so we can discuss further.

  • Articles already have a summary at the top due to the page format, so why was AI shoved into the process?

  • Articles already have a summary at the top due to the page format, so why was AI shoved into the process?

    Because AI.

  • How about not putting AI into something that should be entirely human controlled?

  • How about not putting AI into something that should be entirely human controlled?

    Yeah, as more organizations implement LLMs, Wikipedia has the opportunity to become more reliable and authoritative. Don't mess that opportunity up with "AI."

  • So they:

    • Didn't ask editors/users
    • Noticed loud and overwhelmingly negative feedback
    • "Paused" the program

    They still don't get it. There's very little practical use for LLMs in general, and certainly not in scholastic spaces. The content is all user-generated anyway, so what's even the point? It's not saving them any money.

    Also, it seems like a giant waste of resources for an organization that constantly runs giant banners asking for money and claims to basically be on the verge of closing up every time you visit their site.

  • These days, most companies that work with web-based products are under pressure from upper management to "use AI", because there's a fear of missing out if they don't. Management doesn't necessarily have any idea what they should use it for, so they leave that to product managers and such. They don't have any idea either, so they look at what features others have built and find a way to adapt one or more of those to fit their own products.

    A slap on the back, job well done, clueless upper management happy, even though money and time have been spent and revenue remains the same.

  • Summarization is one of the things LLMs are pretty good at. Same for the other thing, where Wikipedia talked about auto-generating the "simple article" variants that are normally managed by hand to dumb down content.

    But if they're pushing these tools, they need to be pushed as handy tools for editors to consider leveraging, not forced behavior for end users.

  • If her list were straight talk:

    1. We’re gonna make up shit
    2. But don’t worry, we’ll manually label it, what could go wrong
    3. Dang, no one was fooled, let’s figure out a different way to pollute everything with alternative facts
  • If we need summaries, let's let a human being write the summaries. We are already experts at writing. We love doing it.

  • Is there a way for us to complain to Wikipedia about this? I contribute money every year, and I will 100% stop if they're shoving more LLM slop down my throat.

    Edit:
    You can contribute to the discussion in the link, and you can email them at addresses found here: https://wikimediafoundation.org/about/contact/

  • I passionately hate the corpo speak she's using. This fake list of "things she's done wrong but now she'll do them right, pinky promise!!", whilst completely ignoring the actual reason for the pushback they've received (which boils down to "fuck your AI, keep it out"), is typical management behavior after they've been caught trying to screw over the workers in some way.

    We're going to screw you over one way or the other, we just should have communicated it better!

    Basically this.

  • Summaries that look good are something LLMs can do, but not summaries that actually have a higher ratio of important to unimportant content than the source, nor ones that keep things accurate. That last one is absolutely mandatory for something like an encyclopedia.

  • Articles already have a summary at the top due to the page format, so why was AI shoved into the process?

    Grok, please ELI5 this comment so I can understand it.

  • Wikipedia can create a market niche by guaranteeing that its content is 100% human-made. Some of the stupid upper-management types understand being unique as a marketing strategy.

  • Your last point says it all. Rather than being a source of truth, it is now meant to bend the truth; 2 plus 2 no longer equals 4.

  • I can't wait until this "put LLMs in everything" phase is over.

  • The only application I've kind of liked so far has been the one on Amazon that summarizes the content of the reviews. It seems relatively accurate in general.

  • I canceled my recurring donation over this about a week ago, explaining that this was the reason. One of their people sent me a lengthy response that I appreciated. I'm still going to wait a year before I reinstate it; hopefully they'll have fully moved on from this idea by then. Their response sounded a lot like this, though, kinda wishy-washy.

  • The sad truth is that AI empowers the malicious to create a bigger impact on workload and standards than is scalable with humans alone. An AI running triage on article changes, flagging or reporting changes that need more input, would be ideal. But threat mitigation and integrity preservation don't really seem to be high on their list of priorities.
