
Wikipedia Pauses an Experiment That Showed Users AI-Generated Summaries at the Top of Some Articles, Following an Editor Backlash

Technology
  • Hey everyone, this is Olga, the product manager for the summary feature again. Thank you all for engaging so deeply with this discussion and sharing your thoughts so far.

    Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March. As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further. With that in mind, we’d like to take a step back so we have more time to talk through things properly. We’re still in the very early stages of thinking about a feature like this, so this is actually a really good time for us to discuss here.

    A few important things to start with:

    1. Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such.
    2. We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.
    3. With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

    We’ve also started putting together some context around the main points brought up through the conversation so far, and will follow-up with that in separate messages so we can discuss further.

    I canceled my recurring donation over this about a week ago, explaining that this was the reason. One of their people sent me a lengthy response that I appreciated. Still going to wait a year before I reinstate it; hopefully they fully move on from this idea by then. It sounded a lot like this statement, though: kinda wishy-washy.

  • How about not putting AI into something that should be entirely human controlled?

    The sad truth is that AI lets malicious actors create more work, and do more damage to standards, than humans alone can keep up with. An AI running triage on article changes, flagging or reporting changes that need more input, would be ideal. But threat mitigation and integrity preservation don't really seem to be high on their priority list.
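    The triage idea above would put a model in a flag-for-review role rather than letting it act on its own. A minimal sketch of what that shape looks like; the keyword heuristic here is a made-up placeholder standing in for whatever model would actually score a diff, and none of these names come from any real Wikipedia tooling:

```python
# Toy sketch of edit triage: score each change and flag risky ones for
# a human editor. The scoring heuristic is a stand-in for a real model;
# nothing here touches any actual Wikipedia API.

SUSPECT_TERMS = {"miracle", "guaranteed", "obviously"}

def score_edit(old_text: str, new_text: str) -> float:
    """Return a rough risk score in [0, 1]; higher means riskier."""
    added = set(new_text.lower().split()) - set(old_text.lower().split())
    hits = sum(1 for word in added if word in SUSPECT_TERMS)
    size_change = abs(len(new_text) - len(old_text)) / max(len(old_text), 1)
    return min(1.0, 0.3 * hits + 0.5 * min(size_change, 1.0))

def triage(old_text: str, new_text: str, threshold: float = 0.4) -> str:
    """Flag risky edits for human review instead of auto-reverting."""
    return "needs-review" if score_edit(old_text, new_text) >= threshold else "ok"
```

    The key design point, matching the comment: the AI's only output is "needs-review", never an automatic revert, so a human stays in the loop.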

  • These days, most companies that work with web based products are under pressure from upper management to "use AI", as there's a fear of missing out if they don't. Now, management doesn't necessarily have any idea what they should use it for, so they leave that to product managers and such. They don't have any idea, either, and so they look at what features others have built and find a way to adapt one or more of those to fit their own products.

    Slap on the back, job well done, clueless upper management happy, even though money and time have been spent and the revenue remains the same.

    I've already posted this a few times, but Ed Zitron wrote a long article about what he calls "Business Idiots": basically, people in decision-making positions who are out of touch with their users and their products. They make bad decisions, and that's a big factor in why everything kind of sucks now.

    https://www.wheresyoured.at/the-era-of-the-business-idiot/ (it's long)

    I think a lot of us have this illusion that higher-ranking people are smarter, more visionary, or whatever. But I think not. A lot of people are just kind of stupid, surrounded by other stupid people, cushioned from real, personal consequences. On top of that, for many enterprises the incentives don't line up with the users'. At least Wikipedia isn't profit-driven, but you can probably think of some things you've used that got more annoying with updates: Google putting more ads up top, or any website redesign that yields more ad space and worse navigation.

  • Summarization is one of the things LLMs are pretty good at. Same for the other idea, where Wikipedia talked about auto-generating the "simple article" variants that are normally maintained by hand to dumb down content.

    But if they're pushing these tools, they need to be pushed as handy tools for editors to consider leveraging, not forced behavior for end users.

    not forced behavior for end users.

    This is what I'm constantly criticizing. It's fine to have more options, but they should be options and not mandatory.

    No, having to scroll past an AI summary for every fucking article is not an 'option.' Having the option to hide it forever (or even better, opt-in), now that's a real option.

    I'd really love to see the opt-in/opt-out data for AI. I guarantee businesses aren't including the option or recording the data, because they know it would show people don't want it, and then they'd have to follow the data!
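    The opt-in vs. opt-out distinction the thread keeps drawing can be stated precisely: an opt-in feature ships disabled until the user explicitly turns it on, and the choice persists. A minimal sketch; every name here is hypothetical, not any real Wikipedia preference:

```python
# Sketch of an opt-in feature flag: the AI summary is OFF unless the
# user explicitly enables it, and the choice persists per user.
# All names here are hypothetical.

class Preferences:
    def __init__(self) -> None:
        self._flags: dict[str, bool] = {}

    def show_ai_summary(self) -> bool:
        # Opt-in semantics: default is False when the user never chose.
        return self._flags.get("ai_summary", False)

    def set_ai_summary(self, enabled: bool) -> None:
        self._flags["ai_summary"] = enabled

prefs = Preferences()
assert prefs.show_ai_summary() is False   # opt-in: hidden by default
prefs.set_ai_summary(True)
assert prefs.show_ai_summary() is True    # explicit choice persists
```

    An opt-out design is the same code with the default flipped to `True`, which is exactly the one-line difference the commenters are objecting to.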

  • Hey everyone, this is Olga, the product manager for the summary feature again. […]

    Noo Wikipedia why would you do this

  • Grok please ELI5 this comment so i can understand it

    I know your comment was /s but I can't not repost this:

  • Hey everyone, this is Olga, the product manager for the summary feature again. […]

    I don't see how AI could benefit Wikipedia. The power consumption alone isn't worth it. Wikipedia is one of the rare AI-free zones, which is part of why it is good.

  • Hey everyone, this is Olga, the product manager for the summary feature again. […]

    I don't think Wikipedia is run for the benefit of users anymore, but what even are the alternatives? Leftypedia? Definitely not Britannica.

    I know your comment was /s but I can't not repost this:

    Hahaha, I too have that saved. I love it so much.

  • Hey everyone, this is Olga, the product manager for the summary feature again. […]

    If they thought this would be well-received they wouldn't have sprung it on people. The fact that they're only "pausing the launch of the experiment" means they're going to do it again once the backlash has subsided.

    RIP Wikipedia, it was a fun 24 years.

  • If they thought this would be well-received they wouldn't have sprung it on people. […]

    Not everything is black and white, you know. Just because they made this blunder doesn't mean they're down for good. The fact that they're willing to listen to feedback, whatever their reason was, is still a good sign.

    Also keep in mind that the organization that runs it has a lot of people, each with their own agenda, some of them bad ones, but the project is still extremely useful.

    I mean yeah, sure, do 'leave' Wikipedia if you want. I'm curious where you'd go.

  • Hey everyone, this is Olga, the product manager for the summary feature again. […]

    Since so much of the basic training data for (so-called) "AI" depends on Wikipedia, wouldn't this create a feedback loop that could quickly degenerate?
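    The worry is sound in principle: once models train on lossy summaries of their own output, information can only shrink round over round. A toy illustration of that loop; no actual model is involved, the "summarizer" here is just an artificial lossy step (drop every other word) chosen to make the decay visible:

```python
# Toy model of a degenerative feedback loop: each "generation" trains on
# a lossy summary of the previous one. The summarizer is deliberately
# crude (keep every other word); real summarization is lossy too, just
# less uniformly so.

def lossy_summarize(text: str) -> str:
    words = text.split()
    return " ".join(words[::2])  # keep every other word

text = "the quick brown fox jumps over the lazy dog near the quiet river bank today"
sizes = [len(text.split())]
for _ in range(4):
    text = lossy_summarize(text)
    sizes.append(len(text.split()))

print(sizes)  # [15, 8, 4, 2, 1] — strictly shrinking; lost detail never comes back
```

    Each pass can only discard information, never recover it, which is the "quickly degenerate" scenario if summaries start feeding back into training corpora.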

  • Hey everyone, this is Olga, the product manager for the summary feature again. […]

    It does sound like it could be handy.

  • So they:

    • Didn't ask editors/users
    • Noticed loud and overwhelmingly negative feedback
    • "Paused" the program

    They still don't get it. There's very little practical use for LLMs in general, and certainly not in scholastic spaces. The content is all user-generated anyway, so what's even the point? It's not saving them any money.

    It also seems like a giant waste of resources for an organization that constantly runs giant banners asking for money and claiming to be basically on the verge of closing up every time you visit the site.

    I also think that generated blurb summaries feed the brain-rot trend we see everywhere on the web that's destroying people's attention spans. Wikipedia is good precisely because you read something long enough, not just some quick, simplistic, brain-rot-inducing blurb.

  • Not everything is black and white, you know. […]

    the fact they're willing to listen to feedback, whatever their reason was, is a good sign

    Oh you have so much to learn about companies fucking their users over if you think this is the end of them trying to shove AI into Wikipedia

  • Oh you have so much to learn about companies fucking their users over if you think this is the end of them trying to shove AI into Wikipedia

    Then teach me daddy~

  • Hey everyone, this is Olga, the product manager for the summary feature again. […]

    Lol, the source data for all AI is starting to use AI to summarize.

    Have you ever tried to zip a zipfile?

    But then on the other hand, as compilers become better, they become more efficient at compiling their own source code...

  • The sad truth is that AI empowers the malicious to create a bigger impact on workload and standards than is scalable with humans alone. […]

    Nope, they're just interested in colonizing every single second of our time with "info"-tainment on par with the intellectual capacity of Harlequin romances.

  • Not everything is black and white, you know. […]

    Me saying "RIP" was an attempt at hyperbole. That said, shoehorning AI into something whose big selling point is that it's user-made is a gigantic misstep. Maybe they'll listen to everybody, but given that they tried it at all, I can't see them properly backing down, especially when it was worded as "pausing" the experiment.

  • Lol, the source data for all AI is starting to use AI to summarize. […]

    Yeah, but the compilers compile improved versions. If you manually curated the summaries to be even better, then fed them to an AI to produce a new summary you also curate, you'd end up with a carefully hand-trained LLM.
