
Wikipedia Pauses an Experiment That Showed Users AI-Generated Summaries at the Top of Some Articles, Following an Editor Backlash

Technology
  • Summarization is one of the things LLMs are actually pretty good at. The same goes for the other idea Wikipedia floated: auto-generating the "simple article" variants that are normally maintained by hand to dumb down content.

    But if they're pushing these tools, they need to be pitched as handy aids for editors to consider leveraging, not forced behavior for end users.

    not forced behavior for end users.

    This is what I'm constantly criticizing. It's fine to have more options, but they should be options and not mandatory.

    No, having to scroll past an AI summary for every fucking article is not an 'option.' Having the option to hide it forever (or even better, opt-in), now that's a real option.

    I'd really love to see the opt-in/opt-out data for AI. I guarantee businesses aren't including the option or recording data because they know it will show people don't want it, and they have to follow the data!

  • Hey everyone, this is Olga, the product manager for the summary feature again. Thank you all for engaging so deeply with this discussion and sharing your thoughts so far.

    Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March. As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further. With that in mind, we’d like to take a step back so we have more time to talk through things properly. We’re still in the very early stages of thinking about a feature like this, so this is actually a really good time for us to discuss here.

    A few important things to start with:

    1. Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such.
    2. We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.
    3. With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

    We’ve also started putting together some context around the main points brought up through the conversation so far, and will follow up with that in separate messages so we can discuss further.

    Noo Wikipedia why would you do this

  • Grok please ELI5 this comment so i can understand it

    I know your comment was /s but I can't not repost this:

  • I don't see how AI could benefit Wikipedia. The power consumption alone isn't worth it. Wikipedia is one of the rare AI-free zones, which is part of why it's good.

  • I don't think Wikipedia is run for the benefit of users anymore, but what even are the alternatives? Leftypedia? Definitely not Britannica.

  • I know your comment was /s but I can't not repost this:

    Hahaha i too have that saved. I love it so much.

  • If they thought this would be well-received they wouldn't have sprung it on people. The fact that they're only "pausing the launch of the experiment" means they're going to do it again once the backlash has subsided.

    RIP Wikipedia, it was a fun 24 years.

  • If they thought this would be well-received they wouldn't have sprung it on people. The fact that they're only "pausing the launch of the experiment" means they're going to do it again once the backlash has subsided.

    RIP Wikipedia, it was a fun 24 years.

    Not everything is black and white, you know. Just because they made this blunder doesn't mean they're down for good. The fact that they're willing to listen to feedback, whatever their reason was, is still a good sign.

    Also keep in mind that the organization that runs it has a lot of people, each with their own agenda, some of them bad, but the project is still extremely useful.

    I mean, yeah, sure, do 'leave' Wikipedia if you want. I'm curious as to where you'd go.

  • Since so much of the basic training data for (so-called) "AI" depends on Wikipedia, wouldn't this create a feedback loop that could quickly degenerate?

  • It does sound like it could be handy

  • So they:

    • Didn't ask editors/users
    • Noticed loud and overwhelmingly negative feedback
    • "Paused" the program

    They still don't get it. There's very little practical use for LLMs in general, and certainly not in scholastic spaces. The content is all user-generated anyway, so what's even the point? It's not saving them any money.

    It also seems like a giant waste of resources for an organization that constantly runs giant banners asking for money and claiming to be basically on the verge of closing up every time you visit the site.

    I also think that generating blob summaries just feeds into the brain rot we see everywhere on the web that's destroying people's attention spans. Wikipedia is good precisely because you can read something that's long enough, not just some quick, simplistic, brain-rot-inducing blob.

  • the fact they're willing to listen to feedback, whatever their reason was, is a good sign

    Oh you have so much to learn about companies fucking their users over if you think this is the end of them trying to shove AI into Wikipedia

  • the fact they're willing to listen to feedback, whatever their reason was, is a good sign

    Oh you have so much to learn about companies fucking their users over if you think this is the end of them trying to shove AI into Wikipedia

    Then teach me daddy~

  • Lol, the source data for all AI is starting to use AI to summarize.

    Have you ever tried to zip a zipfile?

    But then on the other hand, as compilers become better, they become more efficient at compiling their own source code...
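    The "zip a zipfile" point can be sketched in a few lines of Python with the standard zlib module (a toy illustration, not anyone's actual pipeline): lossless compression removes redundancy, so a second pass has nothing left to gain.

    ```python
    import zlib

    # Highly redundant "article" text compresses well the first time.
    article = b"Wikipedia is a free online encyclopedia. " * 200

    once = zlib.compress(article)   # big win: redundancy removed
    twice = zlib.compress(once)     # no win: input already looks random

    print(len(article), len(once), len(twice))
    ```

    The first pass shrinks the input dramatically; the second stays the same size or grows slightly, which is roughly the concern with feeding AI summaries back into AI training data.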

  • The sad truth is that AI empowers malicious actors to have a bigger impact on workload and standards than humans alone can keep up with. An AI running triage on article changes, flagging or reporting changes that need more input, would be ideal. But threat mitigation and integrity preservation don't really seem to be high on their priorities.

    Nope, they're just interested in colonizing every single second of our time with "info"-tainment on par with the intellectual capacity of Harlequin romances.
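    The triage idea above could start as something very simple. Here is a purely hypothetical rule-based sketch (the marker list and deletion threshold are invented for illustration, and a real system would use a model score instead): the tool only flags edits for human review, it never acts on them itself.

    ```python
    # Hypothetical edit-triage sketch: flag changes that need human input.
    SUSPECT_MARKERS = ("click here", "best ever", "everyone knows", "!!!")

    def needs_review(old_text: str, new_text: str) -> bool:
        """Return True when an edit should be routed to a human editor."""
        # Large deletions deserve a second look.
        if len(new_text) < 0.5 * len(old_text):
            return True
        # So does promotional or unsourced-sounding phrasing.
        lowered = new_text.lower()
        return any(marker in lowered for marker in SUSPECT_MARKERS)
    ```

    For example, replacing a long sourced paragraph with "gone" would be flagged, while a small neutral addition would pass. The key property is the one the comment asks for: the tool reports, a human decides.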

  • Me saying "RIP" was an attempt at hyperbole. That said, shoehorning AI into something whose big selling point is being user-made is a gigantic misstep. Maybe they'll listen to everybody, but given that they tried it at all, I can't see them properly backing down, especially when it was worded as "pausing" the experiment.

  • Yeah, but the compilers compile improved versions. Like, if you manually curated the summaries to be even better, then fed them to the AI to produce a new summary you also curate... you'd end up with a carefully hand-trained LLM.

  • Yeah but the compilers compile improved versions. Like, if you manually curated the summaries to be even better, then fed it to AI to produce a new summary you also curate... you'll end up with a carefully hand-trained LLM.

    So if the AI-generated summaries are better than man-made summaries, this wouldn't be an issue, would it?

  • So if the AI-generated summaries are better than man-made summaries, this wouldn't be an issue, would it?

    If AI constantly refined its own output, sure, unless it hits a wall eventually or starts spewing bullshit because of some quirk of training. But I doubt it could learn to summarise better without external input, just like a compiler won't produce a more optimised version of itself without human development work.
