Wikimedia Foundation's plans to introduce AI-generated article summaries to Wikipedia

Technology
  • Paraphrasing, but: "testing can only show presence of bugs, not their absence"

    I like it

  • Trolling aside, yeah, being able to explain a concept in everyday terms takes careful thought and discipline. I'm consistently impressed by the people who write Simple articles on Wikipedia. I wish there were more of those articles.

    I wasn't trolling

  • Giving people incorrect information is not an accessibility feature

    RAG on 2 pages of text does not hallucinate anything though. I literally use it every day.

  • I don't think the idea itself is awful, but everyone is so fed up with AI bullshit that any attempt to integrate even an iota of it will be received very poorly, so I'm not sure it's worth it.

    I don't think it's everyone either - just a very vocal minority.

  • TIL: Wikipedia uses complex language.

    It might just be me, but I find Wikipedia articles much easier to read than what some people write or say to me, which is sometimes incomprehensible garbage that makes little sense.

    You've clearly never tried to use Wikipedia to help with your math homework

  • You've clearly never tried to use Wikipedia to help with your math homework

    I never did any homework unless absolutely necessary.

    Now I understand that I should have done it, because I'm not good at learning in classrooms where a bunch of people distract me; I don't learn anything that way. Only many years later did I find out that for most things it's best for me to study alone.

    That said, you are most probably right, because I have opened some math-related Wikipedia articles at some point, and they were pretty incomprehensible to me.

  • Then skip the AI summary.

    For those of us who do skip the AI summaries it's the equivalent of adding an extra click to everything.

    I would support optional AI, but having to physically scroll past random LLM nonsense all the time feels like the internet is being infested by something equally annoying/useless as ads, and we don't even have a blocker for it.

  • It's kind of indirectly related, but adding a query parameter udm=14 to the url of your Google searches removes the AI summary at the top, and there are plugins for Firefox that do this for you. My hopes for this WM project are that similar plugins will be possible for Wikipedia.
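    For anyone curious what those plugins actually do, the rewrite is trivial. Here's a rough sketch in Python (note that udm=14 is an undocumented Google parameter that selects the plain "Web" results tab, so it could stop working at any time):

    ```python
    from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

    def add_udm14(url: str) -> str:
        """Append udm=14 (plain "Web" results, no AI overview) to a Google
        search URL. Relies on an undocumented parameter; may break anytime."""
        parts = urlparse(url)
        query = parse_qs(parts.query)
        query["udm"] = ["14"]  # overwrite any existing udm value
        return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

    print(add_udm14("https://www.google.com/search?q=wikipedia"))
    # → https://www.google.com/search?q=wikipedia&udm=14
    ```

    The browser plugins do essentially this on every outgoing search request.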

    The annoying thing about these summaries is that even for someone who cares about the truth, and about gathering actual information rather than the fancy-autocomplete word salad that LLMs generate, it is easy to "fall for it" and end up reading the LLM summary. Usually I catch myself, but I often waste some time reading the summary anyway. Recently the non-information was so egregiously wrong (it called a certain city in Israel non-apartheid) that I ended up installing the udm=14 plugin.

    In general, I think the only use cases for fancy autocomplete are where you have a way to verify the answer. For example, if you need to write an email and can't quite find the words, if an LLM generates something, you will be able to tell whether it conveys what you're trying to say by reading it. Or in case of writing code, if you've written a bunch of tests beforehand expressing what the code needs to do, you can run those on the code the LLM generates and see if it works (if there's a Dijkstra quote that comes to your mind reading this: high five, I'm thinking the same thing).

    I think it can be argued that Wikipedia articles satisfy this criterion. All you need to do to verify the summary is read the article. Will people do this? I can only speak for myself, and I know that, despite my best intentions, sometimes I won't. If that's anything to go by, I think these summaries will make the world a worse place.

    Thank you so much for this!!!

  • There's a core problem that many Wikipedia articles are hard for a layperson to read and understand. The statement about reading level is one way to express this.

    The Simple version of articles shows humans can produce readable text. But there aren't enough Simple articles, and the Simple articles are often incomplete.

    I don't think AI should be solely trusted with summarization/translation, but it might have a place in the editing cycle.

    It didn't help when every page that could be rewritten in mathematical notation was rewritten as mathematical notation.

  • I don't know if this is an acceptable format for a submission here, but here it goes anyway:

    Wikimedia Foundation has been developing an LLM that would produce simplified Wikipedia article summaries, as described here: https://www.mediawiki.org/wiki/Reading/Web/Content_Discovery_Experiments/Simple_Article_Summaries

    We would like to provide article summaries, which would simplify the content of the articles. This will make content more readable and accessible, and thus easier to discover and learn from. This part of the project focuses only on displaying the summaries. A future experiment will study ways of editing and adjusting this content.

    Currently, much of the encyclopedic quality content is long-form and thus difficult to parse quickly. In addition, it is written at a reading level much higher than that of the average adult. Projects that simplify content, such as Simple English Wikipedia or Basque Txikipedia, are designed to address some of these issues. They do this by having editors manually create simpler versions of articles. However, these projects have so far had very limited success - they are only available in a few languages and have been difficult to scale. In addition, they ask editors to rewrite content that they have already written. This can feel very repetitive.

    In our previous research (Content Simplification), we have identified two needs:

    • The need for readers to quickly get an overview of a given article or page
    • The need for this overview to be written in language the reader can understand

    Etc., you should check the full text yourself. There's a brief video showing how it might look: https://www.youtube.com/watch?v=DC8JB7q7SZc

    This hasn't been met with warm reactions; comments on the respective talk page have questioned the purpose of the tool (shouldn't the introductory paragraphs already do the same job?), and other complaints have been raised as well:

    Taking a quote from the page for the usability study:

    "Most readers in the US can comfortably read at a grade 5 level,[CN] yet most Wikipedia articles are written in language that requires a grade 9 or higher reading level."

    Also stated on the same page, the study only had 8 participants, most of which did not speak English as their first language. AI skepticism was low among them, with one even mentioning they 'use AI for everything'. I sincerely doubt this is a representative sample and the fact this project is still going while being based on such shoddy data is shocking to me. Especially considering that the current Qualtrics survey seems to be more about how to best implement such a feature as opposed to the question of whether or not it should be implemented in the first place. I don't think AI-generated content has a place on Wikipedia. The Morrison Man (talk) 23:19, 3 June 2025 (UTC)

    The survey the user mentions is this one: https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq and, true enough, it pretty much takes for granted that the summaries will be added: there's no judgment of their actual quality, and they're only asking for feedback on how they should be presented. I filled it out and couldn't even find space to say that, for example, the summary they show is written almost insultingly, as if it's meant for particularly dumb children, and I couldn't even tell whether it is accurate, because they just scroll around in the video.

    Very extensive discussion is going on at the Village Pump (en.wiki).

    The comments are also overwhelmingly negative, some of them pointing out that the summary doesn't summarise the article properly ("Perhaps the AI is hallucinating, or perhaps it's drawing from other sources like any widespread llm. What it definitely doesn't seem to be doing is taking existing article text and simplifying it." - user CMD). A few comments acknowledge potential benefits of the summaries, though with a significantly different approach to using them:

    I'm glad that WMF is thinking about a solution of a key problem on Wikipedia: most of our technical articles are way too difficult. My experience with AI summaries on Wikiwand is that it is useful, but too often produces misinformation not present in the article it "summarises". Any information shown to readers should be greenlit by editors in advance, for each individual article. Maybe we can use it as inspiration for writing articles appropriate for our broad audience. —Femke 🐦 (talk) 16:30, 3 June 2025 (UTC)

    One of the reasons many prefer chatGPT to Wikipedia is that too large a share of our technical articles are way way too difficult for the intended audience. And we need those readers, so they can become future editors. Ideally, we would fix this ourselves, but my impression is that we usually make articles more difficult, not easier, when they go through GAN and FAC. As a second-best solution, we might try this as long as we have good safeguards in place. —Femke 🐦 (talk) 18:32, 3 June 2025 (UTC)

    Finally, some comments take issue with the whole situation of the WMF working behind the actual wikis' backs:

    This is a prime reason I tried to formulate my statement on WP:VPWMF#Statement proposed by berchanhimez requesting that we be informed "early and often" of new developments. We shouldn't be finding out about this a week or two before a test, and we should have the opportunity to inform the WMF if we would approve such a test before they put their effort into making one happen. I think this is a clear example of needing to make a statement like that to the WMF that we do not approve of things being developed in virtual secret (having to go to Meta or MediaWikiWiki to find out about them) and we want to be informed sooner rather than later. I invite anyone who shares concerns over the timeline of this to review my (and others') statements there and contribute to them if they feel so inclined. I know the wording of mine is quite long and probably less than ideal - I have no problem if others make edits to the wording or flow of it to improve it.

    Oh, and to be blunt, I do not support testing this publicly without significantly more editor input from the local wikis involved - whether that's an opt-in logged-in test for people who want it, or what. Regards, -bɜ:ʳkənhɪmez | me | talk to me! 22:55, 3 June 2025 (UTC)

    Again, I recommend reading the whole discussion yourself.

    EDIT: WMF has announced they're putting this on hold after the negative reaction from the editors' community. ("we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together")

    fucking disgusting. no place should have ai but especially not an encyclopedia.

  • My immediate thought is that the purpose of an encyclopaedia is to have a more-or-less comprehensive overview of some topic of interest. The reader should be able to look through the page index to find the section they care about and read that section.

    Its purpose is not to rapidly teach anyone anything in full.

    It seems like a poor fit as an application for LLMs.

  • Who exactly asked for this? Wikipedia isn't publicly traded, it isn't a for-profit company, so why are they trying to shove AI into people's faces?

    For those few who wanted it, there are dozens of bots that can summarize the (already kinda short) Wikipedia articles.

  • I think it would be best if that were a user setting, like dark mode. It would obviously be a popular setting to adjust. If they don't do that, there will doubtless be Greasemonkey and other scripts to hide it.
