
Wikimedia Foundation's plans to introduce AI-generated article summaries to Wikipedia

Technology
  • I don't know if this is an acceptable format for a submission here, but here it goes anyway:

    Wikimedia Foundation has been developing an LLM that would produce simplified Wikipedia article summaries, as described here: https://www.mediawiki.org/wiki/Reading/Web/Content_Discovery_Experiments/Simple_Article_Summaries

    We would like to provide article summaries, which would simplify the content of the articles. This will make content more readable and accessible, and thus easier to discover and learn from. This part of the project focuses only on displaying the summaries. A future experiment will study ways of editing and adjusting this content.

    Currently, much of the encyclopedic quality content is long-form and thus difficult to parse quickly. In addition, it is written at a reading level much higher than that of the average adult. Projects that simplify content, such as Simple English Wikipedia or Basque Txikipedia, are designed to address some of these issues. They do this by having editors manually create simpler versions of articles. However, these projects have so far had very limited success - they are only available in a few languages and have been difficult to scale. In addition, they ask editors to rewrite content that they have already written. This can feel very repetitive.

    In our previous research (Content Simplification), we have identified two needs:

    • The need for readers to quickly get an overview of a given article or page
    • The need for this overview to be written in language the reader can understand

    Etc., you should check the full text yourself. There's a brief video showing how it might look: https://www.youtube.com/watch?v=DC8JB7q7SZc

    This hasn't been met with warm reactions. Comments on the respective talk page have questioned the tool's purpose (shouldn't the introductory paragraphs already do that job?), and other complaints have been raised as well:

    Taking a quote from the page for the usability study:

    "Most readers in the US can comfortably read at a grade 5 level,[CN] yet most Wikipedia articles are written in language that requires a grade 9 or higher reading level."

    Also stated on the same page, the study only had 8 participants, most of which did not speak English as their first language. AI skepticism was low among them, with one even mentioning they 'use AI for everything'. I sincerely doubt this is a representative sample and the fact this project is still going while being based on such shoddy data is shocking to me. Especially considering that the current Qualtrics survey seems to be more about how to best implement such a feature as opposed to the question of whether or not it should be implemented in the first place. I don't think AI-generated content has a place on Wikipedia. The Morrison Man (talk) 23:19, 3 June 2025 (UTC)

    The survey the user mentions is this one: https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq and, true enough, it pretty much takes for granted that the summaries will be added: there's no judgment of their actual quality, only questions about how they should be presented. I filled it out and couldn't even find space to say that, for example, the summary they show is written almost insultingly, as if meant for particularly dumb children, and I couldn't tell whether it is accurate because the video just scrolls around.

    Very extensive discussion is going on at the Village Pump (en.wiki).

    The comments are also overwhelmingly negative, some of them pointing out that the summary doesn't summarise the article properly ("Perhaps the AI is hallucinating, or perhaps it's drawing from other sources like any widespread llm. What it definitely doesn't seem to be doing is taking existing article text and simplifying it." - user CMD). A few comments acknowledge potential benefits of the summaries, though with a significantly different approach to using them:

    I'm glad that WMF is thinking about a solution of a key problem on Wikipedia: most of our technical articles are way too difficult. My experience with AI summaries on Wikiwand is that it is useful, but too often produces misinformation not present in the article it "summarises". Any information shown to readers should be greenlit by editors in advance, for each individual article. Maybe we can use it as inspiration for writing articles appropriate for our broad audience. —Femke 🐦 (talk) 16:30, 3 June 2025 (UTC)

    One of the reasons many prefer chatGPT to Wikipedia is that too large a share of our technical articles are way way too difficult for the intended audience. And we need those readers, so they can become future editors. Ideally, we would fix this ourselves, but my impression is that we usually make articles more difficult, not easier, when they go through GAN and FAC. As a second-best solution, we might try this as long as we have good safeguards in place. —Femke 🐦 (talk) 18:32, 3 June 2025 (UTC)

    Finally, some comments take issue with the WMF developing this behind the actual wikis' backs:

    This is a prime reason I tried to formulate my statement on WP:VPWMF#Statement proposed by berchanhimez requesting that we be informed "early and often" of new developments. We shouldn't be finding out about this a week or two before a test, and we should have the opportunity to inform the WMF if we would approve such a test before they put their effort into making one happen. I think this is a clear example of needing to make a statement like that to the WMF that we do not approve of things being developed in virtual secret (having to go to Meta or MediaWikiWiki to find out about them) and we want to be informed sooner rather than later. I invite anyone who shares concerns over the timeline of this to review my (and others') statements there and contribute to them if they feel so inclined. I know the wording of mine is quite long and probably less than ideal - I have no problem if others make edits to the wording or flow of it to improve it.

    Oh, and to be blunt, I do not support testing this publicly without significantly more editor input from the local wikis involved - whether that's an opt-in logged-in test for people who want it, or what. Regards, -bɜ:ʳkənhɪmez | me | talk to me! 22:55, 3 June 2025 (UTC)

    Again, I recommend reading the whole discussion yourself.

    EDIT: WMF has announced they're putting this on hold after the negative reaction from the editors' community. ("we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together")

    There's a core problem that many Wikipedia articles are hard for a layperson to read and understand. The statement about reading level is one way to express this.
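    For what it's worth, grade-level claims like the one quoted above are usually produced by a readability formula such as Flesch-Kincaid grade level. A minimal sketch in Python (the syllable counter here is a crude vowel-group heuristic, not the canonical counting real tools use, so treat the numbers as rough):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words))
            - 15.59)

simple = "The cat sat on the mat. It was warm."
dense = ("Encyclopedic prose frequently exhibits considerable "
         "syntactic complexity and specialized terminology.")
print(fk_grade(simple))  # low grade: short sentences, short words
print(fk_grade(dense))   # high grade: long words, one long sentence
```

    Dedicated libraries (e.g. textstat) do the syllable counting more carefully, but the basic point stands: the metric rewards short sentences and short words, which is exactly what the lead sections of technical articles tend to lack.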

    The Simple English versions of articles show that humans can produce readable text. But there aren't enough of them, and they are often incomplete.

    I don't think AI should be solely trusted with summarization/translation, but it might have a place in the editing cycle.

  • Looks like the vast majority of people disagree 😧 I do agree that WP should consider ways to make certain articles more approachable to laymen, but this doesn't seem to be the right approach.

    I am pretty rabidly anti-AI in most cases, but the one use case I don’t think is a big negative is the distillation of information for simplification purposes. I am still somewhat against this, in the sense that at the end of the day their summarization AI could hallucinate, and since they’ve admitted this is a solution to a problem of scale, it’s not sensible to assume humans will be able to babysit it.

    However… people will inevitably use AI to summarize Wikipedia anyway, with models of dubious quality and an unknown amount of intentionally trained-in bias. So there is some inherent value in training your own model to present the information on your site in a way that is the “most free” of slop and bias.

  • If people use AI to summarize passages of written words to be simpler for those with poor reading skills to be able to more easily comprehend the words, then how are those readers going to improve their poor reading skills?

    Dumbing things down with AI isn't going to make people smarter I bet. This seems like accelerating into Idiocracy

    [...] then how are those readers going to improve their poor reading skills?

    By becoming interested in improving their poor reading skills. You won't make people become interested in that by having everything available only in complex language, it's just going to make them skip over your content. Otherwise there shouldn't be people with poor reading skills, since complex language is already everywhere in life.

    A lot of the small articles and stubs are written very technically and don't explain complex subjects if you aren't already familiar with them. Then you have to read four articles deep just to figure out the jargon they're using.

    Maybe it's a result of Wikipedia trying to be more of an "online encyclopedia" vs a digital information hub or learning resource. I don't think it's a problem on its own but I do think there should be a simplified version of every article.

  • frankly, I'm not quite surprised ._.
    edit: upon reading the article, I now wonder if it's possible for your literacy to go down. I used to be such a bookworm in grade school, but now I have to reread stuff over and over in order to comprehend what's going on.

    You might just be chronically tired or worn down from the stresses of life. It’s pretty common.

    Another thing is as we get older a lot of people will choose more “challenging” adult books and then just be totally bored lol. I read young adult and kids books sometimes (how can I give a book to a child if I haven’t read it myself?) and it’s always surprising to me how they can be ripped through in no time at all.

    But in general I think you’re probably right that literacy can decrease with disuse. It seems like most things about the mind and body trend that way

  • I do have concerns about this, but it's really all about the usage, not the AI itself. Would the AI version be the only version allowed? Would the summaries get created on the fly for every visitor? Would edits to an AI summary be allowed? Would this get applied to and alter existing summaries?

    I'm totally fine with LLMs and AI as a stop-gap for missing info or a way to coach or critique a human-written summary, but generally I haven't seen good results when AI is allowed to do its thing without a human reviewing or guiding the outputs.

  • [...] then how are those readers going to improve their poor reading skills?

    By becoming interested in improving their poor reading skills. You won't make people become interested in that by having everything available only in complex language, it's just going to make them skip over your content. Otherwise there shouldn't be people with poor reading skills, since complex language is already everywhere in life.

    Nope. Reading skills are improved by being challenged by complex language, and the effort required to learn new words to comprehend it. If the reader is interested in the content, they aren't going to skip it. Dumbing things down only leads to dumbing things down.

    For example, look at all the iPad kids who can't use a computer for shit. Kids who grew up with computers HAD to learn the more complex interface of computers to be able to do the cool things they wanted to do on the computer. Now they don't because they don't have to. Therefore if you get everything dumbed down to 5th Grade reading level, that's where the common denominator will settle. Overcoming that apathy requires a challenge to be a barrier to entry.

  • If the reader is interested in the content, they aren't going to skip it.

    But they aren't interested in the content because of the complexity. You may wish that humans work like you describe, but we literally see that they don't.

    What you can do is provide a simplified summary to make people interested, so they're willing to engage with the more complex language to get deeper knowledge around the topic.

    For example, look at all the iPad kids who can't use a computer for shit. Kids who grew up with computers HAD to learn the more complex interface of computers to be able to do the cool things they wanted to do on the computer.

    You're underestimating how many people before the iPad generation also can't use computers because they never developed an interest to engage with the complexity.

  • But in general I think you’re probably right that literacy can decrease with disuse

    Maths is a really good example of this.

    At one point I really enjoyed doing long division in my head but as time goes on (and you don't exercise that sponge...), it becomes lazy.


  • Is this the same Wikimedia Foundation that was complaining about AI scrapers in April?

    IIRC, they weren’t trying to stop them—they were trying to get the scrapers to pull the content in a more efficient format that would reduce the overhead on their web servers.

  • If people use AI to summarize passages of written words to be simpler for those with poor reading skills to be able to more easily comprehend the words, then how are those readers going to improve their poor reading skills?

    Dumbing things down with AI isn't going to make people smarter I bet. This seems like accelerating into Idiocracy

    Wikipedia is not made to teach people how to read; it is meant to share knowledge.
    For me, they could even make a Wikipedia version in hieroglyphics if that would make the content easier to understand.

  • I don't know if this is an acceptable format for a submission here, but here it goes anyway:

    Wikimedia Foundation has been developing an LLM that would produce simplified Wikipedia article summaries, as described here: https://www.mediawiki.org/wiki/Reading/Web/Content_Discovery_Experiments/Simple_Article_Summaries

    We would like to provide article summaries, which would simplify the content of the articles. This will make content more readable and accessible, and thus easier to discover and learn from. This part of the project focuses only on displaying the summaries. A future experiment will study ways of editing and adjusting this content.

    Currently, much of the encyclopedic quality content is long-form and thus difficult to parse quickly. In addition, it is written at a reading level much higher than that of the average adult. Projects that simplify content, such as Simple English Wikipedia or Basque Txikipedia, are designed to address some of these issues. They do this by having editors manually create simpler versions of articles. However, these projects have so far had very limited success - they are only available in a few languages and have been difficult to scale. In addition, they ask editors to rewrite content that they have already written. This can feel very repetitive.

    In our previous research (Content Simplification), we have identified two needs:

    • The need for readers to quickly get an overview of a given article or page
    • The need for this overview to be written in language the reader can understand

    Etc., you should check the full text yourself. There's a brief video showing how it might look: https://www.youtube.com/watch?v=DC8JB7q7SZc

    This hasn't been met with warm reactions, the comments on the respective talk page have questioned the purposefulness of the tool (shouldn't the introductory paragraphs do the same job already?), and some other complaints have been provided as well:

    Taking a quote from the page for the usability study:

    "Most readers in the US can comfortably read at a grade 5 level,[CN] yet most Wikipedia articles are written in language that requires a grade 9 or higher reading level."

    Also stated on the same page, the study only had 8 participants, most of which did not speak English as their first language. AI skepticism was low among them, with one even mentioning they 'use AI for everything'. I sincerely doubt this is a representative sample and the fact this project is still going while being based on such shoddy data is shocking to me. Especially considering that the current Qualtrics survey seems to be more about how to best implement such a feature as opposed to the question of whether or not it should be implemented in the first place. I don't think AI-generated content has a place on Wikipedia. The Morrison Man (talk) 23:19, 3 June 2025 (UTC)

    The survey the user mentions is this one: https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq and true enough it pretty much takes for granted that the summaries will be added, there's no judgment of their actual quality, and they're only asking for people's feedback on how they should be presented. I filled it out and couldn't even find the space to say that e.g. the summary they show is written almost insultingly, like it's meant for particularly dumb children, and I couldn't even tell whether it is accurate because they just scroll around in the video.

    Very extensive discussion is going on at the Village Pump (en.wiki).

    The comments are also overwhelmingly negative, some of them pointing out that the summary doesn't summarise the article properly ("Perhaps the AI is hallucinating, or perhaps it's drawing from other sources like any widespread llm. What it definitely doesn't seem to be doing is taking existing article text and simplifying it." - user CMD). A few comments acknowledge potential benefits of the summaries, though with a significantly different approach to using them:

    I'm glad that WMF is thinking about a solution of a key problem on Wikipedia: most of our technical articles are way too difficult. My experience with AI summaries on Wikiwand is that it is useful, but too often produces misinformation not present in the article it "summarises". Any information shown to readers should be greenlit by editors in advance, for each individual article. Maybe we can use it as inspiration for writing articles appropriate for our broad audience. —Femke 🐦 (talk) 16:30, 3 June 2025 (UTC)

    One of the reasons many prefer chatGPT to Wikipedia is that too large a share of our technical articles are way way too difficult for the intended audience. And we need those readers, so they can become future editors. Ideally, we would fix this ourselves, but my impression is that we usually make articles more difficult, not easier, when they go through GAN and FAC. As a second-best solution, we might try this as long as we have good safeguards in place. —Femke 🐦 (talk) 18:32, 3 June 2025 (UTC)

    Finally, some comments take issue with the whole situation of the WMF working behind the actual wikis' backs:

    This is a prime reason I tried to formulate my statement on WP:VPWMF#Statement proposed by berchanhimez requesting that we be informed "early and often" of new developments. We shouldn't be finding out about this a week or two before a test, and we should have the opportunity to inform the WMF if we would approve such a test before they put their effort into making one happen. I think this is a clear example of needing to make a statement like that to the WMF that we do not approve of things being developed in virtual secret (having to go to Meta or MediaWikiWiki to find out about them) and we want to be informed sooner rather than later. I invite anyone who shares concerns over the timeline of this to review my (and others') statements there and contribute to them if they feel so inclined. I know the wording of mine is quite long and probably less than ideal - I have no problem if others make edits to the wording or flow of it to improve it.

    Oh, and to be blunt, I do not support testing this publicly without significantly more editor input from the local wikis involved - whether that's an opt-in logged-in test for people who want it, or what. Regards, -bɜ:ʳkənhɪmez | me | talk to me! 22:55, 3 June 2025 (UTC)

    Again, I recommend reading the whole discussion yourself.

    EDIT: WMF has announced they're putting this on hold after the negative reaction from the editors' community. ("we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together")

    I'm OK with auto-generated content, but only if it is clearly separated from human-generated content, can be disabled at any time, and writing main articles with AI remains forbidden.

  • The problem is that the bubble here are the editors who actually create the site and keep it running

    No it isn't, it's the technology@lemmy.world Fediverse community.

    How much do you want to bet on the overlap being small?

    A bigger question is how much the Wikimedia Foundation wants to bet that their top donors and contributors aren't in this thread...

    Edit: Moving my unrelated ramblings to a separate comment.

  • The big issue I see here isn't the proposed solution, it's the public image of doing something the tech bro billionaires are pushing hard right now.

    It looks a bit like choosing the other side of the class war from their contributors.

    Wikipedia, in particular, may not be able to afford that negative image right now.

    I could welcome this kind of tool later, but their timing sucks.

  • Wikipedia is not made to teach people how to read; it is meant to share knowledge.
    For me, they could even make a Wikipedia version in hieroglyphics if that would make the content easier to understand.

    Novels are also not made to teach people how to read, but reading them does help the reader practice their reading skills. Besides that point, Wikipedia is not hard to understand in the first place.

  • Novels are also not made to teach people how to read, but reading them does help the reader practice their reading skills. Besides that point, Wikipedia is not hard to understand in the first place.

    Sorry, but that's absolutely wrong - the complexity of articles can vary wildly. Many are easily understandable, while many others are not understandable without a lot of prerequisite knowledge in the domain (e.g. mathematics stuff).

  • IIRC, they weren’t trying to stop them—they were trying to get the scrapers to pull the content in a more efficient format that would reduce the overhead on their web servers.

    You can literally just download all of Wikipedia in one go from one URL. They would rather people just do that instead of crawling their entire website because that puts a huge load on their servers.
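    For illustration, a short Python sketch of what "download all of Wikipedia in one go" looks like, assuming the standard dumps.wikimedia.org layout for the English "pages-articles" dump (other languages swap the "enwiki" prefix; the file is roughly 20 GB compressed, so it is streamed to disk):

```python
import urllib.request

# Standard location of the latest full English Wikipedia article dump.
DUMP_URL = ("https://dumps.wikimedia.org/enwiki/latest/"
            "enwiki-latest-pages-articles.xml.bz2")

def download_dump(dest: str = "enwiki-latest-pages-articles.xml.bz2") -> None:
    # Stream in 1 MiB chunks rather than loading ~20 GB into memory.
    with urllib.request.urlopen(DUMP_URL) as resp, open(dest, "wb") as out:
        while chunk := resp.read(1 << 20):
            out.write(chunk)
```

    One bulk request like this is exactly what the WMF prefers over crawlers fetching millions of individual article pages.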

  • Or moderators. Why would they need those people when the AI can fix everything for free and even improve articles?

    Right! I can’t wait to hear about all the new historical events!

    I wonder if anyone witnessed the burning of the Library of Alexandria and felt a similar sense of despair for the future of knowledge.

  • Sounds like a good use case for an LLM; hope the issues get figured out.
