
Wikipedia Pauses an Experiment That Showed Users AI-Generated Summaries at the Top of Some Articles, Following an Editor Backlash

Technology
  • How about not putting AI into something that should be entirely human controlled?

    These days, most companies that work with web based products are under pressure from upper management to "use AI", as there's a fear of missing out if they don't. Now, management doesn't necessarily have any idea what they should use it for, so they leave that to product managers and such. They don't have any idea, either, and so they look at what features others have built and find a way to adapt one or more of those to fit their own products.

    A slap on the back, job well done, clueless upper management happy, even though money and time have been spent and the revenue remains the same.

  • Hey everyone, this is Olga, the product manager for the summary feature again. Thank you all for engaging so deeply with this discussion and sharing your thoughts so far.

    Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March. As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further. With that in mind, we’d like to take a step back so we have more time to talk through things properly. We’re still in the very early stages of thinking about a feature like this, so this is actually a really good time for us to discuss here.

    A few important things to start with:

    1. Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such.
    2. We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.
    3. With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

    We’ve also started putting together some context around the main points brought up through the conversation so far, and will follow-up with that in separate messages so we can discuss further.

    Summarization is one of the things LLMs are pretty good at. Same for the other thing where Wikipedia talked about auto-generating the "simple article" variants that are normally managed by hand to dumb down content.

    But if they're pushing these tools, they need to be pushed as handy tools for editors to consider leveraging, not forced behavior for end users.
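
    Roughly, the difference looks like this. Here's a minimal sketch (in Python, with a hypothetical call_llm stand-in rather than any real Wikimedia or vendor API) where the model only drafts a summary into a review queue, and an explicit editor decision is the only path to anything reader-facing:

```python
from dataclasses import dataclass
from typing import Optional


def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real summarization model client;
    # swap in an actual API call here.
    return "Placeholder draft summary."


@dataclass
class DraftSummary:
    article_title: str
    text: str
    approved: bool = False
    reviewer: Optional[str] = None


def propose_summary(article_title: str, article_text: str) -> DraftSummary:
    # The model only drafts; nothing it writes is reader-visible on its own.
    draft_text = call_llm(
        "Summarize the following encyclopedia article in three plain sentences:\n\n"
        + article_text
    )
    return DraftSummary(article_title=article_title, text=draft_text)


def publish_if_approved(draft: DraftSummary, reviewer: str, approve: bool) -> Optional[str]:
    # An explicit editor decision is the only path from draft to published summary.
    draft.reviewer = reviewer
    draft.approved = approve
    return draft.text if approve else None


if __name__ == "__main__":
    draft = propose_summary("Photosynthesis", "Photosynthesis is the process by which ...")
    # An editor reviews the draft and decides; rejecting publishes nothing.
    published = publish_if_approved(draft, reviewer="SomeEditor", approve=False)
    print(published)  # None: the AI text never reached readers.
```

    That's the whole point: the LLM is a drafting aid sitting behind the editors, not a layer sitting in front of the readers.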

  • So they:

    • Didn't ask editors/users
    • Noticed loud and overwhelmingly negative feedback
    • "Paused" the program

    They still don't get it. There's very little practical use for LLMs in general, and certainly not in scholastic spaces. The content is all user-generated anyway, so what's even the point? It's not saving them any money.

    Also it seems like a giant waste of resources for a company that constantly runs banners asking for money and claiming to basically be on the verge of closing up every time you visit their site.

    If her list were straight talk:

    1. We’re gonna make up shit
    2. But don’t worry, we’ll manually label it, what could go wrong?
    3. Dang no one was fooled let’s figure out a different way to pollute everything with alternative facts
  • Summarization is one of the things LLMs are pretty good at. […] But if they're pushing these tools, they need to be pushed as handy tools for editors to consider leveraging, not forced behavior for end users.

    If we need summaries, let's let a human being write the summaries. We are already experts at writing. We love doing it.

  • Hey everyone, this is Olga, the product manager for the summary feature again. […] With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

    Is there a way for us to complain to Wikipedia about this? I contribute money every year, and I will 100% stop if they're shoving more LLM-slop down my throat.

    Edit:
    You can contribute to the discussion in the link, and you can email them at addresses found here: https://wikimediafoundation.org/about/contact/

  • Hey everyone, this is Olga, the product manager for the summary feature again. […] With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

    I passionately hate the corpo speech she's using. This fake list of "things she's done wrong but now she'll do them right, pinky promise!!" whilst completely ignoring the actual reason for the pushback they've received (which boils down to "fuck your AI, keep it out") is typical management behavior after they were caught trying to screw over the workers in some way.

    We're going to screw you over one way or the other, we just should have communicated it better!

    Basically this.

  • Summarization is one of the things LLMs are pretty good at. […] But if they're pushing these tools, they need to be pushed as handy tools for editors to consider leveraging, not forced behavior for end users.

    Summaries that look good are something LLMs can do, but not summaries that actually have a higher ratio of important/unimportant than the source, nor ones that keep things accurate. That last one is super mandatory on something like an encyclopedia.

  • Articles already have a summary at the top due to the page format, so why was AI shoved into the process?

    Grok please ELI5 this comment so i can understand it

  • These days, most companies that work with web based products are under pressure from upper management to "use AI", as there's a fear of missing out if they don't. […]

    Wikipedia can create a market niche by making clear that its content is 100% human-written. Some of the stupid upper management types understand being unique as a marketing strategy.

  • If her list were straight talk: […] 3. Dang, no one was fooled, let's figure out a different way to pollute everything with alternative facts

    Your last point says it all. Rather than being a source of truth, it is now meant to bend the truth. 2 plus 2 no longer equals 4.

  • Hey everyone, this is Olga, the product manager for the summary feature again. […] With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

    I can't wait until this "put LLMs in everything" phase is over.

  • Summaries that look good are something LLMs can do, but not summaries that actually have a higher ratio of important/unimportant than the source, nor ones that keep things accurate. That last one is super mandatory on something like an encyclopedia.

    The only application I've kind of liked so far has been the one on Amazon that summarizes the content of the reviews. Seems relatively accurate in general.

  • Hey everyone, this is Olga, the product manager for the summary feature again. […] With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

    I canceled my recurring donation over this about a week ago, explaining that this was the reason. One of their people sent me a lengthy response that I appreciated. Still going to wait a year before I reinstate it; hopefully they've fully moved on from this idea by then. Their response sounded a lot like this one, though, kinda wishy-washy.

  • How about not putting AI into something that should be entirely human controlled?

    The sad truth is that AI lets malicious actors create a bigger impact on workload and standards than humans alone can keep up with. An AI running triage on article changes, flagging or reporting edits that need more input, would be ideal. But threat mitigation and integrity preservation don't really seem to be high on their priority list.
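
    Something like the following, as a rough sketch. The risk_score heuristic here is a made-up placeholder for whatever classifier or model would actually do the scoring; the point is that it only flags edits for human attention and never acts on its own:

```python
from dataclasses import dataclass


@dataclass
class EditDiff:
    page: str
    author: str
    added_text: str


def risk_score(diff: EditDiff) -> float:
    # Hypothetical placeholder: a real system might combine a trained classifier,
    # citation checks, and edit-history signals. Here, a crude keyword heuristic.
    suspicious = ["miracle cure", "hoax", "citation removed"]
    hits = sum(term in diff.added_text.lower() for term in suspicious)
    return min(1.0, 0.3 * hits)


def triage(diffs: list[EditDiff], threshold: float = 0.5) -> list[EditDiff]:
    # Only flags edits for human review; nothing is ever reverted automatically.
    return [d for d in diffs if risk_score(d) >= threshold]


if __name__ == "__main__":
    queue = [
        EditDiff("Aspirin", "203.0.113.5", "This compound is a miracle cure, hoax exposed"),
        EditDiff("Aspirin", "TrustedEditor", "Added a 2023 review article as a source."),
    ]
    for flagged in triage(queue):
        print(f"Needs human review: {flagged.page} (edit by {flagged.author})")
```

    That keeps humans as the decision-makers and uses the model purely to help them keep up with the volume.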

  • These days, most companies that work with web based products are under pressure from upper management to "use AI", as there's a fear of missing out if they don't. […]

    I've already posted this a few times, but Ed Zitron wrote a long article about what he calls "Business Idiots". Basically, people in decision making positions who are out of touch with their users and their products. They make bad decisions, and that's a big factor in why everything kind of sucks now.

    https://www.wheresyoured.at/the-era-of-the-business-idiot/ (it's long)

    I think a lot of us have this illusion that higher-ranking people are smarter, more visionary, or whatever. But I think no. I think a lot of people are just kind of stupid, surrounded by other stupid people, cushioned from real, personal consequences. On top of that, for many enterprises, the incentives don't line up with the users. At least Wikipedia isn't profit driven, but you can probably think of some things you've used that got more annoying with updates, like Google putting more ads up top, or any website that does a redesign that yields more ad space and worse navigation.

  • Summarization is one of the things LLMs are pretty good at. […]

    not forced behavior for end users.

    This is what I'm constantly criticizing. It's fine to have more options, but they should be options and not mandatory.

    No, having to scroll past an AI summary for every fucking article is not an 'option.' Having the option to hide it forever (or even better, opt-in), now that's a real option.

    I'd really love to see the opt-in/opt-out data for AI. I guarantee businesses aren't including the option or recording data because they know it will show people don't want it, and they have to follow the data!

  • Hey everyone, this is Olga, the product manager for the summary feature again. […] With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

    Noo Wikipedia why would you do this

  • Grok please ELI5 this comment so i can understand it

    I know your comment was /s but I can't not repost this:

  • Hey everyone, this is Olga, the product manager for the summary feature again. […] With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

    I don't see how AI could benefit Wikipedia. The power consumption alone isn't worth it. Wikipedia is one of the rare AI-free zones, which is part of why it's good.

  • Hey everyone, this is Olga, the product manager for the summary feature again. […] With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.

    I don't think Wikipedia is run for the benefit of users anymore, but what even are the alternatives? Leftypedia? Definitely not Britannica.
