
The End of Publishing as We Know It

Technology
10 votes · 7 posts · 0 views
  • AI companies than blog and social-media posts. (Ziff Davis is suing OpenAI for training on its articles without paying a licensing fee.) Researchers at Microsoft have also written publicly about “the importance of high-quality data” and have suggested that textbook-style content may be particularly desirable.

    If they want quality data, then don't kill them. Secondly, if they want us as gig workers providing content for AI, don't act surprised when people start feeding it gibberish. It's already happening: LLMs are hallucinating a whole lot more than the earliest GPT-3 models did. That means something; they just haven't thought about it long enough. If a reasoning model gets things wrong 30 to 50% of the time, with peaks of a 75% bullshit rate, it's worthless. Killing good journalism for this is so dumb.

  • If they want quality data then, don't kill them. […]

    If it gets things wrong often enough, people will stop using it. So it would be in the interest of AI companies to pay for good sources of data.

    Or at least you'd hope so. In actual fact they'll be thinking: let's keep stealing, because most people don't know or care whether what the AI says is true. Besides, they can make money by turning it into a tool for disseminating the views of whoever can pay the most.

  • If they want quality data then, don't kill them. […]

    Interestingly, I'm not seeing your quoted content when I look at this article. I see a three-paragraph-long article that says in a nutshell "people don't visit source sites as much now that AI summarizes the contents for them." (Ironic that I am manually summarizing it like that).

    Perhaps it's some kind of paywall blocking me from seeing the rest? I don't see any popup telling me that, but I've got a lot of adblockers that might be stopping that from appearing. I'm not going to disable adblockers just to see whether this is paywalled, given how incredibly intrusive and annoying ads are these days.

    Gee, I wonder why people prefer AI.

  • If you want quality data, then don't kill them

    That is like telling cancer that if it wants to live it shouldn't kill the host.

    You're asking a lot from people without the ability to think about anything other than themselves.

  • I often wonder about the stuff I write, what becomes of it. It's a little disheartening since I love crafting it for best effect... But especially with computer books for beginners, people prefer to ask AI for the answers instead of studying.

    I also just bought 6 sci-fi books from an author I'd never heard of for cheap. I love supporting indy authors, the price was right, and they sold their books directly from the website, no middlemen and no DRM. Perfect.

    But was the author real? I actually did a bunch of research to find out their history and all that before pulling the trigger. I really don't want to read AI stories. But I can see a future where the vast majority don't care. Imagine an endless episode of Survivor or a soap opera, completely generated 24x7 forever. You know that shit would be massive.

    And there might only be a fringe that seeks human-generated content for the humanity of it.

  • I also just bought 6 sci-fi books from an author I'd never heard of for cheap. I love supporting indy authors, the price was right, and they sold their books directly from the website, no middlemen and no DRM. […]

    What do you mean by "no DEI"?

  • What do you mean by "no DEI"?

    No DEez nuts Included

    /s

  • Perhaps it's some kind of paywall blocking me from seeing the rest? […]

    Alright, the site itself is legible, but if you find it hard to read you could use uBlock or the archive.is website. It's also a short article.

  • What do you mean by "no DEI"?

    Lol... I meant "DRM". But it's been a long day.

  • 91 votes
    2 posts
    0 views
    I wouldn't call it unprecedented, just more obvious
  • How can websites verify unique (IRL) identities?

    Technology
    8 votes
    6 posts
    1 view
    Safe, yeah. Private, no. If you want to verify whether a user is a real person, you need very personally identifiable information. That’s not ever going to be private. The best you could do, in theory, is have a government service that takes that PII and gives the user a signed cryptographic certificate they can use to verify their identity. Most people would either lose their private key or have it stolen, so even that system would have problems. The closest to reality you could do right now is use Apple’s FaceID, and that’s anything but private. Pretty safe though. It’s super illegal and quite hard to steal someone’s face.
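    The "government signs a cryptographic certificate" idea described above can be sketched with ordinary digital signatures. Here is a minimal, hypothetical illustration using Go's standard `crypto/ed25519` package; every name in it is made up for the sketch, and it is not any real e-ID protocol. The key point is that the website verifies the government's signature over the user's public key plus a fresh challenge response, without ever seeing the PII itself:

    ```go
    package main

    import (
    	"crypto/ed25519"
    	"crypto/rand"
    	"fmt"
    )

    func main() {
    	// Government keypair: the trusted root, whose public key
    	// websites would obtain out of band.
    	govPub, govPriv, _ := ed25519.GenerateKey(rand.Reader)

    	// Citizen keypair: the private key is exactly what the
    	// comment warns people would lose or have stolen.
    	userPub, userPriv, _ := ed25519.GenerateKey(rand.Reader)

    	// Enrollment: after checking the PII offline, the government
    	// signs the citizen's public key. This signature is the
    	// "certificate"; it contains no PII.
    	cert := ed25519.Sign(govPriv, userPub)

    	// Login: the website sends a random challenge...
    	challenge := make([]byte, 32)
    	rand.Read(challenge)

    	// ...and the user signs it to prove possession of the
    	// certified private key.
    	response := ed25519.Sign(userPriv, challenge)

    	// The website checks both links in the chain: the key is
    	// government-certified, and the challenge response matches.
    	ok := ed25519.Verify(govPub, userPub, cert) &&
    		ed25519.Verify(userPub, challenge, response)
    	fmt.Println("verified:", ok)
    }
    ```

    Even in this toy form, the weak spots the comment mentions are visible: everything hinges on the user keeping `userPriv` safe, and on trusting whoever holds `govPriv`.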
  • 136 votes
    9 posts
    7 views
    I support them, China I mean
  • 693 votes
    140 posts
    44 views
    Maybe I don't want you to stop, big boy.
  • 4 votes
    1 post
    3 views
    No one has replied
  • Covert Web-to-App Tracking via Localhost on Android

    Technology
    28 votes
    3 posts
    9 views
    That update though: "... completely removed..." I assume this is because someone at Meta realized this was a huge breach of trust, and likely quite illegal. Edit: I read somewhere that they're just being cautious about Google Play terms of service. That feels worse.
  • Why doesn't Nvidia have more competition?

    Technology
    33 votes
    22 posts
    8 views
    It’s funny how the article asks the question but completely fails to answer it.

    About 15 years ago, Nvidia discovered there was demand for compute in datacenters that could be met with powerful GPUs. They were quick to respond, and they had the resources to focus on it strongly because of their huge success and high profitability in the GPU market.

    AMD also saw the market and wanted to pursue it, but just over a decade ago, when the high potential for profitability began to clearly show, AMD was near bankrupt and very hard pressed to finance development of GPU compute for datacenters. AMD tried the best they could and was moderately successful from a technology perspective, but Nvidia already had a head start, and its proprietary CUDA platform was an established standard that was very hard to break into.

    Intel simply fumbled the ball from start to finish. It spent a decade trying to unseat ARM from the mobile crown, investing billions (roughly the equivalent of ARM’s total revenue), and never managed to catch up despite having the better production process at the time. That was Intel’s main focus, and Intel believed the GPU would never be more than a niche product. So when Intel tried to compete on datacenter compute, it did so with x86 chips; one of its boldest efforts was a monstrous cluster of Celeron chips, which of course performed laughably badly compared to Nvidia. Because as it turns out, the way forward, at least for now, is indeed the massively parallel compute capability of a GPU, which Nvidia has refined for decades, with only (inferior) competition from AMD. And despite the lack of competition, Nvidia did not slow down; with increased profits they only grew bolder in their efforts, making it even harder to catch up.

    Now AMD has had more money to compete for a while, and they do have some decent compute units, but Nvidia remains ahead and the CUDA problem is still there. For AMD to really compete with Nvidia, they have to be better in order to attract customers. That’s a very tall order against a company that simply seems to never stop progressing. So the only other option for AMD is to sell a bit cheaper, which I suppose they have to.

    AMD and Intel were the obvious competitors; everybody else is coming from even further behind. But if I had to make a bet, it would be on Huawei. Huawei has some crazy good developers, and Trump is basically forcing them to figure it out themselves, because he is blocking Huawei, and China in general, from using both AMD and Nvidia AI chips. The chips will probably be made by Chinese SMIC, because China is also prevented from using advanced production in the West, most notably TSMC. China will prevail, because it’s become a national project of both prestige and necessity, and they have a massive talent pool and resources, so nothing can stop it now. IMO the USA would clearly have been better off allowing China to use American chips. Now China will soon compete directly on both production and design too.
  • 278 votes
    100 posts
    10 views
    It's not just skills, it's also capital investment.