‘I blame Facebook’: Aaron Sorkin is writing a Social Network sequel for the post-Zuckerberg era

Technology
  • Let's fucking go

    The Facebook Files made – and provided evidence for – multiple allegations, including that Facebook was well aware of how toxic Instagram was for many teen girls; that Facebook has a "secret elite" list of people for whom Facebook's rules don't apply; that Facebook knew its revised algorithm was fueling rage; and that Facebook didn't do enough to stop anti-vax propaganda during Covid-19. Most damningly of all, The Facebook Files reported that all of these things were well known to senior executives, including Mark Zuckerberg.

    It's clear which side Sorkin is taking. "I blame Facebook for January 6," he said last year. "Facebook has been, among other things, tuning its algorithm to promote the most divisive material possible. Because that is what will increase engagement ... There’s supposed to be a constant tension at Facebook between growth and integrity. There isn’t. It’s just growth."

  • Shame on them if they don't highlight the fediverse.

  • Post-Zuckerberg? I'm confused on the eras of Facebook I guess. He's still CEO isn't he? Wouldn't that make the whole history of the company the Zuckerberg era?

  • It's probably referring to the era after Zuckerberg was an up-and-coming CEO still doing a bunch of new things, when almost all social media growth revolved around him. Now we're in an era where he's an establishment tech CEO, and what he does is less about innovation and more about blindly driving profit.

  • "tuning its algorithm to promote the most divisive material possible. Because that is what will increase engagement"

    But at the same time, every time I described on Lemmy an experience of not maximizing engagement by maximizing conflict, I was downvoted to hell's basement, even though two of the three modern social media models, the Facebook-like and the Reddit-like, are aimed at exactly that (the Twitter-like model excluded, though it's unfortunately vulnerable to bots). I mean, there's less conflict on fucking imageboards, which were at one point considered among the most toxic places on the interwebs.

    (Something-something Usenet-like namespaces instead of communities tied to instances; something-something identities likewise not tied to instances, and cryptographic; something-something subjective moderation, meaning subscribing to moderation authorities you choose, which would feel similar to joining a group, and the UI could even offer a few combinations of the same namespace with different moderation authorities; something-something a bigger role for client-side moderation, i.e. ignoring in the UI the people you don't like. Ideally the only things actually removed and not propagated to anyone would be stuff like calls for mass murder, stolen credentials, gore, real rape and CP. The dichotomy of posting to a namespace versus posting to an owned community is important: the latter triggers a "capture the field" reaction in humans.)
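The "subjective moderation" idea in that parenthetical can be sketched in a few lines: the same global pool of posts, viewed through whichever moderation authorities a user subscribes to, plus a personal client-side ignore list. Everything here (the authority names, the data shapes) is a hypothetical illustration, not a real protocol:

```python
# Sketch of "subjective moderation": one global set of posts, many views.
# All names are hypothetical illustrations.

posts = [
    {"id": "p1", "author": "alice", "body": "interesting take"},
    {"id": "p2", "author": "bob", "body": "flamebait"},
    {"id": "p3", "author": "carol", "body": "ordinary comment"},
]

# Each moderation authority publishes the set of post ids it hides.
authorities = {
    "strict-civility": {"p2"},
    "anything-goes": set(),
}

def view(posts, subscribed, ignored_authors):
    """Posts visible under the chosen authorities and personal ignore list."""
    hidden = set()
    for name in subscribed:
        hidden |= authorities[name]
    return [p for p in posts
            if p["id"] not in hidden and p["author"] not in ignored_authors]

# Two users, same data, different projections:
civil_view = view(posts, ["strict-civility"], ignored_authors=set())
raw_view = view(posts, ["anything-goes"], ignored_authors={"alice"})
print([p["id"] for p in civil_view])  # ['p1', 'p3']: p2 hidden by the authority
print([p["id"] for p in raw_view])    # ['p2', 'p3']: alice hidden client-side
```

Nothing is deleted from the pool; each user just composes their own filter stack.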

  • @maam Friendica works well with Lemmy and PieFed too, in case anyone reading this post is wondering

    replying from Friendica

    @maam @ryanee@hubzilla.am-networks.fr Mastodon emojis can also be seen from Friendica, so cool
  • ...And under the current model, the egos of mods get crazy big as they watch their community army grow and realize they can shape it however they want. Even Stack Overflow suffered, and developers left in droves long before LLMs took its place.

    I do miss the original imageboards, though, which used sage and made moderation a community-driven effort.

  • The mod ego problem will exist as long as there's moderation, unfortunately.

    It was present on the web even before the web was expelled from heaven.

    But it's not necessary to remove all moderation: global identifiers for posts, plus many different "moderating projections" of the same collection of data, can be enough to change the climate for most users. It's not moderation itself that matters; it's the ability to dominate, to shut someone's mouth. If the only projection that shows a post is one with no moderation at all, then maybe the post is too rude. If it's removed at the instance level on most instances, then maybe it's something genuinely nasty that shouldn't be seen. But if it's visible in some projections and not in others, then we've solved this particular problem.
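That rule of thumb can be written down as a tiny classifier over projections. Everything here (the projection names, the 0.5 threshold) is a hypothetical illustration of the heuristic, not part of any real system:

```python
# Sketch: judge a post by which "moderating projections" of the shared data
# still show it. Thresholds and names are assumed for illustration.

def classify(post_id, projections, removed_fraction):
    """projections: dict name -> set of visible post ids.
    removed_fraction: share of instances that removed the post outright."""
    visible_in = [name for name, ids in projections.items() if post_id in ids]
    if removed_fraction > 0.5:
        return "removed on most instances: probably genuinely nasty"
    if not visible_in:
        return "hidden by every projection: maybe too rude"
    if len(visible_in) < len(projections):
        return "visible in some projections, hidden in others: working as intended"
    return "visible everywhere"

projections = {
    "strict": {"p1"},
    "loose": {"p1", "p2"},
}
verdict = classify("p2", projections, removed_fraction=0.0)
print(verdict)  # visible in some projections, hidden in others: working as intended
```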

    In such a hypothetical system.

  • He should just adapt Careless People for film

  • Yeah, I agree, sort of. People have the right to be offended, so I prefer looser moderation over absolute moderation; otherwise there's no difference between the groups that preach "everything inclusive (except what we don't like)" and the groups that are clearly extreme with their own biases. The irony of free speech is that you're going to hear things you don't agree with, and that's fine.

  • What I'm saying is that some things should simply be impossible to do. If something is possible, it will be done, and there's nothing you can do about it.

    To solve the problem of twiddled social media (and of moderation used to assert dominance) we need a decentralized reimagining of the 90s Web, and the Fediverse doesn't deliver it: if Facebook and Reddit are feudal states, then the Fediverse is a confederation of smaller feudal entities.

    A post, a person, a community, a reaction and a change (by a moderator or by the user) should be global entities with global identifiers, so that the object with id #0000001a2b3c4d6e7f890 is the same object today or 10 years later on every server storing it, replicated over a network of servers similarly to Usenet (or to an IRC network, though servers in an IRC network trust each other, so it's not a good example for a global system).
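One plausible way to get identifiers that stay stable across servers and decades (my assumption; the comment doesn't specify a scheme) is to derive them from the object's content, the way content-addressed systems do. Every server that stores the object computes the same id independently:

```python
# Sketch: content-addressed global identifiers. If the id is a hash of a
# canonical serialization, every server derives the same id for the same
# object, today or 10 years later. An assumed scheme, not a spec.
import hashlib
import json

def global_id(obj: dict) -> str:
    # Canonical form: sorted keys, no whitespace, so serialization is stable.
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return "#" + hashlib.sha256(canonical.encode()).hexdigest()[:20]

post = {"type": "post", "author": "key:ab12", "body": "hello", "ts": 1700000000}
# Same content always yields the same id, on any server:
assert global_id(post) == global_id(dict(post))
print(global_id(post))
```

A nice side effect of this choice: an id can be verified against the object it names, so untrusted storage nodes can't silently substitute content.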

    Really bad posts (or posts by people with a history of posting them) should be banned at the server level by everyone. The rest should be moderated via moderator reactions/changes of a certain type.

    Ideally, for pooling of resources and resilience, servers would be separated by type:

    - storage nodes: I think the name says it; FTP servers could do the job, but there's no need to be limited to that;
    - index nodes: they scrape many storage nodes and give out results in a structured format fit for any user representation (say, a sequence of posts in one community, or a list of communities found by tag), and could be connected into one DHT for Kademlia-like search, since no single index node will have everything;
    - tracker nodes (like in torrents?) for the above and for identities: I think a torrent-like announce-retrieve service is enough to return a list of storage nodes storing a specified partition (a subspace of object identifiers, to make looking for something at least possibly efficient), or a list of index nodes, or a bunch of certificates and keys for an identity (which should somehow be cryptographically connected to the person's global identifier).

    So when a storage node comes online, it announces itself to a bunch of such trackers; similarly with index nodes, similarly with a user. One could also have a NOSTR-like service for real-time notifications by users.
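The announce-retrieve role of the tracker nodes is simple enough to sketch as a toy in-memory service; the partition prefixes and node addresses below are invented for illustration, and no real wire protocol is implied:

```python
# Toy tracker for the announce-retrieve idea: storage nodes announce which
# identifier partitions they hold; clients ask which nodes serve a partition.
from collections import defaultdict

class Tracker:
    def __init__(self):
        # partition prefix -> set of node addresses holding that partition
        self.partitions = defaultdict(set)

    def announce(self, node_addr: str, partition_prefixes: list) -> None:
        for prefix in partition_prefixes:
            self.partitions[prefix].add(node_addr)

    def lookup(self, partition_prefix: str) -> list:
        return sorted(self.partitions[partition_prefix])

t = Tracker()
t.announce("storage1.example:21", ["#00", "#01"])
t.announce("storage2.example:21", ["#01"])
found = t.lookup("#01")
print(found)  # ['storage1.example:21', 'storage2.example:21']
```

Clients would query several independent trackers and merge the answers, so no single tracker is a point of trust or failure.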

    This way you'd have a global, untrusted, pooled infrastructure capable of replacing many platforms, with common data, identities and services. Objects in storage and index services could use a format consisting of a set of tags followed by the body. A specific application that only needs its own data would then query the index services and display only objects carrying, say, the tags "holo_ns:talk.bullshit.starwars" and "holo_t:post" as a sequence of posts with the ability to comment; or it might search for objects tagged "holo_name:My 1999-like Star Wars holopage" and "holo_t:page", display the links like Google search results, and on click present something like a webpage, except that links would lead to global identifiers (or tag expressions interpreted by the particular application, who knows).

    (An index service might return, say, an array of objects, each with an identifier, tags, a list of storage-node locations where it's found or even BitTorrent magnet links, and possibly a free-form description; the user application can then unify the responses of several such services to avoid repetition, sort them, and represent them as needed.)
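The client-side unification step described there can be sketched directly: filter by a required tag set and de-duplicate by global id across index nodes. The "holo_*" tag names follow the hypothetical examples above; the response shape is assumed:

```python
# Sketch of a client merging responses from several index nodes: keep objects
# that carry all required tags, drop duplicates reported by more than one node.

def merge_index_responses(responses, required_tags):
    seen, merged = set(), []
    for response in responses:            # one list of objects per index node
        for obj in response:
            if obj["id"] in seen:
                continue                  # same object, reported by another node
            if required_tags <= set(obj["tags"]):
                seen.add(obj["id"])
                merged.append(obj)
    return merged

node_a = [{"id": "p1", "tags": ["holo_ns:talk.bullshit.starwars", "holo_t:post"],
           "locations": ["s1"]}]
node_b = [{"id": "p1", "tags": ["holo_ns:talk.bullshit.starwars", "holo_t:post"],
           "locations": ["s2"]},
          {"id": "p2", "tags": ["holo_t:page"], "locations": ["s2"]}]

feed = merge_index_responses([node_a, node_b], {"holo_t:post"})
print([p["id"] for p in feed])  # ['p1']: duplicate dropped, the page filtered out
```

A real client would also merge the `locations` lists of duplicates instead of discarding them, so it learns every storage node holding the object.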

    User applications on top of that common infrastructure could all be different at the same time: some like Facebook, some like ICQ, some like a web browser, some like a newsreader.

    (Star Wars is not a random reference; my whole habit of imagining tech stuff comes from trying to imagine a science-fiction world of the future, so yes, this may look like passive dreaming, and it is.)
