‘I blame Facebook’: Aaron Sorkin is writing a Social Network sequel for the post-Zuckerberg era

Technology
  • Shame on them if they don't highlight the fediverse.

  • Post-Zuckerberg? I'm confused on the eras of Facebook I guess. He's still CEO isn't he? Wouldn't that make the whole history of the company the Zuckerberg era?

  • It's probably referring to the era after Zuckerberg was an up-and-coming CEO still doing a bunch of new things, when almost all social-media growth revolved around him. Now he's an establishment tech CEO: what he does is less about innovation and more about blindly chasing profit.

  • Let's fucking go

    The Facebook Files made – and provided evidence for – multiple allegations, including that Facebook was well aware of how toxic Instagram was for many teen girls; that Facebook has a "secret elite" list of people for whom Facebook's rules don't apply; that Facebook knew its revised algorithm was fueling rage; and that Facebook didn't do enough to stop anti-vax propaganda during Covid-19. Most damningly of all, The Facebook Files reported that all of these things were well known to senior executives, including Mark Zuckerberg.

    It's clear which side Sorkin is taking. "I blame Facebook for January 6," he said last year. "Facebook has been, among other things, tuning its algorithm to promote the most divisive material possible. Because that is what will increase engagement ... There’s supposed to be a constant tension at Facebook between growth and integrity. There isn’t. It’s just growth."

    tuning its algorithm to promote the most divisive material possible. Because that is what will increase engagement

    But every time I described on Lemmy an experience that doesn't maximize engagement by maximizing conflict, I was downvoted to hell's basement. That's despite two of the three modern social-media experience models being aimed at exactly that: the Facebook-like and the Reddit-like, excluding the Twitter-like (which is unfortunately vulnerable to bots). I mean, there's less conflict on fucking imageboards, which were at one point considered among the most toxic places on the interwebs.

    (Something-something Usenet-like namespaces instead of communities tied to instances; something-something identities likewise not tied to instances and cryptographic; something-something subjective moderation: subscribing to moderation authorities of your choice, which would feel similar to joining a group, and the UI could even offer several combinations of the same namespace with different moderation authorities; something-something a bigger role for client-side moderation, i.e. ignoring in the UI the people you don't like. Ideally the only things actually removed and not propagated to anyone would be stuff like calls for mass murder, stolen credentials, gore, real rape and CP. The "posting to a namespace versus posting to an owned community" dichotomy is important: the latter triggers a "capture the field" reaction in humans.)
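    A minimal sketch of the subjective-moderation idea above, in Python. `Post`, `ModerationAuthority` and `Subscription` are hypothetical names invented for illustration; a real system would fetch authority lists over the network rather than hold them in memory:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Post:
        post_id: str
        namespace: str
        author: str
        body: str

    @dataclass
    class ModerationAuthority:
        """An authority you subscribe to; it publishes the post ids it hides."""
        name: str
        hidden: set = field(default_factory=set)

    @dataclass
    class Subscription:
        """A user's view: one namespace combined with chosen authorities,
        plus a personal client-side ignore list."""
        namespace: str
        authorities: list = field(default_factory=list)
        ignored_authors: set = field(default_factory=set)

        def visible(self, posts):
            # A post shows up only if it is in this namespace, its author is
            # not client-side ignored, and no subscribed authority hides it.
            return [
                p for p in posts
                if p.namespace == self.namespace
                and p.author not in self.ignored_authors
                and not any(p.post_id in a.hidden for a in self.authorities)
            ]
    ```

    The same posts viewed through a different `Subscription` (other authorities, other ignore list) yield a different feed, which is the whole point.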

  • @maam Friendica works well with Lemmy and PieFed too, if anyone reading this post is wondering

    replying from Friendica

    @maam @ryanee@hubzilla.am-networks.fr Mastodon emojis can also be seen from Friendica, so cool
  • ...And under the current model, the egos of mods get crazy big as they watch their community army grow and realize they can shape it however they want; even Stack Overflow suffered, and developers left in droves long before LLMs took its place.

    I do miss the original imageboards, though, which used sage and made moderation a community-driven effort.

  • The mod ego problem will exist as long as there's moderation, unfortunately.

    It was present in the web even before it was expelled from heaven.

    But it's not necessary to remove all moderation; global identifiers for posts, plus many different "moderating projections" of the same collection of data, could be enough to change the climate for most users. It's not moderation itself that matters - the ability to dominate, to shut someone's mouth, is what matters. If the only way you can see a post is with no projection applied at all, then maybe it's too rude. If it's removed at the instance level on most instances, then maybe it's something genuinely nasty that shouldn't be seen. But if it's visible in some projections and not in others, then we've solved this particular problem.

    In such a hypothetical system.
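    The "moderating projections" idea can be made concrete with a toy helper. The projection names and the three-way classification are illustrative, not from any real system:

    ```python
    def classify(post_id, projections):
        """Given projections (a dict: projection name -> set of hidden post ids),
        report the three cases described above: hidden in every projection,
        visible in every projection, or dependent on which projection you use."""
        hidden_in = [name for name, hidden in projections.items() if post_id in hidden]
        if len(hidden_in) == len(projections):
            return "hidden everywhere"   # probably genuinely nasty
        if not hidden_in:
            return "visible everywhere"  # nobody objects
        return "depends on projection"   # the interesting, conflict-defusing case
    ```

    The middle case is where domination disappears: no single moderator can make a post unseeable for everyone.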

  • He should just adapt Careless People for film

  • Yeah, I agree of sorts, and people have the right to be offended, so I prefer looser moderation over the absolute; otherwise there's no difference between the groups that preach 'everything inclusive (except what we don't like)' and those that are openly extreme with their own biases. The irony of free speech is that you're going to hear things you don't agree with, and that's fine.

  • What I'm speaking about is that some things should simply be impossible to do. If they're possible, they will be done, and there's nothing you can do about it.

    To solve the problem of twiddled social media (and of moderation used to assert dominance) we need a decentralized system, the 90s Web reimagined, and the Fediverse doesn't deliver it: if Facebook and Reddit are feudal states, then the Fediverse is a confederation of smaller feudal entities.

    A post, a person, a community, a reaction and a change (by a moderator or by the user) should all be global entities with global identifiers, so that the object with id #0000001a2b3c4d6e7f890 is the same object today or 10 years later on every server storing it, replicated over a network of servers similarly to Usenet (or to an IRC network, except that in an IRC network the servers trust each other, so it's not a good example for a global system).
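    One plausible way to get identifiers that mean the same thing on every server (an assumption for illustration, not the commenter's spec) is to derive them from a canonical serialization of the object's content, so any server computes the same id independently:

    ```python
    import hashlib
    import json

    def global_id(obj: dict) -> str:
        """Derive a stable global identifier from an object's canonical JSON
        form: sorted keys and fixed separators, so field order cannot change
        the hash. Truncated here purely to match the id shape in the text."""
        canonical = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
        return "#" + hashlib.sha256(canonical).hexdigest()[:20]
    ```

    Content-derived ids also make objects self-verifying: a server cannot silently hand you altered content under the same identifier.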

    Really bad posts (or those by people with a history of posting them) should be banned at the server level by everyone. The rest should be moderated through moderator reactions/changes of a certain type.

    Ideally, for pooling of resources and for resilience, servers would be split by type:

    • Storage nodes: I think the name says it. FTP servers could do the job, but there's no need to be limited to that.

    • Index nodes: these scrape many storage nodes and give out results in a structured format fit for any user representation - say, a sequence of posts in one community, or a list of communities found by tag. They could be connected into one DHT for Kademlia-like search, since no single index node will have everything.

    • Tracker nodes: for the above and for identities. Like in torrents, a simple announce-retrieve service is enough: return the list of storage nodes holding a given partition (a subspace of object identifiers, so lookups can be at least potentially efficient), or the list of index nodes, or a bundle of certificates and keys for an identity (which should be cryptographically tied to that person's global identifier).

    So when a storage node comes online it announces itself to a bunch of such trackers; similarly for index nodes, and similarly for a user. One could also have a NOSTR-like service for real-time notifications by users.
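    The torrent-like announce/retrieve tracker could be as small as this sketch. `Tracker` and `partition_of` are hypothetical names, nodes would of course talk over the network, and the partition count is an arbitrary choice:

    ```python
    import hashlib
    from collections import defaultdict

    class Tracker:
        """Announce/retrieve in the torrent style: nodes announce which
        partition (identifier subspace) they serve; clients ask who serves one."""
        def __init__(self):
            self.partitions = defaultdict(set)

        def announce(self, node_addr: str, partition: int) -> None:
            # Idempotent: re-announcing the same node is harmless.
            self.partitions[partition].add(node_addr)

        def retrieve(self, partition: int) -> list:
            return sorted(self.partitions[partition])

    def partition_of(object_id: str, n_partitions: int = 256) -> int:
        """Map an object id to its partition by hashing, so the id alone
        tells a client which partition (and thus which nodes) to ask."""
        return hashlib.sha256(object_id.encode()).digest()[0] % n_partitions
    ```

    A real deployment would add expiry, so nodes that stop re-announcing drop out of the lists.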

    This way you'd have a global, untrusted, pooled infrastructure that could replace many platforms, with common data, identities and services. Objects in storage and index services could follow a format of, say, a set of tags followed by the body. A specific application needing only its own data would then query index services and display only objects with tags like "holo_ns:talk.bullshit.starwars" and "holo_t:post", presented as a sequence of posts with the ability to comment; or it might search for objects tagged "holo_name:My 1999-like Star Wars holopage" and "holo_t:page", display the links like search results in Google, and on click present something like a webpage, except that its links would lead to global identifiers (or to tag expressions interpreted by the particular application, who knows).
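    Filtering objects by tag, as described above, is just a set-containment check. The object layout here (a dict carrying a `tags` list) is an assumed illustration of the tags-then-body format:

    ```python
    def query(objects, required_tags: set) -> list:
        """Return the objects carrying every required tag, e.g.
        {'holo_ns:talk.bullshit.starwars', 'holo_t:post'} for a post feed."""
        return [o for o in objects if required_tags <= set(o["tags"])]
    ```

    Different applications over the same object pool differ only in which tag set they require and how they render the results.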

    (An index service might return, say, an array of objects, each with an identifier, its tags, a list of the storage nodes where it can be found (or even BitTorrent magnet links), and possibly a free-form description; the user application can then unify the responses of several such services to avoid repetition, sort them, and represent them as needed.)
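    A sketch of that client-side unification step; the response shape (`id`, `tags`, `locations`) is assumed from the description above, since no index node will have everything:

    ```python
    def merge_responses(responses) -> list:
        """Unify several index-node responses: deduplicate by object id and
        union the storage locations each node reported for the same object."""
        merged = {}
        for resp in responses:
            for obj in resp:
                oid = obj["id"]
                if oid in merged:
                    merged[oid]["locations"] = sorted(
                        set(merged[oid]["locations"]) | set(obj["locations"])
                    )
                else:
                    merged[oid] = {**obj, "locations": sorted(obj["locations"])}
        # Deterministic order regardless of which index answered first.
        return sorted(merged.values(), key=lambda o: o["id"])
    ```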

    The user applications on top of that common infrastructure could all be different at the same time: some like Facebook, some like ICQ, some like a web browser, some like a newsreader.

    (Star Wars is not a random reference, my whole habit of imagining tech stuff is from trying to imagine a science fiction world of the future, so yeah, this may seem like passive dreaming and it is.)

  • No bias, no bull AI

    That's very sensible of you.
  • Judge briefly pauses 23andMe bankruptcy sale amid California's appeal

  • Windows 11 finally overtakes Windows 10 [in marketshare]

    Yeah, and it's most likely only because they're killing Windows 10 in the fall, which means a lot of companies have been working hard this year to replace a ton of computers before October. Anyone who has been down this road from 7 to 10 knows it will just cost more money if you need continued support after that: they sell you a new license, good for a year, that keeps updates coming, and it doubles in cost every year after.
  • Study finds persistent spike in hate speech on X

    You are a Zionist, so it's funny that you say that
  • I believed they were doing such things against budding competitors long before the LLM era. My test is simple: replace it with China. Would the replies be the opposite of what you've received so far? The answer is yes; absolutely, people would be frothing at the mouth about China being bad actors. Western tech bros are just as paranoid; they copy off others, they steal ideas. When we do it, it's called "innovation".
  • Tokyo banned diesel motors in the late 90s; as far as I know that didn't kill Toyota. At the same time, European car makers started to lobby for particle filters that were supposed to solve everything. The politicians who were naive enough to believe them share responsibility, but not as much as the European auto industry that created this whole situation. Also, you imply that laws are made by politicians without any intervention from the industries whatsoever. I think you know that's not how it works.
  • OpenAI plans massive UAE data center project

    TD Cowen (basically the US arm of one of the largest Canadian investment banks) did an extensive report on the state of AI investment. What they found was that, despite all their big claims about the future of AI, Microsoft was quietly allowing letters of intent for billions of dollars' worth of new compute capacity to expire: scrapping future plans for expansion, but in a way that isn't showy and doesn't require any kind of big announcement. The equivalent of promising to be at the party and then just not showing up. Not long after this reporting came out, it was confirmed by Microsoft, and not long after that it emerged that Amazon was doing the same thing. Ed Zitron has a really good write-up on it: https://www.wheresyoured.at/power-cut/

    Amazon isn't the big surprise; they've always been the most cautious of the big players on the whole AI thing. Microsoft, on the other hand, is very much trying to play things both ways. They know AI is fucked, which is why they're scaling back, but they've also invested a lot of money into their OpenAI partnership, so now they have to justify that expenditure, which means convincing investors that consumers absolutely love their AI products and are desperate for more.

    As always, follow the money. Stuff like the Three Mile Island thing is mostly just applying for permits and so on at this point, relatively small investments. As soon as it comes to big money hitting the table, they're pulling back. That's how you know how they really feel.