
‘I blame Facebook’: Aaron Sorkin is writing a Social Network sequel for the post-Zuckerberg era

Technology
  • Let's fucking go

    The Facebook Files made – and provided evidence for – multiple allegations, including that Facebook was well aware of how toxic Instagram was for many teen girls; that Facebook has a "secret elite" list of people for whom Facebook's rules don't apply; that Facebook knew its revised algorithm was fueling rage; and that Facebook didn't do enough to stop anti-vax propaganda during Covid-19. Most damningly of all, The Facebook Files reported that all of these things were well known to senior executives, including Mark Zuckerberg.

    It's clear which side Sorkin is taking. "I blame Facebook for January 6," he said last year. "Facebook has been, among other things, tuning its algorithm to promote the most divisive material possible. Because that is what will increase engagement ... There’s supposed to be a constant tension at Facebook between growth and integrity. There isn’t. It’s just growth."

  • Shame on them if they don't highlight the fediverse.

  • Post-Zuckerberg? I'm confused about the eras of Facebook, I guess. He's still CEO, isn't he? Wouldn't that make the whole history of the company the Zuckerberg era?

  • It’s probably referring to the era when Zuckerberg was an up-and-coming CEO still doing a bunch of new things, when almost all of social media's growth revolved around him. Now we’re in an era where he’s an establishment tech CEO: what he does is less about innovation and more about blindly driving profit.

  • tuning its algorithm to promote the most divisive material possible. Because that is what will increase engagement

    But at the same time, every time I described on Lemmy an experience that doesn't maximize engagement by maximizing conflict, I was downvoted to hell's basement. And that's despite two of the three modern social media models being aimed at exactly that: the Facebook-like and the Reddit-like, excluding the Twitter-like (which is unfortunately vulnerable to bots). I mean, there's less conflict on fucking imageboards, which were at some point considered among the most toxic places on the interwebs.

    (Something-something Usenet-like namespaces instead of communities tied to instances; something-something identities likewise not tied to instances, and cryptographic; something-something subjective moderation (subscribing to moderation authorities of your choice, which would feel similar to joining a group; the UI could even offer a few combinations of the same namespace with different moderation authorities); something-something a bigger role for client-side moderation (ignoring, in the UI, the people you don't like). Ideally the only things that actually get removed and not propagated to anyone would be stuff like calls for mass murders, stolen credentials, gore, real rape and CP. The "posting to a namespace versus posting to an owned community" dichotomy is important: the latter triggers a "capture the field" reaction in humans.)
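
    A minimal sketch of how that subjective-moderation idea could be composed client-side; every type and name below is invented for illustration, not an existing Fediverse API:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str      # global identifier, not tied to any instance
    namespace: str    # Usenet-like namespace, e.g. "talk.something"
    author_key: str   # cryptographic identity of the author
    body: str

@dataclass
class ModAuthority:
    """A moderation authority the user chooses to subscribe to."""
    name: str
    hidden_posts: set = field(default_factory=set)    # post_ids this authority hides
    hidden_authors: set = field(default_factory=set)  # author keys this authority hides

def visible_posts(posts, namespace, authorities, personal_ignore_list):
    """Compose a view: one namespace plus the moderation authorities the user picked,
    plus purely client-side ignores. Users combining different authorities see
    different 'projections' of the same underlying data."""
    out = []
    for p in posts:
        if p.namespace != namespace:
            continue
        if p.author_key in personal_ignore_list:
            continue
        if any(p.post_id in a.hidden_posts or p.author_key in a.hidden_authors
               for a in authorities):
            continue
        out.append(p)
    return out
```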

  • @maam Friendica works well with Lemmy and PieFed too, if anyone reading this post is wondering

    replying from Friendica

    @maam @ryanee@hubzilla.am-networks.fr Mastodon emojis can also be seen from Friendica, so cool

  • ...And under the current model, the egos of mods get crazy big as they see their community grow bigger and realize they can shape it however they want; even Stack Overflow suffered, with developers leaving in droves long before LLMs took its place.

    I do miss the original imageboards, though, which used sage and treated moderation as a community-driven effort.

  • The mod ego problem will exist as long as there's moderation, unfortunately.

    It was present in the web even before it was expelled from heaven.

    But it's not necessary to remove all moderation: global identifiers for posts plus many different "moderating projections" of the same collection of data can be enough to change the climate for most users. It's not moderation itself that really matters; it's the ability to dominate, to shut someone's mouth. If the only way to see a post is with no moderation applied at all, then maybe it really is too rude. If it's removed at the instance level on most instances, then maybe it's something genuinely nasty that shouldn't be seen. But if it's visible in some projections and not in others, then we've solved this particular problem.

    In such a hypothetical system.
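
    A rough sketch of that three-way outcome in such a hypothetical system; the threshold and all names below are invented for illustration:

```python
def post_visibility(post_id, instance_removals, projections, chosen_projection):
    """instance_removals: list of sets of post_ids removed per instance.
    projections: dict mapping projection name -> set of post_ids it hides.
    chosen_projection: the projection this user is currently viewing through."""
    removed_share = sum(post_id in removed for removed in instance_removals) \
                    / max(len(instance_removals), 1)
    if removed_share > 0.9:           # removed on most instances: probably really nasty
        return "removed"
    if all(post_id in hidden for hidden in projections.values()):
        return "hidden in every projection"   # maybe it really is too rude
    if post_id in projections[chosen_projection]:
        return "hidden in this projection"    # but other projections can still show it
    return "visible"
```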

  • He should just adapt Careless People for film

  • The mod ego problem will exist as long as there's moderation, unfortunately.

    Yeah, I agree, sort of. People have the right to be offended, so I prefer looser moderation over absolute moderation; otherwise there's no difference between the groups that preach 'everything inclusive (except what we don't like)' and those who are clearly extreme and have their own biases. The irony of free speech is that you're going to hear things you don't agree with, and that's fine.

  • What I'm talking about is that it should be impossible to do some things. If something is possible, it will be done, and there's nothing you can do about it.

    To solve the problem of twiddled social media (and of moderation used to assert dominance), we need the decentralized 90s Web reimagined, and the Fediverse doesn't deliver that: if Facebook and Reddit are feudal states, then the Fediverse is a confederation of smaller feudal entities.

    A post, a person, a community, a reaction, and a change (by a moderator or by the user) should be global entities with global identifiers, so that the object with id #0000001a2b3c4d6e7f890 is the same object today or 10 years later on every server storing it, replicated over a network of servers similarly to Usenet (or to an IRC network, except that IRC servers trust each other, so it's not a good model for a global system).
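
    One way such stable global identifiers could work is content addressing, deriving the id from a hash of the object itself; this is only an illustrative sketch, not a spec:

```python
import hashlib, json

def global_id(obj: dict) -> str:
    """Derive an identifier like '#0000001a2b3c4d6e7f890' from the object's content,
    so every server storing the same object computes the same id, now or in ten
    years, with no central naming authority."""
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
    return "#" + hashlib.sha256(canonical).hexdigest()[:21]

post = {"type": "post", "author": "alice-pubkey", "body": "hello", "ts": 1700000000}
print(global_id(post))   # identical on any server holding this exact object
```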

    Really bad posts (or those from people with a history of posting them) should be banned at the server level by everyone. The rest should be moderated via moderator reactions/changes of a certain type.

    Ideally, for pooling of resources and for resilience, servers would be separated by type: storage nodes (the name says it; FTP servers could do the job, though there's no need to be limited to that), index nodes (scraping many storage nodes and returning results in a structured format fit for any user-facing representation, say a sequence of posts in one community, or a list of communities found by tag, or ..., and possibly connected into one DHT for Kademlia-like search, since no single index node will have everything), and, like in torrents, tracker nodes for these and for identities. A torrent-like announce/retrieve service is enough: return the list of storage nodes holding a specified partition (a subspace of object identifiers, so that looking something up can be at least somewhat efficient), or return a list of index nodes, or return the certificates and keys for an identity (which should somehow be cryptographically bound to the person's global identifier). So when a storage node comes online, it announces itself to a bunch of such trackers; similarly for index nodes, and similarly for a user. One could also have a NOSTR-like service for real-time notifications by users.
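
    A toy sketch of that torrent-like announce/retrieve tracker, with invented method names, just to make the division of labour concrete:

```python
from collections import defaultdict

class Tracker:
    """Announce/retrieve only: storage nodes announce which identifier partitions
    they hold, index nodes announce themselves, and identities resolve to their
    certificates/keys. Illustrative, not an existing protocol."""
    def __init__(self):
        self.storage = defaultdict(set)   # partition prefix -> storage node addresses
        self.indexes = set()              # index node addresses
        self.identities = {}              # person id -> certificate / public key blob

    def announce_storage(self, node_addr, partitions):
        for p in partitions:
            self.storage[p].add(node_addr)

    def announce_index(self, node_addr):
        self.indexes.add(node_addr)

    def publish_identity(self, person_id, certificate):
        self.identities[person_id] = certificate

    def storage_for(self, object_id):
        # a partition here is just a 4-character prefix of the global identifier
        return sorted(self.storage[object_id[:4]])

    def index_nodes(self):
        return sorted(self.indexes)

    def identity(self, person_id):
        return self.identities.get(person_id)
```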

    This way you'd have a global, untrusted, pooled infrastructure that could replace many platforms, with common data, identities, and services. Objects in storage and index services could be, say, in a format consisting of a set of tags followed by the body. A specific application that only needs to show data relevant to it would just search the index services and display only objects with tags like "holo_ns:talk.bullshit.starwars" and "holo_t:post", as a sequence of posts with the ability to comment; or it might search for objects tagged "holo_name:My 1999-like Star Wars holopage" and "holo_t:page" and display the links like Google search results, and clicking one would show you something presented like a webpage, except that its links would lead to global identifiers (or to tag expressions interpreted by the particular application, who knows).

    (An index service might return, say, an array of objects, each with its identifier, its tags, a list of locations on storage nodes where it can be found or even BitTorrent magnet links, and possibly a free-form description; the user application can then unify the responses of a few such services to avoid repetitions, sort them, present them as needed, and so on.)
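
    A small sketch of that client-side unification step, reusing the tag names from above; the IndexNode stub stands in for a real networked index service and is purely hypothetical:

```python
class IndexNode:
    """Stand-in for a remote index service; a real one would answer over the network."""
    def __init__(self, objects):
        self.objects = objects   # each: {"id": ..., "tags": [...], "locations": [...]}

    def search(self, required_tags):
        return [o for o in self.objects if set(required_tags) <= set(o["tags"])]

def merge_results(index_nodes, required_tags):
    """Ask several index nodes for objects carrying all the required tags
    (e.g. "holo_ns:talk.bullshit.starwars" and "holo_t:post"), drop repetitions
    by global id, and hand the rest to whatever representation the app wants."""
    seen, merged = set(), []
    for node in index_nodes:
        for obj in node.search(required_tags):
            if obj["id"] not in seen:        # same object may come from several indexes
                seen.add(obj["id"])
                merged.append(obj)
    return merged
```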

    The user applications built on that common infrastructure can all be different at the same time: some like Facebook, some like ICQ, some like a web browser, some like a newsreader.

    (Star Wars is not a random reference; my whole habit of imagining tech stuff comes from trying to imagine a science-fiction world of the future, so yeah, this may seem like passive dreaming, and it is.)
