
Study: Social media probably can’t be fixed

Technology
  • Can't?

    I'm on Lemmy, am I not?

    It CAN be fixed; the question is whether the will is there. We need to inform and teach more people.

    Right. We fix ourselves first; we are already here, and we do not attempt to control others. We make our own way, moment by moment.

  • Can't?

    I'm on Lemmy, am I not?

    It CAN be fixed; the question is whether the will is there. We need to inform and teach more people.

    While an improvement, Lemmy is far from perfect. Reddit's upvote/downvote system alone encourages groupthink and self-censorship. It doesn't really help much that we can go circlejerk in some other instance if we get hated on or banned by mods. We are still encouraged to keep in line to keep the bubble intact.

  • All those platforms work the same way. In the end it's all about the same social dynamics, about control. "We are the alternative to all the shitty peer groups out there! Join us!" is one of the oldest tricks in the playbook. There is no alternative. Because it's all based on human nature.

    The AlGoRyThMs are what is inducing the social damage.

    Even games of chance (like poker machines) would be less destructive if they were fairer and less engaging.

  • “Fixing” social media is like “fixing” capitalism. Any manmade system can be changed, destroyed, or rebuilt. It’s not an impossible task but will require a fundamental shift in the way we see/talk to/value each other as people.

    The one thing I know for sure is that social media won’t ever improve if we all accept the narrative that it can’t be improved.

    We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art, and very often in our art, the art of words.

    -Ursula K Le Guin

    Yeah, this author is the pop-sci / sci-fi media writer at Ars Technica, not one of the actual science-coverage writers who stick to their area of expertise, and you can tell by the overly broad, clickbait headline that is not actually supported by the research at hand.

    The actual research uses limited LLM agents and explores only an incredibly limited number of interventions. It does not remotely come close to answering the question of whether social media can be fixed, which is itself a different question from harm reduction.

  • Seriously, read her books. I looooove „The Dispossessed“

    The Left Hand of Darkness is excellent too. Sci-fi from the 1960s about a planet whose people have no fixed sex or gender, and a man from Earth who struggles to understand and function in this society. That description makes it sound very worthy, but it's actually gripping and moving.

  • This post did not contain any content.

    Social media will be fixed by - wait for it...

    Now.

    Done. Fixed it, you may thank me later.

    Yours,

    B-TR3E - the man who fixed social media

  • Can't?

    I'm on Lemmy, am I not?

    It CAN be fixed; the question is whether the will is there. We need to inform and teach more people.

    Most people don't know about this experience, probably aren't looking for this experience, or would not know how to interact with it. I know it sounds crazy, but Reddit still confuses many people. Lemmy's a different ball of similar wax.

    They want the saccharine-coated dopamine-filled mass-produced low-effort meme cesspool that IG, TikTok, etc. all provide. They don't know they want more until they decide they're done with it and start to look. Until then, it's like showing hieroglyphs to an iguana.

  • I’m on Lemmy, am I not?

    It CAN be fixed; the question is whether the will is there.

    While an improvement, Lemmy is far from perfect. Reddit's upvote/downvote system alone encourages groupthink and self-censorship. It doesn't really help much that we can go circlejerk in some other instance if we get hated on or banned by mods. We are still encouraged to keep in line to keep the bubble intact.

    After 20 years of living with it, I've decided I don't like the downvote. The upvote is fine.

    Early on, Reddit's founders tried to encourage people to treat the downvote as moderation. It was meant to mean that a thing doesn't belong on Reddit and people shouldn't see it. Of course, that quickly became mere dislike or disagreement.

    I'd prefer an approach that requires some input about what's wrong with a post in order to reduce its prominence; a restricted list of options as in Slashdot's moderation would be sufficient, I think. I'm not sure whether this should necessarily require also making a report to a more powerful admin/moderator, but I lean toward making that optional in most communities.

  • Can't?

    I'm on Lemmy, am I not?

    It CAN be fixed; the question is whether the will is there. We need to inform and teach more people.

    The problem is algorithms. This came up during the whole Bluesky promo all over Lemmy, while everyone was shitting on Mastodon. The only thing that's broken is algorithms; throw them out and social media is immediately fixed.

    The primary argument in Mastodon vs. Bluesky was that Mastodon requires you to curate your own content, like joining a sub on Reddit to see it on your front page, back before algorithms wrecked that site. And people LOVED old Reddit, so I fail to see how that model is bad or doesn't work, but hey, all of Lemmy said so, so who am I to argue.

    Bluesky, being a relaunch of Twitter, curates content for you whether you actually want to see it or not. For most people, reactionary content is the only content they happily interact with anyway, so algorithms make a lot of sense for them: they feel they are engaging more with the site, even though it's pointless, empty engagement, instead of interacting with real users and real content on sites where you have to actively curate your feed instead of being fed the lowest-hanging fruit.

    / rant off

  • There would be a big curtailing of Apple, Microsoft, Google and Adobe if Facebook, TikTok and Twitter (and YouTube) had their algorithmic feeds outlawed.

    It would probably cause the AI bubble to burst too so our OSs, Applications and Search Engines (and Government) would become usable again.

    Who will pay our representatives to push this through?

  • “Fixing” social media is like “fixing” capitalism. Any manmade system can be changed, destroyed, or rebuilt. It’s not an impossible task but will require a fundamental shift in the way we see/talk to/value each other as people.

    The one thing I know for sure is that social media won’t ever improve if we all accept the narrative that it can’t be improved.

    We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art, and very often in our art, the art of words.

    -Ursula K Le Guin

    Particularly apt given that many of the biggest problems with social media are problems of capitalism. Social media platforms have found it most profitable to monetize conflict and division, the low self-esteem of teenagers, lies and misinformation, envy over the curated simulacrum of a life presented by a parasocial figure.

    These things drive engagement. Engagement drives clicks. Clicks drive ad revenue. Revenue pleases shareholders. And all that feeds back into a system that trades negativity in the real world for positivity on a balance sheet.

  • This post did not contain any content.

    Of course corporate social media can't be fixed ... it already works exactly the way they want it to...

  • “Fixing” social media is like “fixing” capitalism. Any manmade system can be changed, destroyed, or rebuilt. It’s not an impossible task but will require a fundamental shift in the way we see/talk to/value each other as people.

    The one thing I know for sure is that social media won’t ever improve if we all accept the narrative that it can’t be improved.

    We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art, and very often in our art, the art of words.

    -Ursula K Le Guin

    This is spot on. The issue with any system is that people don’t pay attention to the incentives.

    When a surgeon earns more by doing more surgeries, with no downside, most surgeons in that system will obviously push for surgeries that aren't necessary. How to balance incentives should be the main focus in any system that we're part of.

    You can pretty much understand someone else’s behavior by looking at what they’re gaining or what problem they’re avoiding by doing what they’re doing.

  • Seriously, read her books. I looooove „The Dispossessed“

    Le Guin is a treasure.

  • It's almost like the problem isn't social media, but the algorithms that put content in front of your eyeballs to keep your engagement in order to monetize you. Like a casino.

    Exactly, the one big issue with the modern world is the algorithms pushing for engagement as the only important metric.

  • All those platforms work the same way. In the end it's all about the same social dynamics, about control. "We are the alternative to all the shitty peer groups out there! Join us!" is one of the oldest tricks in the playbook. There is no alternative. Because it's all based on human nature.

    Reddit certainly had its problems but was actually pretty good for the ~15 years before it started getting enshittified more and more to try to extract value.

  • Facebook was pretty boring before they tried to make money. Still ick, but mostly just people posting pictures of activities with family or friends.

  • After 20 years of living with it, I've decided I don't like the downvote. The upvote is fine.

    Early on, Reddit's founders tried to encourage people to treat the downvote as moderation. It was meant to mean that a thing doesn't belong on Reddit and people shouldn't see it. Of course, that quickly became mere dislike or disagreement.

    I'd prefer an approach that requires some input about what's wrong with a post in order to reduce its prominence; a restricted list of options as in Slashdot's moderation would be sufficient, I think. I'm not sure whether this should necessarily require also making a report to a more powerful admin/moderator, but I lean toward making that optional in most communities.

    I’d prefer an approach that requires some input about what’s wrong with a post in order to reduce its prominence

    A lot of the time, I downvote troll content that should not be engaged with. Like, not technically against the rules, but definitely someone who is not posting in good faith. If I responded to the post, I'd be contributing to the problem.

  • I’d prefer an approach that requires some input about what’s wrong with a post in order to reduce its prominence

    A lot of the time, I downvote troll content that should not be engaged with. Like, not technically against the rules, but definitely someone who is not posting in good faith. If I responded to the post, I'd be contributing to the problem.

    I don't mean replying, but selecting from a menu of possible reasons to downrank a post. Slashdot's moderation system that I mentioned earlier has (or had - haven't looked there in a while) "troll" as one of the categories.
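    The idea in the two comments above is a downvote that must carry a reason, with escalation to a moderator left optional. A minimal sketch of that mechanic, assuming hypothetical category names loosely inspired by Slashdot-style moderation (nothing here comes from any real platform's API):

    ```python
    from enum import Enum
    from collections import Counter

    def notify_moderators(reason):
        """Stub: a real system would file a report here; this sketch does nothing."""
        pass

    class DownrankReason(Enum):
        # Illustrative categories only.
        TROLL = "troll"
        OFFTOPIC = "offtopic"
        FLAMEBAIT = "flamebait"
        MISINFORMATION = "misinformation"

    class Post:
        def __init__(self):
            self.upvotes = 0
            self.downranks = Counter()  # DownrankReason -> count

        def upvote(self):
            self.upvotes += 1

        def downrank(self, reason, report_to_mods=False):
            # A bare "dislike" is impossible: stating a reason is mandatory,
            # while escalating to a moderator stays optional.
            self.downranks[reason] += 1
            if report_to_mods:
                notify_moderators(reason)

        def score(self):
            return self.upvotes - sum(self.downranks.values())
    ```

    A UI built on this could also surface the reason tallies (e.g. "downranked 12x as troll") instead of a bare negative number, which carries more signal than an anonymous downvote count.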

  • This post did not contain any content.

    I think just going back to internet forums circa early 2000s is probably a better way to engage honestly. They're still around, just not as "smartphone friendly" and doomscroll-enabled, due to the format.

    I'm talking stuff like SomethingAwful, GaiaOnline, Fark, Newgrounds forum, GlockTalk, Slashdot, vBulletin etc.

    These types of forums allowed you to discuss timely issues and news if you wanted. You could go a thousand miles deep on some bizarre subculture or stick to general discussion. They also had protomeme culture before that was a thing - aka "embedded image macros".
