
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.

Technology
  • While it's a fair idea, there are still two issues with it: hallucinations and the cost of running the models.

    Unfortunately, it takes significant compute resources to produce even simple responses, and those responses can be totally made up while still looking completely real. It's gotten much better, sure, but blindly trusting these things (which many people do) can have serious consequences.

    Hallucinations and the cost of running the models.

    So, inaccurate information in books is nothing new. Agreed that the rate of hallucinations needs to decline, a lot, but there has always been a need for a veracity filter - just because it comes from "a book" or "the TV" has never been an indication of absolute truth, even though many people stop there and assume it is. In other words: blind trust is not a new problem.

    The cost of running the models is an interesting one - how does it compare with printing on paper, shipping globally, and storing in environmentally controlled libraries that require individuals to physically travel to and from them to access the information? What's the price of the resulting increased ignorance of the general population due to the high cost of information access?

    What good is a bunch of knowledge stuck behind a search engine when people don't know how to access it, or access it efficiently?

    Granted, search engines already take us 95% (IMO) of the way from paper libraries to what AI is almost succeeding in being today, but ease of access to information has tremendous value - and developing ways to easily access the information available on the internet is a very valuable endeavor.

    Personally, I feel more emphasis should be put on establishing the veracity of the information before we go making all the garbage easier to find.

    I also worry that "easy access" to automated interpretation services is going to lead to a bunch of information encoded in languages that most people don't know, because they're dependent on machines to do the translation for them. As an example: a shiny new computer language comes out, but the software developer is too lazy to learn it, so the developer uses AI to write code in the new language instead...

  • Sure. We weren't discussing if AI creates value or not. If you ask a different question then you get a different answer.

    Well - if you want to devolve into argument, you can argue all day long about "what is reasoning?"

  • When are people going to realize that, in its current state, an LLM is not intelligent. It doesn’t reason. It does not have intuition. It’s a word predictor.

    I agree with you. In its current state, an LLM is not sentient, and thus not an "intelligence".

  • "lacks internal computation" is not part of the definition of markov chains. Only that the output depends only on the current state (the whole context, not just the last token) and no previous history, just like llms do. They do not consider tokens that slid out of the current context, because they are not part of the state anymore.

    And it wouldn't be a cache unless you decide to start invalidating entries, which you could simply not do. It would be a table with token_alphabet_size^context_length entries, each entry being a vector of size token_alphabet_size. Because that would be far too big to realistically store, we don't precompute the whole thing; we just approximate what each table entry should be using a neural network.

    The pi example was just to show that how you implement a function (any function) does not matter, as long as the inputs and outputs are the same. Or to put it another way: if you give me an index, you wouldn't know whether I got the result by doing some computation or by using a precomputed table.

    Likewise, if you give me a sequence of tokens and I give you a probability distribution, you can't tell whether I used an NN or just consulted a precomputed table. The point is that given the same input, the table will always give the same result, and crucially, so will an LLM. A table is just one type of implementation of an arbitrary function.

    There is also no requirement for the state transition function (a table is a special type of function) to be understandable by humans. Just because it's big enough to be beyond human comprehension doesn't change its nature.

    You're correct that the formal definition of a Markov process does not exclude internal computation, and that it only requires the next state to depend solely on the current state. But what defines a classical Markov chain in practice is not just the formal dependency structure but how the transition function is structured and used. A traditional Markov chain has a discrete and enumerable state space with explicit, often simple transition probabilities between those states. LLMs do not operate this way.

    The claim that an LLM is "just" a large compressed Markov chain assumes that its function is equivalent to a giant mapping of input sequences to output distributions. But this interpretation fails to account for the fundamental difference in how those distributions are generated. An LLM is not indexing a symbolic structure. It is computing results using recursive transformations across learned embeddings, where those embeddings reflect complex relationships between tokens, concepts, and tasks. That is not reducible to discrete symbolic transitions without losing the model’s generalization capabilities. You could record outputs for every sequence, but the moment you present a sequence that wasn't explicitly in that set, the Markov table breaks. The LLM does not.

    Yes, you can say a table is just one implementation of a function, and from a purely mathematical perspective, any function can be implemented as a table given enough space. But the LLM’s function is general-purpose. It extrapolates. A precomputed table cannot do this unless those extrapolations are already baked in, in which case you are no longer talking about a classical Markov system. You are describing a model that encodes relationships far beyond discrete transitions.

    The pi analogy applies to deterministic functions with fixed outputs, not to learned probabilistic functions that approximate conditional distributions over language. If you give an LLM a new input, it will return a meaningful distribution even if it has never seen anything like it. That behavior depends on internal structure, not retrieval. Just because a function is deterministic at temperature 0 does not mean it is a transition table. The fact that the same input yields the same output is true for any deterministic function. That does not collapse the distinction between generalization and enumeration.

    So while yes, you can implement any deterministic function as a lookup table, the nature of LLMs lies in how they model relationships and extrapolate from partial information. That ability is not captured by any classical Markov model, no matter how large.
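    To make the table-versus-function point in this exchange concrete, here is a minimal toy sketch. Everything in it (the tiny vocabulary, toy_model, table_model) is invented for illustration; it is not how any real LLM or its serving code works, only a way to show that a precomputed table and a computed function are indistinguishable on the inputs the table covers, while only the computed function keeps answering off that set.

    ```python
    # Toy sketch only: a "next-token distribution" is just a function from a context
    # to a probability vector. A lookup table and a computed model are two possible
    # implementations of that function. Names here (VOCAB, toy_model, table_model)
    # are made up for illustration.
    import itertools
    import random

    VOCAB = ["the", "cat", "sat", "mat"]   # toy alphabet; real models use ~10^5 tokens
    CONTEXT_LEN = 2                        # real models use thousands of tokens
    # A full table needs len(VOCAB) ** CONTEXT_LEN rows (16 here; astronomically
    # many for realistic vocabularies and context lengths, hence nobody stores it).

    def toy_model(context):
        """Stand-in for a neural net: computes a distribution for *any* context.
        Deterministic: the same input always yields the same output."""
        rng = random.Random(hash(tuple(context)))
        weights = [rng.random() for _ in VOCAB]
        total = sum(weights)
        return {tok: w / total for tok, w in zip(VOCAB, weights)}

    # "Precompute the whole thing": enumerate every possible context once.
    table = {ctx: toy_model(list(ctx))
             for ctx in itertools.product(VOCAB, repeat=CONTEXT_LEN)}

    def table_model(context):
        return table[tuple(context)]       # only defined on the enumerated contexts

    ctx = ["the", "cat"]
    assert table_model(ctx) == toy_model(ctx)   # indistinguishable on covered inputs

    novel = ["cat", "dog"]                 # "dog" was never enumerated in the table
    print(toy_model(novel))                # the computed function still answers
    # table_model(novel) would raise KeyError: the table only covers what was baked in.
    ```

    Which side of the thread this supports depends on what you count as "the function": on the enumerated set the two are the same mapping, but only the computed version generalizes beyond it.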

  • When are people going to realize that, in its current state, an LLM is not intelligent. It doesn’t reason. It does not have intuition. It’s a word predictor.

    And that's pretty damn useful, but it's obnoxious to have expectations set so wildly incorrectly.
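    As a rough illustration of what "word predictor" means mechanically, here is a toy sketch. The tiny corpus and the bigram counting are mine, purely for illustration; a real LLM shares only the predict-the-next-token loop, with a transformer instead of a count table.

    ```python
    # Toy "word predictor": count which word follows which in a tiny corpus,
    # then repeatedly predict the most likely next word.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequent continuation seen in the corpus."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    # Generate a few words greedily, one prediction at a time.
    word, generated = "the", ["the"]
    for _ in range(5):
        word = predict_next(word)
        if word is None:
            break
        generated.append(word)

    print(" ".join(generated))  # prints something like: "the cat sat on the cat"
    ```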

  • Those particular models. It does not prove the architecture doesn't allow it at all. It's still possible that this is solvable with a different training technique, and that none of those models are using the right one. That's what they need to prove wrong.

    This proves the issue is widespread, not fundamental.

    Is "model" not defined as architecture+weights? Those models certainly don't share the same architecture. I might just be confused about your point though

  • When are people going to realize that, in its current state, an LLM is not intelligent. It doesn’t reason. It does not have intuition. It’s a word predictor.

    People think they want AI, but they don’t even know what AI is on a conceptual level.

  • Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.

    We actually have sentience, though, and are capable of creating new things and having realizations. AI isn’t real; LLMs and diffusion models are simply reiterating algorithmic patterns, and no LLM or diffusion model can create anything original or expressive.

    Also, we aren’t “evolved primates.” We are just primates; the thing is, primates are the most socially and cognitively evolved species on the planet, so that’s not a denigrating sentiment unless you’re a pompous, condescending little shit.

  • It’s built by animals, and it reflects them. That’s impressive on its own. Doesn’t need to be exaggerated.

    Impressive =/= substantial or beneficial.

  • What they mean is that before Turing, "computer" was literally a person's job description. You hand a professional a stack of calculations with some typos, and part of the job is correcting those. When a newfangled machine comes along with the same name as the job, among the first things people are gonna ask about is where it falls short.

    Like, if I made a machine called "assistant", it'd be natural for people to point out and ask about all the things a person can do that a machine just never could.

    And what I mean is that prior to the mid-1900s the etymology didn't exist to cause that confusion of terms. Neither Babbage's machines nor earlier adding engines were called computers or calculators. They were 'machines' or 'engines'.

    Babbage's machines were novel in that they could do multiple types of operations, but 'mechanical calculators' and counting machines were ~200 years old. Other mathematical tools like the abacus are obviously far older. They were not novel enough to cause confusion in anyone with even a passing interest.

    But there will always be people who just assume 'magic', and/or "it works like I want it to".

  • LOOK MAA I AM ON FRONT PAGE

    Peak pseudo-science. The burden of evidence is on the grifters who claim "reason". But neither side has any objective definition of what "reason" means. It's pseudo-science against pseudo-science in a fierce battle.

    Some AI researchers found it obvious as well, in the sense that they had suspected it and had some indications. But it's good to see more data affirming that assessment.

    Particularly to counter some of the more baseless marketing assertions about the nature of the technology.

  • Wow it's almost like the computer scientists were saying this from the start but were shouted over by marketing teams.

    It's hard to be heard when you're buried under all that sweet VC/grant money.

  • Peak pseudo-science. The burden of evidence is on the grifters who claim "reason". But neither side has any objective definition of what "reason" means. It's pseudo-science against pseudo-science in a fierce battle.

    Even defining reason is hard and becomes a matter of philosophy more than science. For example, apply the same claims to people. Now I've given you something to think about. Or should I say the Markov chain in your head has a new topic to generate thought states for.

  • Is "model" not defined as architecture+weights? Those models certainly don't share the same architecture. I might just be confused about your point though

    It is, but this did not prove all architectures cannot reason, nor did it prove that all sets of weights cannot reason.

    Essentially, they did not prove the issue is fundamental. And they have pretty similar architectures - they're all transformers trained in a similar way. I would not say they have different architectures.

  • NOOOOOOOOO

    SHIIIIIIIIIITT

    SHEEERRRLOOOOOOCK

    Without being explicit with well-researched material, the marketing presentation gets to stand largely unopposed.

    So this is good even if most experts in the field consider it an obvious result.

  • People think they want AI, but they don’t even know what AI is on a conceptual level.

    Yeah I often think about this Rick N Morty cartoon. Grifters are like, "We made an AI ankle!!!" And I'm like, "That's not actually something that people with busted ankles want. They just want to walk. No need for a sentient ankle." It's a real gross distortion of science how everything needs to be "AI" nowadays.

  • This is why I said I wasn't sure how AI works behind the scenes. But I do know that logic isn't difficult. Just so we don't fuck around between us: I have a CS background. I'm only saying this because I think you may have one as well and we can save some time.

    It makes sense to me that logic is something AI can parse easily. Logic, in my mind, is very easy if it can tokenize some text. Wouldn't the difficulty be whether the AI has the right context?

    Logic requires abstracting the argumentative form from the literal linguistic content and then generalising it, just like how math is done properly when you work with numbers and not just with sentences such as "two apples and three apples is five apples" (such abstraction in practice allows far more powerful and widely applicable operations than dealing with individual linguistic expressions; if you've ever solved very complex truth trees you'll know how they allow streamlining and solutions that would be practically impossible if you had only the ordinary linguistic expression of the same problem). Logic doesn't operate with textual tokens but with logical propositions and operators. "Difficulty" is not a meaningful term here; a tool is either technically capable of doing something (more or less successfully) or it isn't.

    That LLMs aren't capable of this sort of precision and abstraction is shown by the OP link as well as the simple fact that chatbots used to be extremely bad at math (which is now probably patched up by adding a proper math module, rather than relying on the base LLM - my assumption, at least).

    As for trying more examples of looking for logical fallacies, I tried out three different types of text. Since you say context is important, it's best to take only the beginning of a text. One text I tried is the opening of the Wikipedia article on "history", which ChatGPT described like this: "The passage you've provided is an informative and largely neutral overview of the academic discipline of history. It doesn't make any strong arguments or persuasive claims, which are typically where logical fallacies appear." It then went on to nitpick about some details "for the sake of thorough analysis", but basically had no real complaints. Then I tried out the opening paragraph of Moby-Dick. That's a fictional text so it would be reasonable to reject analysing its logical solidity, as GPT already did with the WP article, but it still tried to wring out some "criticism" that occasionally shows how it misunderstands the text (just as it misunderstood a part of my comment above). Finally, I asked it to find the fallacies in the first four paragraphs of Descartes' Meditations on First Philosophy, which resulted in a criticism that was based on less logically rigid principles than the original text (accusing Descartes of the "slippery slope fallacy").

    I'll post the full replies below.
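    To make the "abstract the form away from the content" idea concrete, here is a small toy sketch; the valid helper and the example argument forms are mine for illustration only, not anything ChatGPT exposes or uses internally.

    ```python
    # Toy sketch only: strip the linguistic content, keep the argumentative form,
    # then check validity mechanically over all truth assignments.
    from itertools import product

    def valid(premises, conclusion, n_vars):
        """An argument form is valid iff no assignment makes every premise true
        while the conclusion is false."""
        for values in product([True, False], repeat=n_vars):
            if all(p(*values) for p in premises) and not conclusion(*values):
                return False
        return True

    # "If it rains, the street is wet. It rains. So the street is wet."
    # abstracts to: P -> Q, P, therefore Q (modus ponens).
    print(valid(
        premises=[lambda p, q: (not p) or q,   # P -> Q
                  lambda p, q: p],             # P
        conclusion=lambda p, q: q,             # Q
        n_vars=2,
    ))  # True: the form alone guarantees the conclusion

    # "If it rains, the street is wet. The street is wet. So it rains."
    # abstracts to: P -> Q, Q, therefore P (affirming the consequent, a fallacy).
    print(valid(
        premises=[lambda p, q: (not p) or q,   # P -> Q
                  lambda p, q: q],             # Q
        conclusion=lambda p, q: p,             # P
        n_vars=2,
    ))  # False: a counterexample exists (wet street, no rain)
    ```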

  • Logic requires abstracting the argumentative form from the literal linguistic content and then generalising it, just like how math is done properly when you work with numbers and not just with sentences such as "two apples and three apples is five apples" (such abstraction in practice allows far more powerful and widely applicable operations than dealing with individual linguistic expressions; if you've ever solved very complex truth trees you'll know how they allow streamlining and solutions that would be practically impossible if you had only the ordinary linguistic expression of the same problem). Logic doesn't operate with textual tokens but with logical propositions and operators. "Difficulty" is not a meaningful term here; a tool is either technically capable of doing something (more or less successfully) or it isn't.

    That LLMs aren't capable of this sort of precision and abstraction is shown by the OP link as well as the simple fact that chatbots used to be extremely bad at math (which is now probably patched up by adding a proper math module, rather than relying on the base LLM - my assumption, at least).

    As for trying more examples of looking for logical fallacies, I tried out three different types of text. Since you say context is important, it's best to take only the beginning of a text. One text I tried is the opening of the Wikipedia article on "history", which ChatGPT described like this: "The passage you've provided is an informative and largely neutral overview of the academic discipline of history. It doesn't make any strong arguments or persuasive claims, which are typically where logical fallacies appear." It then went on to nitpick about some details "for the sake of thorough analysis", but basically had no real complaints. Then I tried out the opening paragraph of Moby-Dick. That's a fictional text so it would be reasonable to reject analysing its logical solidity, as GPT already did with the WP article, but it still tried to wring out some "criticism" that occasionally shows how it misunderstands the text (just as it misunderstood a part of my comment above). Finally, I asked it to find the fallacies in the first four paragraphs of Descartes' Meditations on First Philosophy, which resulted in a criticism that was based on less logically rigid principles than the original text (accusing Descartes of the "slippery slope fallacy").

    I'll post the full replies below.

    This famous passage from the opening of Moby-Dick by Herman Melville is rich in literary style but not necessarily constructed as an argument, which makes identifying formal logical fallacies a nuanced task. That said, we can identify a few rhetorical or informal fallacies—not because the text is poorly reasoned, but because it makes some sweeping or questionable assumptions in a stylistically persuasive way. Here's a breakdown:


    1. Hasty Generalization

    "Almost all men in their degree, some time or other, cherish very nearly the same feelings towards the ocean with me."

    • Fallacy: The narrator generalizes his deeply personal and specific emotional reaction to the sea as something nearly all men experience.
    • Why it’s fallacious: No evidence is presented to support this broad claim. It is based purely on the narrator’s subjective experience and intuition.

    2. False Cause (Post Hoc)

    "It is a way I have of driving off the spleen and regulating the circulation."

    • Fallacy: Implies a causal relationship between going to sea and improved physical/emotional health.
    • Why it’s fallacious: While sailing may coincidentally improve his mood or circulation, the text offers no medical or psychological basis to prove a direct causal link.

    3. Appeal to Emotion

    "...especially whenever my hypos get such an upper hand of me, that it requires a strong moral principle to prevent me from deliberately stepping into the street, and methodically knocking people’s hats off..."

    • Fallacy: Uses vivid emotional language to justify behavior or state of mind rather than logical reasoning.
    • Why it’s fallacious: It appeals to the reader’s empathy or amusement to legitimize the narrator’s melancholy and coping strategy, rather than offering a rational justification.

    4. False Analogy

    "With a philosophical flourish Cato throws himself upon his sword; I quietly take to the ship."

    • Fallacy: Compares going to sea with committing suicide as if they are equivalent responses to despair.
    • Why it’s fallacious: The analogy is emotionally compelling, but equating self-harm with a change in setting (boarding a ship) overlooks major differences in motive and consequence.

    Summary:

    Fallacy | Example from Text | Explanation
    Hasty Generalization | “Almost all men... cherish very nearly the same feelings...” | Overgeneralizes personal feeling
    False Cause | “...driving off the spleen and regulating the circulation.” | Implies unproven health benefits
    Appeal to Emotion | “...prevent me from... knocking people’s hats off.” | Justifies behavior with emotion
    False Analogy | “Cato throws himself... I quietly take to the ship.” | Equates suicide and sailing

    Would you like me to rewrite the passage without the fallacies, or is this for literary analysis?

  • Thank you! This passage is again drawn from René Descartes' Meditations on First Philosophy, and it outlines his method of radical doubt as a means to establish certainty in knowledge. While the reasoning is carefully crafted and influential in the history of philosophy, it still contains a few logical fallacies or reasoning weaknesses that can be critically assessed.


    🔍 Identified Logical Fallacies and Weak Reasoning:


    1. Hasty Generalization

    "Whatever I have accepted until now as most true has come to me through my senses. But occasionally I have found that they have deceived me, and it is unwise to trust completely those who have deceived us even once."

    • Why it’s a fallacy: The fact that the senses occasionally deceive (e.g. in illusions, dreams, or misperceptions) does not justify rejecting them entirely.
    • Type: Hasty generalization – concluding that because something is sometimes unreliable, it must always be unreliable.

    2. False Analogy

    "...it is unwise to trust completely those who have deceived us even once."

    • Why it’s a fallacy: Comparing senses (which are not agents but biological faculties) to people who deceive is not a valid analogy. People deceive with intent; senses make errors unintentionally.
    • Type: False analogy – treating unlike things as if they were meaningfully similar.

    3. Slippery Slope (Metaphorical)

    "Once the foundations of a building have been undermined, the rest collapses of its own accord..."

    • Why it’s a fallacy: This implies that if some foundational beliefs are doubted, then all other beliefs must collapse, without demonstrating that all knowledge is entirely dependent on those foundations.
    • Type: Slippery slope – assuming that undermining one part inevitably leads to a total collapse.

    4. Begging the Question (Circular Reasoning)

    "My reason tells me... I should withhold assent from [beliefs] that are not completely certain and indubitable."

    • Why it’s questionable: Descartes is relying on reason to justify doubting everything else, including the faculties (like reason) he has not yet proven to be trustworthy. This risks begging the question, assuming the reliability of reason while attempting to prove everything else, including reason itself.
    • Type: Begging the question – assuming what you're trying to prove.

    5. Inconsistency (Recognized, Not Fully Resolved)

    "Yet although the senses sometimes deceive us... that doesn’t apply to my belief that I am here, sitting by the fire..."

    • Why it’s a fallacy: Here, Descartes briefly exempts ordinary sensory experiences (like feeling warmth, seeing paper) from radical doubt—despite having just declared that any doubt justifies withholding belief. He later corrects for this in the next meditation (introducing the dream argument), but within this passage, it’s a moment of inconsistency.
    • Type: Special pleading or internal inconsistency.

    ✅ Summary Table of Fallacies

    Fallacy Type | Description
    Hasty Generalization | Rejects all sensory data due to occasional errors.
    False Analogy | Compares unreliable senses to untrustworthy people.
    Slippery Slope | Claims that doubting foundations causes collapse of all beliefs.
    Begging the Question | Uses reason to justify doubting everything, including reason itself.
    Inconsistency / Special Pleading | Makes an exception for immediate sensory beliefs without justification.

    Conclusion:

    Descartes' passage is methodologically rich, but from a logic and reasoning perspective, it contains:

    • Several classic fallacies used to motivate extreme skepticism.
    • Analogical and metaphorical reasoning that may be rhetorically persuasive but logically flawed.
    • Self-referential assumptions that are philosophically problematic (and which he tries to resolve in later meditations).

    Would you like a version of this reasoning revised to be more logically sound, or would you like help analyzing the next part of the Meditations?
