
We Should Immediately Nationalize SpaceX and Starlink

Technology
479 votes
193 posts
0 views
  • Apple acquires RAC7, its first-ever video game studio

    Technology
    62 votes
    12 posts
    0 views
    noobface@lemmy.world
    Capital G-amers on consoles are a smaller market than Mobile thanks to Gacha and Casuals. https://cdn.statcdn.com/Infographic/images/normal/30560.jpeg
  • 855 votes
    330 posts
    13 views
    Yes, LLM inference consists of deterministic matrix multiplications applied to the current context. But that simplicity of operations does not make it equivalent to a Markov chain. The definition of a Markov process requires that the next output depend only on the current state. You're assuming that the LLM's "state" is its current context window. But in an LLM, this "state" is not discrete. It is a structured, deeply encoded set of vectors shaped by non-linear transformations across layers. The state is not just the visible tokens; it is the full set of learned representations computed from them.

    A Markov chain transitions between discrete, enumerable states with fixed transition probabilities. LLMs instead apply a learned function over a high-dimensional, continuous input space, producing outputs by computing context-sensitive interactions. These interactions allow generalization and compositionality, not just selection among known paths.

    The fact that inference uses fixed weights does not mean it reduces to a transition table. The output is computed by composing multiple learned projections, attention mechanisms, and feedforward layers that operate in ways no Markov chain ever has. You can't describe an attention head with a transition matrix. You can't reduce positional encoding or attention-weighted context mixing to state transitions. These are structured transformations, not symbolic transitions.

    You can describe any deterministic process as a function, but not all deterministic functions are Markovian. What makes a process Markov is not just forgetting prior history; it is having a fixed, memoryless probabilistic structure where transitions depend only on a defined discrete state. LLMs don't transition between states in this sense. They recompute probability distributions from scratch at each step, based on context-rich, continuous-valued encodings. That is not a Markov process. It's a stateless function approximator conditioned on a window, built to generalize across unseen input patterns. (For a toy code sketch of this contrast, see the end of this listing.)
  • What was Radiant AI, anyway?

    Technology
    20 votes
    6 posts
    1 view
    In fact, Daggerfall was almost nothing but quests and other content like that.
  • uBlockOrigin is porting uBOL to iOS and macOS

    Technology
    325 votes
    30 posts
    4 views
    Will never happen, unfortunately.
  • Programming languages

    Technology
    0 votes
    1 post
    0 views
    No one has replied
  • Duolingo CEO tries to walk back AI-first comments, fails

    Technology
    758 votes
    134 posts
    4 views
    kingthrillgore@lemmy.ml
    I think on iOS they added a thing where it would change based on the days you didn't use Duolingo. Honestly, at this point it says more about the sorry state of their company than anything else.
  • Elon Musk's X temporarily down for tens of thousands of users

    Technology
    0 votes
    1 post
    0 views
    No one has replied
  • 88 votes
    26 posts
    7 views
    I really can't stand this guy. What a slag.
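
As a footnote to the Markov-chain comment above, here is a minimal NumPy sketch of the contrast that commenter draws. Everything in it is illustrative (toy matrices, hypothetical names like `markov_step` and `attention_step`), not any real model's internals: a Markov chain's next-state distribution is a fixed row lookup keyed by one discrete state, while an attention step recomputes the distribution from the continuous vectors of the whole context window.

```python
import numpy as np

# Markov chain: the next-state distribution is a fixed, precomputed row
# of a transition matrix, indexed by one discrete, enumerable state.
TRANSITION = np.array([
    [0.1, 0.9],   # P(next state | current state 0)
    [0.7, 0.3],   # P(next state | current state 1)
])

def markov_step(state: int) -> np.ndarray:
    """Look up the fixed transition row for a discrete state."""
    return TRANSITION[state]

def attention_step(context, Wq, Wk, Wv, Wout) -> np.ndarray:
    """Single-head attention over a (T, d) window of continuous vectors.

    The output distribution is computed from the whole window at each
    step; there is no enumerable state and no precomputed transition row.
    """
    q = context[-1] @ Wq                   # query from the last token
    k = context @ Wk                       # keys for every token in the window
    v = context @ Wv                       # values for every token
    scores = k @ q / np.sqrt(k.shape[-1])  # context-sensitive interactions
    w = np.exp(scores - scores.max())
    w /= w.sum()                           # attention weights over the window
    mixed = w @ v                          # attention-weighted context mixing
    logits = mixed @ Wout                  # project to a toy vocabulary
    p = np.exp(logits - logits.max())
    return p / p.sum()                     # next-token distribution

# Toy usage: random weights, a 3-token window of 8-dim embeddings.
rng = np.random.default_rng(0)
d, dk, vocab = 8, 4, 5
ctx = rng.normal(size=(3, d))
print(markov_step(0))                      # fixed row lookup
print(attention_step(ctx, rng.normal(size=(d, dk)),
                     rng.normal(size=(d, dk)),
                     rng.normal(size=(d, dk)),
                     rng.normal(size=(dk, vocab))))
```

The point of the contrast: `markov_step` could be tabulated once and for all, while `attention_step`'s output changes with every vector in the window, which is why the commenter argues the "state" here is not a discrete, enumerable one.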