
Is it feasible and scalable to combine self-replicating automata (after von Neumann) with federated learning and the social web?

  • Von Neumann’s idea of self-replicating automata describes machines that can reproduce themselves given a blueprint and a suitable environment. I’m exploring a concept that tries to apply this idea to AI in a modern context:

    • AI agents (or “fungus nodes”) that run on federated servers
    • They communicate via ActivityPub (used in Mastodon and the Fediverse)
    • Each node can train models locally, then merge or share models with others
    • Knowledge and behavior are stored in RDF graphs + code (acting like a blueprint)
    • Agents evolve via co-training and mutation; they can switch learning groups and also choose to defederate from parts of the network

    This creates something like a digital ecosystem of AI agents growing across the social web: nodes can train their models freely, which indirectly lets shared models move across the network, in contrast to the siloed models of current federated learning (a minimal merge sketch follows below).
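
    To make "merge or share models" concrete, here is a minimal sketch of what a merge step could look like, assuming plain uniform weight averaging over whatever peer updates a node has received; the function and variable names are made up for illustration and are not part of any existing implementation:

      import numpy as np

      def merge_weights(local, peer_updates):
          """Average this node's weights with updates received from peers.

          local:        {layer_name: ndarray} -- the node's current model
          peer_updates: list of {layer_name: ndarray} received over the network
          """
          merged = {}
          for name, weights in local.items():
              stack = [weights] + [u[name] for u in peer_updates if name in u]
              merged[name] = np.mean(stack, axis=0)   # uniform federated averaging
          return merged

      # toy example: one dense layer, two peer updates
      local = {"dense": np.ones((4, 4))}
      peers = [{"dense": np.full((4, 4), 3.0)}, {"dense": np.full((4, 4), 5.0)}]
      print(merge_weights(local, peers)["dense"][0, 0])   # -> 3.0 (mean of 1, 3, 5)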

    My question: Is this kind of architecture - blending self-replicating AI agents, federated learning, and social protocols like ActivityPub - feasible and scalable in practice? Or are there fundamental barriers (technical, theoretical, or social) that would limit it?

    I started to implement this using an architecture with four microservices per node (frontend, backend, a knowledge graph using Apache Jena Fuseki, and an ActivityPub communicator); however, it pushes my laptop to its limits even with 8 nodes. A rough sketch of what the communicator might send is shown below.
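
    For illustration only: the ActivityPub communicator could wrap a model update in a standard Create activity and POST it to a peer's inbox. ActivityPub has no vocabulary for model weights, so the Note-with-attachment shape below is purely an assumption, and the actor IDs, URLs, and function name are hypothetical:

      import json
      import requests

      def share_model_update(actor_id, peer_inbox, weights_url, round_id):
          """POST a model-update announcement to a peer node's ActivityPub inbox."""
          activity = {
              "@context": "https://www.w3.org/ns/activitystreams",
              "type": "Create",
              "actor": actor_id,
              "to": [peer_inbox],
              "object": {
                  "type": "Note",
                  "attributedTo": actor_id,
                  "content": f"model update for training round {round_id}",
                  "attachment": [{
                      "type": "Link",
                      "href": weights_url,   # where peers can fetch the serialized weights
                      "mediaType": "application/octet-stream",
                  }],
              },
          }
          # Real federated servers would also require HTTP Signatures; omitted here.
          return requests.post(peer_inbox,
                               data=json.dumps(activity),
                               headers={"Content-Type": "application/activity+json"})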

    The question could also be stated differently: how much compute would be necessary to trigger non-trivial behaviours that generate enough value to sustain the overall system?

  • Your bottleneck will probably be communication speed if you want any kind of performance in a reasonable time: small nodes will have to exchange many small updates very frequently, while big nodes will have to exchange larger payloads, even if those are more refined. A rough estimate is sketched below.
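
    A back-of-envelope calculation makes the point; the numbers below are illustrative only, assuming float32 weights and full-model exchange every gossip round:

      def bytes_per_round(n_params, peers_per_node, n_nodes):
          """Total traffic if every node pushes its full float32 model to each peer."""
          bytes_per_model = n_params * 4            # 4 bytes per float32 parameter
          return bytes_per_model * peers_per_node * n_nodes

      # e.g. a small 10M-parameter model, 8 nodes, each pushing to 3 peers:
      print(bytes_per_round(10_000_000, 3, 8) / 1e9, "GB per round")   # ~0.96 GB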

  • I love how casually you just dropped this 😂.

    I'm working on something similar, but it's not ready for its FOSS release yet.

    Feel free to message me if you wanna discuss ideas.

  • Train them to do what?

  • I think everyone has thought of what you described, but most lacked the mental discipline to think in specifics.

    That's all I wanted to say: very cool, good luck.

    (I personally would prefer thinking of a distributed computer: tasks distributed among the members of a group with some redundancy and a graph of execution dependencies, and the results merged, so that a node only retrieves the initial state once. But I lack knowledge of the fundamentals.)
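
    As a toy illustration of that parenthetical idea (not anything either poster has built), a dependency graph of tasks can be executed in topological order with naive redundancy; all names here are hypothetical:

      from graphlib import TopologicalSorter   # Python 3.9+

      def run_graph(tasks, deps, redundancy=2):
          """tasks: {name: zero-arg callable}; deps: {name: set of prerequisite names}."""
          results = {}
          for name in TopologicalSorter(deps).static_order():
              # Each attempt would run on a different group member; here we just
              # call the task `redundancy` times locally and keep the first result.
              attempts = [tasks[name]() for _ in range(redundancy)]
              results[name] = attempts[0]
          return results

      # example: task "c" depends on "a" and "b"
      tasks = {"a": lambda: 1, "b": lambda: 2, "c": lambda: 3}
      deps = {"a": set(), "b": set(), "c": {"a", "b"}}
      print(run_graph(tasks, deps))   # e.g. {'a': 1, 'b': 2, 'c': 3}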

  • Thanks 🙂

  • Train them to do what?

    Currently the nodes only recommend music (and are not really good at it, tbh). But in theory it could be applied to all kinds of machine-learning problems (then again, there is the issue of scaling and the quality of the training results).

  • Your bottleneck will probably be communication speed if you want any kind of performance in a reasonable time: small nodes will have to exchange many small updates very frequently, while big nodes will have to exchange larger payloads, even if those are more refined.

    Yeah, that's a good point. Also, given that nodes could be fairly far apart from one another, this could become a serious problem.

  • I love how casually you just dropped this 😂.

    I'm working on something similar, but it's not ready for its FOSS release yet.

    Feel free to message me if you wanna discuss ideas.

    Cool. Well, the feedback so far has been rather lukewarm. But that's fine; I'm now going more in a P2P direction. It would be cool to have a way for everybody to participate in the training of big AI models in case HuggingFace enshittifies.

    Almost nothing is ever really done on any filesystem when you press "delete". The only thing is that those physical parts of the disk with the "deleted" file are marked as "not in use". The data is there still unchanged, until you save something else and that spot on the disk is the first free spot available for saving that new file. So, if you accidentally delete files, make sure that nothing gets saved on that disk anymore, not even by the OS. So, either unmount the disk, or cut the power to your computer, or whatever. Then learn how to mount hard drives as read-only and how to mark the "not in use" spots on your disk as "this spot contains this file". This is why proper deletion of files always includes filling the disk with random data. As long as nothing has been written on top of where the file was (and in reality: still is), it's still there. Only access to it has been removed, but that access can be regained. Been there, done that.