
AI agents wrong ~70% of time: Carnegie Mellon study

Technology
272 votes, 107 posts, 79 views
  • 349 votes
    72 posts
    199 views
    Sure, the internet is more practical, and the odds of being caught are lower over the time required to execute a decent strike plan, even one as vague as: "we're going to Amerika and we're going to hit 50 high-profile targets on July 4th, one in every state" (dear NSA analyst, this is entirely hypothetical). So your agents spread out into the field and start assessing, from the ground, the highest-impact targets attainable with their resources, with extensive back and forth between the field and central command every day over 90 days of prep. But it's all carried out across 270 different active social media channels as innocuous-looking photo exchanges, with 540 pre-arranged algorithms hiding the messages in the noise of the image bits. Chances of security agencies picking this up from the communication itself? About 100x lower than them noticing 50 teams of activists deployed to 50 states at roughly the same time, even if those teams never communicate anything.

    HF (more often called shortwave) is well suited for the numbers game. A deep-cover agent lies in wait, potentially for years; the only "tell" is their odd habit of listening to the radio most nights. All they're waiting for is a binary message: if you hear the sequence 3 17 22, you are to make contact for further instructions. That message may come at any time, or may not come for a decade. These days you would make that contact via the internet, and sure, it would be more practical to hide the "make contact" signal in the internet too, but shortwave is a longstanding tech with known operating parameters.
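    The "hiding the messages in the noise of the image bits" trick the comment gestures at is classic least-significant-bit steganography. A minimal sketch in plain Python, purely illustrative (the function names, the carrier, and the bit layout are assumptions, not any real tool's method):

    ```python
    # Toy LSB steganography: hide each bit of a short message in the least
    # significant bit of successive carrier bytes (e.g. raw pixel data).
    # Everything here is a hypothetical illustration of the idea above.

    def embed(pixels: bytes, message: bytes) -> bytearray:
        """Write each bit of `message` into the LSB of one carrier byte."""
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        if len(bits) > len(pixels):
            raise ValueError("message too long for this carrier")
        out = bytearray(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & 0xFE) | bit  # clear the low bit, then set it
        return out

    def extract(pixels: bytes, length: int) -> bytes:
        """Read `length` bytes back out of the carrier's low bits."""
        bits = [pixels[i] & 1 for i in range(length * 8)]
        return bytes(
            sum(bit << i for i, bit in enumerate(bits[b * 8:(b + 1) * 8]))
            for b in range(length)
        )

    if __name__ == "__main__":
        carrier = bytes(range(256)) * 4   # stand-in for an image's pixel bytes
        secret = b"3 17 22"               # the kind of short trigger described above
        stego = embed(carrier, secret)
        assert extract(stego, len(secret)) == secret
    ```

    A more serious scheme would typically also spread the bits pseudo-randomly across the image and encrypt the payload first; touching only low-order bits is what keeps the change statistically close to sensor noise.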
  • Delivering BlogOnLemmy worldwide in record speeds

    Technology
    28 votes
    3 posts
    21 views
    kernelle@0d.gs
    Nice to hear! I'm glad you enjoyed it.
  • No JS, No CSS, No HTML: online "clubs" celebrate plainer websites

    Technology
    771 votes
    205 posts
    563 views
    Gemini is just a web replacement protocol, with the basic things we remember from the olden-days Web but everything non-essential removed, so that a client is doable in a couple of days. I have my own Gemini viewer, LOL.

    This, for me, seems a completely different application from torrents. I was dreaming of something similar to torrent trackers for aggregating storage, computation, indexing and search, with the responses of search, aggregation and other services being structured and standardized, with cryptographic identities, and with some kind of market services for selling and buying storage and computation in a unified and pooled, but transparent way (scripted by buyer/seller), similar to MMORPG markets. The representation (what is a siloed service in the modern web) would live in the client's native application, and those services would let you build any kind of huge client-server system on top of them, globally. But that's more of a global Facebook/Usenet/whatever, a killer of platforms: their infrastructure is internal while their representation is public on the Internet, and I want to make the infrastructure public on the Internet and the representation client-side, shared across many kinds of applications. Adding another layer to the OSI model, so to say, between the transport and application layers.

    For this application: I think you could have some kind of Kademlia-based p2p with voluntarily joined groups (including very large groups), where nodes store replicas of partitions of the group's common data based on their pseudo-random identifiers and/or some kind of ring built from those identifiers, to balance storage and resilience. If a group has a creator, you can have the replication factor propagated signed by them, and membership signed by them too. But if having a creator (even with cryptographically delegated decisions) and propagating changes through them is not OK, then maybe just using the hash of the whole data, or its BitTorrent-like info tree hash, as a namespace that peers freely join would do. Then it may be better to partition not by parts of the whole piece but by the info tree? I guess making it exactly BitTorrent-like is not a good idea; rather some kind of block tree, like for a filesystem, plus a separate piece of information to look up which file lives in which blocks, if we are doing a directory structure. Then, with peers freely joining, there's no need for any owners or replication factors; I guess pseudo-random distribution of hashes will do, with each node storing the partitions closest to its own hash. Now that I think about it, such a system would not be that different from BitTorrent and could even be interoperable with it.

    There's the issue of updates, yes, hence I started with groups having a hierarchy of creators who can make or accept those updates. Having that, plus the ability to gradually copy one group's data into another group, it should be possible to fork a certain state. But that line of thought makes reusing BitTorrent possible for only part of the system.

    The whole database is guaranteed to be bigger than a normal HDD (1 TB? I dunno). Absolutely guaranteed, no doubt at all. 1 TB (for example) would be someone's collection of favorite stuff, and not a particularly rich one.
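    The "each node stores the partitions closest to its hash" idea in that comment is essentially Kademlia's XOR metric applied to data placement. A minimal sketch, assuming SHA-1 identifiers and an arbitrary replication factor; the names (node_id, assign_partitions, REPLICATION_FACTOR) are illustrative, not part of any existing protocol:

    ```python
    # Hypothetical sketch: replicate each data block onto the nodes whose
    # pseudo-random identifiers are closest (by XOR distance) to the block's hash.
    import hashlib

    REPLICATION_FACTOR = 3  # assumed; the comment leaves this to the group's creator

    def node_id(name: str) -> int:
        """160-bit pseudo-random identifier for a node."""
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

    def block_id(block: bytes) -> int:
        """Identifier of one partition (block) of the group's common data."""
        return int.from_bytes(hashlib.sha1(block).digest(), "big")

    def assign_partitions(nodes: list[str], blocks: list[bytes]) -> dict[str, list[int]]:
        """Map each node to the block IDs it should store."""
        ids = {name: node_id(name) for name in nodes}
        assignment: dict[str, list[int]] = {name: [] for name in nodes}
        for block in blocks:
            bid = block_id(block)
            # XOR distance, as in Kademlia: a smaller value means "closer".
            closest = sorted(nodes, key=lambda n: ids[n] ^ bid)[:REPLICATION_FACTOR]
            for name in closest:
                assignment[name].append(bid)
        return assignment

    if __name__ == "__main__":
        peers = [f"peer-{i}" for i in range(8)]
        data = [f"block-{i}".encode() for i in range(5)]
        for peer, bids in assign_partitions(peers, data).items():
            print(peer, [hex(b)[:10] for b in bids])
    ```

    Because both node IDs and block IDs are uniform hashes, placement stays roughly balanced as peers join and leave, which is the "no owners or replication factors needed" case the comment describes.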
  • How to store data on paper?

    Technology
    44 votes
    9 posts
    37 views
    This has to be a shitpost. "Transportation of paper-stored data: You can take the sheets with you, send them by post, or even attach them to homing pigeons."
  • The Universal Tech Tree

    Technology
    21 votes
    1 post
    7 views
    Nobody has replied
  • 82 votes
    3 posts
    20 views
    sfxrlz@lemmy.dbzer0.com
    As a Star Wars yellow text: "In the final days of the Senate, Senator Organa…"
  • 4 votes
    20 posts
    71 views
    Oh, I get it. You're a purposefully ignorant dumbass.
  • 0 votes
    9 posts
    7 views
    kolanaki@pawb.social
    I kinda don't want anyone other than a doctor determining it, tbh. Fuck the human bean counters just as much as the AI ones. Hopefully we can just start growing organs instead of having to even make such a grim decision and everyone can get new livers. Even if they don't need them.