
Robot performs first realistic surgery without human help: System trained on videos of surgeries performs like an expert surgeon

Technology
  • Microsoft Shifts Gears On AI Chip Design Plans

    Technology · 19 votes · 2 posts · 19 views
    AI needs to be regulated with an energy cap. If you need more capacity, optimise your AI. Don't just throw more electricity at it.
  • Session Messenger

    Technology · 15 votes · 8 posts · 46 views
    I think it was a great idea, but poorly executed. I prefer using simpleX, personally.
  • No JS, No CSS, No HTML: online "clubs" celebrate plainer websites

    Technology · 771 votes · 205 posts · 648 views
    Gemini is just a web replacement protocol, with the basic things we remember from the olden-days Web but everything non-essential removed, so that a client is doable in a couple of days. I have my own Gemini viewer, LOL.

    This, for me, is a completely different application from torrents. I was dreaming of something like torrent trackers for aggregating storage, computation, indexing, and search, with search, aggregation, and other services' responses being structured and standardized; with cryptographic identities; and with some kind of market services to buy and sell storage and computation in a unified, pooled, but transparent way (scripted by buyer/seller), similar to MMORPG markets. The representation (what is a siloed service on the modern web) would live in a native client application, and those services would let you build any kind of huge client-server system on top of them, globally. But that's more of a global Facebook/Usenet/whatever, a killer of platforms. Their infrastructure is internal, while their representation is public on the Internet. I want to make the infrastructure public on the Internet and the representation client-side, shared across many kinds of applications. Adding another layer to the OSI model, so to say, between the transport and application layers.

    For this application: I think you could have some kind of Kademlia-based p2p with voluntarily joined groups (including very large ones), where nodes store replicas of partitions of the group's common data based on their pseudo-random identifiers and/or some kind of ring built from those identifiers, to balance storage and resilience (see the sketch after this list). If a group has a creator, then the replication factor can be propagated signed by them, and membership too.

    But if having a creator (even with cryptographically delegated decisions) who propagates changes is not OK, then maybe just using the whole data's hash, or its BitTorrent-like info-tree hash, as a namespace that peers freely join would do. Then it may be better to partition not by parts of the whole piece but by the info tree? I guess making it exactly BitTorrent-like is not a good idea; rather some kind of block tree, like for a filesystem, plus a separate piece of information to look up which file is in which blocks, if we are doing a directory structure.

    Then, with free joining, there's no need for any owners or replication factors; I guess a pseudo-random distribution of hashes will do, with each node storing the partitions closest to its hash. Now that I think about it, such a system would not be that different from BitTorrent and could even be interoperable with it.

    There's the issue of updates, yes; hence I started with groups having a hierarchy of creators who can make or accept those updates. With that, plus the ability to gradually copy one group's data into another group, it should be possible to fork a certain state. But that line of thought makes reusing BitTorrent possible for only part of the system.

    The whole database is guaranteed to be bigger than a normal HDD (1 TB? I dunno). Absolutely guaranteed, no doubt at all. 1 TB (for example) would be someone's collection of favorite stuff, and not a very rich one.
  • OpenAI wins $200m contract with US military for ‘warfighting’

    Technology · 283 votes · 42 posts · 146 views
    gadgetboy@lemmy.ml
    [image: 8aff8b12-7ed7-4df5-b40d-9d9d14708dbf.gif]
  • No Internet For 4 Hours And Now This

    Technology · 6 votes · 14 posts · 59 views
    nokturne213@sopuli.xyz
    I made my first set myself. The "blackout" backing was white. The curtains themselves were blue with horses, I think (I was like 8). I later used the backing with some Star Wars sheets to make new curtains.
  • 311 votes · 37 posts · 119 views
    Same, especially when searching technical or niche topics. Since there aren't a ton of results specific to the topic, mostly semi-related results appear in the first page or two of a regular (non-Gemini) Google search, just due to the higher popularity of those webpages compared to the relevant ones. Even the relevant webpages have lots of non-relevant or semi-relevant information surrounding the answer I'm looking for.

    I don't know enough about it to be sure, but Gemini is probably just scraping a handful of websites on the first page, and since most of those are only semi-related, the resulting summary is a classic example of garbage in, garbage out. I also think there's probably something in the code that looks for information shared across multiple sources and prioritizes it over something that appears on only one particular page (possibly the sole result with the information you need). Then it phrases the summary as a direct answer to your query, misrepresenting the actual information on the pages it scraped. At least Gemini gives sources, I guess.

    The thing that gets on my nerves the most is how often I see people quote the summary as proof of something without checking the sources. It was bad before the rollout of Gemini, but at least back then Google was mostly scraping text and presenting it with little modification, along with a direct link to the webpage. Now it's an LLM generating text phrased as a direct answer to a question (that was also AI-generated from your search query), using AI-summarized data points scraped from multiple webpages. It obfuscates the source material further, but I also can't help but feel it exposes a little of the behind-the-scenes fuckery Google has been doing for years before Gemini: how it bastardizes your query by interpreting it into a question, and then prioritizes homogeneous results that agree on the "answer" to your "question". For years they've been doing this to some extent; they just didn't share how they interpreted your query.
  • 68 votes · 4 posts · 24 views
    jimmydoreisalefty@lemmy.world
    Damn, I heard this mentioned somewhere as well! I don't remember where, though... The CIA is also involved with the cartels in Mexico, as well as with certain groups in the Middle East. They like to bring "democracy" to many countries that won't become a pawn of the Western regime.
  • 48 votes · 19 posts · 74 views
    mrjgyfly@lemmy.world
    Does that run the risk of leading to a future collapse of certain businesses, especially ones whose expenses remain consistently astronomical, like OpenAI's? Please note I don't actually know; I'm not trying to be cheeky with this question. Genuinely curious.
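
A minimal sketch of the Kademlia-style placement idea from the "No JS, No CSS, No HTML" comment above, not that commenter's actual design: nodes and data partitions share one pseudo-random 256-bit ID space, and each partition is replicated on the r nodes whose IDs are closest to it by XOR distance. All names here (node_id, assign_replicas, REPLICATION_FACTOR) are hypothetical.

```python
# Hypothetical sketch: Kademlia-style replica placement for partitions of a
# group's common data. Nodes and partitions live in one SHA-256 ID space; a
# partition is stored on the r nodes closest to it by XOR distance.
import hashlib

REPLICATION_FACTOR = 3  # assumed value; the comment suggests a group creator could sign this


def node_id(public_key: bytes) -> int:
    """Pseudo-random identifier for a node, derived e.g. from its public key."""
    return int.from_bytes(hashlib.sha256(public_key).digest(), "big")


def partition_id(partition_bytes: bytes) -> int:
    """Hash a partition of the group's common data into the same ID space."""
    return int.from_bytes(hashlib.sha256(partition_bytes).digest(), "big")


def assign_replicas(part: int, node_ids: list[int], r: int = REPLICATION_FACTOR) -> list[int]:
    """Return the r node IDs closest to the partition by XOR distance (the Kademlia metric)."""
    return sorted(node_ids, key=lambda n: n ^ part)[:r]


# Every node can compute the same placement locally, with no owner in the loop.
nodes = [node_id(pk) for pk in (b"alice-pubkey", b"bob-pubkey", b"carol-pubkey", b"dave-pubkey")]
part = partition_id(b"block 0 of the group's shared data")
print([hex(n)[:10] for n in assign_replicas(part, nodes)])
```

With free joining, adding or removing a node only moves the partitions whose closest-node set changes, which is the "pseudo-random distribution of hashes" balancing the comment describes.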