
Japan using generative AI less than other countries

Technology
  • Using Clouds for too long might have made you incompetent

    Technology · 162 votes · 85 posts · 20 views
    S: And those candidates are usually trash, especially at a company like mine, where there are maybe a few dozen software roles and many hundreds of other roles. They just don't know how to recruit devs; they usually recruit marketing or domain-specific people.
  • VMware’s rivals ramp efforts to create alternative stacks

    Technology · 77 votes · 10 posts · 79 views
    B: I don't use any GUI... I use Terraform in the terminal or via CI/CD. There is an API and also a Terraform provider for Proxmox, and I can use those, together with Ansible and shell scripts, to manage VMs, but I was looking for k8s support. Again, it works fine for small environments, with a bit of manual work and human intervention, but for larger ones I need a bit more. I moved away from a few VMs acting as k8s nodes to k8s as a service (at work).
  • Oracle, OpenAI Expand Stargate Deal for More US Data Centers

    Technology · 17 votes · 4 posts · 36 views
    M: Is the 30B calculated before or after Oracle arbitrarily increases its pricing for no reason?
  • Why Ohio Trusts Baker Chiropractic for Arthritis Pain Relief

    Technology · 0 votes · 1 post · 13 views
    No one has replied
  • No JS, No CSS, No HTML: online "clubs" celebrate plainer websites

    Technology · 772 votes · 205 posts · 721 views
    R: Gemini is just a web-replacement protocol: the basics we remember from the olden-days Web, with everything non-essential removed, so that a client is doable in a couple of days. I have my own Gemini viewer, LOL. To me this seems like a completely different application from torrents.

    What I was dreaming of is something like torrent trackers for aggregating storage, computation, indexing, and search, with the responses of search, aggregation, and other services being structured and standardized; cryptographic identities; and some kind of market service for buying and selling storage and computation in a unified, pooled, but transparent way (scripted by buyer/seller), similar to MMORPG markets. The representation (what is a siloed service on the modern web) would live in a native client application, and those services would allow building any kind of huge client-server system on top of them, globally. That's more of a global Facebook/Usenet/whatever, a killer of platforms: their infrastructure is internal while their representation is public on the Internet, whereas I want to make the infrastructure public on the Internet and keep the representation client-side, shared across many kinds of applications. Adding another layer to the OSI model, so to say, between the transport and application layers.

    For this application: I think you could have some kind of Kademlia-based p2p with voluntarily joined groups (including very large ones), where nodes store replicas of partitions of the group's common data based on their pseudo-random identifiers and/or some kind of ring built from those identifiers, to balance storage and resilience. If a group has a creator, then the replication factor can be propagated signed by them, and membership too. But if having a creator (even with cryptographically delegated decisions) who propagates changes is not OK, then maybe just using the hash of the whole data, or its BitTorrent-like info-tree hash, as a namespace that peers freely join would do.

    Then it may be better to partition not by parts of the whole piece, but by the info tree? I guess making it exactly BitTorrent-like is not a good idea; rather some kind of block tree, like a filesystem's, plus a separate piece of information to look up which file is in which blocks, if we are doing a directory structure. Then, with free joining, there's no need for any owners or replication factors; I guess a pseudo-random distribution of hashes will do, with each node storing the partitions closest to its hash. Now that I think about it, such a system would not be that different from BitTorrent and could even be interoperable with it. There's the issue of updates, yes, hence I started with groups having a hierarchy of creators who can make or accept those updates. Having that, plus the ability to gradually copy one group's data into another group, it should be possible to fork a given state. But that line of thought makes reusing BitTorrent possible for only part of the system. The whole database is guaranteed to be larger than a normal HDD (1 TB? I dunno). Absolutely guaranteed, no doubt at all: 1 TB (for example) would be just one person's collection of favorite stuff, and not an especially rich one.
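    The partitioning idea in that comment (pseudo-random node identifiers, blocks replicated to the nodes whose identifiers are closest to the block's hash, no owner or coordinator required) can be sketched in a few lines of Python. The XOR distance metric and the SHA-1 identifiers are assumptions borrowed from Kademlia; the names and the replication factor `k` are hypothetical, not anything the commenter specified:

    ```python
    import hashlib

    def node_id(name: str) -> int:
        # Pseudo-random 160-bit identifier (SHA-1 of the name, as in Kademlia).
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

    def assign_replicas(block_key: str, nodes: list[str], k: int = 3) -> list[str]:
        # Store each block on the k nodes whose IDs are XOR-closest to the
        # block's hash. Every peer computes the same assignment independently,
        # so no owner or propagated replication metadata is needed.
        target = node_id(block_key)
        return sorted(nodes, key=lambda n: node_id(n) ^ target)[:k]

    nodes = [f"peer-{i}" for i in range(10)]
    replicas = assign_replicas("blocktree/root/chunk-0001", nodes, k=3)
    ```

    When a peer joins or leaves, only the blocks whose k-closest set changes need to move, which is the load-balancing and resilience property the comment is reaching for.
    
    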
  • Honda successfully launched and landed its own reusable rocket

    Technology · 1k votes · 170 posts · 519 views
    gerryflap@feddit.nl: Call me an optimist, but I still hold the hope that one day we can do better as humanity than we do now. Overall, humanity has become a "better" species throughout its existence; even a hundred years ago we were much more horrible and brutal than we are now. The current trend is not great, with climate change and far-right grifters taking control, but I hold hope that in the end this is but a blip on the radar: horrible for us now, but in the grand scheme of things not something that will end humanity. In the worst case it might set us back a few hundred years.
  • 533 votes · 92 posts · 350 views
    C: Thanks for the speed and the work!
  • 317 votes · 45 posts · 191 views
    F: By giving us the choice of whether someone else should profit from our data. Same as I don't want someone looking over my shoulder and copying off my test answers.