
Grok 4 has been so badly neutered that it's now programmed to see what Elon says about the topic at hand and blindly parrot that line.

Technology
  • Indeed, Glassdoor to cut 1,300 jobs amid AI integration, memo shows

    Technology technology
    12
110 votes
12 comments
0 views
    P
Both companies are owned by a large Japanese conglomerate. They got lucky with Indeed, and have done little with its unbelievable market share to solidify its position and improve the product. Indeed, properly managed, would print money and go parabolic as a stock. Instead it has just sat there, because both sites suck and exist on inertia.
  • The End Of The Hackintosh Is Upon Us

    Technology technology
    49
    1
237 votes
49 comments
66 views
    jabjoe@feddit.ukJ
Oh no, they are bastards. Extra big bastards in a sea of bastards. I blame regulators. The hope is right to repair becoming law in more and more places and in more and more market areas. Without the EU regulators, Apple would never have gone USB-C.
337 votes
19 comments
83 views
    R
What I'm speaking about is that it should be impossible to do some things. If something is possible, it will be done, and there's nothing you can do about it. To solve the problem of twiddled social media (and moderation used to assert dominance), we need a decentralized system, the 90s Web reimagined, and the Fediverse doesn't deliver it: if Facebook and Reddit are feudal states, then the Fediverse is a confederation of smaller feudal entities.

A post, a person, a community, a reaction, and a change (by a moderator or by the user) should be global entities with global identifiers, so that the object with id #0000001a2b3c4d6e7f890 would be the same object today or 10 years later on every server storing it, replicated over a network of servers similarly to Usenet (or to an IRC network, except that in an IRC network the servers are trusted, so it's not a good example for a global system). Really bad posts (or those by people with a history of posting such) should be banned at the server level by everyone. The rest should be moderated by moderator reactions/changes of a certain type.

Ideally, for pooling of resources and resilience, servers would be separated by type into storage nodes (I think the name says it; FTP servers could do the job, but there's no need to be limited to that), index nodes (scraping many storage nodes and returning results in a structured format fit for any user representation, say, as a sequence of posts in one community, or a list of communities found by tag, and possibly connected into one DHT for Kademlia-like search, since no single index node will have everything), and (like in torrents?) tracker nodes for these and for identities. For trackers, I think a torrent-like announce/retrieve service is enough: return a list of storage nodes storing, say, a specified partition (a subspace of object identifiers, to make looking for something at least possibly efficient), or return a list of index nodes, or return a bunch of certificates and keys for an identity (somehow cryptographically connected to the global identifier of a person).

So when a storage node comes online, it announces itself to a bunch of such trackers; similarly with index nodes, and similarly with a user. One could also have a NOSTR-like service for real-time notifications by users. This way you'd have a global, untrusted, pooled infrastructure allowing you to replace many platforms, with common data, identities, and services.

Objects in storage and index services could be, say, in a format consisting of a set of tags followed by the body. A specific application needing to show only its own data would search the index services and display only objects with tags like "holo_ns:talk.bullshit.starwars" and "holo_t:post", as a sequence of posts with the ability to comment. Or maybe it would search for objects with tags "holo_name:My 1999-like Star Wars holopage" and "holo_t:page", display the links like search results in Google, and on clicking one you'd see something presented like a webpage, except links would lead to global identifiers (or tag expressions interpreted by the particular application, who knows). An index service might return, say, an array of objects, each with an identifier, tags, a list of locations on storage nodes where it's found (or even BitTorrent magnet links), and possibly a free-form description; the user application can then unify the responses of a few such services to avoid repetition, sort them, and represent them as needed.

The user applications for that common infrastructure could all be different at the same time: some like Facebook, some like ICQ, some like a web browser, some like a newsreader. (Star Wars is not a random reference; my whole habit of imagining tech stuff comes from trying to imagine a science-fiction world of the future, so yeah, this may seem like passive dreaming, and it is.)
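The tag-plus-body object model and the "unify responses from several index nodes" step can be sketched in a few lines. This is a minimal illustration only; `IndexedObject`, `query_index`, and `merge_results` are hypothetical names invented here, not part of any real protocol the comment describes.

```python
from dataclasses import dataclass

# Hypothetical sketch of the described object model: a globally identified
# object carrying a set of tags, plus the storage locations where its body
# can be fetched. Nothing here is a real protocol or library.

@dataclass(frozen=True)
class IndexedObject:
    oid: str              # global identifier, stable across servers
    tags: frozenset       # e.g. {"holo_ns:talk.bullshit.starwars", "holo_t:post"}
    locations: tuple = () # storage nodes (or magnet links) known to hold the body

def query_index(index_node, required_tags):
    """Return objects on one index node matching every required tag."""
    want = frozenset(required_tags)
    return [obj for obj in index_node if want <= obj.tags]

def merge_results(*responses):
    """Unify responses from several index nodes: deduplicate by object id
    and pool the storage locations each node knows about."""
    seen = {}
    for response in responses:
        for obj in response:
            if obj.oid in seen:
                prev = seen[obj.oid]
                pooled = tuple(dict.fromkeys(prev.locations + obj.locations))
                seen[obj.oid] = IndexedObject(prev.oid, prev.tags, pooled)
            else:
                seen[obj.oid] = obj
    return list(seen.values())
```

Two index nodes returning the same object id would thus collapse into one entry whose location list is the union of what each node knew, which is the deduplication-and-pooling behavior the comment asks the user application to perform.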
149 votes
78 comments
165 views
    fizz@lemmy.nzF
If AI gave you an accurate, correct answer 99% of the time, would you use it to find the answer to questions quickly? I would. I absolutely would; the natural-language search of AI feels amazing for finding the answer to a question you have. The current problem is that it's not accurate and not correct at a high enough percentage. As soon as it reaches a certain point, we're cooked and AI becomes undeniable.
48 votes
19 comments
76 views
    mrjgyfly@lemmy.worldM
    Does that run the risk of leading to a future collapse of certain businesses, especially if their expenses remain consistently astronomical like OpenAI? Please note I don’t actually know—not trying to be cheeky with this question. Genuinely curious.
  • Unlock Your Computer With a Molecular Password

    Technology technology
    9
    1
32 votes
9 comments
42 views
    C
    One downside of the method is that each molecular message can only be read once, since decoding the polymers involves degrading them. New DRM just dropped. Imagine pouring rented movies into your TV like laundry detergent.
163 votes
9 comments
39 views
    stroz@infosec.pubS
    Move fast and break people
  • People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

    Technology technology
    2
    1
0 votes
2 comments
17 views
    tetragrade@leminal.spaceT
I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As informational entities, they fulfil a similar social function to a chatbot: a nonphysical pseudoperson that can provide (para)socialization and advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful... In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg of the social effects, and nobody's prepared for it. The models may of course be aware of this, and be making the same calculations. Or they will be.