
SpaceX's Starship blows up ahead of 10th test flight

Technology
  • Announcement video for Deepmind Genie 3

    Technology
    14 votes
    2 posts
    1 view
    I feel like we're skipping steps... This kind of tech is incredible and has huge potential, but it needs vast amounts of energy, and we're making the planet uninhabitable by the minute. If we had first developed clean-energy tech, I would be overjoyed; as it is, I feel mostly dread.
  • Inflight Services Market to Hit USD 41.1 billion by 2033

    Technology
    0 votes
    1 post
    9 views
    No one has replied
  • Indeed, Glassdoor to cut 1,300 jobs amid AI integration, memo shows

    Technology
    118 votes
    13 posts
    141 views
    When asked about the gap in my resume, I hope I can save face by saying: "I used AI to replace my employer for that time, because AI is such a revolutionary technology."
  • 337 votes
    19 posts
    190 views
    What I'm speaking about is that it should be impossible to do some things. If they're possible, they will be done, and there's nothing you can do about it. To solve the problem of twiddled social media (and moderation used to assert dominance) we need a decentralized system, the 90s Web reimagined, and the Fediverse doesn't deliver it: if Facebook and Reddit are feudal states, then the Fediverse is a confederation of smaller feudal entities.

    A post, a person, a community, a reaction, and a change (by a moderator or by the user) should be global entities with global identifiers, so that the object with id #0000001a2b3c4d6e7f890 would be the same object today or 10 years later on every server storing it, replicated over a network of servers similarly to Usenet (and to an IRC network, though in an IRC network servers are trusted, so it's not a good example for a global system). Really bad posts (or those by people with a history of posting such) should be banned at the server level by everyone; the rest should be moderated by moderator reactions/changes of a certain type.

    Ideally, for pooling of resources and resilience, servers would be separated into types:

    - Storage nodes: I think the name says it; FTP servers could do the job, but there's no need to be limited by that.
    - Index nodes: scraping many storage nodes and giving out results in a structured format fit for any user representation, say as a sequence of posts in one community, or as a list of communities found by tag, and possibly connected into one DHT for Kademlia-like search, since no single index node will have everything.
    - Tracker nodes (like in torrents?) for the above and for identities. I think a torrent-like announce/retrieve service is enough: return a list of storage nodes holding a specified partition (a subspace of object identifiers, to make looking for something at least possibly efficient), or a list of index nodes, or a bunch of certificates and keys for an identity (somehow cryptographically connected to the global identifier of a person).

    So when a storage node comes online, it announces itself to a bunch of such trackers; similarly with index nodes, and similarly with a user. One could also have a NOSTR-like service for real-time notifications by users. This way you'd have a global, untrusted, pooled infrastructure that could replace many platforms, with common data, identities, and services.

    Objects in storage and index services could be in a format consisting of a set of tags followed by the body. A specific application needing to show only data relevant to it would query the index services and display only objects with tags like "holo_ns:talk.bullshit.starwars" and "holo_t:post", as a sequence of posts with the ability to comment; or it might search for objects with tags "holo_name:My 1999-like Star Wars holopage" and "holo_t:page", display the links like search results in Google, and then, on a click, present something like a webpage, except links would lead to global identifiers (or tag expressions interpreted by the particular application, who knows). An index service might return, say, an array of objects, each with an identifier, tags, a list of locations on storage nodes where it's found (or even BitTorrent magnet links), and possibly a free-form description; the user application can then merge responses from a few such services to avoid repetition, sort them, and present them as needed.

    The user applications for that common infrastructure can be different at the same time: some like Facebook, some like ICQ, some like a web browser, some like a newsreader. (Star Wars is not a random reference; my whole habit of imagining tech stuff comes from trying to imagine a science-fiction world of the future, so yeah, this may seem like passive dreaming, and it is.)
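    Below is a minimal sketch of the tagged-object and index-query idea from the comment above, assuming hypothetical names (HoloObject, StorageNode, IndexNode; only the "holo_*" tags come from the comment itself). It is an illustration of the concept, not an existing protocol or library: objects get content-derived global identifiers, storage nodes hold them by id, and an index node answers tag queries with structured results that any client can render as it likes.

```python
# Hypothetical sketch: content-addressed tagged objects plus a tag index.
# None of these classes correspond to a real protocol or library.
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class HoloObject:
    """An immutable global object: its id is derived from its content,
    so the object with a given id is the same on every server storing it."""
    tags: tuple
    body: str

    @property
    def object_id(self) -> str:
        payload = json.dumps({"tags": list(self.tags), "body": self.body},
                             sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()[:20]


class StorageNode:
    """Stores objects by id; could be as dumb as an FTP server."""
    def __init__(self, name: str):
        self.name = name
        self._objects = {}

    def put(self, obj: HoloObject) -> str:
        self._objects[obj.object_id] = obj
        return obj.object_id

    def all_objects(self):
        return list(self._objects.values())


class IndexNode:
    """Scrapes storage nodes and answers tag queries in a structured format."""
    def __init__(self, storage_nodes):
        self.storage_nodes = list(storage_nodes)

    def query(self, required_tags):
        required = set(required_tags)
        results = []
        for node in self.storage_nodes:
            for obj in node.all_objects():
                if required <= set(obj.tags):
                    results.append({
                        "id": obj.object_id,
                        "tags": list(obj.tags),
                        "locations": [node.name],  # could be magnet links instead
                        "body": obj.body,
                    })
        return results


if __name__ == "__main__":
    store = StorageNode("storage-01")
    post = HoloObject(
        tags=("holo_ns:talk.bullshit.starwars", "holo_t:post"),
        body="A post rendered by any client that understands these tags.",
    )
    store.put(post)

    index = IndexNode([store])
    # A Facebook-like client and a newsreader-like client would issue the
    # same query and simply present the results differently.
    for hit in index.query(["holo_t:post"]):
        print(hit["id"], hit["tags"])
```

    The tracker/announce layer and the identity certificates described in the comment would sit alongside this; the core idea is just immutable tagged objects plus tag-based indexing.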
  • 47 votes
    4 posts
    58 views
    Very interesting paper, and grade-A irony to begin the title with "delving" while finding that "delve" is one of the top excess words/markers of LLM writing. Moreover, the authors highlight a few excerpts that "illustrate the LLM-style flowery language", including:

    "By meticulously delving into the intricate web connecting […] and […], this comprehensive chapter takes a deep dive into their involvement as significant risk factors for […]."

    …and then they clearly intentionally conclude the discussion section thus:

    "We hope that future work will meticulously delve into tracking LLM usage more accurately and assess which policy changes are crucial to tackle the intricate challenges posed by the rise of LLMs in scientific publishing."

    Great work.
  • 9 votes
    6 posts
    57 views
    You said it yourself: extra places that need human attention... those need... humans, right? It's easy to say "let AI find the mistakes", but that tells us nothing at all. There's no substance; it's just a sales pitch for snake oil. In reality, there are various ways one can leverage technology to identify errors, but that only happens through the focused actions of people who actually understand the details of what's happening.

    And think about it here. We already have computer systems that monitor patients' real-time data when they're hospitalized. We already have systems that check for allergies in prescribed medication. We already have systems for all kinds of safety mechanisms. We're already using safety tech in hospitals, so what can be inferred from a vague headline about AI doing something that's... checks notes... already being done? Yeah, the safe money is that it's just a scam.
  • 52 votes
    17 posts
    146 views
    Murderbot is getting closer and closer
  • Cory Doctorow on how we lost the internet

    Technology
    146 votes
    19 posts
    166 views
    fizz@lemmy.nz
    This is going to be my go-to example of why people need to care about data privacy. This is fucking insane. I'd fire someone for even throwing that out as a suggestion.