
Jack Dorsey just announced Bitchat (a secure, decentralized, peer-to-peer messaging app for iOS and macOS that works over Bluetooth mesh networks), released into the public domain.

Technology
  • AI agents wrong ~70% of time: Carnegie Mellon study

    Technology
    911 votes, 255 posts, 5 views
    Why are you giving it data? It's a chat and language tool; it's not data-based. You need something trained for that specific use, and I think Wolfram Alpha has better tools for that. I wouldn't trust it to calculate how many patio stones I need for a project. But I trust it to tell me where a good source on a topic is, or whether a quote was really said by whoever it's attributed to, or to help when I only have vague pieces of something I'm trying to remember, like an old-timey historical witch-burning factoid about villagers who pulled people through a hole in the church wall, or which princess was the skeptic who sent her scientists to villages to try to calm superstitious panic. Other uses are things like digging around my computer and seeing which processes do what, or how concepts work in the thing I'm currently learning. So many excellent uses. But I fucking wouldn't trust it to do any kind of calculation.
  • 9 votes, 6 posts, 28 views
    You said it yourself: extra places that need human attention ... those need ... humans, right? It's easy to say "let AI find the mistakes". But that tells us nothing at all. There's no substance. It's just a sales pitch for snake oil. In reality, there are various ways one can leverage technology to identify various errors, but that only happens through the focused actions of people who actually understand the details of what's happening. And think about it here. We already have computer systems that monitor patients' real-time data when they're hospitalized. We already have systems that check for allergies in prescribed medication. We already have systems for all kinds of safety mechanisms. We're already using safety tech in hospitals, so what can be inferred from a vague headline about AI doing something that's ... checks notes ... already being done? ... Yeah, the safe money is that it's just a scam.
  • Deep Dive on Google's TPU (Tensor Processing Unit)

    Technology
    45 votes, 1 post, 9 views
    No one has replied
  • 15 votes, 4 posts, 22 views
    WTF I looked for something like this for a while and this never popped up. Awesome.
  • 34 votes, 3 posts, 19 views
    deleted by creator
  • 23 votes, 31 posts, 45 views
    World actually.
  • AI will replace routine — freeing people for creativity.

    Technology
    42 votes, 14 posts, 53 views
    So you are against having machines do the work of blue-collar workers? Should we all be out in the fields with plows instead of using tractors, and assembling everything by hand in factories?
  • CrowdStrike Announces Layoffs Affecting 500 Employees

    Technology
    242 votes, 8 posts, 37 views
    This is where the magic of near-meaningless corpo-babble comes in. The layoffs are part of a plan to aspirationally achieve the goal of $10B revenue by EoY 2025. What they are actually doing is a significant restructuring of the company: refocusing by hiring some number of outside people to lead or staff departments and positions that haven't existed before, or that are being refocused toward other priorities.

    But this process also involves laying off 500 of the 'least productive' or 'least mission critical' employees. So, technically, they can, and are, arguing that their new organizational paradigm will be so successful that it will actually result in increased revenue, not just lower expenses. Corpos generally call this something like 'right-sizing' or 'refocusing'.

    But of course, anyone with actual experience working at a place that does this will tell you roughly what happens: it turns out all those 'grunts' you let go were doing a lot more work, in a bunch of weird, esoteric, band-aid solutions that kept everything running, than upper management was aware of, because middle management doesn't acknowledge, or often even understand, that the work was being done. They are generally self-aggrandizing, narcissistic petty tyrants who spend more time in meetings fluffing themselves up than doing any useful management.

    Then you also bring in new, outside people who look great on paper to lead new or modified departments, but who have no institutional knowledge, because they are new. So now you have a whole bunch of undocumented work and processes that are no longer being done and were never documented, and the new hires, even with the best intentions, have to spend a quarter or two or three figuring out just how much the pre-existing middle management has been bullshitting and just how much things do not actually function as they said they did. So now your efficiency-improving restructuring is actually a chaotic mess.

    Now, this 'right-sizing' is not always apocalyptically bad, but it is essentially never free of hiccups, and it increases stress, workload, and tensions between basically everyone at the company to some extent. Here's Forbes' explanation of the phenomenon, if you prefer right-sizing described in corpospeak: https://www.forbes.com/advisor/business/rightsizing/