SpaceX's Starship blows up ahead of 10th test flight

Technology
  • 248 votes
    20 posts
    0 views
    Anybody know the legalese for "because they're lying buckets of shit"?
  • 23 votes
    5 posts
    14 views
    Yes. The dumb people will meet this fate.
  • The Decline of Usability: Revisited | datagubbe.se

    68 votes
    8 posts
    80 views
    I blame the idea, from the 00s and 10s, that there should be some "Zen" in computer UIs, and that this "Zen" means doing things wrong with the arrogant tone of "you don't understand it". Associated with Steve Jobs, but TBH Google as well. And also another idea: "you dummy talking about ergonomics can't be smarter than this big respectable corporation popping out stylish unusable bullshit". So:
    - a pretense of wisdom and taste under which crowd fashion is masked;
    - an almost aggressive preference for authority over people who might actually have some wisdom and taste from being interested in the subject;
    - blind trust in whatever tech authority you chose for yourself, because, if you remember, in the 00s anyone working on anything connected to computers was still perceived as being as cool as an aerospace or naval engineer, some kind of elite, including the people making user applications;
    - an objective flaw (or upside) of the old standard UIs: they are boring. That's why UIs in video games and in fashionable chat applications (like ICQ and Skype), never mind video and audio players, were non-standard, as always.
    I think the solution would be per-application theming, not breaking paradigms; again, like ICQ, old Skype, and video games. I prefer it when boredom is fought with different applications having different icons and colors while the UI paradigm stays the same. There was a themed IE called the LOTR browser, which I used (OK, not really, I used Opera) alongside ICQ, QuickTime Player, and BitComet; all of them had a standard paradigm and a non-standard look.
  • 0 votes
    1 post
    23 views
    Nobody has replied
  • 51 votes
    8 posts
    97 views
    But do you also sometimes leave out AI for the steps it often does for you, like conceptualisation or implementation? Would you still be able to do those steps as efficiently as before you used AI? Would you be able to spot the mistakes the AI makes in those steps, even months or years down the line? The main issue I have with AI being used in tasks is that it deprives you of applying logic to real-life scenarios, the thing we excel at. It would be better to use AI in the opposite direction to how you currently use it: develop methods to view your work critically. After all, if there is one thing a lot of people are bad at, it's thorough critical thinking. We just suck at knowing all the edge cases and how to test for them. Let the AI come up with unit tests; let it be the one that questions your work, so you get a better perspective on it (see the sketch after this list).
  • 138 votes
    15 posts
    131 views
    toastedravioli@midwest.social
    ChatGPT is not a doctor. But models trained on imaging can actually be a very useful tool for doctors to use. Even years ago, just before the AI “boom”, researchers were asking doctors for details on how they examine patient images and then training models on that. They found that the AI was “better” than doctors specifically because it followed the doctors' advice 100% of the time, thereby eliminating any bias from the doctor that might interfere with following their own training. Of course, the splashy headline “AI better than doctors” was ridiculous. But it does show the benefit of a neutral tool for doctors to use, especially when looking at images from people outside the typical demographics that much medical training is based on. (As in mostly just white men. For example, everything doctors are taught about knee imaging comes from images of the knees of UK coal miners from some decades ago.)
  • 2 votes
    1 post
    19 views
    Nobody has replied
  • 461 votes
    94 posts
    2k views
    > Make them publishers or whatever is required to have it be a legal requirement, have them ban people who share false information.
    The law doesn't magically make open discussions not open. By design, social media is open. If discussion from the public is closed, then it's no longer social media.
    > ban people who share false information
    Banning people doesn't stop falsehoods. It's a broken solution promoting a false assurance. Authorities are still fallible & risk banning unpopular or debatable expressions that may turn out to be true. There was unpopular dissent over covid lockdown policies in the US despite some dramatic differences with EU policies. Pro-Palestinian protests get cracked down on. Authorities are vulnerable to biases & can be swayed. Moreover, when people can just share their falsehoods offline, attempting to ban them online is hard to justify.
    > If print media, through its decline, is being held legally responsible
    Print media is a controlled medium that controls its writers & approves everything before printing. It has a prepared, coordinated message. Publishers can & do print books full of falsehoods if they want. Social media is open communication where anyone in the entire public can freely post anything before it is removed. Social platforms aren't claiming to spread the truth, merely to enable communication.
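A note on the "let the AI come up with unit tests" idea mentioned in one of the comments above: here is a minimal sketch of what that workflow can look like in Python. It assumes the openai package (v1 client API) and an OPENAI_API_KEY in the environment; the model name and the slugify example function are placeholders for illustration, not anything from the threads themselves.

    import inspect

    from openai import OpenAI

    def slugify(text: str) -> str:
        """Human-written code under test: a naive URL slug generator."""
        return "-".join(text.lower().split())

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hand the model only the source, and ask for adversarial tests, not praise.
    prompt = (
        "Write pytest unit tests for the function below. Focus on edge cases "
        "the author probably missed: empty input, punctuation, unicode, very "
        "long strings. Output only the test code.\n\n"
        + inspect.getsource(slugify)
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )

    # The generated tests are a second, critical perspective on your own work;
    # read them before running them rather than outsourcing the judgment itself.
    print(response.choices[0].message.content)

The direction of use matches the comment's point: the human writes the implementation, and the model is only asked to attack it.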