
OpenAI will not disclose GPT-5’s energy use. It could be higher than past models

Technology
  • Big O vs Hardware: Better Complexity ≠ Better Performance

    Technology
    48 votes
    4 posts
    13 views
    TL;DR: Big-O notation describes asymptotic behavior
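The TL;DR above points at a real gap: Big-O describes growth as n grows without bound, not wall-clock speed at any particular n. A minimal sketch (all names hypothetical) of how an O(n) linear scan can compete with an O(log n) binary search on a small array, where constants and cache-friendly access dominate:

```python
# Sketch: asymptotically worse O(n) linear scan vs O(log n) binary
# search. For small n, the scan's simple sequential loop is often
# competitive or faster in practice -- Big-O says nothing about
# constants or hardware.
import bisect
import timeit

def linear_search(xs, t):
    # O(n): check every element in order (cache-friendly).
    for i, x in enumerate(xs):
        if x == t:
            return i
    return -1

def binary_search(xs, t):
    # O(log n): halve the sorted range each step.
    i = bisect.bisect_left(xs, t)
    return i if i < len(xs) and xs[i] == t else -1

data = list(range(16))  # small sorted array
target = 11

# Both agree on the answer; which is faster depends on n and hardware.
assert linear_search(data, target) == binary_search(data, target) == 11

t_lin = timeit.timeit(lambda: linear_search(data, target), number=100_000)
t_bin = timeit.timeit(lambda: binary_search(data, target), number=100_000)
print(f"linear: {t_lin:.4f}s  binary: {t_bin:.4f}s")
```

Run it with a large `data` (say, a million elements) and binary search wins decisively; the asymptotic advantage only pays off once n is big enough to overcome the per-call overhead.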
  • Robot Hand Could Harvest Blackberries Better Than Humans

    Technology
    55 votes
    11 posts
    82 views
    intheflsun@lemmy.world
    I mean when I'm picking them, like 65% end up being eaten, 35% end up in the basket. I don't imagine the clankers would eat that much.
  • 737 votes
    67 posts
    1k views
    Those have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject, or reliable sources for it, to give you a confident answer." It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations, if it isn't just saying absurd stuff on the face of it, is to do independent research to verify them, at which point you may as well have just researched it yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case that is pretty easy to confirm very quickly: the code either works as expected or it doesn't, and code is always tested before releasing it anyway.

    In research, it is great at helping you find a relevant source across the internet or in a specific database. It is usually very good at summarizing a source so you can get a quick idea about it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, or correctly formatting your bibliography (with actual sources you provide or at least verify).

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information with reliable information because there is no difference to it.

    Put simply, it is not a reliable source of information... ever. Make sure you understand that.
  • The End Of The Hackintosh Is Upon Us

    Technology
    239 votes
    50 posts
    614 views
    There are some companies as bad as Apple (John Deere comes to mind), but it's certainly not the norm. User-replaceable standard M.2 SSDs are bog standard, and non-standard formats are really rare. Apart from Apple, I cannot think of many companies that do that. IIRC Red Magic cameras and Synology NAS do, but those are the only ones I can think of.
  • 30 votes
    2 posts
    34 views
    captainastronaut@seattlelunarsociety.org
    If you had asked me during the Obama administration, I would have said this had a chance of becoming law. Today I give it 0.002%.
  • Websites Are Tracking You Via Browser Fingerprinting

    Technology
    296 votes
    41 posts
    601 views
    Makes you question how digital stalking is still allowed.
  • 270 votes
    80 posts
    1k views
    That sub seems to be fully brigaded by bots from the marketing teams of closed-ai and Perplexity.
  • 14 votes
    10 posts
    99 views
    Exactly, we don’t know how the brain would adapt to having electric impulses wired right into it, and it could adapt in some seriously negative ways.