Amazon is considering shoving ads into Alexa+ conversations

Technology
92 Votes
67 Posts
117 Views
  • DIY experimental Redox Flow Battery kit

    Technology
    37 Votes
    3 Posts
    42 Views
    C
    The roadmap defines 3 milestone batteries. The first is released; it's a benchtop device that you can relatively easily build on your own. It has an electrode side of 2 × 2 cm². It does not store any significant amount of energy. The second one is being developed right now; it has a cell the size of a small 3D printer bed (20 × 20 cm) and will also not store practical amounts of energy. It will hopefully prove, though, that they are on the right track and that they can scale it up. Only the third battery will store significant amounts of energy, but it is only due at the end of the year (probably later). Current vanadium systems cost approx. $300-600/kWh according to some random website I found. The goal of this project is to spread knowledge about redox flow batteries and, in the medium term, to make them commercially viable. The anolyte and catholyte are based on the zinc-iodine system in an aqueous solution. There are a bunch of other systems though, each with their trade-offs. The anode and cathode are both graphite felt in the case of the dev kit.
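    The $/kWh figure quoted above translates directly into a system price. A minimal sketch, using the comment's own $300-600/kWh range; the 10 kWh capacity is a hypothetical example, not a figure from the project:

    ```python
    # Rough cost estimate for a vanadium redox flow battery system,
    # based on the approx. $300-600/kWh range quoted in the comment.

    def system_cost(capacity_kwh: float, cost_per_kwh: float) -> float:
        """Return the estimated system cost in dollars for a given capacity."""
        return capacity_kwh * cost_per_kwh

    # Hypothetical 10 kWh home-scale system at both ends of the range.
    low = system_cost(10, 300)
    high = system_cost(10, 600)
    print(f"10 kWh system: ${low:.0f}-${high:.0f}")  # prints "10 kWh system: $3000-$6000"
    ```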
  • 738 Votes
    67 Posts
    926 Views
    K
    Those have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject or reliable sources for it to give you a confident answer". It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations, if it isn't just saying absurd stuff on the face of it, is to do independent research to verify it, at which point you may as well have just researched it yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case, that is pretty easy to confirm very quickly. The code either works as expected or it doesn't, and code is always tested before releasing it anyway.

    In research, it is great at helping you find a relevant source for your research across the internet or in a specific database. It is usually very good at summarizing a source for you to get a quick idea about it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, correctly formatting your bibliography (with actual sources you provide or at least verify), etc.

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information in with reliable information because there is no difference to it.
Put simply, it is not a reliable source of information... ever. Make sure you understand that.
  • 142 Votes
    5 Posts
    52 Views
    B
    Of all the crap that comes out of the dipshit-in-chief's mouth, the one thing I really wish he would've followed through on was deporting Elmo.
  • 92 Votes
    5 Posts
    59 Views
    H
    This is interesting to me, as I like to say that LLMs are basically another abstraction of search. Initially it was links with no real weight that had to be gone through; then various algorithms weighted the returns; then the results started giving a small blurb so one did not have to follow every link; and now you're basically getting a report, which should have references to the sources. I would like to see this studied by looking at how folks engage with an LLM. Basically, my guess is that if one treats the LLM as a helper and collaborates to create the product, they will remember more than if they treat it as a servant, just instruct it to do the work, and take the output as-is.
  • 169 Votes
    13 Posts
    123 Views
    E
    Hold on let me find something[image: 1b188197-bd96-49bd-8fc0-0598e75468ea.avif]
  • 81 Votes
    6 Posts
    71 Views
    merde@sh.itjust.worksM
    (come on people, this is the fediverse) [image: 922f7388-85b1-463d-9cdd-286adbb6a27b.jpeg]
  • Meta Filed a Lawsuit Against The Entity Behind CrushAI Nudify App.

    Technology
    92 Votes
    21 Posts
    220 Views
    L
    I know everybody hates AI, but to me it's weird to treat artificially generated nudity differently from somebody painting a naked body with a real person's face on it, which I assume would be legally protected freedom of expression.
  • 396 Votes
    24 Posts
    298 Views
    devfuuu@lemmy.worldD
    Lots of people have kids in their houses nowadays; we should ban all of that and put them all in a specialized center or something. I can't imagine what all those people are doing with kids behind closed doors under the guise of "family". Truly scary if you think about it.