
Rule34 blocked the UK entirely rather than comply with the new law.

Technology
  • 550 votes
    102 posts
    600 views
    lechekaflan@lemmy.world
    Not surprising it's already ahead, as about 20 years ago they offered 100 Mbps to anyone who could pay for it (a certain Danny Choo comes to mind).
  • 738 votes
    67 posts
    389 views
    Those have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject, or reliable sources for it, to give you a confident answer". It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations, if it isn't saying something obviously absurd on its face, is to do independent research to verify it, at which point you may as well have just researched it yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case that is pretty easy to confirm very quickly. The code either works as expected or it doesn't, and code is always tested before releasing it anyway. In research, it is great at helping you find a relevant source across the internet or in a specific database. It is usually very good at summarizing a source so you can get a quick idea about it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, correctly formatting your bibliography (with actual sources you provide or at least verify), etc.

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information in with reliable information because there is no difference to it. Put simply, it is not a reliable source of information... ever. Make sure you understand that.
  • Apple Just Proved They're No Different Than Google

    Technology
    32 votes
    20 posts
    121 views
    2 ads when Linus mentioned Candy Crush. There is zero flow to YouTube anymore.
  • 83 votes
    3 posts
    30 views
    Facial recognition hates juggalos and adversarial clothing patterns.
  • Russia frees REvil hackers after sentencing

    Technology
    37 votes
    4 posts
    31 views
    What makes even more sense is that they now might be secretly forced to hack for the government in exchange for bread and water and staying out of prison.
  • ICE Taps into Nationwide AI-Enabled Camera Network, Data Shows

    Technology
    486 votes
    80 posts
    185 views
    Their problem with China is the supposed atheism, and that they're not Christian fundamentalists.
  • Are We All Becoming More Hostile Online?

    Technology
    212 votes
    31 posts
    162 views
    Back in the day I just assumed everyone was lying, or trying to get people worked up, and we called them trolls. Learning how to ignore the trolls, not trusting strangers on the internet, and basically not caring what random people said is a lost art. Somehow people forgot to pass this memo along, including the "you don't fucking join social networks as yourself" part. Anonymity makes this all work. Eternal September newbies just didn't get it.
  • People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

    Technology
    0 votes
    2 posts
    21 views
    tetragrade@leminal.space
    I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As informational entities, they fulfil a similar social function to a chatbot: they are nonphysical pseudopersons that can provide (para)socialization & advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful... In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg on the social effects, and nobody's prepared for it. The models may of course be aware of this, and be making the same calculations. Or, they will be.