
Airlines Are Selling Your Data to ICE

Technology
  • UK Plans AI Experiment on Children Seeking Asylum

    Technology
    79 votes
    12 posts
    34 views
    Companies that tested their technology in a handful of supermarkets, pubs, and on websites set it to predict whether a person looks under 25, not 18, allowing a wide error margin for algorithms that struggle to distinguish a 17-year-old from a 19-year-old. AI face scans were never designed for children seeking asylum, and risk producing disastrous, life-changing errors. Algorithms identify patterns in the distance between nostrils and the texture of skin; they cannot account for children who have aged prematurely from trauma and violence. They cannot grasp how malnutrition, dehydration, sleep deprivation, and exposure to salt water during a dangerous sea crossing might profoundly alter a child’s face.
    Goddamn, this is horrible. Imagine leaving shitty AI to determine the fate of this girl: 'Psychologically broken,' 8-year-old Sama loses her hair
  • 255 votes
    30 posts
    262 views
    srmono@feddit.org
    Rethink/Adguard/pihole all interfere with the DNS lookup. Depending on the quality of your blocklist, the servers they try to send the data to will simply not be reachable.
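As a rough sketch of what that interference looks like: a DNS-level blocker such as Pi-hole answers queries for blocklisted domains with an unreachable sinkhole address (commonly 0.0.0.0) instead of a real IP, so the upload has nowhere reachable to go. The blocklist entries and the stub upstream resolver below are made up for illustration.

```python
# Minimal sketch of DNS-level blocking (Pi-hole-style): queries for
# blocklisted hostnames get a sinkhole answer instead of a real IP,
# so the data upload simply cannot reach its server.
# BLOCKLIST entries and the stub upstream are illustrative only.

BLOCKLIST = {"tracker.example.com", "ads.example.net"}
SINKHOLE = "0.0.0.0"  # unroutable "null" address used by many blockers

def resolve(hostname: str, upstream) -> str:
    """Answer with the sinkhole for blocked names; else ask upstream."""
    if hostname.lower() in BLOCKLIST:
        return SINKHOLE
    return upstream(hostname)

# Stubbed upstream resolver standing in for a real DNS server:
upstream = lambda name: "93.184.216.34"
print(resolve("tracker.example.com", upstream))  # 0.0.0.0
print(resolve("example.com", upstream))          # 93.184.216.34
```

This only works when the client honors your DNS, which is the "quality of your blocklist" caveat: apps with hard-coded IPs or their own encrypted DNS can route around it.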
  • Getting Started with Ebitengine (Go game engine)

    Technology
    0 votes
    1 post
    6 views
    No one has replied
  • A Forensic Examination of GIS Arta

    Technology
    7 votes
    1 post
    21 views
    No one has replied
  • 51 votes
    8 posts
    93 views
    But do you also sometimes leave out AI for steps the AI often does for you, like the conceptualisation or the implementation? Would it be possible for you to do these steps as efficiently as before you used AI? Would you be able to spot the mistakes the AI makes in these steps, even months or years down the line? The main issue I have with AI being used in tasks is that it deprives you of applying logic to real-life scenarios, the thing we excel at. It would be better to use AI in the opposite direction from how you currently use it: develop methods to view your work critically. After all, if there is one thing a lot of people are bad at, it's thorough critical thinking. We just suck at knowing all the edge cases and how to test for them. Let the AI come up with unit tests, let it be the one that questions your work, in order to get a better perspective on it.
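That last suggestion can be made concrete. A possible sketch in Python: `parse_age` is a hypothetical human-written function, and the tests are the kind of edge-case probes one might ask an AI assistant to generate and then review rather than trust blindly.

```python
import unittest

# A hypothetical human-written function under test:
def parse_age(text: str) -> int:
    """Parse a non-negative integer age from user input."""
    value = int(text.strip())  # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

# Edge-case tests of the kind an AI assistant could propose,
# which the human then reviews, prunes, and keeps:
class EdgeCaseTests(unittest.TestCase):
    def test_plain_number(self):
        self.assertEqual(parse_age("42"), 42)

    def test_surrounding_whitespace(self):
        self.assertEqual(parse_age("  7 "), 7)

    def test_negative_rejected(self):
        with self.assertRaises(ValueError):
            parse_age("-1")

    def test_non_numeric_rejected(self):
        with self.assertRaises(ValueError):
            parse_age("not a number")

if __name__ == "__main__":
    unittest.main()
```

The point is the division of labor: the AI enumerates probes, and the human keeps the ones that actually question the work.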
  • Is AI Apocalypse Inevitable? - Tristan Harris

    Technology
    121 votes
    11 posts
    110 views
    Define AGI, because recently the definition has been shifting down to match LLMs. In fact, we could say we have achieved AGI now, because we have a machine that answers questions. The problem will come when the number of questions starts shrinking, not because of the number of problems but because of the number of people who understand those problems. That is what is happening now. Don't believe me? Read the statistics about age and the workforce. Now add the urgent need for something to replace those people. After that, think about what will happen when all those attempts fail.
  • 45 votes
    35 posts
    366 views
    You guys sure display a crazy obsession with "Apple Fanboys" in this sub… The amount of Applephobia… Phew! As if the new release had you all flustered or something… Gotta take a bite and taste the Apple at some point! Can’t stay in the closet forever, ya know?
  • AI cheating surge pushes schools into chaos

    Technology
    45 votes
    25 posts
    297 views
    Sorry for the late reply, I had to sit and think on this one for a little bit. I think there would be a few things going on when it comes to designing a course to teach critical thinking, nuance, and originality, and each has its own requirements.
    For critical thinking: the main goal is to provide students with a toolbelt for solving various problems, then instill the habit of always asking "does this match the expected outcome? What was I expecting?" So usually courses are set up so students learn about a tool, practice using the tool, then have a culminating assignment using all the tools. Ideally, the problems students face at the end require multiple tools to solve.
    Nuance mainly comes naturally with exposure to the material from a professional: the way a mechanical engineer describes building a desk will probably differ greatly from how a fantasy author would. You can also explain definitions and industry standards, but that's really dry, so I try to teach nuance via definitions by mixing in the weird edge cases as much as possible, with jokes.
    Then for originality: I've realized I don't actually look for an original idea, but for something creative. In a classroom setting, you're usually learning new things about a subject, so a student's knowledge of that space is very limited. Thus, an idea they've never heard of may be original to them but common for an industry expert. For teaching creativity, I usually provide time to be creative and think, and offer open-ended questions as prompts to explore ideas. My courses that require originality usually make it part of the culminating assignment at the end, where students can apply their knowledge. I'll also add in time where students can come to me with preliminary ideas and get feedback on whether they pass the creative threshold. Not all ideas are original, but I sometimes give a bit of slack if an idea is creative enough.
The amount of course overhauling needed to get around AI really depends on the material being taught. In programming, for example, you teach critical thinking by always testing your code, even with parameters that don't make sense: try to add 123 + "skibbidy" and see what the program does.
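That nonsense-parameter exercise can be sketched in Python, where adding an int to a str raises a TypeError; the habit being taught is verifying that the program fails loudly rather than returning garbage:

```python
# The "parameters that don't make sense" exercise from above:
# in Python, int + str is a type error, and the exercise is to
# observe that the program fails loudly instead of producing garbage.

def add(a, b):
    return a + b

try:
    add(123, "skibbidy")
    print("no error raised")
except TypeError:
    print("caught expected TypeError")
```

Other languages behave differently here (JavaScript, for instance, would happily coerce and concatenate), which is itself a useful discussion point for the "What was I expecting?" habit.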