
Schools are using AI to spy on students, and some are getting arrested for misinterpreted jokes and private conversations

Technology
  • Knowing that Europe literally has a problem with its soccer audiences making monkey noises at black athletes makes this particular bit of condescension all the more ridiculous.

    Is this better, worse, or the same as throwing dildos at WNBA athletes?

  • I didn’t realize the schools were using Run, Hide, Fight. That is the same policy for hospital staff in the event of an active shooter. Maddening.

    Having worked in quite a few fields over the last 15 years or so, I can say it's the same active-shooter training they give everyone. Even in stores that sell guns.

    I'll let the reader decide how fucked up it is that there's basically an accepted countrywide "standard response".

  • I didn’t know Reddit admins were also school admins

  • My sense of humor is dry, dark, and absurdist. I’d go to jail every week for the sorts of things I joke about if I was a kid today. This is complete lunacy.

    Example of an average joke on my part: speed up and run over that old lady crossing the street!

    It makes my partner laugh. I laugh. We both know I don’t mean it. But a crappy AI tool wouldn’t understand that.

    Yeah, especially around middle school, the "darker" the "joke," the funnier it was.

  • It is not the tool; it is the lazy, stupid person who created the implementation. The same stupidity applies to people who run word filtering in conventional code (a toy example of how crude keyword matching misfires is sketched after this exchange). AI is just an extra set of eyes. It is not absolute. Giving it any kind of unchecked authority is insane. The administrators who implemented this are the ones everyone should be upset at.

    The insane rhetoric around AI is a political and commercial campaign by Altman and the proprietary AI companies looking to become a monopoly. It is a Kremlin-scale misinformation campaign that has been extremely successful at roping in the dopes. Don't be a dope.

    This situation with AI tools is exactly the same as with every past scapegoated tool. I can create undetectable deepfakes in GIMP or Photoshop; if I do so with intent to harm, or out of grossly irresponsible stupidity, that is my fault and not the tool's. Accessibility of the tool is irrelevant. Those dumb enough to blame the tool are the convenient idiot pawns of the worst humans alive right now. Blame the idiots using the tools, the ones with no morals or ethics in leadership positions, instead of listening to those same people's spurious dichotomy designed to create a monopoly. They prey on conservative ignorance rooted in tribalism and dogma, which naturally rejects everything unfamiliar in life. That is evolutionary behavior and a required survival mechanism in the natural world. Some will always scatter across the spectrum of possibilities, but the center majority is stupid and easily influenced in ways that enable tyrannical hegemony.

    AI is not some panacea. It is a new, useful tool. Absent-minded stupidity is leading to the same kind of dystopian indifference that led to the "free internet," which has destroyed democracy and is the direct cause of most political and social issues in the present world: it normalized digital slavery through ownership of a part of your person for sale, exploitation, and manipulation without your knowledge or consent.

    I only say this because I care about you, digital neighbor. I know it is useless to argue against dogma, but this is the fulcrum of a dark dystopian future that populist dogma is welcoming with open arms of ignorance, just like those who called the digital world a meaningless novelty 30 years ago.

    You seem to be handwaving away all concerns about the actual tech, but I think the fact that "training" is literally just plagiarism, plus the absolutely bonkers energy costs of doing it, squarely positions LLMs as doing more harm than good in most cases.

    The innocent tech here is the concept of the neural net itself, but unless they're being trained on a constrained corpus of data and then used to analyze that or analogous data in a responsible and limited fashion, I think it's somewhere on a spectrum between "irresponsible" and "actually evil".
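
    To make the word-filtering point concrete, here's a toy sketch (the keywords and function names are invented purely for illustration, not taken from any real product) of why context-blind matching in conventional code flags harmless dark humor like the joke quoted earlier in this thread:

    ```python
    # Toy keyword filter: a bare substring match over hypothetical keywords.
    THREAT_KEYWORDS = ["run over", "shoot", "stab"]

    def naive_flag(message: str) -> bool:
        """Flag a message containing any 'threat' keyword, blind to intent."""
        lowered = message.lower()
        return any(keyword in lowered for keyword in THREAT_KEYWORDS)

    # The absurdist joke quoted upthread trips the filter, even though
    # both speakers know it isn't meant literally.
    joke = "Speed up and run over that old lady crossing the street!"
    print(naive_flag(joke))  # True: flagged with zero grasp of context
    ```

    A statistical classifier may read context somewhat better, but the argument above stands either way: the failure is granting any flagger, keyword or AI, unchecked authority instead of routing its output to a human.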

  • With the help of artificial intelligence, technology can dip into online conversations and immediately notify both school officials and law enforcement.

    Not sure what's worse here: that the police overreacted, or that the software immediately contacts law enforcement without letting teachers (n.b.: they are the experts here, not the police) go through the positives first (see the sketch after this exchange).

    But oh, that would mean having to pay somebody for at least some extra hours, on top of the no doubt expensive software. JFC.

    I hate how fully the conversation about surveillance got leapfrogged. It's so disgusting that it's just assumed all of your communications should be read by your teachers, parents, and school administration just because you're a minor. Kids deserve privacy too.
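
    As a minimal sketch of the alternative raised above (a hypothetical pipeline; every name and score here is invented for illustration), escalation to law enforcement could be gated behind educator review instead of being automatic:

    ```python
    # Hypothetical human-in-the-loop triage: classifier flags land in a
    # review queue, and nothing reaches law enforcement until a human
    # reviewer confirms it.
    from dataclasses import dataclass

    @dataclass
    class Flag:
        student_message: str
        classifier_score: float  # assumed 0.0-1.0 risk score from some model

    review_queue: list[Flag] = []

    def on_classifier_flag(flag: Flag) -> None:
        # Automated step: enqueue for human review, never auto-escalate.
        review_queue.append(flag)

    def educator_review(flag: Flag, is_credible_threat: bool) -> None:
        # Human step: only a reviewer's judgment can trigger escalation.
        if is_credible_threat:
            print(f"Escalated after human review: {flag.student_message!r}")
        else:
            print(f"Closed without escalation: {flag.student_message!r}")

    # Usage: the model flags a message; an educator screens it first.
    on_classifier_flag(Flag("speed up and run over that old lady!", 0.91))
    educator_review(review_queue.pop(0), is_credible_threat=False)
    ```

    The design point is just that the cheap automated step feeds the human step, not the police; the objection upthread is that the deployed systems skip the human step entirely.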

  • In such a world, hoping for a different outcome is just a dream. You know, people always look for the easy way out, and in the end, yes, we will live under digital surveillance, like animals in a zoo. The question is how to endure it and not break down, especially in the event of collapse and poverty. It's better to expect the worst and be prepared than to look for a way out, try to rebel, and then get trapped.

  • If the world is ruled by psychopaths who seek absolute power for the sake of even more power, then the very existence of such technologies will lead to very sad consequences, perhaps even to slavery. Have you heard of technofeudalism?

  • Proper gun control?
    Nah, let's spy on kids

    No, rather, it's to monitor the future slaves so that they stay obedient.

  • And strip-searched!

    Without notifying parents

  • It's for the children!!

    /s

    To them, these are not children but wolves that could snap, so they try to turn them into obedient dogs.

  • I think that's illegal now too. Can't have anything interfering with the glorious vision of a relentlessly productive citizenry that ideally slaves away for the benefit of their owners until they die in the office chair at age 74, right before qualifying for a pension.

    Well, except for the health "care" system. That's an exception, but only because the only thing better than ruthless exploitation is diversified ruthless exploitation. Gotta keep the peons on their toes, lest they get uppity.

    I think one rich man in the past said: I don't need a nation of thinkers, I need a nation of slaves. Unless I'm mistaken, of course.
    It's like saying predators have learned not to chase their prey but to raise it, giving it the illusion of freedom while actually leading it to slaughter like cattle. I like this cattle idea, I couldn't resist lol. :3

  • Idiots and assholes exist everywhere. At least ours don't have guns.

    Yeah, they use knives instead.

  • It seems that Big Brother is watching you... But now it's already a reality. And what happens if someone commits a thoughtcrime?

  • Or we could have legislation that punishes both the companies that run these bullshit systems AND the authorities that allow and use them when they flop, like in this case.

    Hey, dreaming is still free (don't know how much longer though).

    How can I put this: if you only dream while sitting on the couch, then alas, everything will end sadly. Although if you implant a neurochip into your brain, you won't even be able to dream lol. :3

  • Okay, sure, but in many cases the tech in question is actually useful for lots of other stuff besides repression. I don't think that's the case with LLMs. They have a tiny bit of actual usefulness that's completely overshadowed by the insane skyscrapers of hype and lies built up around their "capabilities".

    With "AI" I don't see any reason to go through such gymnastics separating bad actors from neutral tech. The value in the tech is non-existent for anyone who isn't either a researcher dealing with impractically large and unwieldy datasets, or of course a grifter looking to profit off of bigger idiots than themselves. It has never and will never be a useful tool for the average person, so why defend it?

  • There's nothing to defend. Tell me, would you defend someone who is a threat to you and deprives you of the ability to create, making art unnecessary? No, you would go and kill him before the bastard grows up. What's the point of defending a bullet that will kill you? Are you crazy?

  • They're not residents; you're thinking of nursing homes. Roughly a third of hospital patients can walk without assistance, but yes. The rationale is that staff shouldn't turn themselves into bullet sponges, because then who is left to remove the bullets once the shooter is dead? Either way, what are unarmed, untrained (to fight) people with the body-armor equivalent of pajamas supposed to do to stop bullets?

    The patient room doors don’t lock. Sometimes those doors are made of glass. But herding the patients who can walk into the halls is likely an opportunity for an active shooter to hit more targets. As such, everyone hunkers down, and the police take care of it. In theory, per the training modules. Police sometimes run drills with the hospital, depending on locale and interagency dealings.

    Shutting all the fire doors is likely the only defense. Those nurses can be crafty on the fly, but there are limitations.

    I can’t imagine a secondary piece of this policy isn’t hospitals avoiding liability regarding workplace injury/death lawsuits.

    I just hadn’t known until now that in grasping for solutions schools found the standardized hospital policy and are running with it.

    I guess a hospital is one of the better places to get shot.

  • This is exactly what is going to happen with the EU's fucking Chat Control if they actually enforce it, but for an entire continent. Fuck this shit. Privacy is a human right.

  • Collapse is likely not that bad for the average person. The fundamental inputs, outputs, and needs are all the same. The clowns on top are all that really change.
