Schools are using AI to spy on students, and some are getting arrested over misinterpreted jokes and private conversations

Technology
  • AI is a virus

  • Hope the kids find the people responsible and do everything I know a teenager to be able to do to make their lives waking nightmares.

  • Just about the same thing is happening in my school; that kind of system was implemented last year. While I'm not sure if it's AI (let's be real, it likely is), let alone whether it's effective, I feel like students using the Internet for social media and/or communication with peers are mostly just wasting time, not conducting any kind of serious crime.

    While I don't doubt that some students do illegal things (90% of which, I'd guess, is petty shit that doesn't affect anyone), it's such a waste of time and money to watch EVERYTHING they do on the Internet just to catch them making a fucked-up joke among friends. Instead of being so privacy-disrespecting and treating these students like mini Pablo Escobars, schools should, for lack of a better word, implement mental health courses and therapists, and generally try to improve student mental health free of charge or cheaply. Because if you're so concerned about the next Columbine happening, or about cyberbullying, then the students' mental health and general well-being should matter just as much. What about the students who have horrible, abusive parents at home, or have trauma, or other issues? They're sadly being deprioritised in favour of whatever this is.

    I'm so tired of schools pulling the "watch what everyone is doing at all times, or restrict what they're allowed to do, to keep them in line" move to address security and safety issues. It's lazy and a waste of money; those funds could be put to much better use in more community-based, less authoritarian measures. And I haven't even touched on the AI part, which goes to show how flawed this whole ordeal is in the first place.

  • Knowing that Europe literally has a problem with its soccer audiences making monkey noises at black athletes makes this particular bit of condescension all the more ridiculous.

    Is this better, worse, or the same as throwing dildos at female WNBA athletes?

  • I didn’t realize the schools were using Run, Hide, Fight. That is the same policy for hospital staff in the event of an active shooter. Maddening.

    Having worked in quite a few fields over the last 15 years or so, I can say it's the same active shooter training they give everyone. Even in stores that sell guns.

    I'll let the reader decide how fucked up it is that there's basically a countrywide accepted "standard response".

  • I didn’t know Reddit admins were also school admins

  • My sense of humor is dry, dark, and absurdist. I’d go to jail every week for the sorts of things I joke about if I was a kid today. This is complete lunacy.

    Example of an average joke on my part: speed up and run over that old lady crossing the street!

    It makes my partner laugh. I laugh. We both know I don’t mean it. But a crappy AI tool wouldn’t understand that.

    Yeah, especially around middle school, the "darker" the "joke", the funnier it was.

  • It is not the tool; it is the lazy, stupid person who created the implementation. The same stupidity shows up in people who run word filtering in conventional code. AI is just an extra set of eyes. It is not absolute. Giving it any kind of unchecked authority is insane. The administrators who implemented this should be the ones everyone is upset at.

    The insane rhetoric around AI is a political and commercial campaign by Altman and the proprietary AI players looking to become a monopoly. It is a Kremlin-scale misinformation campaign that has been extremely successful at roping in the dopes. Don't be a dope.

    This situation with AI tools is exactly the same as with every past scapegoated tool. I can create undetectable deepfakes in GIMP or Photoshop. If I do so with intent to harm, or out of grossly irresponsible stupidity, that is my fault, not the tool's. The accessibility of the tool is irrelevant. Those dumb enough to blame the tool are the convenient idiot pawns of the worst humans alive right now. Blame the idiots in leadership positions who use the tools with no morals or ethics, while refusing to swallow those same people's spurious dichotomy designed to create a monopoly. They prey on conservative ignorance rooted in tribalism and dogma, which naturally rejects everything unfamiliar in life. That is evolutionary behavior and a required mechanism for survival in the natural world. Some will always scatter across the spectrum of possibilities, but the center majority is stupid and easily influenced in ways that enable tyrannical hegemony.

    AI is not some panacea. It is a new, useful tool. But absent-minded stupidity is leading to the same kind of dystopian indifference that led to the "free internet", which has destroyed democracy and is the direct cause of most of today's political and social issues: it normalized digital slavery by putting ownership over a part of your person up for sale, exploitation, and manipulation without your knowledge or consent.

    I only say this because I care about you, digital neighbor. I know it is useless to argue against dogma, but this is the fulcrum of a dark, dystopian future that populist dogma is welcoming with open arms of ignorance, just like those who called the digital world a meaningless novelty 30 years ago.
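    The word-filtering failure mode mentioned above is easy to demonstrate in a few lines. This is a minimal sketch with a hypothetical banned-word list, not any real product's logic: a naive substring match flags harmless messages (the classic Scunthorpe problem), which is the same false-positive failure the thread is describing in AI monitoring used without human oversight.

    ```python
    # Hypothetical banned-word list for illustration only.
    BANNED = {"kill", "shoot"}

    def naive_flag(message: str) -> bool:
        # Flags if any banned word appears as a substring, ignoring all context.
        text = message.lower()
        return any(word in text for word in BANNED)

    print(naive_flag("this workout is killing me"))    # True: harmless idiom flagged
    print(naive_flag("let's offshoot the main trail")) # True: "shoot" hides inside "offshoot"
    print(naive_flag("see you after practice"))        # False
    ```

    Context-free matching like this cannot tell a joke, an idiom, or an embedded substring from a real threat, which is why handing such a filter (AI-driven or not) unchecked authority guarantees false alarms.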

    You seem to be handwaving all concerns about the actual tech, but I think the fact that "training" is literally just plagiarism, and the absolutely bonkers energy costs for doing so, do squarely position LLMs as doing more harm than good in most cases.

    The innocent tech here is the concept of the neural net itself, but unless these models are trained on a constrained corpus of data and then used to analyze that or analogous data in a responsible, limited fashion, I think they sit somewhere on a spectrum between "irresponsible" and "actually evil".

  • With the help of artificial intelligence, technology can dip into online conversations and immediately notify both school officials and law enforcement.

    Not sure what's worse here: that the police overreacted, or that the software immediately contacts law enforcement without letting teachers (n.b.: they are the experts here, not the police) go through the positives first.

    But oh, that would mean having to pay somebody, at least some extra hours, in addition to the no doubt expensive software. JFC.

    I hate how completely the conversation about surveillance was leapfrogged. It's so disgusting that it's just assumed all of your communications should be read by your teachers, parents, and school administration just because you're a minor. Kids deserve privacy too.
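    What the comments above are asking for (a human reviewing positives before anyone calls the police) can be sketched as a small review queue. All class and field names here are hypothetical, purely to illustrate the human-in-the-loop design the thread describes:

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FlaggedMessage:
        student: str
        text: str
        reason: str

    @dataclass
    class ReviewQueue:
        pending: List[FlaggedMessage] = field(default_factory=list)
        escalated: List[FlaggedMessage] = field(default_factory=list)

        def flag(self, msg: FlaggedMessage) -> None:
            # The classifier only ever enqueues; it has no authority
            # to contact law enforcement on its own.
            self.pending.append(msg)

        def review(self, msg: FlaggedMessage, is_real_threat: bool) -> None:
            # A human (teacher, counselor) decides whether to escalate.
            self.pending.remove(msg)
            if is_real_threat:
                self.escalated.append(msg)

    queue = ReviewQueue()
    joke = FlaggedMessage("student_a", "speed up and run over that old lady", "violence keyword")
    queue.flag(joke)
    queue.review(joke, is_real_threat=False)  # a teacher recognizes a joke; nothing escalates
    ```

    The design point is simply that automated flagging and escalation are separated: the software can only fill `pending`, and only a human decision moves anything to `escalated`.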

  • In such a world, hoping for a different outcome would be just a dream. People always look for the easy way out, and in the end, yes, we will live under digital surveillance, like animals in a zoo. The question is how to endure it and not break down, especially in the event of collapse and poverty. Better to expect the worst and be prepared than to look for a way out, try to rebel, and end up trapped.

  • If the world is ruled by psychopaths who seek absolute power for the sake of even more power, then the very existence of such technologies will lead to very sad consequences, perhaps even to slavery. Have you heard of technofeudalism?

  • Proper gun control?
    Nah, let's spy on kids.

    No, rather, to monitor future slaves so that they are obedient.

  • And strip-searched!

    Without notifying parents

  • It's for the children!!

    /s

    To them, these are not children but wolves that can snap, so they try to turn them into obedient dogs.

  • I think that's illegal now too. Can't have anything interfering with the glorious vision of a relentlessly productive citizenry that ideally slave away for the benefits of their owners until they die in the office chair at age 74 - right before qualifying for pension.

    Well, except for the health "care" system. That's an exception, but only because the only thing better than ruthless exploitation is diversified ruthless exploitation. Gotta keep the peons on their toes, lest they get uppity.

    I think one rich man once said: I don't need a nation of thinkers, I need a nation of slaves. Unless I'm mistaken, of course.
    It's as if predators learned not to chase their prey but to raise it, giving it the illusion of freedom while in fact leading it to slaughter like cattle. I like this cattle idea, I couldn't resist lol. :3

  • Idiots and assholes exist everywhere. At least ours don't have guns.

    Yeah, they use knives instead.

  • It seems that Big Brother is watching you... But now it's already a reality. And what happens if someone commits a thoughtcrime?

  • Or we could have legislation that punishes the companies running these bullshit systems AND the authorities that approve and use them when they flop, like in this case.

    Hey, dreaming is still free (don't know how much longer though).

    How can I put this: if you only dream while sitting on the couch, then alas, everything will end sadly. Although if they implant a neurochip into your brain, you won't even be able to dream lol. :3

  • Okay, sure, but in many cases the tech in question is actually useful for lots of other things besides repression. I don't think that's the case with LLMs. They have a tiny bit of actual usefulness that's completely overshadowed by the insane skyscrapers of hype and lies built up around their "capabilities".

    With "AI" I don't see any reason to go through such gymnastics separating bad actors from neutral tech. The value in the tech is non-existent for anyone who isn't either a researcher dealing with impractically large and unwieldy datasets, or a grifter looking to profit off of bigger idiots than themselves. It has never been and will never be a useful tool for the average person, so why defend it?

  • There's nothing to defend. Tell me, would you defend someone who is a threat to you and strips you of the ability to create, making art unnecessary? No, you would go and kill that bastard before he grows up. What's the point of defending a bullet that's going to kill you? Are you crazy?
