
Mastodon says it doesn't 'have the means' to comply with age verification laws

Technology
  • Government sets up a page to verify age. You head to it, no referrer. The age check is done by a trusted entity (your government, not some sketchy big tech ass), which creates a signed cert with a short lifespan, so your kid can't reuse the one you created yesterday, and without knowing which service it is for. It does not contain a reference to your identity. You share that cert with the service you want to use; they verify the signature and your age, record the pass, and everyone is happy. Your government doesn't know that you're into ladies with big booties, the big booty service doesn't know your identity, and you wank along in private.

    But oh no, that wouldn't work because think of the... I have no clue.

    That sounds like a very functional and rational solution to the problem of age verification. But age verification isn't the ultimate goal, it's mass surveillance, which your solution doesn't work for.
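The cert flow described above can be sketched end to end. A minimal sketch, with made-up field names (`over_18`, `exp`, `nonce`); a real deployment would use public-key signatures such as Ed25519 so services only hold the issuer's public key, but an HMAC keeps the example self-contained:

```python
import hashlib
import hmac
import json
import secrets
import time

GOV_KEY = secrets.token_bytes(32)  # held by the government issuer (stand-in for a real signing key)

def issue_cert(is_over_18: bool, lifetime_s: int = 300) -> dict:
    """Issuer side: sign an anonymous, short-lived age attestation."""
    payload = {
        "over_18": is_over_18,
        "exp": int(time.time()) + lifetime_s,   # short lifespan
        "nonce": secrets.token_hex(8),          # no identity, no target service
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(GOV_KEY, blob, hashlib.sha256).hexdigest()}

def verify_cert(cert: dict) -> bool:
    """Service side: check signature, expiry, and the age flag -- nothing else."""
    blob = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(GOV_KEY, blob, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(cert["sig"], expected)
            and cert["payload"]["over_18"]
            and cert["payload"]["exp"] > time.time())

cert = issue_cert(True)
print(verify_cert(cert))  # True: valid, fresh, and anonymous
```

The point of the sketch is what the service *doesn't* see: no name, no ID number, no issuer-side record of which service the cert was shown to.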

  • I give it two years till Netflix requires you to show an ID every time you open the app, because it has R-rated movies.

    This is the same principle. The account holder agreement should make the account holder responsible for the use of the service.

    The government shouldn't be parenting our minors, their guardians should be.

    Otherwise we should put digital locks on every beer bottle, pack of cigarettes, blunt wraps, car door, etc., which require you to scan your ID before every use.

    "Kids shouldn't be driving cars, it isn't safe!"
    Yes, but somehow we have made it 100 years without requiring proof of age/license to start the car.

    And the car is far more deadly than them seeing someone naked.

    Oh, I was thinking the certificate would only be needed for signups - once the account is created, it absolutely should be on the account holder, not the service provider.

  • Government sets up a page to verify age. You head to it, no referrer. The age check is done by a trusted entity (your government, not some sketchy big tech ass), which creates a signed cert with a short lifespan, so your kid can't reuse the one you created yesterday, and without knowing which service it is for. It does not contain a reference to your identity. You share that cert with the service you want to use; they verify the signature and your age, record the pass, and everyone is happy. Your government doesn't know that you're into ladies with big booties, the big booty service doesn't know your identity, and you wank along in private.

    But oh no, that wouldn't work because think of the... I have no clue.

    The age check is done by a trusted entity (your government, not some sketchy big tech ass), which creates a signed cert with a short lifespan, so your kid can't reuse the one you created yesterday, and without knowing which service it is for.

    Sorry, not sufficient.

    Not secure.

    " I certify that somebody is >18, but I don't say who - just somebody "

    This is an open invitation to fraud. You are going to create, at the very least, a black market for these certificates, since they are anonymous but valid.

    And I'm sure some real fraudsters have even stronger ideas than I have.

  • This post did not contain any content.

    Lucky for Mastodon and other ActivityPub projects, they don't need to host any servers. People outside of regions where age verification is required can host the servers instead.

  • Oh, I was thinking the certificate would only be needed for signups - once the account is created, it absolutely should be on the account holder, not the service provider.

    Why not apply this to the ISP account holder and trust them to protect their own kids the way they see fit?

  • That sounds like a very functional and rational solution to the problem of age verification. But age verification isn't the ultimate goal, it's mass surveillance, which your solution doesn't work for.

    The fact that they haven't gone for this approach that delivers age verification without disclosing ID, when it's a common and well known pattern in IT services, very strongly suggests that age verification was never the goal. The goal is to associate your real identity with all the information data brokers have on you, and make that available to state security services and law enforcement. And to do this they will gradually make it impossible to use the internet until they have your ID.

    We really need to move community-run sites behind Tor or into i2p or something similar. We need networks where these laws just can't practically be enforced and information can continue to circulate openly.

    The other day my kid wanted me to tweak the parental settings on their Roblox account. I tried to do so and was confronted by a demand for my government-issued ID and a selfie to prove my age. So I went to look at the privacy policy of the company behind it, Persona. Here's the policy, and it's without a doubt the worst I've ever seen. It basically says they'll take every last bit of information about you and sell it to everyone, including governments.

    So I explained to my kid that I wasn't willing to do this. This is a taste of how everything will be soon.

  • Lucky for Mastodon and other ActivityPub projects, they don't need to host any servers. People outside of regions where age verification is required can host the servers instead.

    But what if the govt blocks the sites hosted outside? And the VPNs require you to do an age verification?

    The age check is done by a trusted entity (your government, not some sketchy big tech ass), which creates a signed cert with a short lifespan, so your kid can't reuse the one you created yesterday, and without knowing which service it is for.

    Sorry, not sufficient.

    Not secure.

    " I certify that somebody is >18, but I don't say who - just somebody "

    This is an open invitation to fraud. You are going to create, at the very least, a black market for these certificates, since they are anonymous but valid.

    And I'm sure some real fraudsters have even stronger ideas than I have.

    What stops non-anonymous certificates from being sold?

    If John Doe views way too much porn, do you expect the site to shut him down? They have no ability to track his usage of other sites. The authorities would have to block him after the 10,000th download.

    At that point, why does the site need to know? Either the government blocks someone's ID or they don't

    But what if the govt blocks the sites hosted outside? And the VPNs require you to do an age verification?

    Good luck blocking Tor or I2P. China already tried that.

  • Government sets up a page to verify age. You head to it, no referrer. The age check is done by a trusted entity (your government, not some sketchy big tech ass), which creates a signed cert with a short lifespan, so your kid can't reuse the one you created yesterday, and without knowing which service it is for. It does not contain a reference to your identity. You share that cert with the service you want to use; they verify the signature and your age, record the pass, and everyone is happy. Your government doesn't know that you're into ladies with big booties, the big booty service doesn't know your identity, and you wank along in private.

    But oh no, that wouldn't work because think of the... I have no clue.

    Ideally, it would be handled directly on the hardware. Allow people to verify their logged in profile, using a government-run site. Then that user is now verified. Any time an age gate needs to happen, the site initiates a secure handshake directly with the device via TLS, and asks the device if the current user is old enough. The device responds with a simple yes/no using that secure protocol. Parents can verify their accounts/devices, while child accounts/devices are left unverified and fail the test.

    Government doesn’t know what you’re watching, because they simply verified the user. People don’t need to spam an underfunded government site with requests every day, because the individual user is verified. And age gates are able to happen entirely in the background without any additional effort on the user’s side. The result is that adults get to watch porn without needing to verify every time, while kids automatically get a “you’re not age-verified” wall. And kids can’t MITM the age check, due to the secure handshake. And if it becomes common enough, even a VPN would be meaningless as adult sites will just start requiring it by default.

    For instance, on a Windows machine, each individual user would be independently verified. So if the kid is logged into their account, they’d get an age wall. But if the parent is logged into their verified account, they can watch all the porn they want. Then keeping kids away from porn is simply a matter of protecting your adults’ computer password.

    But it won’t happen, because protecting kids isn’t the actual goal. The actual goal is surveillance. Google (and other big tech firms like them) is pushing to enact these laws, because they have the infrastructure set up to verify users. And requiring verification via those big tech firms allows them to track you more.
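The per-profile device flow described above can be sketched. A minimal sketch, assuming the OS keeps one age-verified flag per logged-in profile and exposes only a yes/no answer to the site (the `DeviceAttestor` name is made up for illustration; a real design would answer over the secure handshake the comment describes):

```python
class DeviceAttestor:
    """Sketch of OS-side age attestation: the site never learns who the user is,
    only whether the currently logged-in profile passed verification."""

    def __init__(self):
        self._verified = {}  # profile name -> age-verified flag

    def mark_verified(self, profile: str) -> None:
        # Done once by the adult, via the government-run verification site.
        self._verified[profile] = True

    def answer_age_gate(self, current_profile: str) -> bool:
        # The only thing ever sent back to the site: yes or no.
        return self._verified.get(current_profile, False)

device = DeviceAttestor()
device.mark_verified("parent")
print(device.answer_age_gate("parent"))  # True: verified adult profile
print(device.answer_age_gate("kid"))     # False: unverified child profile
```

As the comment notes, the whole scheme then reduces to ordinary OS account hygiene: keep the adult profile behind a password.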

  • It does not contain a reference to your identity.

    But they know who they issued it to, and can secretly subpoena your data from your instance.

    No, thank you.

    They (the govt) would know that they issued a certificate to, e.g., lemmy.dbzer0.com

    They can't know that the certificate is issued to conmie

    Unless, of course, the instance logs the age certificate used by each user

    And also, unless the govt's age verification service logs the certificate issued to each citizen

  • Why not apply this to the ISP account holder and trust them to protect their own kids the way they see fit?

    Philosophically I agree with you. I was just discussing a technological way to accomplish age verification without giving up users' identities to a service provider, or the government knowing what service you're using. Unfortunately, too many governments want to know what you're doing inside your pants.

  • Philosophically I agree with you. I was just discussing a technological way to accomplish age verification without giving up users' identities to a service provider, or the government knowing what service you're using. Unfortunately, too many governments want to know what you're doing inside your pants.

    Yeah, there is likely a tech answer to this that would work. Coming up with one and them choosing not to use it makes it even clearer that kids' safety isn't their goal.

  • Government sets up a page to verify age. You head to it, no referrer. The age check is done by a trusted entity (your government, not some sketchy big tech ass), which creates a signed cert with a short lifespan, so your kid can't reuse the one you created yesterday, and without knowing which service it is for. It does not contain a reference to your identity. You share that cert with the service you want to use; they verify the signature and your age, record the pass, and everyone is happy. Your government doesn't know that you're into ladies with big booties, the big booty service doesn't know your identity, and you wank along in private.

    But oh no, that wouldn't work because think of the... I have no clue.

    Because it's not actually about age verification, it's about totalizing surveillance of everyone.

  • This post did not contain any content.

    Hey, UK! When you are being compared to Mississippi, you are fucking up very very badly.

  • That sounds like a very functional and rational solution to the problem of age verification. But age verification isn't the ultimate goal, it's mass surveillance, which your solution doesn't work for.

    Don't forget censorship.

  • What stops non-anonymous certificates from being sold?

    If John Doe views way too much porn, do you expect the site to shut him down? They have no ability to track his usage of other sites. The authorities would have to block him after the 10,000th download.

    At that point, why does the site need to know? Either the government blocks someone's ID or they don't

    What stops

    Not useful to look at it in such a black-and-white manner. The opportunities are presumably fewer, and certainly not as obvious.

  • The fact that they haven't gone for this approach that delivers age verification without disclosing ID, when it's a common and well known pattern in IT services, very strongly suggests that age verification was never the goal. The goal is to associate your real identity with all the information data brokers have on you, and make that available to state security services and law enforcement. And to do this they will gradually make it impossible to use the internet until they have your ID.

    We really need to move community-run sites behind Tor or into i2p or something similar. We need networks where these laws just can't practically be enforced and information can continue to circulate openly.

    The other day my kid wanted me to tweak the parental settings on their Roblox account. I tried to do so and was confronted by a demand for my government-issued ID and a selfie to prove my age. So I went to look at the privacy policy of the company behind it, Persona. Here's the policy, and it's without a doubt the worst I've ever seen. It basically says they'll take every last bit of information about you and sell it to everyone, including governments.

    So I explained to my kid that I wasn't willing to do this. This is a taste of how everything will be soon.

    Fuck, I went through that with VRchat...

  • This post did not contain any content.

    If it's a law, it should be free for both businesses and users.

  • I think this starts to not work when you start to include other states that want to do this, other countries, cities, counties, etc.. How many trusted authorities should there be and how do you prevent them from being compromised and exploited to falsely verify people? How do you prevent valid certs from being sold?

    Some examples of the type of service you mentioned:

    How do you prevent valid certs from being sold?

    Sold by whom? The created cert can be time-limited and single-use, so the service couldn't really sell them. You could rate-limit how many certs users can create, and obviously make it illegal to share them in order to deter people from doing so. That's not enough to prevent it completely, but it should be an improvement for the use cases I hear the most about: social media (because it reduces the network effect) and porn (because kids will at least know that they're doing some real shady shit).
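The "time limited and single use" part can be sketched on the service side: remember each cert's one-time nonce until the cert would have expired anyway, and reject anything seen twice. A minimal sketch, assuming each cert carries a unique `nonce` string and an `exp` expiry timestamp:

```python
import time

class ReplayGuard:
    """Service-side check that makes anonymous certs single-use:
    a spent nonce is remembered until its cert expires on its own."""

    def __init__(self):
        self._seen = {}  # nonce -> expiry timestamp

    def accept(self, nonce: str, exp: float) -> bool:
        now = time.time()
        # Forget nonces whose certs are expired either way; keeps state bounded.
        self._seen = {n: e for n, e in self._seen.items() if e > now}
        if exp <= now or nonce in self._seen:
            return False  # expired, or already spent: reject
        self._seen[nonce] = exp
        return True

guard = ReplayGuard()
print(guard.accept("abc123", time.time() + 300))  # True: first use
print(guard.accept("abc123", time.time() + 300))  # False: replayed
```

Because spent nonces only need to be kept for the cert's short lifespan, the bookkeeping stays small, which is exactly what makes the short-lifespan design practical.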
