
EU age verification app to ban any Android system not licensed by Google

Technology
  • A phone can also be shared. If it happens at scale, it will be flagged pretty quickly. It's not a real problem.

    The only real problem is the very intention of such laws.

    If it happens at scale, it will be flagged pretty quickly.

    How? In a correct implementation, the 3rd parties only receive proof-of-age, no identity. How will re-use and sharing be detected?

  • Which is why Europeans shouldn't be too eager to laugh about the US being a fascist hellhole. It could happen there again if they're not vigilant.

    Dude, I keep telling my possibly AfD voting cousin we're just a few years behind the US if things continue as they do. Our politicians aren't better people, they're just sneakier for now.

  • No one is laughing... We're horrified at how the people who have been screaming "freedom" and being obnoxious about how much more free they are than anyone else in the entire universe seem to love getting enslaved while being obnoxious about how cool it is to be enslaved.

    Europe has its problems. We've had them for generations, and right now they're getting worse. But at least we have a culture of fighting back, something Americans don't.

    But at least we have a culture of fighting back, something Americans don't.

    Talk is cheap. Prove it in the coming years. I really hope you're right, because I want SOMEWHERE to not be either a corporate fascist hellhole or a collapsed country in the future.

  • Dude, I keep telling my possibly AfD voting cousin we're just a few years behind the US if things continue as they do. Our politicians aren't better people, they're just sneakier for now.

    The way that the EU has been bending over for Trump is worrying.

  • If it happens at scale, it will be flagged pretty quickly.

    How? In a correct implementation, the 3rd parties only receive proof-of-age, no identity. How will re-use and sharing be detected?

    There are 3 parties:

    1. the user
    2. the age-gated site
    3. the age verification service

    The site (2) sends the request to the user (1), who passes it on to the service (3) where it is signed and returned the same way. The request comes with a nonce and a time stamp, making reuse difficult. An unusual volume of requests from a single user will be detected by the service.
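
    A minimal sketch of that flow, just to make the moving parts concrete (assuming an Ed25519-signing service; the message format and every name below are invented for illustration and are not the actual EU implementation):

    ```python
    # Toy sketch of the 3-party flow described above. NOT the real EU design:
    # the message format, key type, and names are all invented for illustration.
    # Requires the third-party "cryptography" package.
    import json, os, time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # (3) The verification service holds a signing key; sites trust its public key.
    service_key = Ed25519PrivateKey.generate()
    service_pub = service_key.public_key()

    # (2) The age-gated site issues a challenge: a random nonce plus a timestamp.
    def site_make_challenge() -> dict:
        return {"nonce": os.urandom(16).hex(), "ts": int(time.time())}

    # (3) The service signs an attestation bound to exactly that challenge.
    #     In this sketch it never learns which site the challenge came from.
    def service_sign(challenge: dict) -> dict:
        payload = json.dumps({"challenge": challenge, "over_18": True}, sort_keys=True)
        return {"payload": payload, "sig": service_key.sign(payload.encode()).hex()}

    # (2) The site verifies the signature, checks that its own nonce is echoed
    #     back, enforces freshness, and remembers used nonces to block replays.
    seen_nonces = set()
    def site_verify(challenge: dict, response: dict, max_age_s: int = 300) -> bool:
        try:
            service_pub.verify(bytes.fromhex(response["sig"]), response["payload"].encode())
        except InvalidSignature:
            return False
        data = json.loads(response["payload"])
        fresh = time.time() - challenge["ts"] < max_age_s
        ok = (data["challenge"] == challenge and data["over_18"] and fresh
              and challenge["nonce"] not in seen_nonces)
        if ok:
            seen_nonces.add(challenge["nonce"])
        return ok

    # (1) The user only relays messages between the site and the service.
    challenge = site_make_challenge()
    response = service_sign(challenge)
    print(site_verify(challenge, response))  # True
    print(site_verify(challenge, response))  # False: nonce already consumed
    ```

    The nonce and timestamp stop a single signed response from being replayed; whether the service can notice one credential vouching for many people at scale is a separate question, which the rest of this thread gets into.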

  • What's going on with Europe lately? You all really want GOOGLE of all mega corps in control of your identity?

    You're going the opposite way; it should be your right to install an alternate OS on your phone. If anything, they should be banning Google-licensed Android.

    We don't want it. VdL is one of the most corrupt people in politics and unfortunately has a lot of influence

  • I was saying the EU has done some great things, not that censorship has good sides

    Ah, my apologies. It was unclear

  • This post did not contain any content.

    How long before that extends to PCs and non-Windows OSes get blocked? Also, add non-Chrome browsers to that (Edge, Chromium, Brave, etc., as well as Firefox and its forks).

  • The GDPR also applies to public institutions as far as I'm aware - but most importantly, the concern here is Google and the data collected by Google. This data collection is in no way necessary to provide the age verification service. Most of it is not even related to it. The state legally cannot force you to agree to some corporation's (i.e. Google's) terms, even if we completely ignore the GDPR.

    Data processing mandated by law is legal. Governments can pass laws, unlike private actors. Public institutions are bound by GDPR, but can also rely on provisions that give them greater leeway.

    I don't see how this is in any way necessary, either. But a judge may be convinced by the claim that this is industry-standard best practice to keep the app safe. In any case, there may be some finer points to the law.

    The state legally cannot force you to agree to some corporation's (i.e. Google's) terms,

    I'm not too sure about that, either. For example, when you are out of work, the state will cause you trouble if you do not find the jobs offered to you acceptable.

    It's another question whether not having access to age-gated content is so bad as to amount to forcing you into anything. Minors nominally have the same rights as full citizens, and they are to be denied access, too.

  • You're right but the example you gave seems to illustrate a different effect that's almost opposite — let me explain.

    The phrase "politically correct" is language which meant something very specific, that was then hijacked by the far-right into the culture war where its meaning could be hollowed out/watered down to just mean basically "polite", then used interchangeably in a motte-and-bailey style between the two meanings whenever useful, basically a weaponized fallacy designed to scare and confuse people — and you know that's exactly what it's doing by because no right-winger can define what this boogeyman really means. This has been done before with things like: Critical Race Theory, DEI, cancel culture, woke, cultural Marxism, cultural bolshevism/judeo bolshevism (if you go back far enough), "Great Replacement", "illegals", the list goes on.

    I see your point. I should've limited my citation to the phrase's authoritarian origins from the early 20th century.

    To clarify, the slippery slope towards "political correctness" I wanted to describe is a sort of corporate techno-feudalist language bereft of any real political philosophy or moral epistemology. It is the language of LinkedIn, the "angel investor class", financiers, cavalier buzzwords, sweeping overgeneralizations, and hyperbole. Yet, fundamentally, it will aim to erase any class awareness, empiricism, or contempt for arbitrary authority. The idea is to impose an avaricious financial-might-makes-right for whatever-we-believe-right-now way of thinking in every human being.

    What I want to convey is that there is an unspoken effort by authoritarians of the so-called "left" and "right" who unapologetically yearn for a hybridization of Huxley's Brave New World and Orwell's 1984 dystopian models, sometimes loudly proclaimed and other times subconsciously suggested.

    These are my opinions and not meant as gospel.

  • It's not the populace; our politicians, just like in the US, have gone rogue. People are voting for the nutters due to anti-immigration propaganda and so are increasingly shifting far right. It's happening across the entire Western world and it's bad news for everyone.

    Except this isn't even the right-wing nutters doing it. These are mainstream politicians executing their power-grabbing neolib agenda, with very little democratic oversight or public debate.

  • European Digital Identity

    looks inside:

    Hosted on GitHub in the US 👏

    That's ironic

  • Ah, my apologies. It was unclear

    My bad

    My instance could also hint at it 😉

  • I see your point. I should've limited my citation to the phrase's authoritarian origins from the early 20th century.

    To clarify, the slippery slope towards "political correctness" I wanted to describe is a sort of corporate techno-feudalist language bereft of any real political philosophy or moral epistemology. It is the language of LinkedIn, the "angel investor class", financiers, cavalier buzzwords, sweeping overgeneralizations, and hyperbole. Yet, fundamentally, it will aim to erase any class awareness, empiricism, or contempt for arbitrary authority. The idea is to impose an avaricious financial-might-makes-right for whatever-we-believe-right-now way of thinking in every human being.

    What I want to convey is that there is an unspoken effort by authoritarians of the so-called "left" and "right" who unapologetically yearn for a hybridization of Huxley's Brave New World and Orwell's 1984 dystopian models, sometimes loudly proclaimed and other times subconsciously suggested.

    These are my opinions and not meant as gospel.

    I get what you mean. You're saying we're sliding towards something that brings back political correctness in its original definition, and I agree with you.

    The idea is to impose an avaricious financial-might-makes-right

    This resonates a lot. I'd argue we're already there. All this talk of "meritocracy" (fallaciously opposed to "DEI"), the prosperity gospel (that one's even older), it's all been promoting this idea of worthiness determined by net worth. Totalitarianism needs a socially accepted might-makes-right narrative wherever it can find it, then that can be the foundation for the fascist dogma/cult that will justify the regime's existence and legitimize its disregard for human life. Bonus points if you can make that might-makes-right narrative sound righteous (e.g. "merit" determines that you "deserve" your wealth, when really it's a circular argument: merit is never questioned for those who have the wealth, it's always assumed because how else could they have made that much money!).

  • There are 3 parties:

    1. the user
    2. the age-gated site
    3. the age verification service

    The site (2) sends the request to the user (1), who passes it on to the service (3) where it is signed and returned the same way. The request comes with a nonce and a time stamp, making reuse difficult. An unusual volume of requests from a single user will be detected by the service.

    from a single user

    Neither 2 nor 3 should receive information about the identity of the user, making it difficult to count the volume of requests by user?

  • from a single user

    Neither 2 nor 3 should receive information about the identity of the user, making it difficult to count the volume of requests by user?

    Strictly speaking, neither needs to know the actual identity. However, the point is that both are supposed to receive information about the user's age. I'm not really sure what your point is.

  • Strictly speaking, neither needs to know the actual identity. However, the point is that both are supposed to receive information about the user's age. I'm not really sure what your point is.

    I must not be explaining myself well.

    both are supposed to receive information about the user's age

    Yes, that's the point. They should be receiving information about age, and age only. Therefore they lack the information to detect reuse.

    If they are able to detect reuse, they receive more (and personally identifying) information, which shouldn't be the case.

    The only known way to include a nonce without releasing identifying information to the 3rd parties is using a DRM-like chip. This results in the sovereignty and trust issues I referred to earlier.

  • No one is laughing... We're horrified at how the people who have been screaming "freedom" and being obnoxious about how much more free they are than anyone else in the entire universe seem to love getting enslaved while being obnoxious about how cool it is to be enslaved.

    Europe has its problems. We've had them for generations, and right now they're getting worse. But at least we have a culture of fighting back, something Americans don't.

    In Hungary, we still have people who think fascism is when "evil people do evil things for the sake of evil", so when fascists want to hurt Roma, LGBTQIA+, etc. people, no one dares to call them fascists as long as said people have "receipts" in the form of cobbled together statistics, and have a not too cruel solution.

  • We don't want it. VdL is one of the most corrupt people in politics and unfortunately has a lot of influence

    VdL = Ursula von der Leyen to the uninitiated. Conservative politician, but the more boring kind, not the Orbán-style post-fascism kind.

  • I must not be explaining myself well.

    both are supposed to receive information about the user's age

    Yes, that's the point. They should be receiving information about age, and age only. Therefore they lack the information to detect reuse.

    If they are able to detect reuse, they receive more (and personally identifying) information, which shouldn't be the case.

    The only known way to include a nonce without releasing identifying information to the 3rd parties is using a DRM-like chip. This results in the sovereignty and trust issues I referred to earlier.

    The site would only know that the user's age is being vouched for by some government-approved service. It would not be able to use this to track the user across different devices/IPs, and so on.

    The service would only know that the user is requesting that their age be vouched for. It would not know for what. Of course, they would have to know your age somehow. E.g., they could sell access in shops, like alcohol is sold in shops. The shop checks the ID. The service then only knows that you have login credentials bought in some shop. Presumably these credentials would not remain valid for long.

    They could use any other scheme, as well. Maybe you do have to upload an ID, but they have to delete it immediately afterward. And because the service has to be in the EU, government-certified with regular inspections, that's safe enough.

    In any case, the user would have to have access to some sort of account on the service. Activity related to that account would be tracked.
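
    For the "activity would be tracked" part, here is a tiny sketch (my own illustration, not the real EUDI wallet design) of how volume-based flagging could work even if the service only ever sees an opaque credential ID rather than a real identity; the window size and threshold are made-up numbers:

    ```python
    # Hypothetical volume check keyed on an opaque credential ID, not a real
    # identity. Window and threshold are invented numbers for illustration.
    import time
    from collections import defaultdict, deque

    WINDOW_S = 3600       # look at the last hour of activity per credential
    MAX_PER_WINDOW = 60   # assumed "normal personal use" ceiling

    request_log: dict[str, deque] = defaultdict(deque)

    def record_request(credential_id: str, now: float | None = None) -> bool:
        """Log one attestation request; return True if the credential looks shared."""
        now = time.time() if now is None else now
        log = request_log[credential_id]
        log.append(now)
        # Drop entries that have aged out of the window.
        while log and now - log[0] > WINDOW_S:
            log.popleft()
        return len(log) > MAX_PER_WINDOW

    # A credential passed around at scale crosses the threshold quickly, even
    # though the service only ever sees the opaque ID "cred-1234".
    print(any(record_request("cred-1234", now=float(i)) for i in range(100)))  # True
    ```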


    If that is not good enough, then your worries are not about data protection. Mine aren't; I reject this for different reasons.
