A long, long time ago (within the past ten years), I had to verify my age with a site. They didn't ask for my ID or my facial scan, but instead asked for my credit card number. They issued a refund of a few cents to the card, and I had to tell them (within 24 hours) how much the refund was for, after which point they'd issue a charge to claw it back. They made it clear that debit and gift cards would not be accepted; it had to be a credit card. So I grabbed my Visa card, punched in the numbers, checked my banking app to see the +$0.24 refund, entered the value, got validated, and had another -$0.24 charge to claw it back.
Voila, I was verified as an adult, because I could prove I had a credit card.
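For illustration, that refund-and-confirm flow boils down to something like the following sketch (the function names and the cent range are my own invention, not the site's actual implementation):

```python
import secrets

# Hypothetical sketch of the micro-refund card check described above:
# the site refunds a random number of cents, and the cardholder must
# report the exact amount before the clawback charge is issued.

def issue_challenge() -> int:
    # Refund between 1 and 99 cents to the card on file.
    return secrets.randbelow(99) + 1

def check_answer(refunded_cents: int, claimed_cents: int) -> bool:
    # Verified only on an exact match; either way, a clawback
    # charge for the refunded amount follows.
    return refunded_cents == claimed_cents
```

Only someone with access to the card's statement can read off the amount, which is the whole trick.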
The whole point of mandating facial recognition or ID checks isn't to make sure you're an adult, but to keep records of who is consuming those services and tie their identities back to specific profiles. Providers can swear up and down that they don't retain that information, but they often use third parties who may or may not abide by those same promises, especially if the government comes knocking with a secret warrant or subpoena.
Biometric validation is surveillance, plain and simple.
ndriscoll 10 hours ago [-]
That was, in fact, what COPA mandated in the US in 1998, and SCOTUS struck it down as too onerous in Ashcroft v. American Civil Liberties Union, kicking off the last 20 years of essentially completely unregulated Internet porn commercially available to children with nothing more than clicking an "I'm 18" button. At the time, filtering was seen as a better solution. Nowadays filtering is basically impossible thanks to TLS (with things like DoH and ECH being deployed to lock that down even further), apps that ignore user CAs and use attestation to lock out owner control, cloud CDNs, TLS fingerprinting, and extreme consolidation of social media (e.g. discord being for both minecraft discussions and furry porn).
Dylan16807 7 hours ago [-]
Despite TLS, filtering is easier to set up now than it was in 1998. You might have to block some apps in the short term, but if you suggest apps can avoid age verification if they stop pinning certificates then they'll jump at the option.
Consolidation is the only tricky part that's new.
stavros 1 hours ago [-]
Jesus, how does your society still function when underage people can see videos of people having sex?! It's one thing for minors to be having sex, but to watch others doing it? Reprehensible.
Loic 20 minutes ago [-]
I suppose you do not have children. I am open-minded, mid-40s. The level of violence in the porn you can access with just one click bears no comparison to what I could access as a kid (basically nothing).
With the net, you get access in one click to the worst and the best. It is a lot of work as a parent to educate kids about that.
As kids, teenagers, and even as 20-somethings, if we wanted to experiment, we had to physically access the media or be physically present. It was not on-demand over a screen.
So I filter the access at home while also trying my best to educate. This is not easy, and I can understand why non-tech-savvy people ask for more laws, even though I am personally against them.
The article is pretty well balanced, we have no silver bullet here.
stavros 19 minutes ago [-]
Sure, but if the goal is to minimise access to violence, why did the GP say "they can access porn" instead of "they can access violence"? I doubt the two are synonymous.
Aeglaecia 50 minutes ago [-]
it is recommended not to employ sarcasm when counterpoints are easily available
stavros 43 minutes ago [-]
Unfortunately, when counterpoints are easily available, I expect the person to have already thought of them, hence the sarcasm.
gjsman-1000 7 hours ago [-]
This has already come up before the Supreme Court, with the argument that filtering was a less invasive technique to fulfill the government’s legitimate interests back in the early 2000s.
That ship has sailed. Even the opposition admits that trying to get everyone to filter is not going to work and is functionally insignificant. The only question is whether age verification is still too onerous.
Terr_ 7 hours ago [-]
> trying to get everyone to filter
We never needed everyone to filter, just the parents who are busy lobbying the government to impose crap onto every possible service and website across the entire world.
Instead, they should purchase devices for their kids that have a child lock and client-side filters. All a site has to do is add an HTTP header loosely characterizing its content.
1. Most of the dollar costs of making it all happen will be paid by the people who actually need/use the feature.
2. No toxic Orwellian panopticon.
3. Key enforcement falls into a realm non-technical parents can actually observe and act upon: What device is little Timmy holding?
4. Every site in the world will not need a monthly update to handle Elbonia's rite of manhood on the 17th lunar year to make it permitted to see bare ankles. Instead, parents of that region/religion can download their own damn plugin.
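As a sketch of how cheap both sides of that could be, here is a hypothetical rating header plus a client-side filter (the header name, values, and function names are invented for illustration; this is not an existing standard):

```python
# Hypothetical sketch: the site attaches one coarse rating header,
# and a filter on the child's device decides whether to render.
RATING_HEADER = "x-content-rating"  # invented name, not a standard

def serve(path: str) -> dict:
    # Server side: label responses, e.g. by URL prefix.
    rating = "adult" if path.startswith("/nsfw/") else "general"
    return {"status": 200, "headers": {RATING_HEADER: rating}}

def child_filter(response: dict) -> bool:
    # Client side: render only content not labeled "adult".
    return response["headers"].get(RATING_HEADER) != "adult"
```

A region- or religion-specific plugin would just swap in its own `child_filter`.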
rapind 5 hours ago [-]
There's a peer / social issue at play as well though. If you believe that smart phones are disastrous for kids (I happen to think so), and don't allow your 13yo daughter to have one, you are pretty much forcing her to be the odd one out. Maybe that's OK for some parents, but you can't deny that this cost exists.
Preventing your son from playing certain video games that all of his friends enjoy also has a social cost.
This is why I think it's great when schools ban phones in class. When left up to the parents individually it's an absolute disaster.
These are just some specific examples of where I think the nanny state can be beneficial. For most things in general, though, I'd prefer people govern themselves (and their kids) whenever possible.
iteria 13 minutes ago [-]
So does being vegetarian or vegan. So does not belonging to the dominant culture in any aspect of life. That's a decision for parents to make, and honestly "they'll be left out" is such a crap parenting take, especially since it's a bunch of parents who all think this way but don't want their kids to have access. If they actually talked to each other, or just took a stand so people could see, we wouldn't even have this so-called social cost.
I'm seeing this as a parent in real time. I'm actually changing the behavior of my kid's friends' parents by simply being like, "Cool, but my kid isn't/is going to do that." I don't know when parenting by social committee became a thing, but I don't believe in it.
Terr_ 4 hours ago [-]
> This is why I think it's great when schools ban phones in class.
Agreed on the classroom angle, there are many reasons (e.g. cheating, concentration) to treat the availability of devices in a uniform way there.
> If you believe that smart phones are disastrous for kids
A focus on the handheld device also makes it easier to handle other related concerns that can't really be solved any other way, like "no social-media after bedtime."
chatmasta 10 hours ago [-]
Is card verification a lesser form of surveillance? And there’s a good chance your card issuer (or your bank, one hop away from it) has your biometrics anyway.
I don’t like either of them… (And why does YouTube ask me to verify my age when I’m logged into a Google account I created in 2004?)
stego-tech 10 hours ago [-]
Oh, make no mistake, I hate both of these. I loathe this forced surveillance of everyone because parents can't be bothered to supervise and teach their children about the most primary of human animal functions (sex), regardless of their reasons for it.
I take great pains to keep minors out of my adult spaces, and don't have to resort to anything as invasive as biometric surveillance or card charges. This notion that the entire world should be safe for children by default, and that anything and everything adult should be vilified and locked up, is toxic as all get-out and builds shame into the human animal over something required for the perpetuation of the species.
The adult content isn't the problem, it's the relationship some folks have towards it that's the issue. That's best corrected by healthy intervention early on, not arbitrary age checks everywhere online that mainly serve as an exercise of power by the ruling class against "undesirable" elements of society.
john01dav 9 hours ago [-]
> I take great pains to keep minors out of my adult spaces, and don't have to resort to anything as invasive as biometric surveillance or card charges.
What sort of spaces are these (online or in person), and how do you enforce this? I have an online space where such non invasive measures could be useful.
stego-tech 8 hours ago [-]
Mine are rooted in the 90s/00s internet: I know the people I allow into my spaces, and extend to them a degree of trust to let others in who are also of legal age. I rotate the credentials every so often at random, forcing everyone to request the new password from me. Other spaces I inhabit also operate off this sort of "community trust" system, only letting in folks we already know ourselves. It's how we keep out minors and trolls, as well as just bad/no-longer-trusted actors.
It's inconvenient, sure, and it's not SEO-friendly, but it generally works and doesn't require checking IDs or doing biometric verifications. The thing is, I'm building a community, not a product, and therefore don't have the same concerns as, say, PornHub, for checking IDs. It's also not a scalable solution - I have to build individual rapports with people I can then trust to have the access keys to my space(s), and then monitor that trust at each password change to ensure it's not violated. It's hard work, but it's decently reliable for my needs.
For larger/at-scale providers...I think the better answer is just good-old-fashioned on-device or home-network filtering. The internet was NEVER meant to be child-friendly, and we need to make it abundantly clear to parents that it's never going to be so they take necessary steps to protect their children. I'd personally like to see more sites (in general, not just adult) contribute their domain names and CDNs to independent list maintainers (or published in a help article linked via their main footer) so individuals and organizations can have greater control over their online experience. I think if someone wants to, say, block the entire domain ranges of Amazon for whatever reason, then that information should be readily available without having to watch packet flows and analyzing CDN domain patterns.
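To make that concrete: a home-network filter consuming such a published domain list could be as small as this sketch (the list contents and `.test` domains are placeholders):

```python
# Hypothetical sketch: block a domain if it, or any parent domain,
# appears on a community-maintained blocklist.
BLOCKLIST = {"adult-example.test", "cdn.adult-example.test"}

def should_block(hostname: str) -> bool:
    parts = hostname.lower().rstrip(".").split(".")
    # Check "a.b.c", then "b.c", then "c" against the list.
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))
```

The hard part isn't the code, it's sites publishing accurate lists in the first place.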
It's just good netiquette, I think, but I'm an old-fashioned dinosaur in that regard.
jjmarr 2 hours ago [-]
> This notion that the entire world should be safe for children by default, and that anything and everything adult should be vilified and locked up, is toxic as all get-out and builds shame into the human animal over something required for the perpetuation of the species.
The world should be safe for kids because kids are the future of our society. When the world isn't safe, families won't have kids and society will start to decline. Maybe that means giving up some of the privileges you have. That's the cost of our future.
nehal3m 35 minutes ago [-]
“Censorship is telling a man he can't have a steak just because a baby can't chew it.”
― Mark Twain
lucb1e 2 hours ago [-]
> Is card verification a lesser form of surveillance?
It's not just about which is the worse surveillance; it's also simply that everyone has a face but not everyone has a credit card. I'm not deemed creditworthy in this country I moved to (never had a debt in my life, but they don't know that), so the card application got rejected. Do we want to upload biometrics, or exclude poor and unknown people from "being 18"? I really don't know which is the lesser poison.
> (And why does YouTube ask me to verify my age when I’m logged into a Google account I created in 2004?)
I'd guess they didn't want to bother with that edge case. Probably <0.01% of active YouTube accounts are >18 years old.
Dylan16807 7 hours ago [-]
> And why does YouTube ask me to verify my age when I’m logged into a Google account I created in 2004?
Yeah those checks are super annoying. The internet has been around long enough, mechanisms for this should exist.
And even in the shorter term: if I had to be 13 to make this account, and it has been more than 5 years, maybe relax?
zoklet-enjoyer 8 hours ago [-]
Why/how would my bank have my biometrics?
sph 2 hours ago [-]
I logged into my Starling Bank account on a new phone, and I had to film my face reading a 6 digit number.
chatmasta 7 hours ago [-]
They almost certainly have a photo of your passport or other identification.
subscribed 7 hours ago [-]
For the purpose of KYC checks.
That doesn't mean every service provider (discord, roblox, pornhub) should have the same.
chatmasta 5 hours ago [-]
No, but the original thread was about providing your credit card number to these service providers. I’m saying that’s one hop from your bank, who has your biometric information.
SoftTalker 7 hours ago [-]
PayPal used this method for identity (or at least account) verification back in the very early days, IIRC. They made a very small deposit, and I think they just let you keep it, but I can't recall for sure.
whiplash451 4 hours ago [-]
What you describe is called QES (Qualified Electronic Signature) and is still widely used to validate identities.
Unfortunately it is not enough to prove an identity (you could be using the credit card of your traveling uncle), so regulation requires it to be combined with another proof.
I see a lot of people associating identity verification with evil intent (advertising, tracking).
I work in this domain and the reality is a lot less interesting: identity verification companies do this and only this, under strict scrutiny both from their customers and from the regulators.
We are not where we want to be from a privacy standpoint but the industry is making progress and the usage of identity data is strictly regulated.
high_priest 7 hours ago [-]
I had a debit card when I was 13. An absolute godsend during international travel, not having to bother with cash as a forgetful teenager.
The card providers share your identity in monetary transactions, but I don't think this data does, or should, include your birthdate.
Symbiote 4 hours ago [-]
These checks accept only a credit card.
That's useful as one option, but it can't be expected of 18-year-olds in most countries, or of older adults in many.
oalae5niMiel7qu 6 hours ago [-]
Credit cards are trivially traceable to your legal identity, since anti-money-laundering and know-your-customer laws require that credit card companies keep this information. The government can subpoena this information just as easily as they could with pictures of your face or ID.
pests 3 hours ago [-]
How do you prove the person typing in the credit card details is the same person who owns the card?
I know I've read stories of kids taking cards to purchase games or other things online numerous times over the last 20+ years.
jen729w 11 hours ago [-]
> Biometric validation is surveillance, plain and simple.
Eh. It's just easier and cheaper. I'll bet Discord has outsourced this to one of those services that ask you for a face scan when you sign up to [some other service].
jgaa 51 minutes ago [-]
This is never about protecting the children.
This is always about government overreach.
People are less likely to criticize the government, or even participate in political debate, if their online identities are known by the government. Governments like obedient, scared citizens.
The only ethical response to laws like this, is for websites and apps to terminate operations completely in countries that create them. Citizens who elect politicians without respect for human rights and privacy don't really deserve anything nice anyway.
9dev 39 minutes ago [-]
That’s a very strange take on governments, treating them as a singular entity. A government that deserves the name is first and foremost an elected set of representatives of the constituents, and thus, like the citizens that vote for them, acts in their interests.
If the government is not working like that, you have an administrative problem, not a societal one. A state is its population.
xphos 7 hours ago [-]
I don't think the problem is that young people are finding porn on the internet. There is a problem, though, and it has to do with psychological warfare on attention.
Formats like shorts, or news feeds delivered to you algorithmically with zero lag, are the problem. They make for the zombification of decision-making. Endless content breaks people down precisely because it's endless. I think if you add age verification but don't root out the endless nature, you will not really help any young person or adult.
When you look at people with unhealthy content addiction, it is always a case of excess and not necessarily the type of content. There are pedophiles, but honestly, we have had that throughout all time, with and without the internet. But the endless feeding of the next video robs people of the ability to stop, by mentally addicting them to seeing just one more. And because content is not really infinite, endless feeds will invariably feed people porn, eating disorders, and other "crap" in quantities that slowly erode them.
Hyperboreanal 4 hours ago [-]
[flagged]
timewizard 3 hours ago [-]
> 2 straight generations of porn addicts
Different types of pornography have different dangers and all of it has been broadly available since before the internet.
> And then you have shit like watchpeopledie.tv.
I think there's a broad gulf between these activities and I don't think they impact the brain in the same way as pornography. This type of violence can be found in movies and video games which also clearly predate the internet.
> Children should have been banned from the internet a decade ago
I'd rather pornography be banned.
> I'm completely willing to give up some privacy to make it happen.
Why? It should be incumbent on the people profiting from this activity to police it not on me to give up constitutional rights to protect their margins.
jjice 21 hours ago [-]
Aside from the privacy nightmare, what about someone who is 18 and just doesn't have the traditional adult facial features? Same thing for someone who's 15 and hit puberty early? I can imagine that on the edges, it becomes really hard to discern.
If they get it wrong, are you locked out? Do you have to send an image of your ID? So many questions. Not a huge fan of these recent UK changes (looking at the Apple E2E situation as well). I understand what they're going for, but I'm not sure this is the best course of action. What do I know though :shrug:.
joeyh 18 hours ago [-]
Wise (née Transferwise) requires a passport-style photo taken by a webapp for KYC when transferring money. I was recently unable to complete that process over a dozen tries, because the image processing didn't like something about my face. (The photos met all the criteria.)
On contacting their support, I learned that they refused to use any other process. It also became apparent that they had outsourced it to some other company and had no insight into the process, and so no way to help. Apparently closing one's account causes an escalation to a team who determines where to send the money, which would presumably put some human flexibility back into the process.
(In the end I was able to get their web app to work by trying several other devices, one had a camera that for whatever reason satisfied their checks that my face was within the required oval etc.)
rlpb 11 hours ago [-]
> On contacting their support, I learned that they refused to use any other process.
I suspect this won't help you, but I think it's worth noting that the GDPR gives people the right to contest any automated decision-making that was made on a solely algorithmic basis. So this wouldn't be legal in the EU (or the UK).
roenxi 21 hours ago [-]
Also, a key point in the framing: when was it decided that Discord is supposed to be the one enforcing this? A pop-up saying "you really should be 18+" is one thing, but this sounds like a genuine effort to lock out young people. Neither Discord nor a government ratings agency should be taking final responsibility for how children get brought up; that seems like something parents should be responsible for.
This is over-reach. Both in the UK and Australia.
KaiserPro 13 hours ago [-]
When a corner shop sells cigarettes to minors, who's breaking the law?
When a TV channel broadcast porn, who gets fined?
These are accepted laws that protect kids from "harm", which are relatively uncontroversial.
Now, the privacy angle is very much the right question. But as Discord is the one that is going to get fined, they totally need to make sure kids aren't being exposed to shit they shouldn't be seeing until they are old enough, in the same way the corner shop needs to make sure they don't sell booze to 16-year-olds.
Now, what is the mechanism that Discord should/could use? that's the bigger question.
Can government provide foolproof, secure, private, and scalable proof-of-age services? How can private industry do it? (Hint: they won't, because it's a really good source of profile information for advertising.)
jkaplowitz 13 hours ago [-]
At least the ways that a corner shop verifies age don't have the same downsides as typical online age verifiers. They just look at an ID document; verify that it's on the official list of acceptable ID documents, seems to be genuine and valid and unexpired, appears to relate to the person buying the product, and shows an old enough age; and hand the document back.
The corner shop has far fewer false negatives, far lower data privacy risk, and clear rules that if applied precisely won't add any prejudice about things like skin color or country of origin to whatever prejudice already exists in the person doing the verification.
nonchalantsui 12 hours ago [-]
That's exactly how a digital ID system would work, and yet people argue against those all the time as well.
Additionally, the corner shop does not have far lower data privacy risks; it's actually worse. They have you on camera, and a witness who can corroborate that you are the person on camera, alongside a paper trail for your order. There is no privacy there, only the illusion of it.
jkaplowitz 11 hours ago [-]
By data privacy risks I meant the risk of a breach, compromise, or other leak of the database of verified IDs. No information about the IDs is generally collected in a corner shop, at least when there's no suspicion of fraud; they're just viewed temporarily and returned. Not only do online service providers retain a lot of information about their required verifications, they do so for hugely more people than a typical corner shop.
Also, corner shop cameras don't generally retain data for nearly as long as typical online age verification laws would require. Depending on the country and the technical configuration, physical surveillance cameras retain data for anywhere from 48 hours to 1 year. Are you really saying that most online age verification laws worldwide require or allow comparably short retention periods? (This might actually be the case for the UK law, if I'm correctly reading Ofcom's corresponding guidance, but I doubt that's true for most of the similar US state laws.)
ndriscoll 10 hours ago [-]
At least the US laws I've looked at have all specifically mandated that data shall not be retained, some with rather steep penalties for retention (IIRC ~$10k/affected user).
YetAnotherNick 6 hours ago [-]
A lot of these shops have cameras which could similarly be compromised. In fact, the camera is likely to be more vulnerable, and has probably already been hacked by DDoS orgs.
I hate sites asking for photo verification, but I think that is more about convenience/reliability for me. My bigger fear is that AI locks me out with no one to go to for support.
KaiserPro 2 hours ago [-]
It's a different risk.
The corner shop does not have access to your friend graph. Also, if you pay by card, digital ID only provides corroboration; your payment acts as a much more traceable indicator.
The risk of "digital ID" is that it'll leak grossly disproportionate amounts of data on the holder.
For age verification, you only need a binary old-enough flag from a system that verifies the holder's ID.
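A minimal sketch of such a binary flag, assuming a trusted verifier that signs nothing but an over-18 claim (the token format and HMAC key here are invented; a real deployment would use asymmetric signatures or zero-knowledge proofs so relying sites can verify without holding the issuer's secret):

```python
import base64, hashlib, hmac, json

# Hypothetical sketch: the issuer signs only {"over18": true/false},
# so the relying site learns age status and nothing else about you.
ISSUER_KEY = b"demo-signing-key"  # placeholder secret for illustration

def issue_token(over18: bool) -> str:
    claim = json.dumps({"over18": over18}).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return base64.b64encode(claim).decode() + "." + sig

def verify_token(token: str) -> bool:
    payload, sig = token.split(".")
    claim = base64.b64decode(payload)
    good = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, good) and json.loads(claim)["over18"]
```

The point is what's absent: no name, no birthdate, no face, just one signed bit.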
The problem is, people like Google and other adtech companies want to be the ones providing those checks, so they can tie your every action to a profile with a 1:1 link, then combine it with card transactions to get a much clearer ad-impression-to-purchase signal.
The risk here comes much less from government than from private companies.
SoftTalker 7 hours ago [-]
> and unexpired
Because certainly one's identity might totally change if one's ID card expires...
renewiltord 4 hours ago [-]
Expiry places a bound on duplication and forcing additional duplication allows you to update standards. It's a tradeoff to produce a strictness ratchet.
EA-3167 13 hours ago [-]
Cigarettes are deadly.
Broadcasting porn isn't an age-ID issue; it's public airwaves, and they're regulated.
These aren't primarily "think of the children" arguments, the former is a major public health issue that's taken decades to begin to address, and the latter is about ownership.
I don't think that chat rooms are in the same category as either public airwaves or drugs. Besides, what's the realistic outcome here? Under-18s aren't stupid; what would you have done as a kid if Discord was suddenly blocked off? Shrug and never talk to your friends again?
Or would you figure out how to bypass the checks, use a different service, or just use IRC? Telegram chats? Something even less moderated and far more open to abuse, because that's what can slip under the radar.
So no I don't think this is about protecting kids, I think it's about normalizing the loss of anonymity online.
Symbiote 13 hours ago [-]
You can swap cigarettes with another age restricted product, like pornography or 18-rated DVDs if you prefer.
The UK also has rules on what can be broadcast on TV depending on the time of day.
KaiserPro 13 hours ago [-]
> These aren't primarily "think of the children" arguments
Are you kidding me? The v-chip, Mary Whitehouse, sex on TV: these are all the result of "think of the children" moral panics. It's fuck all to do with ownership.
> I don't think that chat rooms are in the same category as either public airwaves
Discord is making cash from underage kids, in the same way that Meta and Google are, and in the same way that Disney and Netflix are with their kids' channels.
Look, I'm not saying that Discord should be banned for kids, but I really do think there is a better option than the binary "ban it all" / "fuck it, let them eat porn".
Kids need to be able to talk to each other, but they should also be able to do that without being preyed upon by nonces, extremists, state actors, and (more likely) bored trolls.
It's totally possible to provide anonymous age gating, but it's almost certainly going to be provided by an adtech company unless we, the community, provide something cheaper and better.
threeseed 21 hours ago [-]
> This is over-reach. Both in the UK and Australia
2/3 of Australians support minimum age restrictions for social media [1], and it was particularly popular amongst parents. Putting the responsibility solely on parents shows ignorance of the complexities of how children are growing up these days.
Many parents have tried to ban social media, only for those children to experience ostracisation amongst their peer group, leading to poorer educational and social developmental outcomes at a critical time in their lives.
That's why you need governments and platform owners to be heavily involved.
that sounds quite puritan. "my god says I can't" is one thing; "my god says you can't either" is very different.
now replace god with parent.
monkeywork 11 hours ago [-]
You realize that is how EVERY law works, right? The person you're replying to says the public overall supports the idea/law. If following that law is a deal-breaker for you, you either need to persuade those people to your view or move.
jasonfarnon 11 hours ago [-]
maybe it's "puritan", or maybe it's a normal view that only looks puritan from where you stand. how do you know which? one bit of evidence is the 2/3-to-1/3 split.
pjc50 21 hours ago [-]
It almost certainly is overreach, but locking young people out of porn is hardly a new concern. We have had variants of this argument continuously for decades. I'm not sure there is a definitive answer.
leotravis10 12 hours ago [-]
There's a SCOTUS case, FSC v. Paxton, that could very well decide whether age verification is enforced in the US as well, so sadly this is just the beginning.
pests 3 hours ago [-]
I witnessed the Better Off Ted water fountain skit play out in real life once; it was incredibly awkward. I was helping my buddy, his black friend, and his friend's wife set up accounts on online casinos in Michigan for the promos/refer-a-friend rewards. Some of the sites require live video facial verification, and we were doing it in a dimly lit space at night. It worked instantly and without issue for my friend and me, but oh man, it took many, many attempts and many additional lights to get it to work for his friends.
zehaeva 21 hours ago [-]
It's a good thing to think about. I knew a guy in high school who had male pattern baldness that started at 13 or 14. Full blown by the time he was 16. Dude looked like one of the teachers.
MisterTea 20 hours ago [-]
Same in my driver's ed at 16: a guy had a man's face, a large stocky build, and a thick full beard. I once was talking to a tall, pretty woman who turned out to be a 12-year-old girl. And I have a friend who for most of his 20s could pass for 13-14 and had a hell of a time getting into bars.
This facial thing feels like a loaded attempt to both check a box and get more of that sweet, sweet data to mine. A massive privacy invasion and exploitation of children dressed up as security theater.
red-iron-pine 15 hours ago [-]
i went to school with a guy who had serious facial hair at like 14. dude was rocking a 5 o'clock shadow by the end of the school day
pezezin 10 hours ago [-]
I had a friend who had a serious beard by the age of 15; he would order whisky and cola at the bar, and nobody ever asked him for any kind of ID xD
I myself have a mighty beard, but it took a couple more years to develop...
It doesn't even have to be an untraditional facial feature. How are they going to differentiate an 18-year-old from someone who's 17 years and 11 months? The latter is not legally an adult.
mezzie2 21 hours ago [-]
It's not even edge cases: I was a young-looking woman and was mistaken for a minor until I was about 24-25. My mother had her first child (me) at 27, and she tells me how she and my father would get dirty looks because people assumed he was some dirty old man who had impregnated a teenager. (He was 3 years older than her.)
I think, ironically, the best way to fight this would be to lean on identity politics: There are probably certain races that ping as older or younger. In addition, trans people who were on puberty blockers are in a situation where they might be 'of age' but not necessarily look like an automated system expects them to, and there might be discrepancies between their face as scanned and the face/information that's show on their ID. Discord has a large trans userbase. Nobody cares about privacy, but people make at least some show of caring about transphobia and racism.
> So many questions.
Do they keep a database of facial scans even though they say they don't? If not, what's to stop one older looking friend (or an older sibling/cousin/parent/etc.) from being the 'face' of everyone in a group of minors? Do they have a reliable way to ensure that a face being scanned isn't AI generated (or filtered) itself? What prevents someone from sending in their parent's/sibling's/a stolen ID?
Seems like security theater more than anything else.
nemomarx 21 hours ago [-]
I don't think they make much of a show of caring about trans rights in the UK right about now, unfortunately. In the US, though, I think you can make a strong case that a big database of faces and IDs could be really dangerous.
mezzie2 20 hours ago [-]
It's mostly about the service's audience. Discord is a huge trans/queer/etc. hub. If Discord were X or Instagram etc. it wouldn't matter. Users of Discord are, as a group, more likely to be antagonistic to anything that could be transphobic or racist than the general populace. (Whereas they don't care about disability rights, which is why people with medically delayed puberty aren't a concern.)
A tactical observation more than anything else.
tbrownaw 20 hours ago [-]
> In the US you can make a strong case that a big database of faces and IDs could be really dangerous though I think
The government already has this from RealID.
nemomarx 20 hours ago [-]
Right, but only your photo taken for the ID, not up-to-date face scans like Discord is requesting
it seems to me like I'd be more hesitant to go get a govt photo taken right now at least.
StefanBatory 21 hours ago [-]
I had a colleague who, when she was out with her boyfriend, had the police called on him because someone believed he was a pedophile.
She was 26. She just looked that young.
:/
21 hours ago [-]
candiddevmike 21 hours ago [-]
The right thing to do here is for Discord to ignore the UK laws and see what happens, IMO.
Is there a market for leaked facial scans?
doublerabbit 14 hours ago [-]
With the UK currently battling Apple, Discord has no chance of not getting a lawsuit.
Ofcom is serious about enforcing its rules, especially where a multinational like Discord is concerned - one that even "normies" know and use.
And even if they got a slap of "we will let you off this time", they would still have to create some sort of verification service to satisfy Ofcom the next time.
You might as well piss off your consumers, lose some of them, and still hold centre stage rather than fight the case. Nothing is stopping Ofcom from launching another lawsuit thereafter.
> Is there a market for leaked facial scans?
There's a market for everything. Fake driver licenses with fake pictures have been around for decades, that would be no different.
daveoc64 21 hours ago [-]
It says in the article - you can send them a scan or photo of your ID if the face check doesn't work (or if you don't want to do the face scan).
Eavolution 40 minutes ago [-]
What if I don't want them to have any personally identifiable information about me in a database?
> what about someone who is 18 and just doesn't have the traditional adult facial features?
This can be challenging even with humans. My ex got carded when buying alcohol well into her mid thirties, and staff at the schools she taught at mistook her for a student all the time.
smegger001 11 hours ago [-]
I grew a beard when I was younger because I was tired of being mistaken for a highschooler. It's quite annoying to have people assume you are 15 when you're 20. Still regularly carded in my 30s.
brundolf 12 hours ago [-]
Devil's advocate: couldn't this be better for privacy than other age checks because it doesn't require actual identification?
paulryanrogers 12 hours ago [-]
Considering the ubiquity of facial recognition tech, I imagine it could very quickly be abused to identify people
9283409232 21 hours ago [-]
Didn't Australia ban porn with women who have A cups under the justification of pedos like them?
Edit: This isn't how it played out. See the comment below.
threeseed 21 hours ago [-]
No it's just nonsense you invented because you were unwilling to do any research.
The actual situation was that the board refused classification where an adult was intentionally pretending to be an underage child not that they looked like one.
9283409232 19 hours ago [-]
I added an edit to correct myself however this was not something I invented. This story goes back to 09 - 2010. I will confess I didn't do any research to confirm though and that was my bad.
neilv 13 hours ago [-]
FWIW, I can confirm that user 9283409232 didn't make that up. I heard that multiple reputable places, years ago.
And it was believable, given a history of genuine but inept attempts by some to address real societal problems. (As well as given the history of fake attempts to solve problems for political points for "doing something". And also given the history of "won't someone think of the children" disingenuous pretexts often used by others to advance unrelated goals.) Basically, no one is surprised when many governments do something that seems nonsensical.
So, accusing someone of making up a story of a government doing something odd in this space might be hasty.
I suspect it would be better to give it a quick check and then say "I couldn't find a reference to that; do you have a link?"
jofzar 8 hours ago [-]
Interestingly I actually heard this in Australia many years ago. I assumed it was real (as an Australian) but the answer is more complicated (with the actual answer being no)
They’re using the databases to go after illegal immigrants right now. Soon it’ll be using the porn databases to go after Gay people. They’re trying to use the healthcare databases to go after Trans people. All this verification is nothing but a way to commit genocide against minorities. Porn is so far down on the list of harmful things. There’s no pearl clutching over alcohol and other drugs like Americans have with porn. Nation of pansies.
gertrunde 12 hours ago [-]
I would like to think there there is a solution that can be engineered, in which a service is able to verify that a user is above an appropriate age threshold, while maintaining privacy safeguards, including, where relevant, for the age-protected service not to be privy to the identity of the user, and for the age verification service to not be privy to the nature of the age-protected service being accessed.
In this day and age, of crypto, and certificates, and sso, and all that gubbins, it's surely only a matter of deciding that this is a problem that needs solving.
(Unless the problem really isn't the age of the user at all, but harvesting information...)
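The split described above - a relying site that never learns who you are, and an issuer that never learns where you used the token - can be sketched roughly as follows. This is a toy illustration with textbook RSA and made-up field names, not production crypto: the issuer signs a minimal claim ("over 18", a nonce, an expiry), and any site can verify it offline against the issuer's public key, so no per-request callback to the issuer is needed. Replay protection would additionally require the site to supply the nonce.

```python
# Toy sketch (NOT production crypto): an age-verification issuer signs a
# minimal, identity-free claim. The relying site verifies it offline with
# the issuer's public key, so the issuer never learns which site asked.
import hashlib
import json
import time

# Tiny textbook-RSA keypair, for illustration only.
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def _digest(claim: dict) -> int:
    data = json.dumps(claim, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(claim: dict) -> int:
    """Issuer side: sign the claim with the private key."""
    return pow(_digest(claim), d, n)

def verify(claim: dict, sig: int) -> bool:
    """Relying site: check signature, age flag, and expiry; no identity involved."""
    return (
        pow(sig, e, n) == _digest(claim)
        and claim.get("over_18") is True
        and claim.get("expires", 0) > time.time()
    )

# User proves age to the issuer once and receives an identity-free token.
claim = {"over_18": True, "nonce": "d41d8cd9", "expires": time.time() + 300}
token = sign(claim)

assert verify(claim, token)                              # site accepts
assert not verify({**claim, "nonce": "tampered"}, token)  # tampering fails
```

The design choice this illustrates is that verification is a pure signature check: the issuer is offline at verification time, which is what keeps it from learning which age-gated service was visited.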
michaelt 10 hours ago [-]
Unfortunately, no amount of blockchains and zero-knowledge proofs can compensate for the fact that a 15 year old has an 18 year old friend. Or the fact that another 15 year old looks older than some 20 year olds. Or the fact that yet another 15 year old's dad often leaves his wallet, with his driving license, unattended.
Over the next five years, you can look forward to a steady trickle of stories in the press about shocked parents finding that somehow their 15 year old passed a one-time over-18 age verification check.
The near-impossibility of compliance is intentional - the law is designed that way, because the intent is to deliver a porn ban while sidestepping free speech objections.
Hyperboreanal 3 hours ago [-]
[flagged]
padjo 3 hours ago [-]
By any means?
m463 11 hours ago [-]
A humorous age verification quiz for the Leisure Suit Larry game.
My boss is
a. a jerk.
b. a total jerk.
c. an absolute total jerk.
d. responsible for my paycheck.
Correct answer: d.
Talking about it or explaining it is like pulling teeth; generally just a thorough misunderstanding of the notion... even though cryptographic certificates make the modern internet possible.
Any number of entities can be certificate issuers, as long as they can be deemed sufficiently trustworthy. Schools, places of worship, police, notary, employers...they can all play the role of trust anchor.
arctek 8 hours ago [-]
This just moves the issue elsewhere though. I do agree that adding an extra step of having to notarize documents will filter many people.
But outside of this if someone is determined they can issue fake documents at this level of provenance.
Drivers licenses, for example: you can (illegally) buy the printing machine and blanks, so you actually need to check with the registrar in that jurisdiction.
blibble 11 hours ago [-]
interesting idea...
how do you handle revocation when people inevitably start certifying false information?
Edmond 11 hours ago [-]
The app allows for self-revocation using the private key or a revocation code given when the cert is issued, which is useful if a certificate is compromised... there is also an admin interface a trust anchor can use to revoke certificates they issue, and a rogue trust anchor chain can also be revoked.
blibble 11 hours ago [-]
how does rogue anchor revocation work in practice?
say if an anchor has issued tens of thousands of legitimate ids, and also ten to career fraudsters who gave them $10000 each
as you've outsourced the trust you have no idea which are legitimate, and if you revoke the lot you're going to have a lot of refunds to issue
(ultimately this is why countries only allow people who can be banned from their profession to certify documents)
Edmond 10 hours ago [-]
Each trust anchor gets issued a single certificate that can have delegation ability, ie the ability to issue new trust anchor certs to others.
So if say a UPS store is issued a cert and they go rogue, we can just revoke the trust anchor cert that was issued to the store, all certs issued further down are also automatically revoked...the revocation check is done either in the app or in the case of a third-party performing the verification they will recognize that there is a cert on the issuing chain that is revoked and reject the cert.
This is similar to how TLS certs are handled: if a CA goes rogue, its root is distrusted, and every cert chaining up to that root becomes invalid.
As for refund issues, that's a problem for the cert issuer to deal with.
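The chain-revocation behaviour described above can be sketched in a few lines. This is a minimal model with illustrative names (the specific cert IDs and the dict-based chain are assumptions, not the actual system): each cert records its issuer, and a cert is rejected if it or any ancestor on its issuing chain has been revoked.

```python
# Minimal sketch of chain-walking revocation: revoking a delegated trust
# anchor automatically invalidates everything issued beneath it.
certs = {
    "root":      None,         # self-signed trust root
    "ups-store": "root",       # delegated trust anchor
    "alice":     "ups-store",  # end-user cert issued by the store
    "bob":       "root",       # end-user cert issued directly by the root
}
revoked = set()

def is_valid(cert_id: str) -> bool:
    # Walk up the issuing chain; any revoked link invalidates the cert.
    while cert_id is not None:
        if cert_id in revoked:
            return False
        cert_id = certs[cert_id]
    return True

revoked.add("ups-store")     # the store goes rogue
assert not is_valid("alice")  # everything it issued is now invalid
assert is_valid("bob")        # certs on other chains are unaffected
```

Note the trade-off the thread is debating: one `revoked.add(...)` call is cheap for the operator, but it indiscriminately invalidates every legitimate cert under that anchor along with the fraudulent ones.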
blibble 9 hours ago [-]
> As for refund issues, that's a problem for the cert issuer to deal with.
no, it's your problem, as it's your brand slapped over everything, and now you've got tens of thousands of innocent people angry that you've revoked the IDs they paid for in good faith
this would translate into lawsuits, against you
whall6 8 hours ago [-]
When you say that “we” can revoke, I assume you are talking about your company - the app. What sort of resources would be required to constantly audit the potentially thousands or hundreds of thousands of certificate issuers on your platform?
whall6 11 hours ago [-]
Who is the entity that has the ability to revoke the certificate?
csomar 11 hours ago [-]
I don’t get it. What is to prevent a 9 year-old from buying a certificate and using it?
All certificates are cryptographically linked to an identity-anchor certificate, meaning buying a certificate would require that the seller reveal the private key tied to the identity-anchor certificate, a tall order I would argue.
In the case of stolen identity certificates, they can be revoked thus making their illegitimate utility limited.
malfist 8 hours ago [-]
So an older brother gives his sibling a key.
Why would your design prevent that?
hn_throwaway_99 7 hours ago [-]
We can still have laws, e.g. that using someone else's certificate (or knowingly giving them your certificate) would constitute fraud.
We have laws against kids buying alcohol, even though kids can (and do) try to get adults to buy them booze, but I don't think that's a good reason to say we shouldn't have laws against kids drinking.
red_trumpet 4 hours ago [-]
> a service is able to verify that a user is above an appropriate age threshold, while maintaining privacy safeguards
AFAIU, the German electronic ID card ("elektronischer Personalausweis") can do this, but it is not widely implemented, and of course geographically limited.
fvdessen 11 hours ago [-]
The problem is who pays to maintain the system. There are systems that allow you to share your age anonymously (among other things) and they’re already widely used in Europe but the system knows what you’re using it for since the second party pays for the information, and some accounting info is needed for the billing. It would be completely illegal for the system to use that info for anything else though.
strangecasts 11 hours ago [-]
The problem is that it is much easier to implement such a check in a way which lets the verification service link the site to the user, with no discernible difference to the end user.
e: I get the same feeling as I do reading about key escrow schemes in the Clipper chip vein, where nobody claimed it was theoretically impossible to have a "spare key" only accessible by warrant, but the resulting complexity and new threat classes [1] just was not worth it
Transferring your age and a way to verify it to any third party is by definition a privacy violation. Doing so in a safe way is literally impossible since I don't want to share that information in the first place.
packetlost 12 hours ago [-]
I feel like you could, theoretically, have a service that has an ID (as drivers license ID), perhaps operated by your government, that has an API and a notion of an ephemeral identifier that can be used to provide a digital attestation of some property without exposing that property or the exact identity of the person. It would require that the attestation system is trusted by all parties though, which is I think the core problem.
brian-armstrong 12 hours ago [-]
Wouldn't this require the API provider to know that the citizen is connecting to the app? Grindr users might be squeamish about letting the current US admin know about that.
packetlost 9 hours ago [-]
Not necessarily, you can define the protocol such that it's all done with opaque IDs instead of identifying info.
mschuster91 11 hours ago [-]
ICAO compliant ID cards (aka passports) and many national ID cards already are smartcards with powerful crypto processors.
Hand out certificates to porn, gambling, or whatever sites that allow them to request the age of a person from the ID card; have the user touch their ID card to their phone to sign a challenge with its key (and a certificate signed by the government), and that's it.
Government doesn't know what porn site you visited, and porn site only gets the age.
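The challenge-response flow described above can be sketched as follows. This is a toy model, not the ICAO or eID protocol: HMAC with a shared key stands in for the card's asymmetric signature (a real card would sign with a private key and the site would verify against a government-certified public key), and the function names are illustrative.

```python
# Toy sketch of the card/site challenge-response: the site issues a fresh
# nonce, the ID card signs (nonce + age attribute) with its embedded key,
# and the site verifies without learning the holder's identity.
import hashlib
import hmac
import secrets

# Provisioned on the card at issuance. In reality the site would hold only
# a government-signed public key, never this secret.
CARD_KEY = secrets.token_bytes(32)

def card_sign(challenge: bytes, over_18: bool) -> bytes:
    """Runs on the card: bind the age attribute to the site's fresh challenge."""
    msg = challenge + (b"over_18" if over_18 else b"minor")
    return hmac.new(CARD_KEY, msg, hashlib.sha256).digest()

def site_verify(challenge: bytes, over_18: bool, sig: bytes) -> bool:
    """Runs at the site: recompute and compare in constant time."""
    msg = challenge + (b"over_18" if over_18 else b"minor")
    expected = hmac.new(CARD_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

challenge = secrets.token_bytes(16)      # fresh per login: prevents replay
sig = card_sign(challenge, over_18=True)

assert site_verify(challenge, True, sig)        # valid response accepted
assert not site_verify(challenge, False, sig)   # claim can't be altered
```

The fresh per-session challenge is what prevents a recorded "I am over 18" response from being replayed later by someone else.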
red_trumpet 3 hours ago [-]
This is not only theoretical, the German ID card ("elektronischer Personalausweis") can do exactly this.
nonchalantsui 12 hours ago [-]
Do you feel this way when you enter credit card information when making a purchase online?
slavik81 3 hours ago [-]
Yes.
rlpb 11 hours ago [-]
> Transferring your age and a way to verify it to any third party is by definition a privacy violation.
No it's not. Unless...
> Doing so in a safe way is literally impossible since I don't want to share that information in the first place.
...well then it is.
But it's not constructive to claim that proving your age to someone is by definition a privacy violation. If someone wants to prove their age to someone, then that's a private communication that they're entitled to choose to make.
It is true that if technology to achieve this becomes commonplace, then those not wishing to do so may find it impractical to maintain their privacy in this respect. But that doesn't give others the right to obstruct people who wish to communicate in this way.
Aurornis 12 hours ago [-]
Crypto comes up every time this topic is discussed but it misses the point.
The hard part is identifying with reasonable accuracy that the person sitting in front of the device is who they say they are, or a certain age.
Offloading everything to crypto primitive moves the problem into a different domain where the check is verifying you have access to some crypto primitive, not that it’s actually you or yours.
Any fully privacy-preserving crypto solution would have the flaw that verifications could be sold online. Someone turns 21 (or another age) and begins selling verifications with their ID because there is no attachment back to them, and therefore no consequences. So people then start imagining extra layers that would protect against this, which start eroding the privacy because you're returning to central verification of something.
Eavolution 36 minutes ago [-]
That sounds like a reasonable compromise to me, it's already what happens with ID for pubs etc so I don't think it's much different to the status quo
Barrin92 8 hours ago [-]
Already exists in a lot of places. German national IDs for like 10 years or something like that have an eID feature. It's basically just a public/private key signing scheme. The government and a bunch of other trusted public providers are able to issue identities, you can sign transactions with them or verify your age to commercial service providers, or transfer some data if that's required with your consent. (https://www.personalausweisportal.de/Webs/PA/EN/citizens/ele...)
Estonia and South Korea I think also have similar features on their IDs, it's already a solved problem.
Retr0id 11 hours ago [-]
I'm in the UK and discord has asked me to complete this check (but I haven't, yet). I can still use discord just fine, it just won't let me view any media it considers "adult".
I am an adult but refuse to let them scan my face as a matter of principle, so I've considered using https://github.com/hacksider/Deep-Live-Cam to "deepfake" myself and perform the verification while wearing a fake face. If it works, I'll write about it.
spacebanana7 21 hours ago [-]
I suspect the endgame of this campaign is to have mandatory ID checks for social media. Police would have access to these upon court orders etc and be able to easily prosecute anyone who posts 'harmful' content online.
woodrowbarlow 20 hours ago [-]
<tin-foil-hat> ultimately, i think the endgame is to require government ID in order to access internet services in general, a la ender's game. </tin-foil-hat>
Funnily enough, when the Philippines did this, it was decried as a violation of human rights [1]. But usually, media are so silent on such things I'd call them complicit. One already cannot so much as rent a hotel room anywhere in the EU without showing government ID.
yup, and this gives the ability to look up per-citizen location data.
sidebar: i've been trying to raise awareness about "joint communications and sensing" wherever i can lately; many companies involved in 6G standardization (esp. nokia) want the 6G network to use mmWave radio to create realtime 3d environment mappings, aka a "digital twin" of the physical world, aka a surveillance state's wet dream.
Not only rent: in Spain there is a central database into which your details are sucked in real time when you rent a room or a car, with no oversight of how this data is used.
14 hours ago [-]
xvokcarts 12 hours ago [-]
You can buy (and top up) a SIM card without an ID in the EU.
Muromec 12 hours ago [-]
That depends on the country, and for once there is no visible pattern or usual suspects in who requires it.
raspyberr 20 hours ago [-]
Please walk me through, from scratch, how you would access the internet on your own right now without any form of Government ID.
zeta0134 14 hours ago [-]
Walk into a coffee shop. Look at the wifi password, usually a sign near the register. Log onto the wifi network using the wifi password. Browse in peace.
Is this sort of flow normal elsewhere? It's certainly normal where I live.
toast0 14 hours ago [-]
I'm in the US. I do have government ID, but I don't recall showing it to my network providers. Certainly, some telcos want a social security number to run credit; but that's often avoidable. I'm pretty sure I could also wander down to an electronics store (maybe a grocery/drug store too) and pick up a prepaid cell phone with internet access, pay for it with cash, and get that going without government ID in the US. It's a bit of a hike to get to the electronics store from where I live, but I can get part of the way there with the bus, which takes cash too.
squigz 20 hours ago [-]
???
I'd walk to a local library and use their wifi. Or walk to a local McDonalds and use their wifi. Or walk to a friend's/family's house and use their wifi. Or...
bitmasher9 20 hours ago [-]
I know right. There are entire business models where “comfortable place to connect to WiFi” is an important part of the strategy.
Symbiote 13 hours ago [-]
Prepaid SIM from one of the EU countries that still has them, such as Denmark. Purchase in cash from a kiosk.
whoopdedo 13 hours ago [-]
Prepaid 5G phone bought with cash and activated by dialing 611.
nitwit005 14 hours ago [-]
I'm afraid the endgame is, all this activity tied to real identities will be repeatedly leaked, get used for blackmail, and by foreign intelligence agencies.
Followed by governments basically shrugging.
like_any_other 20 hours ago [-]
That would not be unprecedented: The first major change by the Lee Myung-bak government was to require websites with over 100,000 daily visitors to make their users register their real name and social security numbers. - https://en.wikipedia.org/wiki/Internet_censorship_in_South_K...
KaiserPro 13 hours ago [-]
They already have access to this.
If you run a social media site, then you have an API that allows government access to your data.
pjc50 21 hours ago [-]
See e.g. "Ohio social media parental notification act"
(mind you, ID/age requirements for access to adult content go way, way back in all countries)
lanfeust6 21 hours ago [-]
Which would kill social media. The cherry-picked tech giant iterations anyway.
samlinnfer 21 hours ago [-]
They already have this in China and Korea. Hasn't stopped people from using social media.
lanfeust6 20 hours ago [-]
The West isn't China and Korea. They can't opt out of authoritarian-state surveillance and great firewall, whereas we have options more amenable to privacy, even if you want to quibble that they aren't perfect.
Also the fact that UK and Australia are kind of backwards on online privacy.
That aside, this is targeted. The fediverse and vbulletin forums of old, even reddit, are all social media but will never require facial recognition. If they do, then far worse things are happening to freedom.
This law was struck down for violating the constitution.
numpad0 20 hours ago [-]
Korea is a democratic Western nation; they host US military bases and fly the F-35. Korean-made phones are trusted enough that American special forces use them for some parachute jumps.
pezezin 9 hours ago [-]
Since when is East Asia considered "Western"?
I live in a Japanese city with a US military base and trust me, the only Western thing here are the few bars that cater to them.
dragonwriter 8 hours ago [-]
Among the many definitions of "Western" is the original sense of "First World", encompassing members of the geopolitical bloc centered historically on the US.
sandspar 6 hours ago [-]
If the world splits into a China bloc and an American bloc, Japan would almost certainly join the American bloc, right?
spacebanana7 21 hours ago [-]
I don't think it would kill social media, but it'd make it more similar to Chinese social media: essentially impossible to use for protests or criticism of things the government doesn't want criticized.
charlie90 16 hours ago [-]
Why? People make social media accounts with their real name and face already. I doubt it would have any effect.
numpad0 20 hours ago [-]
It ties real-world ultraviolence to social media. It won't kill social media, just make it materially toxic. IIUC South Korea in the 2000s had exactly this; online dispute stories coming from there were much worse than anything I had heard locally.
ChocolateGod 21 hours ago [-]
Exactly, targeting children with their parents' credit cards is a profitable business.
miohtama 14 hours ago [-]
You need to ask what would Trump do. Court order probably skipped, or from a friendly judge.
2OEH8eoCRo0 20 hours ago [-]
Good!
Why is the Internet any different than say, a porn or liquor store? Why are we so fuckin allergic to verification? I'll tell ya why- money. Don't pretend it's privacy.
7 hours ago [-]
woodrowbarlow 20 hours ago [-]
there are two false equivalences in your argument, as presented in response to GP:
1. ID checks are not the same as age verification.
2. a social media website is not the same as a porn website.
if you take the stance that social media sites should require ID verification, then i would furthermore point out that this is likely to impact any website that has a space for users to add public feedback, even forums and blogs.
spacebanana7 20 hours ago [-]
It's about power not money. The Chinese social media companies who do this are plenty profitable.
throwaway875847 18 hours ago [-]
Money? Every big ad tech company would love to be provided with all that juicy verification data.
2OEH8eoCRo0 15 hours ago [-]
If they could make more money with verification than without then we would already have it.
BlueTemplar 2 hours ago [-]
But they do: haven't you noticed the rise of "login to do X" over the last few decades?
squigz 20 hours ago [-]
How about we don't pretend there's only 1 single facet to this issue, no matter which you think it is?
rkagerer 10 hours ago [-]
Of all the terrible, dumb-headed ideas. I would not want my kids scanning their face into who-knows-what third party's service.
I already decline this technology when finance companies want to use it for e.g. KYC verification ("Sorry, I don't own a smartphone compatible with your tool. If you want my business you'll have to find another way. Happy to provide a notarized declaration if you'd like" has worked in the past).
Hyperboreanal 3 hours ago [-]
Would you rather your kids be groomed and become addicted to porn?
sReinwald 38 minutes ago [-]
This response is a textbook example of a manipulative false dichotomy that poisons legitimate discourse about child safety online.
Presenting the only options as either "scan your child's biometric data into opaque systems" or "let your child be groomed and/or get addicted to porn" is intellectually dishonest and deliberately inflammatory. It's a rhetorical trap designed to shame parents with valid privacy concerns into compliance.
Privacy rights and child protection are not mutually exclusive. Numerous approaches exist that don't require harvesting biometric data from minors, from improved content filtering and educational initiatives to parental controls and account verification methods that don't rely on facial scanning. Corporations are simply implementing the most convenient (for them) solution that technically satisfies regulatory requirements while creating new data streams they can potentially monetize.
What's actually happening here is deeply troubling: we're normalizing the idea that children must surrender their biometric data as the price of digital participation. This creates permanent digital identifiers that could follow them throughout their lives, with their data stored in systems with questionable security, unclear retention policies, and potential for future misuse.
Weaponizing the fear of child exploitation to silence legitimate concerns about corporate overreach isn't just manipulative - it's morally reprehensible. Framing opposition to biometric surveillance as being pro-exploitation deliberately poisons the well against anyone who questions these systems.
We can and must develop approaches that protect children without surrendering their fundamental privacy rights. Pretending these are our only two options isn't just wrong - it actively undermines the nuanced conversation we should be having about both child safety and digital rights.
hedora 21 hours ago [-]
Like many other people here, I'm wondering what we'll end up having to do at work to deal with this. We don't have the resources to put a full-time person on this, and the UK's not a huge market.
For unrelated reasons, we already have to implement geoblocking, and we're also intentionally VPN friendly. I suspect most services are that way, so the easy way out is to add "UK" to the same list as North Korea and Iran.
Anyway, if enough services implement this that way, I'd expect the UK to start repealing laws like this (or to start seeing China-level adoption of VPN services). That limits the blast radius to services actually based in the UK. Those are already dropping like flies, sadly.
I hope the rest of the international tech community applies this sort of pressure. Strength in numbers is about all we have left these days.
fny 20 hours ago [-]
You'll likely end up paying someone else to do it for you.
hedora 20 hours ago [-]
I'm reasonably sure we will not. Dealing with an integration like that means not shipping some other feature to the rest of the planet. The marginal gain of accepting UK users is lower than the marginal gain of increasing addressable market everywhere else.
YurgenJurgensen 10 hours ago [-]
…as will everyone else. The same company. Who will have all that data in one convenient database just waiting to be leaked.
fkyoureadthedoc 20 hours ago [-]
> I suspect most services are that way
I don't know actual numbers, but I gave up using VPN by default because in my experience they definitely are not.
20 hours ago [-]
nyanpasu64 13 hours ago [-]
Frankly I'm scared by governments and corporations going "papers, please" for people to be allowed to access the Internet. On top of endangering privacy by tying pseudonymous online interactions to real-life ID and biometrics, attempts to block under-18 people from finding information or interacting online will only amplify how society regards them as not having rights. This will isolate people (especially gay and trans teens) living with abusive parents from finding support networks, and prevent them from learning (by talking to friends in different situations) that being beaten or emotionally put down by parents is abusive and traumatizing.
I know all too well that when you grow up you're psychologically wired to assume that the way the parents treated you is normal, and if they harmed you then you deserve to be hurt. I've made friends with and assisted many teens and young adults in unsafe living situations (and talked to people who grew up in fundamentalist religions and cults), and they're dependent on online support networks to recognize and cope with abuse, get advice, and seek help in dangerous situations.
nicbou 13 hours ago [-]
To add to this, some people might be left out because companies are not financially incentivised to verify them.
In Germany, immigrants struggle to open a bank account because the banks require documents that they don't have (and that they can hardly get without a bank account). Russian, Iranian and Syrian citizens have a particularly hard time finding a bank that works for them. The most common video document verification system does not support some Indian passports, among others.
To banks, leaving these people out is a rational business decision. The same thing will happen to those deemed too risky or too much hassle by the internet's gatekeepers, but at a much bigger scale.
extraduder_ire 10 hours ago [-]
What is it about some Indian passports? Do they need to have a biometric chip to work? (just checked, and those were introduced in 2024)
Banks worldwide regularly refuse service to people who have US citizenship, so I don't think you're far off on that point.
nicbou 50 seconds ago [-]
If I remember correctly there are a dozen variants, and most of them lack a basic feature; I think it's either a signature, Latin letters, or biometric features.
US citizens also had issues due to FATCA requirements although it seems to have improved since they were introduced.
BlueTemplar 2 hours ago [-]
Is banking not deemed a right in Germany? Aren't there "banks of last resort"? Or does that right somehow not extend to non-EU refugees?
exe34 13 hours ago [-]
> prevent them from learning (by talking to friends in different situations) that being beaten or emotionally put down by parents is abusive and traumatizing.
parents didn't know I'm gay, but they did control all flow of information (before social media) by controlling all movements outside school.
it took me until my thirties to realise how deeply abusive my childhood was. the only hints I had, in hindsight, was the first Christmas at uni, everybody was excited to go home and I couldn't fathom why on earth anybody would want to. I dismissed it as an oddity at the time.
distalx 3 hours ago [-]
This feels more like spying on everyone than making the internet safe for kids. Big companies and the government are already tracking what we do online. This just seems like a further reduction of our privacy on the internet.
Parents need to be more involved in what their kids do online, just like in real life. Grounding them isn't enough. We wouldn't let them wander into dangerous places, so we shouldn't let them wander online without adult supervision. Also, parents need to prepare for tough conversations, like explaining what pornography or gambling is.
Online companies need to really work to make their sites safe for everyone. They should act like they own a mall. If they let bad stuff in (like pornography, scams, gambling), it hurts their reputation, and people will leave.
Instead of banning everything because some people take pleasure in those activities, maybe there should be separate online spaces for adults who want that kind of content, much as cities have specific areas for adult businesses. That would make it easier to restrict children's access to the hardcore stuff.
If we all put some effort into figuring out easy and privacy-friendly solutions to safeguard kids, we can rely on simple principles. For example, if you want to sell toys to kids, you shouldn't sell adult toys under the same roof (same domain) or have posters that can affect young minds.
sph 2 hours ago [-]
> This feels more like spying on everyone than making the internet safe for kids.
That’s always been the point. “Protecting children online” is the trojan horse against privacy, and apart from a few of us nerds, everyone is very much in favour of these laws. The fight for privacy is pretty much lost against such a weapon.
> Platkin says there were two catalysts for the investigation. One is personal: A few years ago, a family friend came to Platkin, astonished that his 10-year-old son was able to sign up for Discord, despite the platform forbidding children under 13 from registering.
> The second was the mass-shooting in Buffalo, in neighboring New York. The perpetrator used Discord as his personal diary in the lead-up to the attack.
In other words, this is yet another attack on privacy in the name of "protecting the children".
azalemeth 2 hours ago [-]
How do we fight back against this? I don't want my face scanned on a smartphone to use goods and services. KYC checks for banks are bad enough.
I miss the internet of the early 2000s.
lambertsimnel 26 minutes ago [-]
I don't think there are any easy answers to the question of how to respond to this but you might consider:
- voting with your feet
- contacting your elected representatives
- contacting media outlets
- becoming a member or donor of civil liberties campaigns
- listening to people who don't yet get it and trying to ensure that they can switch to your view without losing face
MisterTea 21 hours ago [-]
It's interesting how the "features" which many claim IRC is missing turn out to be a huge liability. Adult content is supplied via image hosting, video/audio chat, etc. All things IRC lacks.
spacebanana7 21 hours ago [-]
There is definitely a textual privilege in media. You can write things in books that would never be allowed to be depicted in video. Even in Game of Thrones, Ramsay's sadism had to be sanitised a little for live action.
This is doubly so if your book is historic in some sense. Still find it crazy that Marquis de Sade's stuff is legal.
doublerabbit 14 hours ago [-]
> All things IRC lacks.
IRC gives you all the features of a normal client, but you've got to create them yourself, which is itself a dark art that's been squandered by today's gimmicky services.
Just because it doesn't have a fancy UI to present the media doesn't mean it can't.
Encode to base64 and post it in the channel; decode it back to the original format on the other end. IRC is excellent for moving large amounts of text.
You could even stream the movie in base64 and have a client that captures the data stream and decodes.
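A minimal, purely illustrative sketch of that scheme in Python; the chunk size is an assumption and this is not an IRC client:

```python
# Hypothetical sketch of the base64-over-IRC idea described above:
# encode a file, split it into IRC-line-sized chunks for posting,
# then reassemble and decode on the receiving side.
import base64

LINE_LEN = 400  # assumed payload size, staying under IRC's ~512-byte line limit

def encode_chunks(data: bytes) -> list[str]:
    """Base64-encode data and split it into postable lines."""
    b64 = base64.b64encode(data).decode("ascii")
    return [b64[i:i + LINE_LEN] for i in range(0, len(b64), LINE_LEN)]

def decode_chunks(chunks: list[str]) -> bytes:
    """Rejoin the posted lines and decode back to the original bytes."""
    return base64.b64decode("".join(chunks))

payload = b"\xff\xd8\xff\xe0" + b"stand-in image bytes" * 100  # not a real JPEG
chunks = encode_chunks(payload)
assert decode_chunks(chunks) == payload  # round-trips losslessly
```

Actually posting the chunks (and surviving flood protection) is left to the client.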
The only thing that IRC lacks is a way to recall conversations from when you weren't present. But if you're someone who needs that, host a bouncer or something.
I personally enjoy entering a blank slate.
blibble 13 hours ago [-]
Sending any reasonably sized JPEG as base64 text will take you several minutes with typical server flood protection.
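A back-of-envelope check of that claim; the line payload size and throttle rate below are assumptions, not measured values:

```python
# Rough arithmetic behind the "several minutes" claim, using an
# assumed per-line payload and a typical 1-line-per-2-seconds throttle.
jpeg_bytes = 60_000                        # a small ~60 kB image
b64_chars = jpeg_bytes * 4 // 3            # base64 inflates size by 4/3
payload_per_line = 400                     # chars per line, leaving framing room
lines = -(-b64_chars // payload_per_line)  # ceiling division
seconds = lines * 2                        # one line every 2 seconds
print(lines, seconds / 60)                 # 200 lines, ~6.7 minutes
```

Larger images scale linearly, so a multi-hundred-kB photo would take tens of minutes.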
doublerabbit 12 hours ago [-]
You could chunk it and compress it with gzip; Usenet uses yENC.
Public servers may well have protections in place, but with your own server, and IRCds being easily configurable, working around them is trivial.
mvdtnz 11 hours ago [-]
And this is somehow better than a "gimmicky" service which handles images natively? Interesting.
MiddleEndian 13 hours ago [-]
Fuck this, need a law to explicitly ban face scanning
switch007 4 hours ago [-]
It won't happen. The police are using it in meatspace. It will become the norm all over the UK.
zevv 21 hours ago [-]
So, what will be the proper technology to apply here? I have no problem with verification of my age (not the date of birth, just the boolean, >18yo), but I do have a problem with sending any party a picture of my face or my passport.
someNameIG 14 hours ago [-]
Discord got me to do this about 2 weeks ago (I'm Australian, so they seem to be rolling this out here too). At least for the face scan, the privacy policy said it occurred on-device, so if you believe that, you're not sending anyone images of your face.
Retr0id 11 hours ago [-]
Fascinating. If it really isn't sending the face images, spoofing the verification could be as simple as returning a boolean to some API.
kelseyfrog 13 hours ago [-]
This is a social problem and as such cannot be solved with technology. You would have to make social media so uncool that young people didn't use it. One of the easiest ways of doing this is associating it with old people. Therefore the fastest way to get young people off Discord is to get geriatric on Discord, en masse.
KaiserPro 13 hours ago [-]
Underage drinking is a social problem.
The issue isn't that social media is bad, the issue is that social media has no effective moderation. If an adult is hanging out at the park talking to minors, that's easy to spot and correct; there is strong social pressure to not let that happen.
The problem is that when moving to chat, not only is a mobile phone private to the child, there are no safe mechanisms to allow parents to "spot the nonce". Moreover, the kid has no real way of knowing who the adults are until it's too late.
It's a difficult problem: doing nothing is going to ruin a generation (or already has), and doing it half-arsed is going to undermine privacy and not solve the problem.
londons_explore 21 hours ago [-]
Maybe someone like apple will make a "verify user looks over 18" neural net model they can run in the secure enclave of iphones, which sends some kind of "age verified by apple" token to websites without disclosing your identity outside your own device?
Having said that, I bet such a mechanism will prove easy to fake (if only by pointing the phone at grandad), and therefore be disallowed by governments in short order in favour of something that doesn't protect the user as much.
miki123211 21 hours ago [-]
Apple lets you add IDs to your wallet in some jurisdictions. I wouldn't be surprised if they eventually introduce a system-wide age verification service and let developers piggyback on it with safe, privacy-preserving assertions.
1659447091 20 hours ago [-]
OIDC4VCI(OpenID for Verifiable Credential Issuance)[0] is what I think has the most promise.
My understanding is that an issuer can issue a Credential that asserts the claims (eg, you are over 18) that you make to another entity/website and that entity can verify those claims you present to them (Verifiable Credentials).
For example, if we can get banks - who already know our full identity - to become Credential Issuers, then we can use bank-provided Credentials (that assert we are over 18) to present to websites and services that require age verification WITHOUT having to give them all of our personal information, as long as the site or service trusts that Issuer.
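For illustration, a hypothetical sketch of the kind of credential payload this describes. The shape loosely follows the W3C Verifiable Credentials data model, but the issuer, subject identifiers, and proof value are invented placeholders:

```python
# Hypothetical Verifiable Credential asserting only an over-18 boolean.
# Field names loosely follow the W3C VC data model; the DIDs and the
# proof value are made-up placeholders, not real identifiers.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "AgeOver18Credential"],
    "issuer": "did:example:some-bank",         # the bank acting as Issuer
    "credentialSubject": {
        "id": "did:example:holder-pseudonym",  # no name, DOB, or address
        "ageOver18": True,
    },
    "proof": {
        "type": "DataIntegrityProof",          # issuer's signature lives here
        "proofValue": "z3FXQ...elided...",
    },
}

# The relying website verifies the proof against the issuer's key and
# learns a single boolean about the holder -- nothing else.
assert set(credential["credentialSubject"]) == {"id", "ageOver18"}
```

The privacy claim rests on the subject block carrying nothing beyond a pseudonymous identifier and the boolean itself.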
You mean without giving them any personal information other than where to find your bank account.
1659447091 19 hours ago [-]
It doesn't have to be your bank if you don't want, have the DMV be an issuer or your car insurance, or health insurance or cell phone service etc.
You choose which one you want to have assert your claim. They already know you. It's a better option than giving every random website or service all of your info and biometric data so you can 'like' memes or bother random people with DMs or whatever people do on those types of social media platforms.
stubish 7 hours ago [-]
For Australia (which will need something like this later this year per current legislation), the only sensible location is the government's my.gov.au central service portal. None of the other services have an incentive or requirement to do it (Medicare, drivers licence issuers, Centrelink). And given the scope of the rollout (all major social media, as nominated by the gov), it would need almost all of the banks or super funds to implement the same API for the project not to fail.
But I don't think anyone has told my.gov.au that this needs to happen, so we are either going to get some proprietary solution from the social media companies (tricky, since they will need to defend it in court as they are liable, but maybe Discord saying 'best we can do, sorry' or 'better than our competitors' will let them off), or just switching off the services for a few days until the politicians panic about the blowback and defer the rollout until some committee can come up with a workable solution (ideally in the next election cycle).
LinuxBender 14 hours ago [-]
I think the post office could suffice in most countries for this.
Or server operators could just implement RTA headers and put the liability on apps/devices to look for the header.
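The RTA label mentioned here is just a fixed string a site serves in a header or meta tag for filters to check. A minimal, stdlib-only sketch (the handler class is illustrative, not a production server):

```python
# Minimal sketch of serving the RTA ("Restricted To Adults") label so
# client-side filters can detect it; the handler class is illustrative.
from http.server import BaseHTTPRequestHandler

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"  # the published RTA label string

class LabelledHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Rating", RTA_LABEL)  # header-based labelling
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        # the same label can also be embedded as a meta tag:
        body = f'<meta name="RATING" content="{RTA_LABEL}" />'
        self.wfile.write(body.encode("utf-8"))
```

As the comment notes, this only works if apps and devices actually look for the label; the server's half of the bargain is one static string.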
Hizonner 17 hours ago [-]
> It doesn't have to be your bank if you don't want,
"If I don't want"? I would get no choice at all about who it would be, because in practice the Web site (or whoever could put pressure on the Web site) would have all of the control over which issuers were or were not acceptable. Don't pretend that actual users would have any meaningful control over anything.
The sites, even as a (almost certainly captured and corrupt) consortium, wouldn't do the work to accept just any potentially trustworthy issuer. In fact they probably wouldn't even do the work to keep track of all the national governments that might issue such credentials. Nor would you get all national governments, all banks, all insurance companies, all cell phone carriers, all neighborhood busybodies, or all of any sufficiently large class of potentially "trustable" issuers to agree to become issuers. At least not without their attaching a whole bunch of unacceptable strings to the deal. What's in it for them, exactly?
Coordinating on certifying authorities is the fatal adoption problem for all systems like that. Even the X.509 CA infrastructure we have only exists because (a) it was set up when there were a lot fewer vested interests, and (b) it's very low effort, because it doesn't actually verify any facts at all about the certificate holder. The idea that you could get around that adoption problem while simultaneously preserving anything like privacy is just silly.
Furthermore, unless you use an attestation protocol that's zero-knowledge in the identity of the certifier, which OpenID is unlikely ever to specify, nor are either issuers or relying parties going to adopt this side of the heat death of the Universe, you as a user are still always giving up some information about your association with something.
Worse, even if you could in fact get such a system adopted, it would be a bad thing. Even if it worked. Even if it were totally zero-knowledge. Infrastructure built for "of adult age" verification will get applied to services that actively should not have such verification. Even more certainly, it will extended and used to discriminate on plenty of other characteristics. That discrimination will be imposed on services by governments and other pressuring entities, regardless of their own views about who they want to exclude.
And some of it will be discrimination you will think is wrong.
It's not a good idea to go around building infrastructure like that even if you can get it adopted and even if it's done "right". Which again no non-zero-knowledge system can claim to be anyway.
Counterproposal: "those types of social media platforms" get zero information about me other than the username I use to log in, which may or may not resemble the username I use anywhere else. Same for every other user. The false "need" to do age verification gets thrown on the trash heap where it belongs.
1659447091 16 hours ago [-]
> Don't pretend that actual users would have any meaningful control over anything.
You do have control; you just don't like the option you have, which is to forgo those social/porn sites altogether. You want to dictate to businesses and governments how to run the services and laws of the places you want to use. And sometimes you can, if you get a large enough group to forgo their services over their policies, or vote in the right people for your cause. You can also wail about it til the cows come home, or you can try to find working solutions that BOTH guard privacy and allow a business to keep providing services by complying with the laws that let it be in business in the first place. It's not black & white and it's not instant; it's incremental steps, it's slow, and it sometimes requires the minor compromise that comes with being an Adult and finding Adult solutions. I'm not interested in dreaming about some fantasy of a libertarian Seasteading world. Been there, done that, got the t-shirt. I prefer finding solutions in the real world now.
> The false "need" to do age verification gets thrown on the trash heap where it belongs.
This is something you should send to your government that makes those rules. The businesses (that want to stay in compliance) follow the government rules given to them. The ones that ask for more are not forcing you against your will to be a part of it.
I get that you don't like it; I don't care for it either. But again, you can throw a fit and pout about it, or try to find workable solutions. That's what I choose to do, even though I made the choice long ago not to use social media (except this site, and GitHub for work if you want to count those), porn sites, gambling, or other nonsense. So all these things don't affect me, since I don't go around signing up for all the time-wasting brain rot (imo). But I am interested in solutions, because I care about data privacy.
Hizonner 15 hours ago [-]
Those businesses also have control. They just don't like the option of control they have, which is to stay out of those countries altogether.
> This is something you should send to your government that makes those rules.
My government hasn't made those rules, at least not yet. Last time they tried, I joined the crowd yelling at them about it. It's easier to do that if people aren't giving them technology they can pretend solves the fundamental problems with what they're doing.
Any more bright ideas?
1659447091 12 hours ago [-]
> Those businesses also have control. They just don't like the option of control they have, which is to stay out of those countries altogether.
Yes. ?
Apparently they don't want to leave and are happy staying there and complying. If you don't like a business's practices, don't use them...
> Last time they tried, I joined the crowd yelling at them about it.
Good. I hope more people that feel as strongly about the subject as you will follow your lead.
> It's easier to do that if people aren't giving them technology they can pretend solves the fundamental problems with what they're doing.
No one is "giving" them technology that pretends anything. There is a community effort to come up with privacy focused, secure solutions. If you noticed the OIDC4VC protocols are still in the draft phase. If it's fubar no one will use it. Worse than that is, if nothing comes of any proposed solutions, the state won't just say oh well you tried.
Either we will continue with the current situation of businesses collecting our IDs and biometrics, each one keeping a DB of this info to sell or have stolen, or some consultant who golfs with a government official will tell them the tech industry can't figure it out but they have a magic solution that's even better, and will build a system (using tax dollars) that uses government IDs, with the added bonus that all of our internet usage can then be tracked by the government.
Wantonly dismissing any effort to make things better in an acceptable way is not going to make it magically go away forever. That ship has sailed. You can resist efforts to find a privacy-focused solution and get stuck with an even worse one from the state, or get your crowd-yelling hat back on and help make sure data and privacy protections are solidly baked into the solutions the tech community is trying to build.
threeseed 21 hours ago [-]
A variation of passkeys could work well.
Especially if it was tightly integrated into the OS so that parents could issue an AgeKey to each of their children which sites would ask for.
hedora 21 hours ago [-]
Parents?
gloosx 4 hours ago [-]
Regulators will never comprehend the internet. They act like they have no idea that on the internet you can: move to another country without a visa in 2 minutes; change your face, voice, and fingerprints to whatever you like; get any passport or document you want to mock any KYC check or impersonate anyone without a trace, all within the $10 range.
Sure, companies have no option but to implement funny policies like these, and I'm sure any kid is much smarter than the government, so they'll feel good circumventing it.
acureau 19 hours ago [-]
Maybe the start of a bigger shift to another platform. I'd wager a large portion of the Discord user-base is underage, and they've got nothing but time.
Havoc 9 hours ago [-]
The march towards digital dystopia continues
nubinetwork 12 hours ago [-]
The day discord asks me for a picture, is the day I close my account
hightrix 12 hours ago [-]
I thought the same at first. But I imagine it’d be relatively trivial to generate a fake ID to upload that would suffice.
1970-01-01 13 hours ago [-]
thispersondoesnotexist.com
Now off ya go, little rascals.
Hyperboreanal 3 hours ago [-]
Why do you want children to be grooming victims/porn addicts?
wvenable 9 hours ago [-]
It seems likely that this will be defeated by AI generated video.
voidfunc 7 hours ago [-]
Yea, I'm not doing that. What alternatives are there for 10-20 person gaming groups that want voice chat and streaming?
Note the list of "messengers that are relevant but did not make it on the list" in case none of the messengers in the comparison meets your requirements. Even that isn't exhaustive, but there are lots of options.
nixpulvis 10 hours ago [-]
Identity verification remains unsolved and likely will remain that way. Any attempt at improvement is authoritarian, and the status quo leaves massive room for circumvention.
Personally, I grew up in an era before there was any expectation of validation, and enjoyed the anonymity of message boards and forums. But when people are posting blatantly illegal activity online, I can see the appeal for more validation. Just makes me sad.
Mountain_Skies 10 hours ago [-]
Which makes one wonder how much of the illegal activities are by people who really are interested in engaging in that illegal activity and how much of it is from those who see it as a means to destroy anonymity online.
switch007 4 hours ago [-]
This will definitely just apply to social media and the situation won't be abused by other companies even if they have no legal requirement, absolutely not, no sir.
miohtama 14 hours ago [-]
A book recommendation on the topic:
> This is the first book to examine the growth and phenomenon of a securitized and criminalized compliance society which relies increasingly on intelligence-led and predictive technologies to control future risks, crimes, and security threats. It articulates the emergence of a ‘compliance-industrial complex’ that synthesizes regulatory capitalism and surveillance capitalism to impose new regimes of power and control, as well as new forms of subjectivity subservient to the ‘operating system’ of a pre-crime society.
The U.S., at least, needs a national ID. That, and a verification system for businesses to use, would solve so many of these issues.
jimbob45 21 hours ago [-]
This is how you lose your comfortable market monopoly like Skype did. Recall that Skype had better P2P tech than Discord did and would still be the market leader if MS had chosen to update anything at all besides the logo bi-yearly.
ajsnigrutin 21 hours ago [-]
I think regulation could be done better...
Let's assign one or ideally two adults to each underage child, who are aware of the child's real age and can intervene and prevent the child from installing Discord (and any other social media) in the first place, or confiscate the equipment if the child breaks the rules. They could also regulate many other things in the child's life, not just social network use.
jasonlotito 21 hours ago [-]
> confiscate the equipment if the child breaks the rules.
Even you acknowledge this plan is flawed and that the child can break the rules. And it's not that difficult. After all, confiscating the equipment assumes that they know about the equipment and that they can legally seize it. Third parties are involved, and doing what you suggest would land these adults in prison.
I know you thought you were being smart with your suggestion that maybe parents should be parents, but really you just highlighted your ignorance.
The goal of these laws is to prevent children from accessing content. If some adults get caught in the crossfire, they don't care.
Now, I'm not defending these laws or saying anything about them. What I am saying is that your "suggestion" is flawed from the point of view of those proposing these laws.
alexey-salmin 15 hours ago [-]
You keep saying it's flawed but I don't see how or why.
What exactly is wrong with the idea that parents should look after their kids?
JoshTriplett 7 hours ago [-]
In the eyes of people who propose laws like this, what's wrong is that that wouldn't let them forcibly impose their values on everyone else.
ajsnigrutin 21 hours ago [-]
These are not 20 something college students with jobs and rented apartments, doing stuff without their parents knowing.
These are kids younger than 13; they don't have jobs, they live with their parents, and they have no internet or data plans outside their parents' control, no nothing.
The goal of these laws is to get ID checks on social networks for everyone, so the governments know who the "loud ones" (against whatever political cause) are. Using small kids as a reason to do so is a typical modus operandi to achieve that.
Yes, those "one or two adults" I mentioned should be the parents, and yes, parents can legally confiscate their kids' phones if they're doing something stupid online. They can also check what the kid is doing online.
If a 12yo kid (or younger) can somehow obtain money and a phone and keep it hidden from their parents, that kid will also be able to avoid such checks by VPN-ing (or proxying) to some non-UK country where those checks won't be mandatory. This again is solved by the parents actually parenting... it's kids younger than 13; at that age, parents can and should have total control over their child.
fkyoureadthedoc 20 hours ago [-]
It has to be acknowledged that some things, like social media and pornography, are harmful to children. "Maintain the status quo" isn't an attractive response to that. ID laws are not a perfect solution, maybe not even a good one.
You undermine your whole point by pretending VPNs are going to make the whole thing moot. Why do you care when you won't be affected because you can just use a VPN? Why does pornhub make such a fuss when their users can just use a VPN? Because in reality, introducing that much friction will stop a lot of people.
ajsnigrutin 20 hours ago [-]
ID laws are the end goal; the children and porn are just an excuse to get ID laws, which would give governments a lot more control over the internet and social networks. Just imagine someone like Trump/Ursula requesting a full list of names of everyone criticizing them on e.g. reddit (because reddit has porn, so you'd have to show your ID to use reddit, for the same reasons). This is objectively bad for the people and for the internet.
Parenting is a good solution, not just giving the kids tablets so they stay quiet. Yes, kids are curious, kids will still find porn, ID laws or not, but parents should teach them and limit their access, not IDs on discord.
And of course pornhub is making a fuss: are you, an (assumed) adult, going to go to your telco with your ID and say "hi, I'm John, I want to watch porn and jerk off, but you need to see my ID first"? Or will you find some other alternative, where pornhub doesn't earn that money?
fkyoureadthedoc 19 hours ago [-]
> Parenting is a good solution
Yes, to most of society's problems. Yet they persist.
alexey-salmin 14 hours ago [-]
So? They equally persist in the face of endless laws, I don't see how it follows that piling more laws on top is a better idea than deferring this to parents.
fkyoureadthedoc 11 hours ago [-]
Parenting was the answer to kids not wearing their seatbelt, and getting maimed and killed by very survivable accidents. Simply teach your kids to wear their seatbelt. Yet seatbelt laws reduced fatality (8%) and serious injury (9%) in kids. It follows that "piling" such a law "on top", one that people decried as unconstitutional, was a better idea than deferring to the parents.
> It has to be acknowledged that some things, like social media and pornography, are harmful to children.
It only "has to be acknowledged" if it's true. The "evidence" for either of those, but especially social media (as if that were even a single well-defined thing to begin with), is pretty damned shaky. Nobody "has to acknowledge" your personal prejudices.
fkyoureadthedoc 19 hours ago [-]
You've convinced me that pornography is, in fact, beneficial to children. What was I thinking? Thank you for your reply.
Hizonner 19 hours ago [-]
Nice try, but the burden of proof for your assertion is still on you.
fkyoureadthedoc 19 hours ago [-]
[flagged]
KaiserPro 13 hours ago [-]
> The goal of these laws is to get ID checks on social networks for everyone
The UK government is nowhere near competent enough to be that stealthy.
Also, it already has this ability. Identifying a person on social media is pretty simple: all it takes is a request to the media company and to the ISP/phone provider.
> If a 12yo kid (or younger) can somehow obtain money and a phone and keep it hidden from their parents,
Then you have bigger fucking problems. If a 12yo can do that, in your home and not let on, then you've raised a fucking super spy.
> parents can and should have total control of their child.
Like how? Constantly check their phones? That's just an invasion of privacy; your kid's never going to trust you. Does the average parent know how to do that? Will they enforce non-disappearing messages?
Allowing kids to be social, safe, and not utter little shits online is fucking hard. I'm really not sure how we can make sure kids aren't being manipulated by fucking TikTok rage bait. (I mean, adults are too, but that's a different problem.)
KennyBlanken 3 hours ago [-]
It's worth noting that Matt Navarra, the sole source of "this is part of a bigger shift", is an ex-member of the UK government who worked in the PM's office and worked for the BBC.
This story is a tempest in a teacup. The administration found someone to spread this nonsense so that everyone later goes "well, that was inevitable; the BBC predicted it would be."
Yeah, and bank robbers can predict that a bank is going to have less cash after a certain day.
This obsession the British have with kids online is so tiresome. You want to stop child sexual assault? Maybe do something about your royalty flying to island getaways organized by a human trafficker and ultra-high-end pimp for underage kids? Or do something about your clergy diddling kids?
Maybe the reason the UK government thinks this is such a big issue is because these legislators and officials are so surrounded by people who do it...because politicians are right there next to clergy in terms of this stuff.
jillyboel 8 hours ago [-]
That's just creepy.
weikju 11 hours ago [-]
Maybe now open-source projects will get off of Discord for their official chat/support?
littlestymaar 2 hours ago [-]
Now we have a good use-case for diffusion-based image generation: bypass these insanely privacy-invasive requirements.
josefritzishere 18 hours ago [-]
This is a privacy nightmare. Mandatory biometrics are pure insanity.
switch007 4 hours ago [-]
> Mandatory biometrics are pure insanity.
Yup someone tell the US government, because visitors can't enter the US without giving biometrics
aucisson_masque 13 hours ago [-]
They will do just enough so that they comply with the law while kids will be able to easily bypass it.
Where there's a will, there's a way, and a teenager looking for porn has plenty of will.
Hizonner 21 hours ago [-]
The ophidian lubricant has entered the chat.
curtisszmania 7 hours ago [-]
[dead]
x187463 21 hours ago [-]
I see a lot of comments here arguing age requirements are overreach and these decisions should be left to the parents. To those presenting such arguments, do you think that applies to other activities as well? What about smoking/drinking/firearms? Pornography? Driving?
I haven't researched the topic of social media's effect on young people, but the common sentiment I encounter is that it's generally harmful, or at least capable of harm in a way that is difficult to isolate and manage as a parent.
The people closest to this issue, that is parents, school faculty, and those who study the psychology and health of children/teens, seem to be the most alarmed about the effects of social media.
If that's true, I can understand the need to, as a society, agree we would like to implement some barrier between kids/teens and the social media companies. How that is practically done seems to be the challenge. Clicking a box that says, in effect, "I totally promise I am old enough" is completely useless for anything other than a thin legal shield.
plsbenice34 13 hours ago [-]
>I see a lot of comments here arguing age requirements are overreach and these decisions should be left to the parents. To those presenting such arguments, do you think that applies to other activities as well? What about smoking/drinking/firearms? Pornography? Driving?
Yes. The state has far, far too much involvement in everybody's lives.
kelseyfrog 13 hours ago [-]
This is a great stance to have if consequences have zero value.
Every time we shrug and say "let the parents decide," we gamble with the most vulnerable: the kids who don’t yet know how to refuse a cigarette, who don’t yet grasp the weight of a loaded weapon, who don’t yet understand that porn isn’t a harmless curiosity. We gamble with the soul of childhood—and when we lose, those children don’t get a second chance. They leave behind empty chairs at dinner tables, empty beds in houses that echo with what might have been. That’s the true cost of unfettered "parental freedom," and it’s a price that's easy to pay with someone else's life. But hey, Fuck those kids, right?
plsbenice34 9 hours ago [-]
I can't express strongly enough that arguing about how to raise children is an incredibly deep, contentious topic. Over and over I see that the state terrifies me deep into my soul, as does the power that a parent has over shaping their children. You're gambling either way and there will always be disturbing consequences. You do not know the optimal way to raise a child - nobody does. It is subjective. Parents NEED to take on massive responsibility and raise their own children rather than leaving it up to the state or letting the state dictate how children are raised. Do you trust Donald Trump to shape your child? Who knows who could be elected next wherever you live.
dayvigo 7 hours ago [-]
I've noticed the left, right, and center have all become more obsessed than ever these past few years with the idea the state and society aren't doing enough to forcibly protect people from themselves, that preventing potential self-inflicted harm due to a poor or risky decision is worth literally any cost; 1% aggregate harm reduction is now considered preferable to freedom of choice. No amount of risk is ever acceptable, and no one is allowed to perform their own risk calculus because they don't know better. And yes, as you said, abusive parenting is a major issue as well. Hard problems to solve.
Marsymars 12 hours ago [-]
> I see a lot of comments here arguing age requirements are overreach and these decisions should be left to the parents. To those presenting such arguments, do you think that applies to other activities as well? What about smoking/drinking/firearms? Pornography? Driving?
My gut feel here mostly has to do with how I view the activity overall. Smoking I see as a social ill that both adults and children would be better off without, so I don't particularly mind an ID check that inconveniences adults, and that can be opted-out from by simply not smoking. (Social media I see as pretty akin to smoking.)
Inconveniencing adults with ID checks is probably not actually a good way to create incentives though.
(Driving is a special case due to negative externalities and danger you cause to others.)
hedora 20 hours ago [-]
> I see a lot of comments here arguing age requirements are overreach and these decisions should be left to the parents. To those presenting such arguments, do you think that applies to other activities as well? What about smoking/drinking/firearms? Pornography? Driving?
All of the things on your list are primarily enforced by parents already.
This law is regulatory capture that's going to strengthen the monopolies of the exact social media sites that you allude to. It makes it harder for smaller, focused sites to exist. Instead the only option will be sites with algorithmic feeds that currently push right-wing nazi propaganda, anti-vaxxers, flat earthers, nihilist school shooting clubs for teenagers, or whatever fresh hell the internet came up with this morning.
If you think age verification is going to fix these problems on the big sites, I suggest watching YouTube Kids. Actually, don't. I wouldn't wish that trauma on anyone. Seriously.
squigz 21 hours ago [-]
The difference is that requiring ID for those activities doesn't generally drastically erode the privacy of other people.
Instead of destroying the concept of privacy and anonymity on the Internet... how about we just stop these companies from being as harmful as they are, regardless of your age?
linuxftw 21 hours ago [-]
> I see a lot of comments here arguing age requirements are overreach and these decisions should be left to the parents.
No you don't. The bulk of the comments at this point in time don't mention things being left to parents at all.
bitmasher9 21 hours ago [-]
> To those presenting such arguments, do you think that applies to other activities as well?
You’re acting like it’s not normal for parents to decide which activities a child can do, cannot do, and must do, and to make these decisions with appropriate ages in mind. I tend to lean towards allowing parents a long leash in their own home and other private places but to regulate behavior in schools and public places.
Consolidation is the only tricky part that's new.
With the net, you get access in one click to the worse and the best. It is a lot of work as a parent to educate the kids about that.
As kids, teenagers, and even as 20-somethings, if we wanted to experiment, we had to physically access the media or be physically present. It was not on-demand over a screen.
So I filter access at home while also trying my best to educate. This is not easy, and I can understand that non-tech-savvy people ask for more laws, even though I am personally against them.
The article is pretty well balanced, we have no silver bullet here.
That ship has sailed. Even the opposition admits that trying to get everyone to filter is not going to work and is functionally insignificant. The only question is whether age verification is still too onerous.
We never needed everyone to filter, just the parents who are busy lobbying the government to impose crap onto every possible service and website across the entire world.
Instead, they should purchase devices for their kids that have a child-lock and client-side filters. All sites have to do is add an HTTP header loosely characterizing its content.
1. Most of the dollar costs of making it all happen will be paid by the people who actually need/use the feature.
2. No toxic Orwellian panopticon.
3. Key enforcement falls into a realm non-technical parents can actually observe and act upon: What device is little Timmy holding?
4. Every site in the world will not need a monthly update to handle Elbonia's rite of manhood on the 17th lunar year to make it permitted to see bare ankles. Instead, parents of that region/religion can download their own damn plugin.
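A client-side check keyed off such a header could look like this minimal sketch; the `Content-Labels` header name and its values are invented here for illustration, not any existing standard:

```python
# Hypothetical sketch of an on-device filter driven by a content-label
# response header. "Content-Labels" is an invented header name; the
# blocked set would be configured by the parent on the child's device.

BLOCKED_LABELS = {"adult", "gambling"}  # parent-configured, stored locally

def allowed(response_headers: dict[str, str]) -> bool:
    """Return True if the page passes the local filter."""
    raw = response_headers.get("Content-Labels", "")
    labels = {label.strip().lower() for label in raw.split(",") if label.strip()}
    return not (labels & BLOCKED_LABELS)

print(allowed({"Content-Labels": "news"}))         # True
print(allowed({"Content-Labels": "adult, chat"}))  # False
```

An unlabeled site passes by default here; a stricter device could just as easily block anything without a label, which is a policy choice left to the parent.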
Preventing your son from playing certain video games that all of his friends enjoy also has a social cost.
This is why I think it's great when schools ban phones in class. When left up to the parents individually it's an absolute disaster.
These are just some specific examples of where I think the nanny state can be beneficial. For most things in general though I'd also prefer people govern themselves (and their kids) whenever possible.
I'm seeing this as a parent in real time. I'm actually changing the behavior of my kid's friends' parents by simply being like, "Cool. But my kid isn't/is going to do that." I don't know when parenting started happening by social committee, but I don't believe in it.
Agreed on the classroom angle, there are many reasons (e.g. cheating, concentration) to treat the availability of devices in a uniform way there.
> If you believe that smart phones are disastrous for kids
A focus on the handheld device also makes it easier to handle other related concerns that can't really be solved any other way, like "no social-media after bedtime."
I don’t like either of them… (And why does YouTube ask me to verify my age when I’m logged into a Google account I created in 2004?)
I take great pains to keep minors out of my adult spaces, and don't have to resort to anything as invasive as biometric surveillance or card charges. This notion that the entire world should be safe for children by default, and that anything and everything adult should be vilified and locked up, is toxic as all get-out and builds shame into the human animal over something required for the perpetuation of the species.
The adult content isn't the problem, it's the relationship some folks have towards it that's the issue. That's best corrected by healthy intervention early on, not arbitrary age checks everywhere online that mainly serve as an exercise of power by the ruling class against "undesirable" elements of society.
What sort of spaces are these (online or in person), and how do you enforce this? I have an online space where such non invasive measures could be useful.
It's inconvenient, sure, and it's not SEO-friendly, but it generally works and doesn't require checking IDs or doing biometric verifications. The thing is, I'm building a community, not a product, and therefore don't have the same concerns as, say, PornHub, for checking IDs. It's also not a scalable solution - I have to build individual rapports with people I can then trust to have the access keys to my space(s), and then monitor that trust at each password change to ensure it's not violated. It's hard work, but it's decently reliable for my needs.
For larger/at-scale providers...I think the better answer is just good-old-fashioned on-device or home-network filtering. The internet was NEVER meant to be child-friendly, and we need to make it abundantly clear to parents that it's never going to be so they take necessary steps to protect their children. I'd personally like to see more sites (in general, not just adult) contribute their domain names and CDNs to independent list maintainers (or published in a help article linked via their main footer) so individuals and organizations can have greater control over their online experience. I think if someone wants to, say, block the entire domain ranges of Amazon for whatever reason, then that information should be readily available without having to watch packet flows and analyzing CDN domain patterns.
It's just good netiquette, I think, but I'm an old-fashioned dinosaur in that regard.
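As a rough illustration of the home-network filtering idea, a published domain list could be turned into hosts-file entries; the domains below are invented placeholders, not a real published list:

```python
# Sketch: converting a site-published domain list into hosts-file
# blocklist entries for home-network filtering. Domains are
# illustrative placeholders only.

def hosts_entries(domains: list[str]) -> str:
    """Map each domain (and its www. variant) to the null address."""
    lines = []
    for d in domains:
        lines.append(f"0.0.0.0 {d}")
        lines.append(f"0.0.0.0 www.{d}")
    return "\n".join(lines)

blocklist = ["example-adult-site.test", "cdn.example-adult-site.test"]
print(hosts_entries(blocklist))
```

The same list could equally feed a DNS sinkhole or router-level filter; the point is that the information is published once and consumed however each household prefers.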
The world should be safe for kids because kids are the future of our society. When the world isn't safe, families won't have kids and society will start to decline. Maybe that means giving up some of the privileges you have. That's the cost of our future.
It's not just about which is worse surveillance, it's also simply that everyone has a face but not everyone has a credit card. I'm not deemed creditworthy in this country I moved to (never had a debt in my life but they don't know that) so the card application got rejected. Do we want to upload biometrics or exclude poor and unknown people from "being 18"? I really don't know which is the lesser poison
> (And why does YouTube ask me to verify my age when I’m logged into a Google account I created in 2004?)
I'd guess they didn't want to bother with that edge case. Probably <0.01% of active Youtube accounts are >18 years old
Yeah those checks are super annoying. The internet has been around long enough, mechanisms for this should exist.
And even in the smaller term, if I had to be 13 to make this account, and it has been more than 5 years, maybe relax?
That doesn't mean every service provider (discord, roblox, pornhub) should have the same.
Unfortunately it is not enough to prove an identity (you could be using the credit card of your traveling uncle) and regulation requires for it to be combined with another proof.
I see a lot of people associating identity verification with evil intent (advertising, tracking).
I work in this domain and the reality is a lot less interesting: identity verification companies do this and only this, under strict scrutiny both from their customers and from the regulators.
We are not where we want to be from a privacy standpoint but the industry is making progress and the usage of identity data is strictly regulated.
The card providers share your identity in monetary transactions, but I don't think this data does, or should, include birthdate.
That's useful as one option, but can't be expected of 18 year olds in most countries, and older adults in many.
I know I've read stories of kids taking cards to purchase games or other things online numerous times over the last 20+ years.
Eh. It's just easier and cheaper. I'll bet Discord has outsourced this to one of those services that ask you for a face scan when you sign up to [some other service].
This is always about government overreach.
People are less likely to criticize the government, or even participate in political debate, if their online identities are known by the government. Governments like obedient, scared citizens.
The only ethical response to laws like this, is for websites and apps to terminate operations completely in countries that create them. Citizens who elect politicians without respect for human rights and privacy don't really deserve anything nice anyway.
If the government is not working like that, you have an administrative problem, not a societal one. A state is its population.
Formats like shorts or news feeds served to you algorithmically with zero lag are the problem. They make for the zombification of decision making. Endless content breaks people down precisely because it's endless. I think if you add age verification but don't root out the endless nature, you will not really help any young person or adult.
When you look at people with unhealthy content addiction, it is always a case of excess and not necessarily type of content. There are pedophiles but honestly, we have had that throughout all time, with and without the internet. But the endless feeding of the next video robs people of the ability to stop by mentally addicting them to see just one more. And because content is not really infinite, endless feeds invariably will feed people with porn, eating disorders, and other "crap" in quantities that slowly erode people.
Different types of pornography have different dangers and all of it has been broadly available since before the internet.
> And then you have shit like watchpeopledie.tv.
I think there's a broad gulf between these activities and I don't think they impact the brain in the same way as pornography. This type of violence can be found in movies and video games which also clearly predate the internet.
> Children should have been banned from the internet a decade ago
I'd rather pornography be banned.
> I'm completely willing to give up some privacy to make it happen.
Why? It should be incumbent on the people profiting from this activity to police it not on me to give up constitutional rights to protect their margins.
If they get it wrong, are you locked out? Do you have to send an image of your ID? So many questions. Not a huge fan of these recent UK changes (looking at the Apple E2E situation as well). I understand what they're going for, but I'm not sure this is the best course of action. What do I know though :shrug:.
On contacting their support, I learned that they refused to use any other process. Also it became apparent that they had outsourced it to some other company and had no insight into the process and so no way to help. Apparently closing one's account will cause an escalation to a team who determines where to send the money, which would presumably put some human flexibility back into the process.
(In the end I was able to get their web app to work by trying several other devices, one had a camera that for whatever reason satisfied their checks that my face was within the required oval etc.)
I suspect this won't help you, but I think it's worth noting that the GDPR gives people the right to contest any automated decision-making that was made on a solely algorithmic basis. So this wouldn't be legal in the EU (or the UK).
This is over-reach. Both in the UK and Australia.
When a TV channel broadcast porn, who gets fined?
These are accepted laws that protect kids from "harm", which are relatively uncontroversial.
Now, the privacy angle is very much the right question. But as Discord are the one that are going to get fined, they totally need to make sure kids aren't being exposed to shit they shouldn't be seeing until they are old enough. In the same way the corner shop needs to make sure they don't sell booze to 16 year olds.
Now, what is the mechanism that Discord should/could use? that's the bigger question.
Can government provide fool proof, secure, private and scalable proof of age services? How can private industry do it? (Hint: they wont because its a really good source of profile information for advertising.)
The corner shop has far fewer false negatives, far lower data privacy risk, and clear rules that if applied precisely won't add any prejudice about things like skin color or country of origin to whatever prejudice already exists in the person doing the verification.
Additionally, the corner shop does not have far lower data privacy risks - actually it's quite worse. They have you on camera and have a witness who can corroborate you are that person on camera, alongside a paper trail for your order. There is no privacy there, only the illusion of such.
Also, corner shop cameras don't generally retain data for nearly as long as typical online age verification laws would require. Depending on the country and the technical configuration, physical surveillance cameras retain data for anywhere from 48 hours to 1 year. Are you really saying that most online age verification laws worldwide require or allow comparably short retention periods? (This might actually be the case for the UK law, if I'm correctly reading Ofcom's corresponding guidance, but I doubt that's true for most of the similar US state laws.)
I hate sites asking for photo verification, but I think it is more about convenience/reliability for me. My bigger fear is that AI locks me out with no one to go to for support.
The cornershop does not have access to your friend graph. Also, if you pay by card, digital ID only provides corroboration, your payment acts as a much more traceable indicator.
The risk of "digital ID" is that it'll leak grossly disproportionate amounts of data on the holder.
For Age verification, you only need a binary old enough flag, from a system that verifies the holder's ID.
The problem is, people like google and other adtech want to be the people that provide those checks, so they can tie your every action to a profile with a 1:1 link. Then combine it to card transactions to get an ad impression to purchase signal much clearer.
The risk here is much less from government but private companies.
Because certainly one's identity might totally change if one's ID card expires...
Broadcasting porn isn't an age ID issue, it's public airwaves and they're regulated.
These aren't primarily "think of the children" arguments, the former is a major public health issue that's taken decades to begin to address, and the latter is about ownership.
I don't think that chat rooms are in the same category as either public airwaves or drugs. Besides what's the realistic outcome here? Under 18's aren't stupid, what would you have done as a kid if Discord was suddenly blocked off? Shrug and not talk to your friends again?
Or would you figure out how to bypass the checks, use a different service, or just use IRC? Telegram chats? Something even less moderated and far more open to abuse, because that's what can slip under the radar.
So no I don't think this is about protecting kids, I think it's about normalizing the loss of anonymity online.
The UK also has rules on what can be broadcast on TV depending on the time of day.
Are you kidding me? v-chip, mary whitehouse, Sex on TV are all the result of "think of the children" moral panics. Its fuck all to do with ownership.
> I don't think that chat rooms are in the same category as either public airwaves
Discord are making cash from underage kids, in the same way that meta and google are, in the same way that disney and netflix offering kids channels.
Look I'm not saying that discord should be banned for kids, but I really do think that there is a better option than the binary "Ban it all"/"fuck it, let them eat porn"
Kids need to be able to talk to each other, but they also should be able to do that without being either preyed upon by nonces, extremists, state actors and more likely bored trolls.
Its totally possible to provide anonymous age gating, but its almost certainly going to be provided by an adtech company unless we, the community provide something cheaper and better.
2/3 of Australians support minimum age restrictions for social media [1], and it was particularly popular amongst parents. Putting the responsibility solely on parents shows ignorance of the complexities of how children are growing up these days.
Many parents have tried to ban social media only for those children to experience ostracisation amongst their peer group, leading to poorer educational and social developmental outcomes at a critical time in their lives.
That's why you need governments and platform owners to be heavily involved.
[1] https://www.theguardian.com/australia-news/article/2024/jun/...
now replace god with parent.
This facial thing feels like a loaded attempt to both check a box and get more of that sweet, sweet data to mine. Massive privacy invasion and exploitation of children dressed as security theater.
I myself have a mighty beard but took a couple more years to develop...
https://en.wikipedia.org/wiki/21_Jump_Street
Was it Steve Buscemi toting a skateboard?
https://knowyourmeme.com/memes/how-do-you-do-fellow-kids
I think, ironically, the best way to fight this would be to lean on identity politics: There are probably certain races that ping as older or younger. In addition, trans people who were on puberty blockers are in a situation where they might be 'of age' but not necessarily look like an automated system expects them to, and there might be discrepancies between their face as scanned and the face/information that's show on their ID. Discord has a large trans userbase. Nobody cares about privacy, but people make at least some show of caring about transphobia and racism.
> So many questions.
Do they keep a database of facial scans even though they say they don't? If not, what's to stop one older looking friend (or an older sibling/cousin/parent/etc.) from being the 'face' of everyone in a group of minors? Do they have a reliable way to ensure that a face being scanned isn't AI generated (or filtered) itself? What prevents someone from sending in their parent's/sibling's/a stolen ID?
Seems like security theater more than anything else.
A tactical observation more than anything else.
The government already has this from RealID.
it seems to me like I'd be more hesitant to go get a govt photo taken right now at least.
She was 26. She just was that young looking.
:/
Is there a market for leaked facial scans?
Ofcom is serious about enforcing its rules, especially against a multi-national like Discord that even "normies" know and use.
And if they got a slap of "we will let you off this time," they would still have to create some sort of verification service to please the regulator the next time.
You might as well piss off your consumers, lose some of them, and still hold the centre stage rather than fight the case against it. Nothing is stopping Ofcom from launching another lawsuit thereafter.
> Is there a market for leaked facial scans?
There's a market for everything. Fake driver licenses with fake pictures have been around for decades, that would be no different.
https://support.discord.com/hc/en-us/articles/30326565624343...
This can be challenging even with humans. My ex got carded when buying alcohol well into her mid thirties, and staff at the schools she taught at mistook her for a student all the time.
Edit: This isn't how it played out. See the comment below.
The actual situation was that the board refused classification where an adult was intentionally pretending to be an underage child not that they looked like one.
And it was believable, given a history of genuine but inept attempts by some to address real societal problems. (As well as given the history of fake attempts to solve problems for political points for "doing something". And also given the history of "won't someone think of the children" disingenuous pretexts often used by others to advance unrelated goals.) Basically, no one is surprised when many governments do something that seems nonsensical.
So, accusing someone of making up a story of a government doing something odd in this space might be hasty.
I suspect better would be to give a quick check and then "I couldn't find a reference to that; do you have a link?"
https://tysonadams.com/2013/04/23/did-australia-ban-small-br...
In this day and age, of crypto, and certificates, and sso, and all that gubbins, it's surely only a matter of deciding that this is a problem that needs solving.
(Unless the problem really isn't the age of the user at all, but harvesting information...)
Over the next five years, you can look forward to a steady trickle of stories in the press about shocked parents finding that somehow their 15 year old passed a one-time over-18 age verification check.
The fact that compliance is nigh-impossible is intentional - the law is designed that way, because the intent is to deliver a porn ban while sidestepping free speech objections.
My boss is a. a jerk. b. a total jerk. c. an absolute total jerk. d. responsible for my paycheck. Correct answer: d.
dated, and very politically incorrect...
https://allowe.com/games/larry/tips-manuals/lsl1-age-quiz.ht...
(scroll down past answers to questions and answers)
https://news.ycombinator.com/item?id=40298552#40298804
Talking about it or explaining it is like pulling teeth; generally just a thorough misunderstanding of the notion....even though cryptographic certificates make the modern internet possible.
Any number of entities can be certificate issuers, as long as they can be deemed sufficiently trustworthy. Schools, places of worship, police, notary, employers...they can all play the role of trust anchor.
But outside of this, someone determined can issue fake documents at this level of provenance.
Driver's licenses, for example: you can buy the printing machine and blanks (illegally), so you actually need to check with the registrar in that location.
how do you handle revocation when people inevitably start certifying false information?
say if an anchor has issued tens of thousands of legitimate ids, and also ten to career fraudsters who gave them $10000 each
as you've outsourced the trust you have no idea which are legitimate, and if you revoke the lot you're going to have a lot of refunds to issue
(ultimately this is why countries only allow people who can be banned from their profession to certify documents)
So if, say, a UPS store is issued a cert and goes rogue, we can just revoke the trust-anchor cert that was issued to the store, and all certs issued further down are automatically revoked. The revocation check is done either in the app or, in the case of a third party performing the verification, by recognizing that a cert on the issuing chain is revoked and rejecting the cert.
This is how TLS certs are handled too, if a CA goes rogue, all certs issued by that CA are revoked once the CA's root cert is revoked.
As for refund issues, that's a problem for the cert issuer to deal with.
no, it's your problem, as it's your brand slapped over everything, and now you've got tens of thousands of innocent people angry that you've revoked the IDs they paid for in good faith
this would translate into lawsuits, against you
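The chain-revocation model being debated here can be reduced to a deliberately simplified toy (not a real PKI; the names are invented):

```python
# Toy model of chain-of-trust revocation. Each cert records its issuing
# chain root-first; revoking any issuer in the chain invalidates every
# cert below it, mirroring how revoking a rogue CA's root invalidates
# all certs that CA signed.

revoked: set[str] = set()

def is_valid(chain: list[str]) -> bool:
    """chain lists issuers root-first, ending with the leaf cert ID."""
    return not any(cert_id in revoked for cert_id in chain)

leaf = ["root-ca", "ups-store-17", "timmy-age-cert"]
print(is_valid(leaf))        # True
revoked.add("ups-store-17")  # the store goes rogue; revoke its anchor
print(is_valid(leaf))        # False: everything it issued is now invalid
```

The sketch makes the dispute above concrete: revocation is mechanically trivial, but it invalidates legitimate and fraudulent leaf certs alike, which is exactly the refund-and-liability problem being argued over.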
https://youtu.be/92gu4mxHmTY
All certificates are cryptographically linked to an identity-anchor certificate, meaning buying a certificate would require the seller reveal the private key tied to the identity-anchor certificate, a tall order I would argue.
In the case of stolen identity certificates, they can be revoked thus making their illegitimate utility limited.
Why would your design prevent that?
We have laws against kids buying alcohol, even though kids can (and do) try to get adults to buy them booze, but I don't think that's a good reason to say we shouldn't have laws against kids drinking.
AFAIU, the German electronic ID card ("elektronischer Personalausweis") can do this, but it is not widely implemented, and of course geographically limited.
e: I get the same feeling as I do reading about key escrow schemes in the Clipper chip vein, where nobody claimed it was theoretically impossible to have a "spare key" only accessible by warrant, but the resulting complexity and new threat classes [1] just was not worth it
[1] https://academiccommons.columbia.edu/doi/10.7916/D8GM8F2W
Hand out certificates to porn, gambling, or whatever sites that allow requesting the age of a person from the ID card; have the user touch their ID card to their phone to sign a challenge with its key (and a certificate signed by the government). That's it.
Government doesn't know what porn site you visited, and porn site only gets the age.
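The challenge-response flow described here can be sketched roughly as below. Note the loud caveat: HMAC stands in for the card's real asymmetric signature purely so the sketch runs on the standard library; in the actual protocol the site would hold only the government's public key, never the card's key.

```python
# Minimal sketch of eID-style age attestation (NOT the real protocol:
# HMAC is a symmetric stand-in for the card's asymmetric signature).
import hashlib
import hmac
import secrets

CARD_KEY = secrets.token_bytes(32)  # provisioned into the card by the issuer

def card_respond(challenge: bytes) -> tuple[bytes, dict]:
    """The card signs the site's challenge plus a boolean-only attestation."""
    attestation = {"over_18": True}  # no name, no birthdate, just the flag
    msg = challenge + repr(attestation).encode()
    return hmac.new(CARD_KEY, msg, hashlib.sha256).digest(), attestation

def site_verify(challenge: bytes, sig: bytes, attestation: dict) -> bool:
    """The site checks the signature; it learns only the age flag."""
    msg = challenge + repr(attestation).encode()
    expected = hmac.new(CARD_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

challenge = secrets.token_bytes(16)  # fresh per login, prevents replay
sig, att = card_respond(challenge)
print(site_verify(challenge, sig, att))  # True
```

The fresh per-login challenge is what stops a recorded response from being replayed, and the attestation carrying only `over_18` is what keeps the site from learning anything else.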
No it's not. Unless...
> Doing so in a safe way is literally impossible since I don't want to share that information in the first place.
...well then it is.
But it's not constructive to claim that proving your age to someone is by definition a privacy violation. If someone wants to prove their age to someone, then that's a private communication that they're entitled to choose to make.
It is true that if technology to achieve this becomes commonplace, then those not wishing to do so may find it impractical to maintain their privacy in this respect. But that doesn't give others the right to obstruct people who wish to communicate in this way.
The hard part is identifying with reasonable accuracy that the person sitting in front of the device is who they say they are, or a certain age.
Offloading everything to crypto primitives moves the problem into a different domain, where the check verifies that you have access to some crypto primitive, not that it's actually you or yours.
Any fully privacy-preserving crypto solution would have the flaw that verifications could be sold online. Someone turns 21 (or another threshold age) and begins selling verifications with their ID, because there is no attachment back to them and therefore no consequences. So people then start imagining extra layers that would protect against this, which start eroding the privacy because you're returning to central verification of something.
Estonia and South Korea I think also have similar features on their IDs, it's already a solved problem.
I am an adult but refuse to let them scan my face as a matter of principle, so I've considered using https://github.com/hacksider/Deep-Live-Cam to "deepfake" myself and perform the verification while wearing a fake face. If it works, I'll write about it.
Funnily enough, when the Philippines did this, it was decried as a violation of human rights [1]. But usually, media are so silent on such things I'd call them complicit. One already cannot so much as rent a hotel room anywhere in the EU without showing government ID.
[1] https://en.wikipedia.org/wiki/SIM_Registration_Act
sidebar: i've been trying to raise awareness about "joint communications and sensing" wherever i can lately; many companies involved in 6G standardization (esp. nokia) want the 6G network to use mmWave radio to create realtime 3d environment mappings, aka a "digital twin" of the physical world, aka a surveillance state's wet dream.
https://www.nokia.com/blog/building-a-network-with-a-sixth-s...
Is this sort of flow normal elsewhere? It's certainly normal where I live.
I'd walk to a local library and use their wifi. Or walk to a local McDonalds and use their wifi. Or walk to a friend's/family's house and use their wifi. Or...
Followed by governments basically shrugging.
If you run a social media site, then you have an API that allows government access to your data.
(mind you, ID/age requirements for access to adult content go way, way back in all countries)
Also the fact that UK and Australia are kind of backwards on online privacy.
That aside, this is targeted. The fediverse and vbulletin forums of old, even reddit, are all social media but will never require facial recognition. If they do, then far worse things are happening to freedom.
They have this in South Korea: https://news.ycombinator.com/item?id=43716932
I live in a Japanese city with a US military base and trust me, the only Western thing here are the few bars that cater to them.
Why is the Internet any different than say, a porn or liquor store? Why are we so fuckin allergic to verification? I'll tell ya why- money. Don't pretend it's privacy.
1. ID checks are not the same as age verification.
2. a social media website is not the same as a porn website.
if you take the stance that social media sites should require ID verification, then i would furthermore point out that this is likely to impact any website that has a space for users to add public feedback, even forums and blogs.
I already decline this technology when finance companies want to use it for eg. KYC verification ("Sorry, I don't own a smartphone compatible with your tool. If you want my business you'll have to find another way. Happy to provide a notarized declaration if you'd like" has worked in the past).
Presenting the only options as either "scan your child's biometric data into opaque systems" or "let your child be groomed and/or get addicted to porn" is intellectually dishonest and deliberately inflammatory. It's a rhetorical trap designed to shame parents with valid privacy concerns into compliance.
Privacy rights and child protection are not mutually exclusive. Numerous approaches exist that don't require harvesting biometric data from minors, from improved content filtering and educational initiatives to parental controls and account verification methods that don't rely on facial scanning. Corporations are simply implementing the most convenient (for them) solution that technically satisfies regulatory requirements while creating new data streams they can potentially monetize.
What's actually happening here is deeply troubling: we're normalizing the idea that children must surrender their biometric data as the price of digital participation. This creates permanent digital identifiers that could follow them throughout their lives, with their data stored in systems with questionable security, unclear retention policies, and potential for future misuse.
Weaponizing the fear of child exploitation to silence legitimate concerns about corporate overreach isn't just manipulative - it's morally reprehensible. Framing opposition to biometric surveillance as being pro-exploitation deliberately poisons the well against anyone who questions these systems.
We can and must develop approaches that protect children without surrendering their fundamental privacy rights. Pretending these are our only two options isn't just wrong - it actively undermines the nuanced conversation we should be having about both child safety and digital rights.
For unrelated reasons, we already have to implement geoblocking, and we're also intentionally VPN friendly. I suspect most services are that way, so the easy way out is to add "UK" to the same list as North Korea and Iran.
Anyway, if enough services implement this that way, I'd expect the UK to start repealing laws like this (or to start seeing China-level adoption of VPN services). That limits the blast radius to services actually based in the UK. Those are already dropping like flies, sadly.
I hope the rest of the international tech community applies this sort of pressure. Strength in numbers is about all we have left these days.
I don't know actual numbers, but I gave up using a VPN by default because, in my experience, most services definitely are not VPN friendly.
I know all too well that when you grow up you're psychologically wired to assume that the way the parents treated you is normal, and if they harmed you then you deserve to be hurt. I've made friends with and assisted many teens and young adults in unsafe living situations (and talked to people who grew up in fundamentalist religions and cults), and they're dependent on online support networks to recognize and cope with abuse, get advice, and seek help in dangerous situations.
In Germany, immigrants struggle to open a bank account because the banks require documents that they don't have (and that they can hardly get with a bank account). Russian, Iranian and Syrian citizens have a particularly hard time finding a bank that works for them. The most common video document verification system does not support some Indian passports, among others.
To banks, leaving these people out is a rational business decision. The same thing will happen to those deemed too risky or too much hassle by the internet's gatekeepers, but at a much bigger scale.
Banks worldwide regularly refuse service to people who have US citizenship, so I don't think you're far off on that point.
US citizens also had issues due to FATCA requirements although it seems to have improved since they were introduced.
parents didn't know I'm gay, but they did control all flow of information (before social media) by controlling all movements outside school.
it took me until my thirties to realise how deeply abusive my childhood was. the only hints I had, in hindsight, was the first Christmas at uni, everybody was excited to go home and I couldn't fathom why on earth anybody would want to. I dismissed it as an oddity at the time.
Parents need to be more involved in what their kids do online, just like in real life. Grounding them isn't enough. We wouldn't let them wander into dangerous places, so we shouldn't let them wander online without adult supervision. Also, parents need to prepare for having tough conversations, like what pornography or gambling is.
Online companies need to really work to make their sites safe for everyone. They should act like they own a mall. If they let bad stuff in (like pornography, scams, gambling), it hurts their reputation, and people will leave.
Instead of banning everything, because some people take pleasure in those activities, maybe there should be separate online spaces for adults who want that kind of content, like how cities have specific areas for adult businesses. This way, it would be easier to restrict children's access to some hardcore stuff.
If we all put some effort into figuring out easy and privacy-friendly solutions to safeguard kids, we can rely on simple principles. For example, if you want to sell toys to kids, you shouldn't sell adult toys under the same roof (same domain) or have posters that can affect young minds.
That’s always been the point. “Protecting children online” is the trojan horse against privacy, and apart from a few of us nerds, everyone is very much in favour of these laws. The fight for privacy is pretty much lost against such a weapon.
https://www.wired.com/story/new-jersey-sues-discord/
> Platkin says there were two catalysts for the investigation. One is personal: A few years ago, a family friend came to Platkin, astonished that his 10-year-old son was able to sign up for Discord, despite the platform forbidding children under 13 from registering.
> The second was the mass-shooting in Buffalo, in neighboring New York. The perpetrator used Discord as his personal diary in the lead-up to the attack.
In other words, this is yet another attack on privacy in the name of "protecting the children".
I miss the internet of the early 2000s.
- voting with your feet
- contacting your elected representatives
- contacting media outlets
- becoming a member or donor of civil liberties campaigns
- listening to people who don't yet get it and trying to ensure that they can switch to your view without losing face
This is doubly so if your book is historic in some sense. Still find it crazy that Marquis de Sade's stuff is legal.
IRC gives you all the features of a normal client, but you've got to create them yourself, which is itself a dark art that's been squandered by today's gimmicky services.
Just because it doesn't have a fancy UI to present the media doesn't mean it can't.
Encode to base64 and post in channel. Decode it back to normal format... IRC is excellent for large amounts of stringed text.
You could even stream the movie in base64 and have a client that captures the data stream and decodes.
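The encode-and-post idea above can be sketched in a few lines of Python. The 400-character payload budget is an assumption: IRC caps a full message at 512 bytes including the `PRIVMSG #channel :` prefix and trailing `\r\n`, so the safe payload size depends on the server and channel name.

```python
import base64

# Hypothetical per-line budget, comfortably under IRC's 512-byte message cap
# once the "PRIVMSG #channel :" prefix and "\r\n" terminator are accounted for.
PAYLOAD_CHARS = 400

def encode_for_irc(data: bytes) -> list[str]:
    """Base64-encode arbitrary bytes and split into IRC-sized lines."""
    b64 = base64.b64encode(data).decode("ascii")
    return [b64[i:i + PAYLOAD_CHARS] for i in range(0, len(b64), PAYLOAD_CHARS)]

def decode_from_irc(lines: list[str]) -> bytes:
    """Reassemble received lines and decode back to the original bytes."""
    return base64.b64decode("".join(lines))

media = bytes(range(256)) * 16  # stand-in for a small binary file
lines = encode_for_irc(media)
assert all(len(line) <= PAYLOAD_CHARS for line in lines)
assert decode_from_irc(lines) == media
```

Streaming is the same loop applied to chunks as they arrive, though base64 inflates the data by about a third, so it's hardly an efficient transport.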
The only thing that IRC lacks is a feature to recall conversations from when you weren't present. But if you're someone who needs that, host a bouncer or something.
I personally enjoy entering a blank slate.
Public servers, sure, may have protections in place, but with your own server, and IRCds being easily configurable, that's a non-issue.
The issue isn't that social media is bad; the issue is that social media has no effective moderation. If an adult is hanging out at the park talking to minors, that's easy to spot and correct. There is strong social pressure to not let that happen.
The problem is that when moving to chat, not only is a mobile phone private to the child, there are no safe mechanisms to allow parents to "spot the nonce". Moreover, the kid has no real way of knowing they are adults until it's too late.
It's a difficult problem: doing nothing is going to ruin a generation (or already has), and doing it half-arsed is going to undermine privacy and not solve the problem.
Having said that, I bet such a mechanism will prove easy to fake (if only by pointing the phone at grandad), and therefore be disallowed by governments in short order in favour of something that doesn't protect the user as much.
My understanding is that an issuer can issue a Credential that asserts the claims (eg, you are over 18) that you make to another entity/website and that entity can verify those claims you present to them (Verifiable Credentials).
For example, if we can get banks - who already know our full identity - to become Credential Issuers, then we can use bank-provided Credentials (asserting that we are over 18) to present to websites and services that require age verification WITHOUT having to give them all of our personal information. As long as the site or service trusts that Issuer.
[0] https://openid.net/specs/openid-4-verifiable-credential-issu...
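A toy sketch of that issue/present/verify shape, with heavy caveats: real OpenID4VC uses public-key signatures, selective disclosure, and holder binding, whereas here an HMAC with a demo key stands in for the issuer's signature, and every name is hypothetical. The point is only that the signed claim carries no name or birthdate.

```python
import hmac, hashlib, json

# Hypothetical trust anchor: in a real deployment the verifier would hold
# the issuer's *public* key, not a shared secret as in this toy model.
ISSUER_KEY = b"bank-issuer-demo-key"

def issue_credential(over_18: bool) -> dict:
    """Issuer (e.g. a bank) signs only the claim, not the holder's identity."""
    claim = json.dumps({"over_18": over_18}, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": {"over_18": over_18}, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Verifier (the website) checks the issuer's signature on the claim."""
    claim = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"]) and cred["claim"]["over_18"]

cred = issue_credential(True)
assert verify_credential(cred)      # the website learns only "over 18"
cred["sig"] = "tampered"
assert not verify_credential(cred)  # forged presentations fail
```

Note that nothing in `cred` identifies the holder — that is the privacy property the thread is debating, and also the reason the resale problem raised above exists.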
You choose which one you want to have assert your claim. They already know you. It's a better option than giving every random website or service all of your info and biometric data so you can 'like' memes or bother random people with DMs or whatever people do on those types of social media platforms.
But I don't think anyone has told my.gov.au that this needs to happen, so we are either going to get some proprietary solution from social media companies (tricky, since they will need to defend it in court as they are liable, but maybe Discord saying 'best we can do, sorry' or 'better than our competitors' will let them off), or just switching off the services for a few days until the politicians panic about the blowback and defer the rollout until some committee can come up with a workable solution (ideally in the next election cycle).
Or server operators could just implement RTA headers and put the liability on apps/devices to look for the header.
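For reference, the RTA label is a single well-known string that a site can serve as an HTTP response header and/or an HTML meta tag, leaving it to filters and devices to act on it. A minimal stdlib sketch (the handler name and header plumbing are illustrative):

```python
from http.server import BaseHTTPRequestHandler

# The RTA label is a fixed, published string; serving it marks the whole
# site as adult content for any filter that chooses to honor it.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def rta_headers() -> dict[str, str]:
    """Headers an adult site would attach to every response."""
    return {"Rating": RTA_LABEL}

class LabeledHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Meta-tag form of the same label, for crawlers that read HTML.
        body = (f'<html><head><meta name="RATING" content="{RTA_LABEL}">'
                f'</head><body>...</body></html>').encode()
        self.send_response(200)
        for name, value in rta_headers().items():
            self.send_header(name, value)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

The appeal of this scheme is exactly the liability shift described above: the server declares once, and enforcement happens on the client side, where the parent controls the device.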
"If I don't want"? I would get no choice at all about who it would be, because in practice the Web site (or whoever could put pressure on the Web site) would have all of the control over which issuers were or were not acceptable. Don't pretend that actual users would have any meaningful control over anything.
The sites, even as an (almost certainly captured and corrupt) consortium, wouldn't do the work to accept just any potentially trustworthy issuer. In fact they probably wouldn't even do the work to keep track of all the national governments that might issue such credentials. Nor would you get all national governments, all banks, all insurance companies, all cell phone carriers, all neighborhood busybodies, or all of any sufficiently large class of potentially "trustable" issuers to agree to become issuers. At least not without their attaching a whole bunch of unacceptable strings to the deal. What's in it for them, exactly?
Coordinating on certifying authorities is the fatal adoption problem for all systems like that. Even the X.509 CA infrastructure we have only exists because (a) it was set up when there were a lot fewer vested interests, and (b) it's very low effort, because it doesn't actually verify any facts at all about the certificate holder. The idea that you could get around that adoption problem while simultaneously preserving anything like privacy is just silly.
Furthermore, unless you use an attestation protocol that's zero-knowledge in the identity of the certifier, which OpenID is unlikely ever to specify, nor are either issuers or relying parties going to adopt this side of the heat death of the Universe, you as a user are still always giving up some information about your association with something.
Worse, even if you could in fact get such a system adopted, it would be a bad thing. Even if it worked. Even if it were totally zero-knowledge. Infrastructure built for "of adult age" verification will get applied to services that actively should not have such verification. Even more certainly, it will be extended and used to discriminate on plenty of other characteristics. That discrimination will be imposed on services by governments and other pressuring entities, regardless of their own views about who they want to exclude.
And some of it will be discrimination you will think is wrong.
It's not a good idea to go around building infrastructure like that even if you can get it adopted and even if it's done "right". Which again no non-zero-knowledge system can claim to be anyway.
Counterproposal: "those types of social media platforms" get zero information about me other than the username I use to log in, which may or may not resemble the username I use anywhere else. Same for every other user. The false "need" to do age verification gets thrown on the trash heap where it belongs.
You do have control; you just don't like the option of control you have, which is to forgo those social/porn sites altogether. You want to dictate to businesses how to run their business, and to governments what the laws should be. And sometimes you can, if you get a large enough group to forgo their services over their policies, or to vote in the right people for your cause. You can also wail about it till the cows come home, or you can try to find working solutions that will BOTH guard privacy and allow a business to keep providing services by complying with the laws that allow it to be in business in the first place. It's not black & white and it's not instant; it's incremental steps, it's slow, and it sometimes requires the minor compromise that comes with being an Adult and finding Adult solutions. I'm not interested in dreaming about some fantasy libertarian Seasteading world. Been there, done that, got the t-shirt. I prefer finding solutions in the real world now.
> The false "need" to do age verification gets thrown on the trash heap where it belongs.
This is something you should send to your government that makes those rules. The businesses (that want to stay in compliance) follow the government rules given to them. The ones that ask for more are not forcing you against your will to be a part of it.
I get that you don't like it; I don't care for it either. But again, you can throw a fit and pout about it, or try to find workable solutions. That's what I choose to do, even though I made the choice long ago not to use social media (except this site, and GitHub for work, if you want to count those), porn sites, gambling, or other nonsense. So none of these things affect me, since I don't go around signing up for all the time-wasting brain rot (imo). But I am interested in solutions, because I care about data privacy.
> This is something you should send to your government that makes those rules.
My government hasn't made those rules, at least not yet. Last time they tried, I joined the crowd yelling at them about it. It's easier to do that if people aren't giving them technology they can pretend solves the fundamental problems with what they're doing.
Any more bright ideas?
Yes. ?
Apparently they don't want to leave and are happy staying there and complying. If you don't like a businesses practice, don't use them. . .
> Last time they tried, I joined the crowd yelling at them about it.
Good. I hope more people that feel as strongly about the subject as you will follow your lead.
> It's easier to do that if people aren't giving them technology they can pretend solves the fundamental problems with what they're doing.
No one is "giving" them technology that pretends anything. There is a community effort to come up with privacy-focused, secure solutions; if you noticed, the OIDC4VC protocols are still in the draft phase, and if they're fubar, no one will use them. Worse, if nothing comes of any proposed solutions, the state won't just say "oh well, you tried."
Either we will continue with the current situation of businesses collecting our IDs and biometrics, each one keeping a database of this info to sell or have stolen, or some consultant who golfs with a government official will tell them the tech industry can't figure it out but that they have a magic solution that's even better, and will build a system (using tax dollars) that uses government IDs, with the added bonus of tracking, and then all of our internet usage can be tracked by the government.
Wantonly dismissing any effort to make things better in an acceptable way is not going to make it magically go away forever. That ship has sailed. You can resist efforts to find a privacy-focused solution and get stuck with an even worse one from the state, or get your crowd-yelling hat back on and help make sure data and privacy protections are solidly baked into the solutions the tech community is trying to build.
Especially if it was tightly integrated into the OS so that parents could issue an AgeKey to each of their children which sites would ask for.
Sure, companies have no option but to implement funny policies like these, and I'm sure any kid is much smarter than the government, so he will feel good circumventing it.
Now off ya go, little rascals.
Note the list of "messengers that are relevant but did not make it on the list" in case none of the messengers in the comparison meets your requirements. Even that isn't exhaustive, but there are lots of options.
Personally, I grew up in an era before there was any expectation of validation, and enjoyed the anonymity of message boards and forums. But when people are posting blatantly illegal activity online, I can see the appeal for more validation. Just makes me sad.
> This is the first book to examine the growth and phenomenon of a securitized and criminalized compliance society which relies increasingly on intelligence-led and predictive technologies to control future risks, crimes, and security threats. It articulates the emergence of a ‘compliance-industrial complex’ that synthesizes regulatory capitalism and surveillance capitalism to impose new regimes of power and control, as well as new forms of subjectivity subservient to the ‘operating system’ of a pre-crime society.
https://www.amazon.com/Compliance-Industrial-Complex-Operati...
Let's assign one or ideally two adults to each underage child, who are aware of the child's real age and can intervene and prevent the child from installing Discord (and any other social media) in the first place, or confiscate the equipment if the child breaks the rules. They could also regulate many other things in the child's life, not just social network use.
Even you acknowledge this plan is flawed and that the child can break the rules. And it's not that difficult. After all, confiscating the equipment assumes that they know about the equipment and that they can legally seize the equipment. Third parties are involved, and doing what you suggests would land these adults in prison.
I know you thought you were being smart with your suggestion that maybe parents should be parents, but really you just highlighted your ignorance.
The goal of these laws is to prevent children from accessing content. If some adults get caught in the crossfire, they don't care.
Now, I'm not defending these laws or saying anything about them. What I am saying is that your "suggestion" is flawed from the point of view of those proposing these laws.
What exactly is wrong with the idea that parents should look after their kids?
These are kids younger than 13; they don't have jobs, they live with their parents, no internet/data plans outside the control of their parents, no nothing.
The goal of these laws is to get ID checks on social networks for everyone, so the governments know who the "loud ones" (against whatever political cause) are. Using small kids as a reason to do so is a typical modus operandi to achieve that.
Yes, those "one or two adults" I mentioned should be the parents, and yes, parents can legally confiscate their kids' phones if they're doing something stupid online. They can also check what the kid is doing online.
If a 12yo kid (or younger) can somehow obtain money and a phone and keep it hidden from their parents, that kid will also be able to avoid such checks by VPN-ing (or proxying) to some non-UK country where those checks won't be mandatory. This, again, is solved by the parents actually parenting... it's kids younger than 13; at that age, parents can and should have total control of their child.
You undermine your whole point by pretending VPNs are going to make the whole thing moot. Why do you care when you won't be affected because you can just use a VPN? Why does pornhub make such a fuss when their users can just use a VPN? Because in reality, introducing that much friction will stop a lot of people.
Parenting is a good solution, not just giving the kids tablets so they stay quiet. Yes, kids are curious, kids will still find porn, ID laws or not, but parents should teach them and limit their access, not IDs on discord.
And of course pornhub is making a fuss: are you (assuming you're an adult) going to go to your telco with your ID and say "hi, I'm John, I want to watch porn and jerk off, but you need to see my ID first"? Or will you find some other alternatives, where pornhub doesn't earn that money?
Yes, to most of society's problems. Yet they persist.
https://www.nber.org/system/files/working_papers/w13408/w134...
It only "has to be acknowledged" if it's true. The "evidence" for either of those, but especially for social media (as if that were even a single well-defined thing to begin with), is pretty damned shaky. Nobody "has to acknowledge" your personal prejudices.
The UK government is nowhere near competent enough to be that stealthy.
Also, it already has this ability. Identifying a person on social media is pretty simple: all it takes is a request to the media company and to the ISP/phone provider.
> If a 12yo kid (or younger) can somehow obtain money and a phone and keep it hidden from their parents,
Then you have bigger fucking problems. If a 12yo can do that, in your home and not let on, then you've raised a fucking super spy.
> parents can and should have total control of their child.
Like how? Constantly check their phones? That's just an invasion of privacy; your kid's never going to trust you. Does the average parent know how to do that? Will they enforce non-disappearing messages?
Allowing kids to be social, safe, and not utter little shits online is fucking hard. I'm really not sure how we can make sure kids aren't being manipulated by fucking TikTok rage bait. (I mean, adults are too, but that's a different problem.)
This story is a tempest in a teacup. The administration found someone to spread this nonsense so that everyone later goes "well, that was inevitable; the BBC predicted it would be."
Yeah, and bank robbers can predict that a bank is going to have less cash after a certain day.
This obsession the British have with kids online is so tiresome. You want to stop child sexual assault? Maybe do something about your royalty flying to island getaways organized by a human trafficker and ultra-high-end pimp for underage kids? Or do something about your clergy diddling kids?
Maybe the reason the UK government thinks this is such a big issue is because these legislators and officials are so surrounded by people who do it...because politicians are right there next to clergy in terms of this stuff.
Yup someone tell the US government, because visitors can't enter the US without giving biometrics
Where there is a will, there is a means, and a teenager looking for porn... that's a big willpower.
I haven't researched the topic of social media's effect on young people, but the common sentiment I encounter is that it's generally harmful, or at least capable of harm in a way that is difficult to isolate and manage as a parent.
The people closest to this issue, that is parents, school faculty, and those who study the psychology and health of children/teens, seem to be the most alarmed about the effects of social media.
If that's true, I can understand the need to, as a society, agree we would like to implement some barrier between kids/teens and the social media companies. How that is practically done seems to be the challenge. Clicking a box that says, in effect, "I totally promise I am old enough" is completely useless for anything other than a thin legal shield.
Yes. The state has far, far too much involvement in everybody's lives.
Every time we shrug and say "let the parents decide," we gamble with the most vulnerable: the kids who don’t yet know how to refuse a cigarette, who don’t yet grasp the weight of a loaded weapon, who don’t yet understand that porn isn’t a harmless curiosity. We gamble with the soul of childhood—and when we lose, those children don’t get a second chance. They leave behind empty chairs at dinner tables, empty beds in houses that echo with what might have been. That’s the true cost of unfettered "parental freedom," and it’s a price that's easy to pay with someone else's life. But hey, Fuck those kids, right?
My gut feel here mostly has to do with how I view the activity overall. Smoking I see as a social ill that both adults and children would be better off without, so I don't particularly mind an ID check that inconveniences adults, and that can be opted-out from by simply not smoking. (Social media I see as pretty akin to smoking.)
Inconveniencing adults with ID checks is probably not actually a good way to create incentives though.
(Driving is a special case due to negative externalities and danger you cause to others.)
All of the things on your list are primarily enforced by parents already.
This law is regulatory capture that's going to strengthen the monopolies of the exact social media sites that you allude to. It makes it harder for smaller, focused sites to exist. Instead the only option will be sites with algorithmic feeds that currently push right-wing nazi propaganda, anti-vaxxers, flat earthers, nihilist school shooting clubs for teenagers, or whatever fresh hell the internet came up with this morning.
If you think age verification is going to fix these problems on the big sites, I suggest watching YouTube Kids. Actually, don't. I wouldn't wish that trauma on anyone. Seriously.
Instead of destroying the concept of privacy and anonymity on the Internet... how about we just stop these companies from being as harmful as they are, regardless of your age?
No you don't. The bulk of the comments at this point in time don't mention things being left to parents at all.
You’re acting like it’s not normal for parents to decide which activities a child can do, cannot do, and must do, and to make these decisions with appropriate ages in mind. I tend to lean towards allowing parents a long leash in their own home and other private places but to regulate behavior in schools and public places.