This is a slippery slope fallacy. Just because the option is provided to self-identify age doesn’t mean that it will later be replaced with more complex and direct data collection (which I am against, if it wasn’t clear) - especially considering that, if it’s based on this law, that would be literally impossible. 4a bans the collection of data from your system besides age, and the fact that it is all handled locally and sharing it is prohibited means that it would be impractical to implement anything fancier than a text box to collect data. If anything, this looks like a way to be seen “doing something” without having to change anything for most users. Hell, if California wanted to implement a law for data collection, why would they have implemented the CCPA, why would they have written this law to ban the sharing of data, and why wouldn’t they just write the data collection law instead, given (as you said) there is already significant backing for the idea?
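To make the “nothing fancier than a text box” claim concrete, here is a minimal sketch of what a purely local, self-declared age signal could look like. Everything here is my own illustration, not anything specified by AB1043: the file path, the bracket names, and the function names are all hypothetical. The point is just that the whole mechanism is a value the device owner types in, stored locally, with no network calls and no documents involved.

```python
import json
from pathlib import Path

# Hypothetical sketch: a locally stored, self-declared age bracket,
# written by the device owner (e.g. from a settings screen) and
# readable by apps. No network calls, no ID documents, no biometrics -
# just a coarse value in a local settings file.
SETTINGS = Path.home() / ".config" / "age_signal.json"
BRACKETS = ("under_13", "13_to_15", "16_to_17", "adult")

def set_age_bracket(bracket: str) -> None:
    """Store the self-declared bracket locally."""
    if bracket not in BRACKETS:
        raise ValueError(f"unknown bracket: {bracket}")
    SETTINGS.parent.mkdir(parents=True, exist_ok=True)
    SETTINGS.write_text(json.dumps({"bracket": bracket}))

def get_age_bracket() -> str:
    """What an app would see: only the coarse bracket, nothing else."""
    if not SETTINGS.exists():
        return "adult"  # no signal set: no restriction implied
    return json.loads(SETTINGS.read_text())["bracket"]
```

Nothing in a design like this can transmit, fingerprint, or verify anything beyond what the user chose to type in, which is the sense in which it would be redundant with (rather than a stepping stone to) ID-based systems.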
The worst-case scenario is already happening - aforementioned facial scans are not theoretical. Only their scope has been limited, and suddenly we’re talking about legally-mandated age gating at an OS level.
Pattern recognition is a requirement for survival.
Many abuses start small so that people like you will let it happen. Some caveats only exist for you to point to while bickering with critics, and when you’re not looking, they quietly vanish. Others were just empty words the whole time.
This law is not some compromise over widely-demanded change. It would be a pointless intrusion even if, by some miracle, it stopped right here. It will not stop here. Be serious. You lived through last year; you know the general state of everything. These exact companies have been spying on you. These governments sure aren’t stopping them, for some mysterious reason. Scoffing about blindingly obvious expectations is a choice of comforting fantasy over worthwhile argument.
Okay, but should we not oppose laws about data collection and facial recognition in that case, rather than a law that implements an entirely separate, optional, user-driven approach? Saying this is bad because those are bad is not an argument, any more than saying the CCPA and GDPR are bad because governments want to collect data. Your argument isn’t against this law, or even the concept of having age verification in general. It’s against government overreach as a broad concept. You’re again relying on the slippery slope fallacy to say that because I’m okay with this one specific form of age gating, I’m okay with every other one, which I have repeatedly made clear is not true.
If you’re going to reference the slippery slope fallacy so much, you should probably read where and when it actually applies.
From the Wikipedia entry:
You yourself just acknowledged that the worst-case is already happening, so the assumption that the worst case will continue to happen is reasonable.
Unless you wish to argue that:

“The worst-case scenario is already happening - aforementioned facial scans are not theoretical.”

followed by you saying

“Okay, but should we not oppose laws about data collection and facial recognition in that case […]”

isn’t an acknowledgement?
The fallacy isn’t assuming that it will happen. Clearly there is a significant push towards it, and it’s something we need to be fighting against. The reason it’s a slippery slope fallacy is the assumption that this law is a direct attempt to implement those systems, in spite of the fact that AB1043 implements a system that would be redundant with AI- or ID-based methods, technically doesn’t offer any good way to transition into an AI- or ID-based system (since it all has to be done locally), and legally imposes additional data protection laws that are likely to interfere with AI-based age verification.
The problem with AI and ID age verification isn’t the age verification. It’s the data collection, limits on personal freedom, and to some, the inconvenience. So far as I can tell, AB1043 doesn’t have a significant impact on data collection (it does add another metric that could be used for fingerprinting, but also adds stricter regulation on data collection when this flag is used) or personal freedoms - especially not when compared to what is already the existing standard of asking the user for their age and/or whether they’re over 18.
The fallacy is the expectation that subsequent escalating events would arise from the event in question.
It’s only a fallacy if it’s unreasonable to expect the subsequent steps to occur - or, in this case, to be attempted.
Does that mean it’s a guarantee? Of course not - just that the fallacy doesn’t apply.
The intention or plan for escalating steps doesn’t have to be laid out perfectly to draw parallels between this and previous similar events that were subsequently used as foundations for greater reach.
Your reasoning around the technical implementation of such escalation isn’t applicable here (in the conversation about whether or not the fallacy applies).
If you want to argue that they won’t escalate, or that it’s not possible, go right ahead, but raising a fallacy argument when it doesn’t apply isn’t a good start.
If you want, I can address your arguments around implementation directly, as a separate conversation? I don’t think you’re correct on that either, but as I said, I also don’t think correctness on that subject matters in the context of the fallacy.
My interpretation was that slippery slope was more about the event in question (AB1043) being predicted to directly lead to escalation (AI/ID verification) - as in your Wikipedia quote, “to result in the claimed effects”. I don’t see any reason to predict that this law will directly influence their decision to escalate or not. That said, perhaps it’s a disagreement about how much cultural influence a law like this would have, and how technically separate a parent/user-managed system of age verification is from a government-managed one.
I would be interested to hear your argument for technical implementation, however.
Ah, I think I see where the difference in opinion is: claiming this event leads directly to (as in, the very next step is) AI/ID verification could be considered an unreasonable jump, I suppose.
In my case, I was interpreting the argument as: this event will almost certainly lead to further encroachments on privacy, one of which would probably be AI/ID verification.
To me this is a reasonable assumption, because it’s what has happened in pretty much all recent instances of similar events occurring, and therefore not a slippery slope fallacy.
TL;DR
On further examination, the technical things you mention seem to be correct if you assume that this bill alone is the vector for privacy encroachment, but they don’t pan out at all if it is assumed that other steps will follow - which, given precedent, is highly likely.
On the technical implementation:

“The reason it’s a slippery slope fallacy is the assumption that this law is a direct attempt to implement those systems, in spite of the fact that AB1043 implements a system that would be redundant with AI- or ID-based methods,”

As an aside, I’m not sure anyone is claiming that this bill is a direct attempt at a hard AI/ID verification system; rather, they are claiming that this is another step in a series of encroachments that will lead to escalating requirements and enforcement, AI/ID verification being an obvious step in that series.
From a technical standpoint you are correct: it outright states that photo ID upload isn’t required - yet.
Opinion: a cynic might see this as an indication that the politicians understand that political and public appetite for full photo ID requirements is less than optimal, so this is just a small step in shifting the Overton window on this subject.
“technically doesn’t offer any good way to transition into an AI- or ID-based system (since it all has to be done locally),”

That is only correct in a very narrow set of circumstances; that local requirement isn’t set in stone at all.
All that needs to happen to go from this to full ID checks is a mandate to use a “trusted” service for verification. It wouldn’t need to be an always-online thing either - think of how the bullshit online verification systems that already exist work, i.e. you need to go online every X days or your system/service/app stops working.
Opinion: I fully expect any “trusted” service they designate to be something that serves the governmental and corporate desire for as much data as they can get away with. This isn’t even a stretch - just look at the service Discord was trying to implement, the one with deep ties to Palantir.
“and legally imposes additional data protection laws that are likely to interfere with AI-based age verification.”

This isn’t wrong so much as naive: we are talking about bills that change laws. Any law introduced can be revoked, superseded, or have “exceptions” carved out, such as the current favourite “think of the children” veneer they are using.
It wouldn’t take much to move from “all data is protected” to “all data is protected, unless we need it to protect the children”.
That’s not even taking into account that laws are only as good as the system upholding them; the current US system is sketchy AF, and other countries have similar issues with uneven application of laws.
Not to say we should throw our hands up, say “what’s the point?” and do nothing, but pretending that these laws aren’t susceptible to the same issues affecting everything else doesn’t help anyone either.
“The problem with AI and ID age verification isn’t the age verification. It’s the data collection, limits on personal freedom, and to some, the inconvenience.”

Agreed.
“So far as I can tell, AB1043 doesn’t have a significant impact on data collection (it does add another metric that could be used for fingerprinting, but also adds stricter regulation on data collection when this flag is used) or personal freedoms - especially not when compared to what is already the existing standard of asking the user for their age and/or whether they’re over 18.”

Mostly agreed.
The points I’d raise: the whole idea of age verification is an encroachment on personal freedoms for some, so there’s an aspect of subjectivity to that.
In addition, relying on data collection regulations at this point is almost dangerously naive. Corporations and governments alike have shown that they will ignore them outright or invent some exception; this isn’t conjecture, it’s easily searchable - think Flock, Ring cameras, Stingray, PRISM, anything Palantir is involved in, Cambridge Analytica, broad warrantless data requests, etc.
There is absolutely no reason to give the benefit of the doubt to parties that have repeatedly proven to be doing sketchy shit.
By the sound of it, the disagreement is mostly in how direct an impact AB1043 will have on government plans for data collection and authoritarianism.
Like, as you said, laws can be changed or removed, but the fact that doing so would be necessary to implement AI/ID verification suggests to me that this isn’t that, and is instead a disconnected route. On a legal level, having this does nothing but add a speed bump to future authoritarianism - one they are likely to cross, but it doesn’t advance their goals legally.
Technically, I have no doubt that the government will continue to push for more data collection and more control, but it seems that a local value that the user can access/edit (even if they were to use an online verification system that issues tokens) isn’t going to be secure or enforceable enough to achieve their goals. Anyone can copy, modify, share, reverse-engineer, etc.
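The unenforceability point above can be shown in a few lines. This is a hypothetical demonstration (the file name and value are my own invention): if the age flag is just an unsigned value on disk, any process or user with file access can rewrite it, so it can never carry a real guarantee about who is at the keyboard.

```python
import json
from pathlib import Path

# Hypothetical illustration of the enforceability problem: an unsigned
# local age flag, as a parent might set it.
flag_file = Path("age_flag.json")
flag_file.write_text(json.dumps({"bracket": "13_to_15"}))

# "Bypass": any script with file access can simply overwrite the
# self-declared value - no verification service ever finds out.
data = json.loads(flag_file.read_text())
data["bracket"] = "adult"
flag_file.write_text(json.dumps(data))
```

Which is exactly why a signal like this works as a parental-convenience standard but is useless as a surveillance or enforcement primitive.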
Similarly with the Overton window: it has been standard practice for over a decade to have an “are you at least 18?” popup, and for every single service to ask your age, if not more. We absolutely need more data protections for systems such as this (ideally an outright ban on saving this information), but this doesn’t seem to make it worse.
Basically, from my understanding, this isn’t a step towards data collection or authoritarianism, and provides no significant benefit to either of those causes - it’s effectively a technical standard. Like, if this age-verification flag had been proposed by the Linux Foundation, and agreed to by others, would the backlash be this big? Similarly, I don’t see any contradiction between wanting a ban on storage/sharing of user data and the implementation of a flag like this - even if we were able to ban all storage of user data, this law would be unaffected. That’s what I’m trying to figure out - how do people think that this leads towards those end goals? How would blocking it improve anything?
Is it just a difference in opinion about the significance of the Overton window?
Is there a technical aspect I’m missing?
Is there some legal advantage this provides to surveillance that I’ve missed?
Right now, it seems like everyone is arguing against a strawman, implying that I support the idea of government/corporate surveillance and censorship, or that I don’t expect that they’ll continue to be evil; or they’re simply saying it’s bad because it’s cosmetically similar to laws that do impede freedoms. Given how unanimous the backlash is, I must be missing something?
Mandatory OS integration is not separate, optional, or user-driven.
I have explicitly argued against this, in itself, for its own sake.
Under the other submission, I am even arguing against age verification in general.
But sure, let’s talk about this on its merits, in a vacuum, like there’s nothing else happening. What the fuck is it for? You endlessly insist it’s super minor, barely an inconvenience, and obviously any idiot can bypass it. That is your defense. If you freely acknowledge all of the other efforts went too far and didn’t work, why is this one worth trying? How is this encroachment on all operating systems not a waste of time, at best?