• 0 Posts
  • 4 Comments
Joined 3 years ago
Cake day: August 8th, 2023

  • Ah, I think I see where the difference in opinion is: claiming this event leads directly to (as in, the very next step is) AI/ID verification could be considered an unreasonable jump, I suppose.

    In my case I was interpreting the argument as: this event will almost certainly lead to further encroachments on privacy, one of which would probably be AI/ID verification.

    To me this is a reasonable assumption because it’s what has happened in pretty much all recent instances of similar events occurring, and therefore not a slippery slope fallacy.


    TL;DR

    On further examination, the technical things you mention seem to be correct if you assume that this bill alone is the vector for privacy encroachment, but they don’t pan out at all if it is assumed that other steps will follow; which, given precedent, is highly likely to happen.


    On the technical implementation:

    The reason it’s a slippery slope fallacy is the assumption that this law is a direct attempt to implement those systems, in spite of the fact that AB1043 implements a system that would be redundant with AI or ID based methods,

    As an aside, I’m not sure anyone is claiming that this bill is a direct attempt at a hard AI/ID verification system; rather, they are claiming that this is another step in a series of encroachments that will lead to escalating requirements and enforcement, with AI/ID verification being an obvious step in that series.

    From a technical standpoint you are correct, it outright states that photo ID upload isn’t required, yet.

    Opinion: A cynic might see this as an indication that the politicians understand that political and public appetite for full photo ID requirements is less than optimal, so this is just a small step in shifting the Overton window on this subject.

    technically doesn’t offer any good way to transition into an AI or ID based system (since it all has to be done locally),

    That is only correct in a very narrow set of circumstances; that local requirement isn’t set in stone at all.

    All that needs to happen to go from this to full ID checks is a mandate to use a “trusted” service for verification. It wouldn’t need to be an always-online thing either; think of how the bullshit online verification systems that already exist work, i.e. you need to go online every x days or your system/service/app stops working.
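    To be concrete about the mechanism: here’s a minimal sketch of that kind of grace-period check-in gate. All names and the 30-day window are hypothetical; real implementations would sign the stored state so it can’t be trivially edited.

    ```python
    import json
    import time
    from pathlib import Path

    # Hypothetical policy: the app must check in online at least every 30 days.
    GRACE_PERIOD = 30 * 24 * 3600
    STATE_FILE = Path("verification_state.json")

    def record_online_verification() -> None:
        """Store the timestamp of the last successful online check-in."""
        STATE_FILE.write_text(json.dumps({"last_verified": time.time()}))

    def service_allowed(now: float | None = None) -> bool:
        """Allow the service to run only if a check-in happened recently enough."""
        if not STATE_FILE.exists():
            return False  # never verified: locked out until first check-in
        last = json.loads(STATE_FILE.read_text())["last_verified"]
        now = time.time() if now is None else now
        return (now - last) <= GRACE_PERIOD
    ```

    The point being: nothing about a check like this requires the verification data to stay local; swap the timestamp write for a call to a designated “trusted” service and you have exactly the escalation described above.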

    Opinion: I fully expect any “trusted” service they designate to be something that serves the governmental and corporate desire for as much data as they can get away with. This isn’t even a stretch; just look at the service Discord was trying to implement, the one with deep ties to Palantir.

    and legally, imposes additional data protection laws that are likely to interfere with AI-based age verification.

    This isn’t wrong so much as it seems naive. We are talking about bills that change laws; any law introduced can be revoked, superseded, or have “exceptions” carved out, such as the current favourite “think of the children” thin veneer they are using.

    It wouldn’t take much to move from “all data is protected” to “all data is protected, unless we need it to protect the children”

    That’s not even taking into account that laws are only as good as the system upholding them; the current US system is sketchy AF, and other countries have similar issues with uneven application of laws.

    Not to say we should throw our hands up, say “what’s the point?” and do nothing, but pretending that these laws aren’t susceptible to the same issues affecting everything else doesn’t help anyone either.

    The problem with AI and ID age verification isn’t the age verification. It’s the data collection, limits on personal freedom, and to some, the inconvenience.

    Agreed.

    So far as I can tell, AB1043 doesn’t have a significant impact on data collection (it does add another metric that could be used for fingerprinting, but also adds stricter regulation on data collection when this flag is used,) or personal freedoms - especially not when compared to what is already the existing standard of asking the user for their age and/or if they’re over 18.

    Mostly agreed.

    The points I’d raise are that the whole idea of age verification is an encroachment upon personal freedoms for some, so there’s an aspect of subjectivity to that.

    In addition, relying on data collection regulations at this point is almost dangerously naive; corporations and governments alike have shown that they will basically ignore them outright or make up some exception. This isn’t conjecture, it’s easily searchable: think Flock, Ring cameras, Stingray, PRISM, anything Palantir is involved in, Cambridge Analytica, broad warrantless data requests, etc.

    There is absolutely no reason to give the benefit of the doubt to parties that have repeatedly proven to be doing sketchy shit.


  • The fallacy is the expectation that following escalating events would arise from the event in question.

    It’s only a fallacy if it’s unreasonable to expect the subsequent steps to occur or, in this case, be attempted.

    Does that mean it’s a guarantee? Of course not; just that the fallacy doesn’t apply.

    The intention or plan for escalating steps doesn’t have to be laid out perfectly to draw the parallels between this and previous similar events that were then subsequently used as foundations for greater reach.

    Your reasoning around the technical implementation of such escalation isn’t applicable here (in the conversation about whether or not the fallacy applies).

    If you want to argue that they won’t escalate, or that it’s not possible, go right ahead, but raising a fallacy argument where it doesn’t apply isn’t a good start.

    If you want, I can address your arguments around implementation directly, as a separate conversation? I don’t think you’re correct on that either, but as I said, I also don’t think correctness on that subject matters in the context of the fallacy.


  • If you’re going to reference the slippery slope fallacy so much, you should probably read where and when it actually applies.

    From the wikipedia entry:

    When the initial step is not demonstrably likely to result in the claimed effects, this is called the slippery slope fallacy.

    You yourself just acknowledged that the worst-case is already happening, so the assumption that the worst case will continue to happen is reasonable.

    Unless you wish to argue that:

    The worst-case scenario is already happening

    followed by you saying

    Okay, but

    isn’t an acknowledgement?


  • Not who replied to you originally, but:

    You aren’t wrong (you even stated that more is probably better), just not necessarily presenting the whole picture.

    RAM compression isn’t a benefit-only scenario; there is a cost in processing power to make it happen.

    So it’s a trade-off of memory utilisation vs processing requirements.

    Whether or not it’s worth it depends on circumstance, though generally I think it’s worth the trade-off.
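    You can see the trade-off for yourself with a rough sketch: compressing a fake “page” of idle memory saves space but burns CPU time. This uses zlib purely as a stand-in; actual kernel implementations (zram/zswap) use faster algorithms like LZO or zstd, and the data here is artificially repetitive.

    ```python
    import time
    import zlib

    # Hypothetical 4 MiB "page" of fairly compressible data, standing in
    # for idle application memory (real memory varies in compressibility).
    page = (b"some cached application state " * 140000)[: 4 * 1024 * 1024]

    start = time.perf_counter()
    compressed = zlib.compress(page, level=1)  # fast level, as zram-style setups favour
    cpu_cost = time.perf_counter() - start

    ratio = len(page) / len(compressed)
    print(f"saved {len(page) - len(compressed)} bytes "
          f"(ratio {ratio:.1f}x) for {cpu_cost * 1000:.1f} ms of CPU time")
    ```

    On highly repetitive data like this the ratio looks great; on already-compressed or random data the CPU cost stays but the savings mostly vanish, which is exactly why it’s circumstance-dependent.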

    Unified memory is useful in specific circumstances, most notably LLM/ML scenarios where high VRAM utilisation is part of the process.

    It’s not an apples to apples comparison by any means.