• PlzGivHugs@sh.itjust.works
    1 day ago

    The fallacy isn’t assuming that it will happen. Clearly, there is a significant push towards it, and it’s something we need to be fighting against. The reason it’s a slippery slope fallacy is the assumption that this law is a direct attempt to implement those systems, despite the fact that AB1043 implements a system that would be redundant with AI- or ID-based methods, technically doesn’t offer any good way to transition into an AI- or ID-based system (since it all has to be done locally), and, legally, imposes additional data protection laws that are likely to interfere with AI-based age verification.

    The problem with AI and ID age verification isn’t the age verification. It’s the data collection, the limits on personal freedom, and, to some, the inconvenience. So far as I can tell, AB1043 doesn’t have a significant impact on data collection (it does add another metric that could be used for fingerprinting, but it also adds stricter regulation on data collection when this flag is used) or personal freedoms - especially not when compared to the existing standard of asking the user for their age and/or whether they’re over 18.

    • Senal@programming.dev
      24 hours ago

      The fallacy is the expectation that a series of escalating events would follow from the event in question.

      It’s only a fallacy if it’s unreasonable to expect the subsequent steps to occur or, in this case, to be attempted.

      Does that mean it’s a guarantee? Of course not, just that the fallacy doesn’t apply.

      The intention or plan for the escalating steps doesn’t have to be laid out explicitly to draw parallels between this and previous, similar events that were subsequently used as foundations for greater reach.

      Your reasoning around the technical implementation of such escalation isn’t applicable here (in the conversation about whether or not the fallacy applies).

      If you want to argue that they won’t escalate, or that it’s not possible, go right ahead, but raising a fallacy argument when it doesn’t apply isn’t a good start.

      If you want, I can address your arguments around implementation directly, as a separate conversation? I don’t think you’re correct on that either, but as I said, I also don’t think correctness on that subject matters in the context of the fallacy.

      • PlzGivHugs@sh.itjust.works
        24 hours ago

        My interpretation was that the slippery slope was more about the event in question (AB1043) being predicted to directly lead to escalation (AI/ID verification). As from your Wikipedia quote, “to result in the claimed effects”. I don’t see any reason to predict that this law will directly influence their decision to escalate or not. That said, perhaps it’s a disagreement on how much cultural influence a law like this would have, and how separate a parent/user-managed system of age verification is, technically, from a government-managed one.

        I would be interested to hear your argument about the technical implementation, however.

        • Senal@programming.dev
          12 hours ago

          Ah, I think I see where the difference in opinion is: claiming this event leads directly to (as in, the very next step is) AI/ID verification could be considered an unreasonable jump, I suppose.

          In my case, I was interpreting the argument as: this event will almost certainly lead to further encroachments on privacy, one of which would probably be AI/ID verification.

          To me this is a reasonable assumption, because it’s what has happened in pretty much all of the recent instances of similar events occurring, and therefore not a slippery slope fallacy.


          TL;DR

          On further examination, the technical things you mention seem to be correct if you assume that this bill alone is the vector for privacy encroachment, but they don’t pan out at all if you assume that other steps will follow, which, given precedent, is highly likely to happen.


          On the technical implementation:

          The reason it’s a slippery slope fallacy is the assumption that this law is a direct attempt to implement those systems, despite the fact that AB1043 implements a system that would be redundant with AI- or ID-based methods,

          As an aside, I’m not sure anyone is claiming that this bill is a direct attempt at a hard AI/ID verification system; rather, they are claiming that this is another step in a series of encroachments that will lead to escalating requirements and enforcement, AI/ID verification being an obvious step in that series.

          From a technical standpoint you are correct: it outright states that photo ID upload isn’t required, yet.

          Opinion: A cynic might see this as an indication that the politicians understand that political and public appetite for full photo ID requirements is less than optimal, so this is just a small step in shifting the Overton window on this subject.

          technically doesn’t offer any good way to transition into an AI- or ID-based system (since it all has to be done locally),

          That is only correct in a very narrow set of circumstances; that local requirement isn’t set in stone at all.

          All that needs to happen to go from this to full ID checks is to mandate that they use a “trusted” service for verification. It wouldn’t need to be an always-online thing either; think of how the bullshit online verification systems that already exist work, i.e. you need to go online every X days or your system/service/app stops working.
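
          As a rough sketch of the kind of check-in mechanism I mean (everything here is hypothetical - the service, the interval, the token handling - none of it is taken from the bill):

          ```python
          # Hypothetical sketch only: a cached age-verification token that has to be
          # re-validated online every so often, or the app stops working.
          import time
          import requests  # any HTTP client would do

          REVALIDATION_INTERVAL = 30 * 24 * 60 * 60   # "every X days" - made up here
          VERIFIER_URL = "https://trusted-verifier.example/check"  # hypothetical service

          def token_still_valid(token: str, last_checked: float) -> bool:
              """True while inside the offline grace period, or once the 'trusted'
              verifier confirms the cached token; False otherwise."""
              if time.time() - last_checked < REVALIDATION_INTERVAL:
                  return True  # still within the offline grace period
              try:
                  resp = requests.post(VERIFIER_URL, json={"token": token}, timeout=10)
                  return resp.ok and resp.json().get("valid", False)
              except requests.RequestException:
                  # Past the grace period and the verifier is unreachable:
                  # the app refuses to run, which is the enforcement lever.
                  return False
          ```

          The point being that nothing about a “local” flag stops a later bill from mandating that the flag be backed by a check like this.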

          Opinion: I fully expect any “trusted” service they designate to be something that serves the governmental and corporate desire for as much data as they can get away with. This isn’t even a stretch; just look at the service Discord was trying to implement, the one with deep ties to Palantir.

          and, legally, imposes additional data protection laws that are likely to interfere with AI-based age verification.

          This isn’t wrong so much as it seems naive. We are talking about bills that change laws; any law introduced can be revoked, superseded, or have “exceptions” carved out, such as the current favourite, the “think of the children” thin veneer they are using.

          It wouldn’t take much to move from “all data is protected” to “all data is protected, unless we need it to protect the children”.

          That’s not even taking into account that laws are only as good as the system upholding them; the current US system is sketchy AF, and other countries have similar issues with uneven application of laws.

          Not to say we should throw our hands up, say “what’s the point?”, and just do nothing, but pretending that these laws aren’t susceptible to the same issues affecting everything else doesn’t help anyone either.

          The problem with AI and ID age verification isn’t the age verification. It’s the data collection, the limits on personal freedom, and, to some, the inconvenience.

          Agreed.

          So far as I can tell, AB1043 doesn’t have a significant impact on data collection (it does add another metric that could be used for fingerprinting, but it also adds stricter regulation on data collection when this flag is used) or personal freedoms - especially not when compared to the existing standard of asking the user for their age and/or whether they’re over 18.

          Mostly agreed.

          The points I’d raise are that, for some, the whole idea of age verification is an encroachment upon personal freedoms, so there’s an aspect of subjectivity to that.

          In addition, relying on data collection regulations at this point is almost dangerously naive. Corporations and governments alike have shown that they will basically ignore them outright or make up some exception. This isn’t conjecture; it’s easily searchable: think Flock, Ring cameras, Stingray, PRISM, anything Palantir is involved in, Cambridge Analytica, broad warrantless data requests, etc.

          There is absolutely no reason to give the benefit of the doubt to parties that have repeatedly proven to be doing sketchy shit.

          • PlzGivHugs@sh.itjust.works
            6 hours ago

            By the sound of it, the disagreement is mostly about how direct an impact AB1043 will have on government plans for data collection and authoritarianism.

            Like, as you said, laws can be changed or removed, but the fact that it would be necessary to do so to implement AI/ID verification suggests to me that this isn’t that, and is instead a disconnected route. On a legal level, having this does nothing but add a speedbump to future authoritarianism - one they are likely to cross, but it doesn’t advance their goals, legally.

            Technically, I have no doubt that the government will continue to push for more data collection and more control, but it seems that a local value that the user can access/edit (even if they were to use an online verification system that issues tokens) isn’t going to be secure or enforceable enough to achieve their goals. Anyone can copy, modify, share, reverse-engineer, etc.
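
            For example (purely illustrative - the file path, format, and field name are all made up, not anything AB1043 specifies), a plain local value lives somewhere the user can open and rewrite:

            ```python
            # Illustrative only: if the age signal is just a user-accessible local value,
            # changing it is trivial. The path, file name, and key are hypothetical.
            import json
            from pathlib import Path

            settings_file = Path.home() / ".config" / "device-settings.json"  # made-up location

            settings = json.loads(settings_file.read_text())
            settings["age_bracket"] = "adult"   # the user (or a curious teenager) just edits it
            settings_file.write_text(json.dumps(settings, indent=2))
            ```

            Which is why I don’t see this flag, on its own, carrying the kind of enforcement weight a surveillance system would actually need.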

            Similarly with the Overton window: it has been standard practice for over a decade to have an “are you at least 18?” popup, and for every single service to ask you your age, if not more. We absolutely need more data protections for systems such as this (ideally an outright ban on saving this information), but this doesn’t seem to make it worse.

            Basically, from my understanding, this isn’t a step towards data collection or authoritarianism, and it provides no significant benefit to either of those causes - it’s effectively a technical standard. Like, if this age-verification flag were proposed by the Linux Foundation, and agreed to by others, would the backlash be this big? Similarly, I don’t see any contradiction between wanting a ban on storage/sharing of user data and the implementation of a flag like this - even if we are able to ban all storage of user data, this law would be unaffected. That’s what I’m trying to figure out - how do people think that this leads towards those end goals? How would blocking it improve anything?

            Is it just a difference in opinion about the significance of the Overton window?

            Is there a technical aspect I’m missing?

            Is there some legal advantage this provides to surveillance that I’ve missed?

            Right now, it seems like everyone is arguing against a strawman, implying that I support the idea of government/corporate surveillance and censorship, that I don’t expect that they’ll continue to be evil, or they’re simply saying it’s bad because it’s cosmetically similar to laws that do infringe on freedoms. Given how unanimous the backlash is, I must be missing something?