A ban on state AI laws could smash Big Tech’s legal guardrails

Technology
  • Senate Commerce Republicans have kept a ten-year moratorium on state AI laws in their latest version of President Donald Trump’s massive budget package. And a growing number of lawmakers and civil society groups warn that its broad language could put consumer protections on the chopping block.

    Republicans who support the provision, which the House cleared as part of its “One Big Beautiful Bill Act,” say it will help ensure AI companies aren’t bogged down by a complicated patchwork of regulations. But opponents warn that should it survive a vote and a congressional rule that might prohibit it, Big Tech companies could be exempted from state legal guardrails for years to come, without any promise of federal standards to take their place.

    Not to mention, if and when federal standards are created, they will be set by a federal government run by the broligarchs. These are the people we need protection from. They've been planning federal "regulations" that would let them do whatever they need to succeed since at least 2019.

    Relying only on federal AI regulations to protect Americans in 2025 would be like the federal government relying on George Wallace to write the Civil Rights Act in the 1960s.

    Sam Altman, 2025:

    Altman also later cautioned against a patchwork regulatory framework for AI.

    “It is very difficult to imagine us figuring out how to comply with 50 different sets of regulations,” said Altman. “One federal framework that is light touch, that we can understand, and it lets us move with the speed that this moment calls for, seems important and fine.”

    Peter Thiel protégé Michael Kratsios, on AI regulation in 2019:

    “A patchwork of regulation of technology is not beneficial for the country. We want to avoid that. Facial recognition has important roles—for example, finding lost or displaced children. There are use cases, but they need to be underpinned by values.”

  • These two raging assholes just want easier monopolies.

    Anything either of those two says, the rest of us can be sure it's not to our benefit.

  • Do it! Regulate these fuckers out of existence!

  • Exactly!

    Oh, regulations would cut into your profits? Boo fucking hoo.

  • Exactly, they created this nightmare dystopia: they sunk all their money into AI, and if we don't allow them to invade our privacy like it's their personal kingdom, with us existing just to feed their data centers, they're fucked.

    The entire economy is fucked, but that's 100% on them.

    They wanted to just dive in head first, cut a bunch of jobs, and replace everyone with AI. Who in their right mind would think we should loosen regulations on these people now, so they can make more money by exploiting humans?

  • Whatever happened to "states' rights"?

    I guess that was just a convenient excuse to keep people as slaves, huh? 🤷

  • I thought they cared about states' rights?

    How are they even able to ban states from passing laws?

  • I'm not 100% sure it would actually prevent states from creating laws, but given what's happening in my city right now, I would imagine that if this passes, it gives federal agencies and private companies the ability to legally ignore any city and state regulations that might be passed.

    My city used to have a complete ban on facial recognition and predictive policing tech after they were caught secretly working with Palantir. In 2022, the mayor requested the ban be lifted and replaced with an ordinance.

    Police in my city got caught violating the very weak ordinance that regulates how facial recognition is supposed to be used.

    Since WaPo exposed them, they've allegedly paused using the tech. However, the tech is provided by a private company, and the city can't enforce their regulations on the state police and ICE agents that are still using the tech with zero oversight.

    Given how we know states like Texas have already signed up to have their National Guard invade other states to enforce Trump's immigration policy, this could provide legal protection for the Texas National Guard to come into a state like California and use the tech however they deem necessary.

    They could start out by saying it's necessary to enforce immigration (which would be fucked up enough). Very quickly it becomes necessary to protect ICE agents from protestors, and they begin using facial recognition to track protestors and anyone loosely associated with protestors.

    There's no way for city or state laws to do anything about this, because the Texas National Guard would have essentially been given blanket protection by a federal law to use AI to enforce federal immigration policy. Instead of the National Guard being sent to Southern states to enforce civil rights, as happened in the 1960s, the National Guard from a red state would be sent into a blue state to enforce a dystopian cyber-surveillance nightmare created by the federal government.

    Keep in mind this is just one possibility. Even without any of that happening, the best-case scenario of a ban on state regulations is that you're providing legal protection for private corporations to collect data however they want and do whatever they want with it once it's collected.

  • Look, I don't understand whether they want states' rights or not, because one moment they're in favor of the federal government having power, and the next they're in favor of the states having power.

    If they want states to have power, they should only be concerned with things that affect their own state. If another state makes bad decisions, we can only hope its population corrects those mistakes as quickly as possible.

    Or, if they want the federal government to have power, everyone should be informed of changes, both the population and the states, and those laws should benefit the majority.

    There is also a third option: there should be a balance between state and federal power.

  • It's always been "states' rights" to enrich rulers at the expense of everyone else.
