Broadband or AI Regulation? Congress's False Choice Risks Progress for Black Communities

Danielle Davis, Esq. | June 30, 2025

Congress is currently considering a proposal that would make broadband funding contingent on whether states regulate artificial intelligence (AI). The provision, referred to as the AI moratorium, is included in the budget reconciliation package known as the "One Big Beautiful Bill." If enacted, it would require states to certify that they have not passed or enforced any laws that "restrict" certain AI systems in order to access newly appropriated broadband funds. In addition, states that fail to comply could risk losing broadband infrastructure funding they were already allocated under the Broadband Equity, Access, and Deployment (BEAD) program. This post explains how we got here, how the amendment works, and why it matters for state governance, AI accountability, and broadband access.

How We Got Here

In 2021, Congress passed the Infrastructure Investment and Jobs Act, which created the BEAD program, a $42.45 billion initiative to help states expand affordable, high-speed internet in historically underserved communities, particularly rural and low-income ones. As part of ongoing budget negotiations in 2025, Congress began drafting a budget reconciliation bill, a legislative tool that allows certain tax and spending measures to bypass the Senate filibuster and pass with a simple majority. While BEAD funding has already been allocated, the new bill introduced an additional $500 million in funding. Though smaller in size, this new funding comes with a significant condition: to access it, states must agree not to pass or enforce certain laws regulating AI.
The Condition: Broadband Funding at the Cost of AI Oversight

To receive any money from this new $500 million, states must certify that they have not passed or enforced any laws that "restrict" artificial intelligence systems. That includes laws regarding:

- Biometric privacy (e.g., facial recognition bans);
- Hiring discrimination protections involving algorithms; and
- Consumer data rights that affect AI.

If a state accepts even a small portion of the new $500 million, it must make that certification. According to legal experts, the certification could extend beyond the new funds and apply to a state's entire broadband grant, including the funds previously allocated under the BEAD program. Because the moratorium's terms are vague and the Department of Commerce has de-obligation authority, states could face retroactive funding clawbacks if found noncompliant. This creates significant legal uncertainty and may discourage states from adopting or enforcing AI regulations. Simply put, a state could accept funding to connect rural households to the internet, pass a law regulating AI (such as one addressing biometric privacy or algorithmic discrimination), and then potentially be forced to return that funding due to noncompliance with the AI moratorium.

Why the Revised AI Moratorium Still Threatens Broadband Access and State Authority

The latest version of the AI moratorium was introduced through the Blackburn Amendment, which made two key revisions. First, it shortened the moratorium period from ten years to five years; states would still be barred from enacting or enforcing laws that "restrict" covered AI systems during that time, but the duration of the pause was reduced. Second, the amendment attempted to carve out limited exemptions for certain categories of state law.
Specifically, it lists laws addressing child sexual abuse material (CSAM), children's online safety, and recording artists' rights, such as Tennessee's ELVIS Act, as examples of "generally applicable" laws that might be protected. Notably, however, it fails to include any carve-out for civil rights protections, such as state laws that address algorithmic discrimination, biometric data use, or AI-driven harms that disproportionately impact marginalized communities. Moreover, the exemptions apply only if those laws do not impose an "undue or disproportionate burden" on AI models or systems. That phrase is not defined, and legal experts have warned that it creates broad uncertainty: even the types of laws the amendment aimed to protect, such as the ELVIS Act, could potentially be challenged and invalidated under this standard.

The amendment also leaves untouched the bill's private enforcement mechanism, which could allow private parties to sue states for enforcing laws they believe conflict with the moratorium. Courts have previously interpreted similar language in other federal statutes as allowing private lawsuits, making this a credible legal risk. As a result, states not only face the possibility of losing broadband dollars; they may also be exposed to prolonged litigation over consumer protection, privacy, and biometric laws already on the books.

Most importantly, the Blackburn Amendment does not address one of the most significant concerns raised by civil rights and tech policy advocates: the risk that states could lose their existing BEAD broadband funding if they are found to be in violation after accepting even a portion of the new $500 million fund. That aspect remains fully intact.
How the Moratorium Impacts Black Communities

The ambiguity of this policy creates significant legal and regulatory uncertainty, placing Black communities at the intersection of two persistent policy failures: inadequate internet access and insufficient AI accountability. Here's what we know:

- Black communities are disproportionately disconnected. Many live in areas that still lack access to affordable, high-speed internet, precisely what BEAD was designed to address.
- Black communities are also disproportionately harmed by AI. From facial recognition errors that lead to false arrests, to biased hiring algorithms and surveillance technologies, the risks are well documented and growing.

In response, some states have enacted laws to mitigate these harms, such as biometric privacy protections, AI audit requirements, and bans on algorithmic discrimination. These are exactly the kinds of laws the moratorium could suppress. Furthermore, the burden of this policy won't fall evenly across states. It will disproportionately impact those with fewer resources, particularly southern and rural states, many of which have large Black populations, that may feel compelled to comply in order to secure essential broadband funding. In effect, the moratorium threatens to deepen inequality by forcing states to choose between digital access and digital safety.

What Needs to Happen Now

Congress should remove this provision from the reconciliation bill. A responsible national AI strategy must leave room for states to implement public interest safeguards. Tying broadband funding to limitations on AI regulation undermines that goal and sets a troubling precedent. Black communities, and all communities, deserve both connectivity and meaningful protections. Access to broadband should not come at the expense of regulatory autonomy. As the reconciliation package enters "vote-a-rama," this is one of the final opportunities to address the AI moratorium provision before the bill is finalized.
The question is straightforward: will we use federal investments to empower communities, or to restrict them? Congress must act now to ensure that digital infrastructure and digital rights are not put at odds.