Artificial intelligence is evolving faster than regulators can keep up. In the absence of federal guidance, states have taken matters into their own hands. California’s S.B. 53 is only one example of a state attempting to shape how AI is built and used. These laws are well-intentioned, protecting consumers and promoting transparency on a small scale, but they treat AI as if it were a purely local issue. In reality, AI is borderless, cloud native, and woven through global infrastructure. It simply does not follow state lines.
In the 2025 legislative session, every state in the country, along with Puerto Rico, the Virgin Islands, and Washington, D.C., introduced proposals related to AI. This year alone, 38 states adopted or enacted roughly 100 measures. Yet these laws rely on different definitions and different compliance and enforcement approaches. The result is a patchwork regulatory landscape: as complex as the technology itself, but without the consistency and interoperability needed to govern AI effectively.
The accelerated expansion of state-level regulation highlights the problem’s growing urgency. It also points to a widening disconnect: AI is advancing rapidly, and new laws are proliferating, but coordination hasn’t kept pace. As a result, policy and security leaders are navigating a fast-paced regulatory landscape without a clear, unified direction.
The geographic fallacy of state-level AI laws
A fragmented regulatory scene creates real challenges for organizations that want to build or use AI responsibly. Each new state law introduces its own requirements for testing, reporting, documentation, or oversight. Security and risk teams must then map every workflow against all of the different, and sometimes conflicting, requirements. Even the basic definition of what counts as AI varies across states: the same system may be regulated in one jurisdiction and unregulated in another.
Large enterprises can usually keep up. With dedicated legal and compliance teams—and the budget to match—they can absorb the cost of audits, system changes, and frequent policy updates. Small and midsize companies don’t have this luxury. Early-stage AI innovators now face an unnecessary choice: devote limited resources to tracking and meeting dozens of regulatory obligations or slow development and risk falling behind. Even when well-intentioned, fragmentation becomes a gatekeeper—creating an environment where only the largest companies can operate at scale. This distorts the market by concentrating innovation in the most well-funded firms and making it harder for smaller teams to break through. The result is an uneven AI ecosystem shaped more by regulatory barriers than by technical capability.
The growing divide
The effects of widespread, conflicting regulations and expectations extend far beyond mere inconvenience. Fragmentation weakens security, reduces public trust, and increases risk across the full AI supply chain. When organizations must focus primarily on compliance, safety and ethics become secondary. Teams spend more time tracking state-level requirements than building the controls that matter most—creating potential gaps in oversight, testing, and transparency.
Regulatory inconsistencies also let large organizations gravitate toward the jurisdictions with the most favorable rules. In practice, they can design their programs around minimum standards rather than the strongest ones. Smaller companies cannot do this; to stay compliant, they often have to meet multiple sets of requirements at once. This uneven burden puts them at a disadvantage and creates a multi-track environment in which safety practices vary widely.
Inconsistent standards invite risk. In cybersecurity, fragmented controls are never effective, and AI security is no different: attackers exploit the weakest point. When rules vary widely, so do protections, leaving openings for misuse, bias, and faulty automation that can cascade through interconnected systems. A world where AI safety depends on geography is not a world that advances trust.
The only sustainable path
A unified federal framework is required to establish clear expectations for transparency, accountability, and responsible innovation. AI operates across borders, and oversight must operate across borders as well.
The window for federal leadership is closing, and the economic consequences of inaction are becoming harder to ignore. As AI advances faster than state legislatures can respond, the patchwork of rules becomes more complex and more burdensome, especially for startups and smaller innovators that lack the resources to navigate it. Without swift national guidance, the U.S. risks hard-coding a system where only the largest enterprises can afford to compete, stifling innovation long before consistent protections are ever put in place.
Advocacy organizations such as Build American AI play a valuable role in advancing this shift. Groups like this are rare, and they shouldn’t be. Clear federal guidance can support innovation while ensuring meaningful safeguards. Consistent national standards would reduce ambiguity, close regulatory loopholes, and give organizations a clear set of expectations that govern their work.
Such consistency benefits security teams, policymakers, and developers across the ecosystem. A unified approach enables organizations to invest in the protections that matter rather than diverting attention toward managing conflicting requirements. It encourages competition by allowing smaller companies to focus on innovation instead of compliance triage. It also raises the overall standard for safe AI development.
Transparency, governance, and a path forward
A more secure and consistent AI landscape begins with federal alignment. A single national framework, efficient and flexible, would replace the conflicting state-level requirements that currently delay AI development. It would prevent situations where an identical AI model faces one set of obligations in California and an entirely different set in Florida. With a unified baseline, organizations could invest in long-term safeguards rather than repeatedly adjusting to shifting geographic rules.
Internal governance plays an equally important role. An ethics-centered approach ensures that organizations build systems that are safe even when regulations are unclear or incomplete. This includes responsible data practices, rigorous model testing, and ongoing monitoring for issues such as bias drift or inaccurate outputs. A team designing an AI tool for patient intake, for example, needs a clearly defined process for detecting, documenting, and resolving errors. These internal controls strengthen both security and trust.
Transparency and interpretability round out the foundation for responsible AI. Systems that let teams understand how decisions are made make it easier to catch misuse or unintended behavior. A fraud detection model that shows which signals influence its decisions is easier to audit and fix than a “closed box” model that doesn’t. Organizations that adopt explainable and auditable tools early will be better prepared for future oversight and better equipped to respond when risks emerge.
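As a rough illustration of that auditability, the sketch below trains a simple, interpretable fraud-scoring model and prints the weight of each signal. It assumes a scikit-learn-style workflow; the feature names and data are hypothetical and not drawn from any particular product.

```python
# Minimal sketch: an interpretable fraud-scoring model whose signal weights
# can be inspected and logged for audit. Feature names and data are
# hypothetical; a real system would use actual transaction features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["amount_zscore", "new_device", "foreign_ip", "velocity_1h"]

# Synthetic transactions in which fraud risk rises with a couple of signals.
X = rng.normal(size=(1000, len(feature_names)))
y = (0.8 * X[:, 0] + 1.2 * X[:, 2] + rng.normal(scale=0.5, size=1000)) > 1.0

model = LogisticRegression().fit(X, y)

# Each coefficient is a reviewable statement of which signal pushes a
# transaction toward "fraud" -- the kind of visibility an auditor can check.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:>15}: {weight:+.2f}")
```

A closed-box scoring service offers no equivalent artifact to review, which is the gap the paragraph above describes.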
Aligning oversight with the reality of AI
A unified federal approach to AI could provide benefits across the entire ecosystem. Innovation would expand because smaller organizations would no longer be hindered by conflicting state requirements. Security would improve because consistent expectations eliminate weak links and close opportunities for misuse. Trust would grow as transparent, interpretable systems become the norm rather than the exception.
AI does not recognize borders. Regulation should reflect that reality. Unified guidance does not slow the evolution of technology. It creates a stronger, safer, and more sustainable environment that supports responsible innovation for everyone.
Kevin Kirkwood is the chief information security officer at Exabeam.
