AI is progressing faster than global communities can keep up. Regulators are busy with hearings and draft laws, while major tech companies roll out new models every few months. Without enforceable rules, AI could worsen inequality, destabilize economies, and threaten democracy. The gap between innovation and regulation keeps growing, and it creates serious risks for everyone. At the national level, governments continue to fall behind.
In the United States, Congress has yet to pass a comprehensive AI law, relying instead on fragmented statutes and existing agency authority. The result is a patchwork of state-level legislation and agency guidelines that leaves developers in the dark and citizens underprotected, with inconsistent protections and uncertain obligations across the country. California may prohibit certain forms of algorithmic bias while another state permits them, creating incentives for businesses to base operations wherever restrictions are most relaxed. This patchwork looks less like a strategy than a regulatory free-for-all.
Europe has attempted to move faster, yet even its approach struggles to match the global, fast-changing nature of AI. The European Union passed the AI Act, which bans systems deemed to pose unacceptable risk and imposes stringent requirements on high-risk ones. In contrast to the U.S. patchwork, the EU grapples with enforcing a single law across 27 member states, each with its own pace and capacity for adoption. As the World Economic Forum notes, the bloc appears united on paper but fractured in practice.
AI Without Borders
One of AI’s defining features is its ability to move smoothly across borders. A model trained in Beijing can generate outputs in Boston. A technology developed in San Francisco can manage supply chains in São Paulo. National rules cannot contain tools that were global from the start. Without international cooperation, one country’s lenient legislation becomes another’s loophole.
This is why researchers are increasingly advocating for a worldwide regulatory framework. Only agreed standards can prevent regulatory arbitrage and assure governments that their competitors are not taking shortcuts. Multiple structures have been proposed: some mimic nuclear treaties, while others envision certification systems in which conforming states trade freely while noncompliant ones face constraints. Establishing a multilateral AI regulatory body may not solve every challenge, but it is a necessary first step: without a central forum to set and enforce shared standards, national efforts will always lag behind a global technology. Whatever the design, the idea remains the same: rules cannot end at the water’s edge.
The barriers to cooperation, however, are substantial. Nations guard their sovereignty and fear losing competitive advantage. Philosophies of AI governance differ greatly: the EU emphasizes human rights, China prioritizes state control, and the United States favors innovation incentives. Enforcement across borders remains challenging, particularly when models are opaque and proprietary. And technology advances so swiftly that today’s definition of “high-risk” may become obsolete within a year. Yet inaction is worse. Without global rules, the door opens to floods of propaganda, an AI arms race, and financial instability.
Falling Further Behind
Failure to regulate has direct consequences. Companies launch products with little testing, and authorities struggle to respond after the fact. Hiring algorithms encode bias. Predictive policing tools perpetuate discrimination. Authoritarian states use misinformation engines to stifle free speech. Public trust erodes. Meanwhile, businesses face rising costs and complexity as they attempt to comply with dozens of competing regulatory regimes. Fragmentation stifles innovation, while jurisdictions that establish the first workable norms end up setting worldwide standards.
Some governments have begun to grasp the gravity of the situation. More than 50 countries committed to transparency and human rights standards when they joined the Framework Convention on Artificial Intelligence in 2024. The agreement was a significant move, but it exempted defense and national security applications and said little about enforcement. Over 200 international leaders have since advocated for “red lines” on high-risk AI by 2026, and events such as the AI Safety Summit now bring together governments, businesses, and civil society. The momentum is there, but the gap between symbolic commitments and binding rules remains significant.
Building a Path Forward
Catching up will require action on three fronts. First, national governments must improve their own regulations. This means adopting tiered frameworks: lighter standards for everyday consumer tools and stronger requirements for high-stakes systems in fields such as healthcare, banking, and defense. Transparency reports, incident disclosures, and independent audits should become commonplace. These measures will not cure every problem, but they will establish a baseline of accountability.
Second, regional organizations such as the European Union, the Association of Southeast Asian Nations (ASEAN), and the North Atlantic Treaty Organization (NATO) must step up. Trade blocs and alliances can set standards among member states, easing enforcement and lowering the compliance burden on businesses. The United Nations is well-positioned to provide a global forum, but regional bodies often move faster and can serve as testing grounds for governance models. Regional frameworks also act as stepping stones toward global cooperation, showing what works and what doesn’t.
Finally, the world needs a global backbone. An international AI governance system should combine principles with incentives. Market access, trade privileges, and access to advanced hardware should all be contingent on compliance. States are unlikely to cooperate purely out of benevolence, but they will if their economic interests require it. A flexible, modular treaty structure would let rules expand and evolve as technology advances, creating a living framework rather than a static document.
A Closing Window
The world is already behind, and the window is closing. Laws move slowly, businesses move swiftly, and AI doesn’t wait. Without binding foundations, governments will remain reactive, constantly playing catch-up. The threats are not theoretical: disinformation operations, destabilized financial markets, autonomous weaponry, and a loss of public trust.
The only path ahead runs from fragmented guidelines to comprehensive, enforceable laws, and from summits to enforcement. If governments fail to act, the rules of AI will be written not by elected officials but by firms competing for market dominance. Either we create the rules for AI, or AI creates a world with no rules.