The Battle Over AI Governance

How the US, Europe, and China are shaping rival AI rules, and what it reveals about power, innovation, and the future of intelligence.

Written by James Richards

Artificial intelligence is often described as borderless. Models are trained in one country, deployed in another, and used everywhere at once. Yet the rules that govern those systems remain stubbornly territorial. As AI shifts from experimental software to foundational infrastructure - embedded in finance, healthcare, logistics, defence and media - governments are discovering that how intelligence is governed matters as much as who builds it.

Rather than a shared global framework, what is emerging is a growing divergence in regulatory philosophy. The European Union is constructing the world’s first comprehensive AI rulebook, rooted in rights, risk classification and market harmonisation. The United States is pursuing a looser, more decentralised approach - leaning on executive action, voluntary standards and national-security controls to preserve innovation velocity. China, meanwhile, is integrating AI governance into a broader system of state supervision, information control and industrial strategy.

AI risks, rights and opportunities

These differences are not cosmetic. They reflect deep assumptions about the relationship between the state, the market and the individual. In Europe, AI is framed primarily as a risk to be managed - to fundamental rights, consumer protection and democratic norms. In the United States, it is treated as a capability to be accelerated, constrained mainly where it threatens security, competition or civil rights. In China, by contrast, AI is understood as a systemic force that must remain aligned with social stability and political authority.

The consequences extend far beyond regulatory compliance. Governance shapes incentives. It determines which firms can scale, which business models survive, and which technical approaches become economically viable. A system optimised for rapid iteration will produce different kinds of AI than one optimised for auditability and legal certainty. Over time, those differences compound.

There is also a geopolitical dimension that is increasingly difficult to ignore. Advanced AI depends on scarce resources - high-end semiconductors, specialised data centres, energy and talent. As states seek to secure those inputs, regulation becomes entangled with export controls, supply-chain policy and strategic alliances. AI governance is no longer just about ethics or safety. It is a tool of economic statecraft.

Rather than a shared global framework, what is emerging is a growing divergence in regulatory philosophy.

This article maps how that landscape is taking shape. It examines the governing logics emerging in Europe, the United States and China, before turning to the “second-tier” powers that are quietly shaping global norms through standards, diplomacy and selective regulation. The goal is not to crown a winner, but to understand the trade-offs each system is making - and what those choices reveal about where AI, and power, may settle in the years ahead.

What global AI governance actually means

Before comparing national approaches, it is worth clarifying what governments are really trying to govern. “AI regulation” is often discussed as if it were a single lever. In practice, it spans several overlapping layers, each with its own logic, risks and political sensitivities.

At the model layer, governance focuses on the creation of AI systems themselves. This includes foundation models, training processes, access to large-scale compute resources, evaluation methods, safety testing and incident reporting. Concerns here tend to cluster around scale and capability: at what point does a model become systemically risky, and who decides? This is where debates about frontier models (for example, OpenAI's frontier GPT models and xAI's Grok), red-teaming and disclosure requirements are most intense.
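
The compute question in particular lends itself to a concrete sketch. The thresholds below are the published figures at the time of writing - the EU AI Act presumes "systemic risk" above 10^25 training FLOPs, while the 2023 US executive order set federal reporting duties at 10^26 - but the code is a minimal illustration of how threshold-based triggers work, not a compliance tool.

```python
# Illustrative sketch of threshold-based frontier-model triggers.
# Thresholds are the published figures at the time of writing; real
# obligations depend on far more than a single compute number.
EU_SYSTEMIC_RISK_FLOPS = 1e25  # EU AI Act: presumption of systemic risk for GPAI
US_REPORTING_FLOPS = 1e26      # 2023 US executive order: federal reporting duty

def frontier_obligations(training_flops: float) -> list[str]:
    """Return the threshold-based duties a model of this scale might trigger."""
    duties = []
    if training_flops >= EU_SYSTEMIC_RISK_FLOPS:
        duties.append("EU: systemic-risk GPAI duties (evaluations, incident reporting)")
    if training_flops >= US_REPORTING_FLOPS:
        duties.append("US: report training runs and red-team results to government")
    return duties

print(frontier_obligations(3e25))
# -> ['EU: systemic-risk GPAI duties (evaluations, incident reporting)']
```

Thresholds like these are proxies: they trade precision for administrability, which is exactly why the question of where to draw them is so contested.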

The deployment layer is more familiar territory for regulators. It covers how AI systems are used in specific contexts such as hiring, credit scoring, healthcare, education, law enforcement and welfare administration. Rules at this level often resemble traditional product or service regulation, addressing bias, explainability, accountability and liability. An identical model can be benign in one setting and harmful in another, which is why many governments concentrate their efforts here.

Next is the information layer, where AI intersects with trust, truth and public discourse. This includes requirements around transparency, content labelling, watermarking, political advertising and synthetic media. As generative systems blur the line between authentic and artificial content, governments are increasingly concerned with how AI reshapes information environments rather than individual decisions.
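
What machine-readable labelling can look like is easy to sketch. The snippet below attaches a minimal provenance record to a piece of generated content; it is a toy illustration rather than an implementation of any specific standard (real provenance schemes such as C2PA use cryptographically signed manifests precisely so that labels cannot simply be stripped or rewritten).

```python
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic_content(content: bytes, generator: str) -> dict:
    """Attach a minimal machine-readable provenance record to generated media.

    Toy illustration only: real standards (e.g. C2PA) use signed manifests,
    not a bare dict, so that labels cannot be trivially altered.
    """
    return {
        "synthetic": True,                                  # explicit AI-generated flag
        "generator": generator,                             # which system produced it
        "created": datetime.now(timezone.utc).isoformat(),  # when it was produced
        "sha256": hashlib.sha256(content).hexdigest(),      # binds the label to the content
    }

record = label_synthetic_content(b"<rendered image bytes>", generator="example-image-model")
print(json.dumps(record, indent=2))
```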

Finally, there is the geopolitical layer. This is where AI governance overlaps with national security, trade and industrial policy. Export controls on advanced chips, restrictions on cloud access, investment screening and public funding for domestic AI capacity all sit in this category. These measures are less about harm prevention than about strategic positioning.

AI governance takes place at various layers

| Regulatory layer | What is governed | Primary risks & concerns | Typical policy tools |
| --- | --- | --- | --- |
| Model layer | Development of AI systems themselves: foundation models, training processes, access to large-scale compute, evaluation and safety testing. | Scale and capability risk; emergent behaviours; loss of oversight at the frontier. | Compute thresholds, model evaluations, red-teaming requirements, incident reporting, disclosure obligations. |
| Deployment layer | Use of AI in specific social and economic contexts such as hiring, credit scoring, healthcare, education and policing. | Bias, discrimination, lack of explainability, unclear accountability. | Sector-specific rules, risk classifications, audit requirements, liability frameworks, human-in-the-loop mandates. |
| Information layer | AI's impact on information environments, public discourse and perceptions of truth. | Misinformation, erosion of trust, synthetic media abuse, political manipulation. | Content labelling, watermarking, transparency rules, political advertising restrictions. |
| Geopolitical layer | Strategic control of AI capabilities through national security, trade and industrial policy. | Strategic dependency, technological dominance, loss of domestic capacity. | Export controls, investment screening, cloud access limits, public funding for domestic AI infrastructure. |

Different regions prioritise these layers differently. Europe places greatest weight on deployment and information risks. The United States concentrates on model capability and geopolitical leverage, while leaving deployment largely to sector regulators. China integrates all four layers into a single system of oversight.

International efforts to harmonise AI governance, such as the principles promoted by the Organisation for Economic Co-operation and Development (OECD), reflect this layered reality. They offer shared language and norms, but stop short of dictating how trade-offs should be resolved.

Understanding these layers makes one thing clear. AI governance is not about choosing whether to regulate. It is about choosing where, how hard and to what end. Those choices reveal as much about political priorities as they do about technology.

Europe’s rights-based approach to AI

Europe’s approach to artificial intelligence is the most explicit and ambitious of any major jurisdiction. With the AI Act, which entered into force in 2024, the European Union has become the first major economic bloc to attempt a comprehensive, legally binding framework for governing AI across an entire market.

The structure of the Act reflects a familiar European instinct. Rather than regulating innovation directly, it regulates risk. AI systems are categorised according to the potential harm they pose to fundamental rights, safety and public trust. Certain practices are prohibited outright. Others are deemed “high-risk” and subjected to obligations around data governance, documentation, human oversight and post-market monitoring. General-purpose AI models are treated as a distinct category, reflecting their capacity to propagate risk across sectors.
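
The tiered logic is simple enough to express directly. The sketch below mirrors the Act's published categories; the example classifications are illustrative readings of the Act's annexes, not legal determinations, and general-purpose models sit in a separate category with obligations of their own.

```python
from enum import Enum

class AIActTier(Enum):
    """Risk tiers of the EU AI Act, simplified for illustration."""
    PROHIBITED = "banned outright, e.g. social scoring of citizens"
    HIGH_RISK = "allowed with obligations: data governance, documentation, human oversight"
    LIMITED_RISK = "transparency duties only, e.g. disclosing that a chatbot is a chatbot"
    MINIMAL_RISK = "no new obligations"

# Illustrative mappings following the Act's logic (hiring is an Annex III high-risk use).
EXAMPLES = {
    "CV-screening tool used in hiring": AIActTier.HIGH_RISK,
    "Customer-service chatbot": AIActTier.LIMITED_RISK,
    "Spam filter": AIActTier.MINIMAL_RISK,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```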

This design mirrors the EU’s broader regulatory tradition, shaped by consumer protection law, competition policy and internal market harmonisation. The aim is to create legal certainty across 27 member states while embedding safeguards directly into market behaviour. Compliance is intended to shape system design from the outset, not merely constrain deployment after the fact.

Europe's approach to AI governance prioritises protecting citizens' rights from the threats and risks posed by AI

EU: protecting people from AI risks

Beneath the technical architecture lies a clear political logic: Europe frames AI primarily as a potential threat to individual rights and democratic norms. Concerns about discrimination, opacity and automated decision-making are treated as first-order governance issues rather than secondary side effects of innovation. This explains the emphasis on transparency, accountability and auditability, even where those requirements introduce friction into development cycles.

The AI Act also reflects a strategic ambition that extends beyond Europe’s borders. By regulating a market of more than 400 million consumers, the EU increases the likelihood that global firms will adopt EU-compliant practices as a default. Maintaining parallel product architectures for different regulatory regimes is costly, and aligning with the strictest rules can be commercially rational. In this way, European regulation exerts influence well beyond Europe itself.

That influence comes with trade-offs. Compliance costs are significant, particularly for smaller firms and start-ups. Critics argue that the burden risks entrenching incumbents with the resources to absorb regulatory complexity. There is also concern that regulatory caution may slow Europe’s own AI ecosystem at a moment when global competition is intensifying.

Yet Europe’s bet is a long-term one. The AI Act treats artificial intelligence less as a fast-moving product category and more as foundational infrastructure. Like financial markets or food safety, AI is governed as a system whose failures can have diffuse and systemic consequences. The aim is not to halt innovation, but to domesticate it - embedding it within a legal order that prioritises trust, accountability and social legitimacy.

Whether this model proves globally dominant remains uncertain. What is already clear is that Europe has forced the terms of the debate. Any serious discussion of AI governance now begins, whether in agreement or opposition, with the framework Brussels has placed on the table.

Beneath the technical architecture, Europe frames AI primarily as a potential threat to individual rights and democratic norms.

The United States - standards, security and speed

The American approach to AI governance is best understood not as a single framework, but as a constellation of instruments. There is no equivalent to the EU’s AI Act, and no immediate prospect of one. Instead, the United States has opted for a flexible, decentralised model that privileges innovation speed while intervening selectively where risks are deemed unacceptable.

This reflects a long-standing regulatory instinct. In the US, technology policy has typically focused on outcomes rather than processes. Rather than prescribing how systems must be built, regulators intervene when harms materialise - discrimination, fraud, safety failures, market abuse - or when national security is implicated. AI has largely followed this pattern.

The centre of gravity for federal AI governance has been executive action. A sweeping executive order issued in late 2023 established a coordinating framework across federal agencies, covering issues ranging from safety testing and reporting requirements for advanced models to the use of AI in government services. Crucially, it did so without imposing direct constraints on private-sector research and development.

NIST as a standards framework?

One of the most influential elements of this system is the role of standards. The National Institute of Standards and Technology (NIST) has emerged as a quiet but powerful actor through its AI Risk Management Framework. While formally voluntary, the framework increasingly functions as a reference point for corporate governance, procurement decisions and liability assessments. In practice, standards shape behaviour even when laws do not.
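
The framework itself is compact. AI RMF 1.0 organises risk management around four core functions - Govern, Map, Measure and Manage - and the sketch below renders that structure as a minimal review checklist, with the activity descriptions paraphrased rather than quoted from NIST's text.

```python
# The four core functions of NIST's AI Risk Management Framework (AI RMF 1.0).
# Activity descriptions are paraphrased illustrations, not NIST's own wording.
AI_RMF_FUNCTIONS = {
    "Govern": "establish policies, accountability and a risk-aware culture",
    "Map": "identify a system's context, intended uses and potential impacts",
    "Measure": "assess, benchmark and track identified risks over time",
    "Manage": "prioritise and act on risks, including monitoring and response",
}

def rmf_review_checklist(system_name: str) -> list[str]:
    """Build a minimal review checklist that touches all four functions."""
    return [f"[{fn}] {activity} for '{system_name}'"
            for fn, activity in AI_RMF_FUNCTIONS.items()]

for item in rmf_review_checklist("loan-approval model"):
    print(item)
```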

This reliance on standards reflects a belief that technical expertise, rather than statutory detail, should guide risk mitigation. It also gives the US a form of international leverage. Companies operating across borders often align with NIST-style frameworks because they are perceived as flexible, technically grounded and compatible with rapid iteration.

National security provides the harder edge of American AI governance. Export controls on advanced semiconductors, restrictions on access to high-end compute and scrutiny of foreign investment in AI-related infrastructure all signal a growing willingness to treat AI as a strategic asset. In this context, governance is less about consumer protection and more about maintaining technological advantage.

The US approach to AI governance prioritises standards and speed, seeking to enable rapid AI development through a hands-off approach to regulation

Faith in markets, fear of adversaries

Beneath this approach sits a strategic assumption that is rarely stated explicitly. Advanced AI capability is increasingly viewed in Washington as a domain of relative advantage, particularly in relation to China. Concerns about falling behind – in model capability, compute access or deployment at scale – help explain the reluctance to impose heavy ex ante constraints on development. In this framing, regulation is not rejected outright, but deferred, calibrated against a perceived need to maintain leadership in a technology with profound economic and security implications.

The result is a system designed to encourage rapid development while drawing firm red lines around specific threats. Those threats are defined broadly: risks to critical infrastructure, military capability, economic competitiveness and democratic processes. What the system does not attempt is comprehensive control of how AI systems are designed or deployed across the economy.

For now, the United States is betting that standards, security controls and market dynamics will do most of the governing work. It is a high-velocity approach, optimised for leadership rather than harmonisation - and one that reflects a deep confidence in the corrective power of markets, institutions and scale.

China: AI as a system of supervision

China’s approach to AI governance is often described in terms of restriction and control, but that framing only captures part of the picture. In reality, China treats AI not as a discrete technology to be regulated, but as a systemic force to be integrated into existing structures of governance, economic planning and social management.

Unlike the European focus on rights or the American emphasis on innovation velocity, China’s governing logic prioritises alignment. AI systems are expected to advance economic productivity, strengthen state capacity and reinforce social stability simultaneously. Regulation is designed to ensure those goals do not come into conflict.

This logic is most visible in China’s rules governing generative AI. Providers are subject to registration requirements, security assessments and ongoing oversight. They are held accountable not only for technical performance, but for the social effects of their systems. Content must conform to defined norms, and systems that shape public opinion or emotional engagement attract heightened scrutiny. In this sense, governance extends beyond the model and deployment layers into the realm of interaction itself.

China is aiming to fully integrate AI into its governance and organisational processes

Governing the cumulative impact of AI

These measures reflect a core assumption: that AI systems are inseparable from the information environments they shape. Where Western regulators tend to focus on discrete harms - bias, discrimination, safety failures - Chinese regulation is concerned with cumulative effects. Scale, virality and behavioural influence matter as much as accuracy or explainability.

At the same time, regulation operates alongside strong state support for AI development. Public funding, strategic planning and procurement all play a role in accelerating domestic capability. China does not see regulation and innovation as opposing forces. Rather, regulation is used to channel innovation in directions deemed socially and politically acceptable.

This produces a distinctive ecosystem. Domestic firms operate within clear boundaries, but benefit from scale, coordination and long-term policy certainty. Foreign firms face a more complex environment, shaped by data localisation rules, content requirements and geopolitical sensitivities.

China’s integrated approach to AI governance 

From a governance perspective, China’s system is unusually integrated. The model layer, deployment layer, information layer and geopolitical layer are not treated as separate domains, but as parts of a single oversight framework. This reduces ambiguity about state priorities, even as it limits autonomy for developers and users.

Internationally, China’s approach is unlikely to be widely adopted in full. Its political assumptions are too specific, its controls too tightly coupled to domestic governance structures. Yet elements of its model - particularly the emphasis on information integrity and systemic risk - are beginning to resonate elsewhere as governments grapple with AI-driven misinformation and social manipulation.

China’s AI governance is therefore best understood not as an outlier, but as one pole in a widening spectrum. It demonstrates what AI regulation looks like when stability and control are treated as first-order objectives. As global AI deployment accelerates, the questions it raises about influence, behaviour and authority are unlikely to remain confined within China’s borders.

From a governance perspective, China’s system is unusually integrated. This reduces ambiguity about state priorities, even as it limits autonomy for developers and users.

Swing states, standards brokers and pragmatism 

Beyond the three dominant blocs sits a group of countries whose influence lies less in scale than in positioning. These states are unlikely to define AI governance alone, but they play an important role in shaping norms, standards and points of convergence.

The United Kingdom has framed its approach as deliberately pro-innovation. Rather than introducing a single AI statute, the UK has opted to empower existing regulators, encouraging them to adapt AI oversight within their respective domains. The strategy prioritises flexibility and speed, reflecting a desire to remain attractive to AI investment while avoiding the compliance overhead associated with more prescriptive regimes. The UK’s influence has been amplified through its focus on AI safety diplomacy, positioning itself as a convenor rather than a rule-setter.

Japan has taken a similarly pragmatic path. Its governance model relies heavily on guidance, voluntary principles and close coordination with industry. AI is treated as an enabler of productivity and demographic resilience, particularly in healthcare and automation. The emphasis is on adoption and trust-building rather than legal enforcement, reinforcing Japan’s reputation as a standards-aligned, business-facing regulator.

Singapore occupies a distinct niche. With limited domestic scale but strong institutional capacity, it has positioned itself as a governance laboratory. By developing model AI governance frameworks intended for international use, Singapore exports regulatory infrastructure rather than regulation itself. This allows it to punch above its weight in shaping how “responsible AI” is operationalised across Asia and beyond.

The rise of second-tier nations 

Other jurisdictions illustrate how Europe’s influence is spreading unevenly. Canada and Brazil have both advanced ambitious legislative proposals grounded in rights-based thinking, echoing elements of the EU approach while adapting them to domestic political realities. Their progress has been slower and more contested, underscoring how difficult comprehensive AI legislation remains outside highly integrated markets.

Taken together, these second-tier actors form a connective layer. They translate, adapt and sometimes soften the governing logics set by larger powers. In doing so, they help determine whether global AI governance fragments cleanly into blocs or retains some shared connective tissue.

Three futures for AI governance

As these different approaches harden, the question is no longer whether AI will be governed, but how the world absorbs the consequences of divergent governance. Three broad futures are beginning to take shape.

1) Fragmentation becomes structural

In the first scenario, regulatory divergence deepens. AI systems are increasingly developed, trained and deployed with specific jurisdictions in mind. Models, interfaces and even core capabilities begin to vary by region, shaped by incompatible rules around transparency, content, liability and oversight.

In this world, compliance becomes a strategic moat. Firms with the resources to navigate multiple regimes dominate, while smaller players are pushed toward regional specialisation. Innovation continues, but it is unevenly distributed. Cross-border deployment slows, and global platforms quietly re-architect themselves around regulatory boundaries.

The risk here is not stagnation, but balkanisation. AI remains powerful, but less interoperable. Governance differences harden into economic and cultural ones.

2) Convergence through standards, not law

A second path leads to partial convergence. Rather than harmonising legislation, governments converge indirectly through technical standards, procurement rules and risk frameworks. Firms align with the strictest or most credible baselines - often European-style deployment safeguards combined with American-style model evaluation practices - and apply them globally for simplicity and trust.

In this scenario, law recedes slightly as the primary driver. Audits, certifications and industry norms do the heavy lifting. Regulators enforce at the margins, while standards bodies and large buyers shape behaviour through incentives.

This future favours incumbents and institutional players, but it preserves a degree of global coherence. AI remains broadly interoperable, even as national preferences persist.

3) AI governance becomes openly geopolitical

The third future is the most confrontational. As AI capabilities increasingly underpin military, economic and informational power, governance becomes inseparable from geopolitics. Access to advanced models, compute and data is treated as a strategic privilege. Alliances form around shared standards, security assumptions and supply chains.

In this world, AI governance resembles energy or defence policy more than consumer regulation. Export controls tighten. Collaboration narrows. Innovation accelerates in some blocs while slowing sharply in others.

The upside is clarity. The downside is escalation. Once governance is framed primarily in terms of advantage and threat, cooperation becomes fragile.

One winner, or a blended picture?  

None of these futures is exclusive. Elements of all three are already visible. What is clear is that AI governance is no longer a technical footnote. It is a defining arena in which political values, economic models and technological power intersect.

Artificial intelligence, therefore, may be universal in ambition. The systems that shape it will not be. The rules being written now will determine not only what AI can do, but whose assumptions about intelligence become embedded in the infrastructure of everyday life.

James Richards

Lead Writer, No Latency

James is a professional writer and editor with a background in journalism and publishing, specialising in clear, structured writing on complex technical and commercial subjects.

He has over fifteen years’ experience working across journalism, publishing and professional writing, producing content for both B2B and B2C audiences. His work spans technology, finance and professional services, combining narrative discipline with a deep respect for accuracy and tone.

Peter Franks

Founder & Editor, No Latency

Peter writes long-form analysis on technology, gaming and artificial intelligence - focusing on the systems, incentives and strategic decisions shaping the modern software economy.

He has spent 20+ years working with software and games companies across Europe, advising founders, executives and investors on leadership and organisational design. He is also the founder of Neon River, a specialist executive search firm.