The AI governance landscape in 2025 is a patchwork of competing approaches, reflecting different values, priorities, and political systems. The EU has established the world's most comprehensive AI regulation. The US has taken a more hands-off approach, relying primarily on executive actions and voluntary commitments. China has implemented its own distinctive framework. The result is a fragmented global regulatory environment that creates compliance challenges for international AI companies and leaves fundamental questions about AI safety unresolved.
The EU AI Act: The World's First Major AI Law
The EU AI Act, which entered into force in August 2024, establishes a risk-based regulatory framework for AI systems used in the EU. Prohibited AI practices — banned outright — include biometric categorization using sensitive characteristics, untargeted scraping of facial images for recognition databases, emotion recognition in workplaces and educational institutions, and AI that manipulates people through subliminal techniques. High-risk AI systems face stringent requirements: conformity assessments, registration in an EU database, human oversight mechanisms, and transparency obligations.
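The risk-based structure above can be sketched as a simple lookup from risk tier to obligations. This is an illustrative simplification, not a legal reference: the tier names and obligation strings condense the Act's categories into a toy data structure.

```python
# Illustrative sketch: EU AI Act risk tiers mapped to the obligations
# described above. Simplified for illustration -- not a legal reference.
RISK_TIER_OBLIGATIONS = {
    "prohibited": None,  # banned outright (e.g. subliminal manipulation)
    "high_risk": [
        "conformity assessment",
        "registration in EU database",
        "human oversight mechanisms",
        "transparency obligations",
    ],
    "limited_risk": ["transparency obligations"],
    "minimal_risk": [],
}

def obligations_for(tier: str) -> list[str]:
    """Return the compliance obligations for a given risk tier.

    Raises ValueError for prohibited practices, which may not be
    deployed at all.
    """
    if tier not in RISK_TIER_OBLIGATIONS:
        raise KeyError(f"unknown risk tier: {tier}")
    obligations = RISK_TIER_OBLIGATIONS[tier]
    if obligations is None:
        raise ValueError("prohibited practice: deployment is banned")
    return obligations
```

The key design point the Act makes, mirrored here, is that prohibited practices are not a compliance tier at all: there is no obligation list that makes them lawful.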
General Purpose AI Models Under the EU Act
General-purpose AI (GPAI) models with systemic risk — those trained on more than 10^25 FLOPs, a threshold that currently captures only the most powerful frontier models — face additional obligations including adversarial testing, incident reporting to the European AI Office, and cybersecurity measures. This creates a two-tier system within the GPAI category. Mistral successfully lobbied for provisions that distinguish between open-source models (lower regulatory burden) and closed proprietary systems.
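To see what the 10^25 FLOP threshold means in practice, a back-of-the-envelope check can use the common "6ND" heuristic (roughly 6 FLOPs per parameter per training token). The heuristic and the example model sizes below are assumptions for illustration; the Act itself specifies only the cumulative-compute threshold.

```python
# Hedged sketch: checking a training run against the AI Act's 10^25
# FLOP systemic-risk threshold using the 6 * N * D approximation
# (~6 FLOPs per parameter per training token). Example sizes are
# hypothetical.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6ND rule of thumb."""
    return 6 * params * tokens

def has_systemic_risk(params: float, tokens: float) -> bool:
    """True if estimated training compute exceeds the EU Act threshold."""
    return training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical 70B-parameter model on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs -- below the threshold.
print(has_systemic_risk(7e10, 1.5e13))  # False
# Hypothetical 500B-parameter model on 20T tokens:
# 6 * 5e11 * 2e13 = 6e25 FLOPs -- above the threshold.
print(has_systemic_risk(5e11, 2e13))    # True
```

The arithmetic illustrates why the threshold is currently restrictive: a large but unexceptional training run lands below 10^25 FLOPs, while only the largest frontier-scale runs cross it.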
US Approach: Deregulation Under Trump
The US has not enacted comprehensive federal AI legislation. The Biden Executive Order on AI (October 2023) required safety testing for powerful AI systems and directed federal agencies to develop sector-specific AI guidance. The Trump administration, taking office in January 2025, revoked the Biden Executive Order and announced an AI Action Plan focused on promoting innovation and US competitiveness — explicitly deprioritizing prescriptive regulation in favor of industry self-governance.
China's Distinctive Approach
China has implemented targeted regulations rather than a comprehensive AI law. Regulations on algorithmic recommendation systems, deep synthesis technology, and generative AI services target specific applications. Generative AI services in China must undergo a government security assessment before public release and demonstrate alignment with "core socialist values" — a requirement with no equivalent in Western regulation. This creates a distinctive environment where AI capabilities and political control are deeply intertwined.
Impact on Business
The fragmented regulatory landscape creates significant compliance costs for international AI companies. A generative AI product serving EU customers must comply with AI Act transparency requirements. The same product in the US faces minimal federal oversight. In China, it requires government approval. The compliance burden falls most heavily on smaller companies and startups, which lack the legal resources of major tech companies to navigate complex international regulation.
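The jurisdictional split described above can be made concrete as the kind of starting checklist a compliance team might maintain. The jurisdiction keys and requirement strings are simplified assumptions drawn from this section, not legal advice.

```python
# Illustrative sketch of the three-way jurisdictional split described
# above. Keys and requirement strings are simplified assumptions, not
# legal advice.
JURISDICTION_REQUIREMENTS = {
    "EU": [
        "AI Act transparency requirements",
        "conformity assessment and EU database registration if high-risk",
    ],
    "US": [
        "minimal federal oversight; check sector- and state-level rules",
    ],
    "CN": [
        "government security assessment before public release",
    ],
}

def compliance_checklist(markets: list[str]) -> dict[str, list[str]]:
    """Collect per-jurisdiction requirements for a product's target markets."""
    unknown = [m for m in markets if m not in JURISDICTION_REQUIREMENTS]
    if unknown:
        raise KeyError(f"no requirement data for: {unknown}")
    return {m: JURISDICTION_REQUIREMENTS[m] for m in markets}
```

Even this toy version shows why the burden scales with market count: each added jurisdiction appends a distinct, non-overlapping set of obligations rather than reusing an existing one.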
The Path Forward
Efforts at international AI governance coordination have produced voluntary frameworks but no binding agreements. The geopolitical tension between the US and China makes binding international coordination extraordinarily difficult. The regulatory patchwork is likely to persist, and organizations building AI products globally must be prepared to navigate different requirements in different jurisdictions — a complexity that will only grow as more governments develop AI-specific rules.