The EU AI Act has entered into force. The US has issued executive orders and then partially reversed them. China has its own framework for generative AI. The result is a patchwork of regulation that multinational AI companies must navigate — and that significantly shapes where AI development happens and how it's deployed.

The EU AI Act: The World's First Comprehensive AI Law

The European Union's AI Act represents the most ambitious attempt to regulate AI comprehensively. It categorizes AI systems by risk level: unacceptable risk (banned outright, including social scoring systems and real-time biometric surveillance), high risk (heavily regulated, including medical devices, critical infrastructure, and educational systems), limited risk (transparency requirements), and minimal risk (largely unregulated). General-purpose AI models above a certain capability threshold — currently pegged at 10^25 FLOPs of training compute — face additional requirements including transparency reporting and safety evaluations.

The US Approach: Sectoral and Voluntary

The United States has taken a markedly different approach. Rather than comprehensive legislation, the US has relied on sector-specific regulation through existing agencies (FDA for medical AI, SEC for financial AI), voluntary commitments from major AI labs, and executive orders that can shift with administrations. The Biden-era executive order on AI safety was substantially scaled back under the subsequent administration. The result is a lighter regulatory touch that enables faster deployment but provides less certainty about liability and safety standards.

China's Generative AI Rules

China has moved quickly on AI regulation with a focus on content control and national security. Requirements that generative AI systems align with "socialist core values" and that content not undermine state authority function as direct restrictions on model outputs. These rules have shaped how Chinese AI companies build and deploy systems, including content filtering layers that go well beyond what Western companies typically implement. Whether these constraints meaningfully limit capability development is debated; DeepSeek's performance suggests they haven't been fatal.

The Business Impact

For companies building AI products, the regulatory divergence creates genuine complexity. A product compliant with EU law may need significant changes to meet Chinese requirements, and vice versa. Some AI capabilities legal in the US may trigger high-risk designation under the EU AI Act. The practical response for most companies is building modular systems where regional compliance configurations can be applied without rebuilding core functionality. Regulatory compliance is increasingly a product design consideration from day one, not an afterthought.
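The modular-compliance pattern described above can be sketched in code. The following is a minimal, hypothetical illustration (the policy fields, region codes, and feature names are invented for this example, not drawn from any actual statute or product): core functionality stays fixed, while a per-region policy object gates which features ship in each jurisdiction.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CompliancePolicy:
    """Per-region configuration layered over shared core functionality."""
    region: str
    require_ai_disclosure: bool     # e.g. transparency duties for limited-risk systems
    allow_biometric_features: bool  # features that may be banned or high-risk in a region
    content_filter_level: str       # "baseline" or "strict"

# Illustrative policies only; real mappings require legal review per jurisdiction.
POLICIES = {
    "EU": CompliancePolicy("EU", require_ai_disclosure=True,
                           allow_biometric_features=False,
                           content_filter_level="baseline"),
    "US": CompliancePolicy("US", require_ai_disclosure=False,
                           allow_biometric_features=True,
                           content_filter_level="baseline"),
    "CN": CompliancePolicy("CN", require_ai_disclosure=True,
                           allow_biometric_features=True,
                           content_filter_level="strict"),
}

def build_feature_set(region: str, requested: set[str]) -> set[str]:
    """Drop requested features the region's policy forbids.

    Core product logic never branches on region; only this
    configuration layer does, so adding a jurisdiction means
    adding a policy entry, not rebuilding the product.
    """
    policy = POLICIES[region]
    features = set(requested)
    if not policy.allow_biometric_features:
        features.discard("biometric_id")
    return features
```

The design choice this sketch illustrates is the one the paragraph argues for: regional rules live in data (the policy table), not in scattered conditionals, so compliance changes become configuration updates rather than core rewrites.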