EU moves to ban abusive deepfakes as the US signals a hands-off AI race—what’s next for global AI power?
The cluster centers on fast-moving AI governance and security signals rather than a single technical breakthrough. On May 7, 2026, Handelsblatt reported that EU institutions reached an agreement to prohibit AI used for abusive deepfakes, following a wave of scandal-driven political pressure. The same day, the EU Council and Parliament agreed to simplify and streamline AI rules, indicating a push to reduce compliance friction while still tightening enforcement. In the US, Bloomberg reported that White House Chief of Staff Susie Wiles said the administration would not "pick winners" in the AI race, a notable policy posture as President Donald Trump prepares new AI directives. Meanwhile, a study circulating on bsky.app claimed AI can predict personality traits such as agreeableness and emotional stability with up to 60% accuracy, raising fresh concerns about profiling and consent.

Geopolitically, the story is about who sets the rules of the AI economy and how quickly enforcement can scale. The EU's deepfake ban and regulatory streamlining suggest a strategy of constraining misuse while keeping the market competitive, potentially forcing US and other vendors to adapt products for compliance. The US "no winners" stance signals a preference for market-led competition and executive-branch guidance over industrial policy, which could create friction with the EU's more prescriptive approach. The power dynamic is therefore regulatory sovereignty versus innovation flexibility: Brussels aims to limit harm and standardize guardrails, while Washington frames AI competition as broader and less politicized.

The personality-prediction study adds a security dimension: if AI systems can infer sensitive psychological attributes, the risks of manipulation, discrimination, and targeted influence operations grow. Market and economic implications are most visible in compliance, risk, and platform governance spending.
EU deepfake restrictions and streamlined rules are likely to raise demand for identity verification, content provenance, moderation tooling, and legal and compliance services, benefiting vendors across RegTech and cybersecurity. Personality-prediction capabilities, if deployed in consumer or workplace contexts, could draw heightened scrutiny of privacy and employment-related analytics, pressuring firms that rely on psychometric inference. In the US, the "won't pick winners" message may support a wider set of AI developers and cloud providers, but it leaves uncertainty about how quickly federal agencies will translate policy into procurement standards. Financially, the near-term impact is more about sentiment and sector rotation toward governance and security enablers than a direct commodity shock, with potential upside for cybersecurity and digital-trust instruments and downside for companies exposed to deepfake or profiling controversies.

Next, investors and policymakers should watch how the EU operationalizes the deepfake prohibition, especially its definitions of "abuse," enforcement timelines, and penalties, and whether these align with existing AI Act implementation schedules. On the US side, the key trigger is how the Trump administration converts the "no winners" posture into concrete directives: procurement rules, export controls, and agency guidance for model deployment. A practical indicator will be whether major platforms and model providers publish provenance, watermarking, and detection standards that map to EU requirements. Another signal is whether research and product teams move from personality-inference experiments toward explicit consent frameworks, auditability, and opt-out mechanisms. Escalation risk rises if deepfake enforcement is delayed while high-profile incidents continue; de-escalation is possible if compliance standards become clear and industry adoption accelerates within weeks rather than months.
Geopolitical Implications
1. Regulatory sovereignty competition between Brussels and Washington.
2. Cross-border AI deployment may face compliance-driven product changes.
3. Psychometric inference capabilities raise security and privacy stakes.
4. Potential standards divergence could affect procurement and interoperability.
Key Signals
- EU publishes operational definitions and enforcement timelines for the deepfake ban.
- US issues concrete directives translating "no winners" into procurement and deployment rules.
- Platforms adopt watermarking/provenance and audit practices aligned to EU expectations.
- Consent and audit frameworks expand for personality-inference features.