On the twenty-fourth of February, Governor Christopher Waller of the Federal Reserve delivered a speech at the Boston Fed's Technology-Enabled Disruption Conference entitled "Operationalising AI at the Federal Reserve." He described a common internal AI platform now available to all Reserve Bank employees and emphasised the need for "clear guardrails" and "human accountability for decisions." The same week, the European Central Bank's supervisory arm published a speech with a more pointed title: "Technology is neutral, governance is not." The juxtaposition was not accidental; it was, rather, an unusually legible statement of the philosophical divide that now separates the two most consequential regulatory jurisdictions in global finance.

For those who recall the historical arc of the Basel Accords, the present moment carries an unmistakable sense of recurrence. When the original Basel Capital Accord was adopted in 1988, its strength lay in its universality. By the time Basel II was implemented, the gulf between European and American interpretations had widened to such an extent that the accord's authors at the BIS conceded, in characteristically restrained language, that "national discretions" had produced "material differences in capital requirements for similar exposures."[4] The AI regulatory landscape is following an analogous trajectory, only at a considerably accelerated pace.

The European Union's Artificial Intelligence Act, which entered into force on 1 August 2024, represents perhaps the most comprehensive and prescriptive regulatory approach to algorithmic systems yet attempted by any major jurisdiction. Full applicability for high-risk systems will arrive in August 2026, which means that credit scoring models, insurance risk assessments, and anti-money laundering detection systems will be subject to mandatory conformity assessments, rigorous data governance protocols, and strict transparency obligations.[3] The European Banking Authority has been at pains to clarify the implications for financial institutions, and the guidance is unambiguous: firms deploying such systems will face both structural costs and operational constraints of a magnitude that cannot easily be absorbed.

The American regulatory approach, by contrast, has moved in the opposite direction. The revocation of the previous administration's AI safety executive order in January 2025 was followed by what the financial services industry has taken to calling "permissionless innovation." Governor Waller's speech exemplified the philosophy: AI is a tool to be operationalised, not a risk to be regulated into submission.[1] The Federal Reserve published draft supervisory guidance in early 2026, but it reads less as regulation than as encouragement, replete with assurances that firms need not fear regulatory reprisal for deploying advanced models provided they maintain certain baseline documentation practices and human oversight structures. US banks are deploying algorithmic tools across credit, trading, and operational domains with a velocity and experimental freedom that would be unthinkable in Europe.

For the multinational banks that form the connective tissue of global finance, this divergence has proven to be far more than an academic curiosity. JPMorgan, HSBC, Deutsche Bank, BNP Paribas, and their peers now find themselves maintaining parallel compliance frameworks. A credit scoring model deployed in Frankfurt requires a full conformity assessment under the AI Act, detailed explainability documentation, evidence of human oversight, and regular performance audits.[3] The identical model deployed in New York requires none of these things. The model deployed in London exists in a state of regulatory limbo as British regulators attempt to thread the needle between European rigour and American permissiveness. The cost of this divergence is substantial, and it falls squarely upon the institutions that operate across both jurisdictions.

There is, to be fair, a philosophical case to be made for regulatory divergence. Market competition between different regulatory regimes, the argument runs, produces valuable information about which approach works best. The European Union may discover that heavy-handed regulation stifles genuine innovation and drives capital toward less scrupulous jurisdictions. The United States may discover that light-touch oversight produces systemic risks that eventually demand a reckoning. The mechanism of competition between regimes, in this view, serves as a natural form of epistemic discipline, preventing any single jurisdiction from imposing an obviously deficient approach upon the entire world.

The counter-argument, however, is far more compelling. Unlike failures of competition in product markets, regulatory failures in financial systems tend to be systemic and contagious. A crisis originating in an under-regulated jurisdiction does not respect borders; it propagates through global payment systems, credit networks, and asset markets in a matter of hours. The 2008 financial crisis provided ample evidence that regulatory arbitrage between jurisdictions, once allowed to proliferate, becomes not a mechanism for discovering truth but a mechanism for disaster.

The institution best positioned to broker some form of convergence is the Bank for International Settlements. The BIS has published papers on AI governance in central banks and financial policy,[4,5] and its influence over regulatory thinking remains considerable. However, the track record suggests that the BIS functions more as an advisor than as an enforcer. Its policy recommendations tend to lag well behind the speed at which technology and regulatory reality move. The most likely scenario, absent some dramatic precipitating event, is that the current divergence will persist and perhaps deepen. Multinational banks will continue to bear the cost, and the infrastructure of global finance will become increasingly fragmented in its algorithmic governance.

The critical juncture will arrive in August 2026, when the EU AI Act reaches full applicability for high-risk systems. The enforcement actions that follow will signal whether the European Commission intends the Act to function as a serious regulatory constraint or as a broad framework with accommodating implementation guidance. Simultaneously, watch for any significant systemic incident involving an AI model in a lightly regulated environment, particularly one emanating from the United States. Should such an incident occur, the political calculus across all jurisdictions would shift overnight, and the case for convergence would become irresistible.

Return, finally, to the Basel parallel. The divergences between European and American implementation of Basel II were eventually resolved, but only in the brutal crucible of the 2008 financial crisis. The collapse demonstrated conclusively that regulatory arbitrage in capital standards had consequences; jurisdictions could no longer pretend that their approaches were compatible with stability. It would be far preferable to achieve convergence on AI governance before a similar demonstration is required. It would also, given the historical record and the present incentive structures, be deeply optimistic to expect it.