The Bank of England's Prudential Regulation Authority published a discussion paper on Thursday entitled "Algorithmic Systemic Risk in Financial Services: A Framework for Assessment," inviting responses by 30 June 2026 and signalling, in language unusual for the typically measured prose of Threadneedle Street, that the concentration of consequential financial decisions in a small number of foundation models represents a category of systemic risk for which the existing supervisory toolkit is inadequate. The paper runs to 47 pages, has seven co-authors drawn from the PRA's supervisory and research divisions, and was approved by the Financial Policy Committee for release; that last detail is the most significant, since FPC publications carry the implicit weight of potential macroprudential intervention.
The paper's central analytical contribution is the concept it terms "model monoculture risk." Traditional systemic risk frameworks, developed in the wake of the 2008 crisis and refined through successive iterations of the stress-testing regime, assume that failures at individual institutions become correlated through three channels: common asset exposures, funding dependencies, and counterparty chains. The new framework argues that AI models introduce a fourth correlation channel that existing stress tests cannot capture: the possibility that many institutions, having deployed similar or identical foundation models for credit, trading, and risk management decisions, will respond to novel market conditions in the same way simultaneously, amplifying rather than dampening shocks.
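To make the mechanism concrete, here is a toy simulation, not drawn from the paper, of the feedback loop it describes: each institution's model deleverages once a price drawdown breaches its trigger, and each sale deepens the drawdown for everyone else. The trigger levels, shock size, and price-impact parameter are all invented for the sketch; the point is only the contrast between identical and dispersed triggers.

```python
# Toy illustration, not from the PRA paper: a minimal fire-sale feedback
# loop showing how identical deleveraging triggers amplify a shock that
# heterogeneous triggers mostly absorb. All numbers are invented.
import random

def simulate(triggers, shock=-0.09, impact=0.005, steps=20):
    """Each institution sells once the drawdown breaches its model's
    trigger; each sale pushes the price down further, which can breach
    the next institution's trigger in turn."""
    price = 1.0 + shock
    sold = [False] * len(triggers)
    for _ in range(steps):
        drawdown = price - 1.0
        for i, trigger in enumerate(triggers):
            if not sold[i] and drawdown <= trigger:  # model says "sell"
                sold[i] = True
                price -= impact                      # fire-sale price impact
    return price, sum(sold)

random.seed(0)
n = 10
monoculture = [-0.08] * n                            # one shared model, one trigger
diverse = [random.uniform(-0.30, -0.04) for _ in range(n)]

for label, triggers in (("monoculture", monoculture), ("diverse", diverse)):
    price, sellers = simulate(triggers)
    print(f"{label:12s} final price {price:.3f}  ({sellers}/{n} deleveraged)")
```

With identical triggers, every institution sells into the same falling market and the shock is amplified well beyond its initial size; with dispersed triggers, the first sale fails to breach the next firm's limit and the cascade stalls.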
The paper cites, as a concrete historical antecedent, the behaviour of value-at-risk models in August 2007, when the simultaneous deleveraging by multiple hedge funds employing near-identical quantitative strategies produced a liquidity event that was not predicted by any individual firm's risk model because each model was calibrated on the same historical data and therefore encoded the same blind spots. The analogy to the current situation is direct: if the major UK clearing banks — each of which has deployed or is deploying large language models for credit assessment and market risk — are using models trained on similar corpora with similar objective functions, their collective response to an event outside the training distribution may be correlated in ways that the firms themselves cannot observe.
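The shared-blind-spot point can be stated in a few lines of arithmetic. The sketch below, again invented rather than taken from the paper, computes a historical-simulation value-at-risk for two firms calibrated on the same benign return series: they arrive at the same limit, so a move outside the calibration window breaches both limits at the same moment.

```python
# Toy illustration, not from the paper: two firms running historical
# VaR on the same data window compute the same limit and therefore hit
# their deleveraging triggers simultaneously. The return series and
# confidence level are invented for the sketch.
import random

random.seed(42)
# Shared calibration window: both firms' models see the same history,
# a benign regime with roughly 1% daily moves.
history = [random.gauss(0.0, 0.01) for _ in range(500)]

def historical_var(returns, confidence=0.99):
    """Historical-simulation VaR: the loss at the chosen quantile of
    the empirical return distribution, reported as a positive number."""
    ordered = sorted(returns)
    idx = int((1 - confidence) * len(ordered))
    return -ordered[idx]

firm_a_var = historical_var(history)
firm_b_var = historical_var(history)   # same data, same answer
print(f"Firm A 99% VaR: {firm_a_var:.4f}")
print(f"Firm B 99% VaR: {firm_b_var:.4f}")

# A move outside the calibration window's support breaches both limits
# at once: the models share the same blind spot.
shock = -0.05
print("Firm A deleverages:", -shock > firm_a_var)
print("Firm B deleverages:", -shock > firm_b_var)
```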
The regulatory implications the paper gestures toward are significant. It raises the possibility of requiring "model diversity attestations" — documentation that a systemically important institution's AI decision systems are not derived from the same underlying foundation model as those of its peers — and the inclusion of model-specific scenarios in the Bank of England's annual stress-testing exercise. Both would be novel supervisory requirements with no current parallel in any jurisdiction. The paper explicitly notes that neither the EU AI Act nor the US federal banking agencies' existing model risk guidance addresses the concentration dimension of AI risk; it calls for international coordination through the Financial Stability Board and the Basel Committee.
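The paper proposes the attestation concept without specifying its contents. Purely as a hypothetical sketch of what such a record might capture, every field name below is invented; the substantive question is whether a firm can even observe some of these fields, such as its training-data overlap with peers.

```python
# Hypothetical sketch of a "model diversity attestation" record.
# The PRA paper proposes the concept but defines no format; every
# field name here is invented for illustration.
from dataclasses import dataclass

@dataclass
class ModelDiversityAttestation:
    institution: str            # attesting firm
    decision_system: str        # e.g. "retail credit scoring"
    base_model_family: str      # lineage of the underlying foundation model
    base_model_version: str
    fine_tuned_in_house: bool   # whether the weights diverge from the base
    training_data_overlap: str  # self-assessed overlap with peers' corpora
    attested_on: str            # ISO date of the attestation

record = ModelDiversityAttestation(
    institution="Example Bank plc",
    decision_system="market risk early warning",
    base_model_family="(vendor foundation model)",
    base_model_version="(version identifier)",
    fine_tuned_in_house=True,
    training_data_overlap="unknown",
    attested_on="2026-06-30",
)
print(record)
```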
The industry response will be interesting to observe. The major banks have invested substantially in AI deployment and will resist any framework that creates competitive disadvantage or imposes costs without clear empirical evidence of the risk the PRA is attempting to manage. The paper's authors pre-empt this objection by noting that the absence of a historical event attributable to model monoculture is not evidence that the risk does not exist; it may reflect instead that the technology has not yet been deployed at sufficient scale and systemic depth for the correlation to manifest. The Bank of England, characteristically, is attempting to identify and address the risk before the incident rather than after it. The responses that arrive by the 30 June deadline will tell us whether the industry agrees this is the right problem to be solving, and what it believes the solution should look like.