On the first day of January 2025, Regulation (EU) 2024/1623, better known to the industry as the Capital Requirements Regulation III and to the wider world as the European implementation of Basel IV, became applicable across the European Union [1]. The regulation introduces, among its many provisions, an output floor: a constraint requiring that the capital requirements a bank calculates under its own internal models be no less than a specified percentage of what the standardised approach would yield for the same exposures. The floor begins at fifty per cent in 2025 and rises, in annual increments, to seventy-two point five per cent by 2030 [1]. It is the most consequential change to European prudential standards since the post-crisis reforms of 2010, and it has provoked, with rather impressive speed, a new and strategically sophisticated lobbying argument from the continent's largest lenders: that machine-learning credit models are sufficiently superior to their predecessors that the output floor itself, in its present form, represents an unnecessary constraint on efficient capital allocation.
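The floor's mechanics are simple enough to sketch in a few lines. The schedule below follows the CRR3 phase-in; the portfolio figures are entirely illustrative and not drawn from any institution's disclosures:

```python
# Illustrative sketch of the CRR3 output floor.
# Phase-in schedule: the floor percentage applied to
# standardised-approach RWA, rising annually to 72.5% from 2030.
FLOOR_SCHEDULE = {2025: 0.50, 2026: 0.55, 2027: 0.60,
                  2028: 0.65, 2029: 0.70, 2030: 0.725}

def floored_rwa(irb_rwa: float, sa_rwa: float, year: int) -> float:
    """Risk-weighted assets after the output floor: the higher of
    the internal-model figure and the floored standardised figure."""
    floor = FLOOR_SCHEDULE.get(year, 0.725)  # 72.5% from 2030 onward
    return max(irb_rwa, floor * sa_rwa)

# A hypothetical portfolio: internal models yield 60bn of RWA where
# the standardised approach would yield 100bn.
for year in (2025, 2028, 2030):
    print(year, floored_rwa(60.0, 100.0, year))
```

On these hypothetical numbers the floor does not bind at all in 2025, adds five billion of RWA in 2028, and adds twelve and a half billion at full implementation, which is the temporal logic of the lobbying pressure in miniature.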
The argument deserves to be taken seriously, which is precisely why it deserves to be examined with some care. The banks are not wrong that machine-learning techniques have materially improved the granularity and predictive accuracy of credit risk assessment. The European Central Bank's own supervisory review, published in July 2025, acknowledges as much, devoting an entirely new section of its revised guide to internal models to the treatment of such techniques [2]. Where the banks and the regulators diverge is on the question of whether improved prediction translates, in a prudential sense, into reduced unexpected loss variance sufficient to warrant capital relief, and whether the new architecture of risk, built upon the opacity and inter-correlation of machine-learning systems operating across institutions, does not introduce precisely the kind of systemic fragility that higher capital requirements exist to buffer against. This is not an abstract disagreement. It is a dispute with direct consequences for the quantum of capital held against European credit portfolios, which, in aggregate, determines the degree to which the banking system can absorb a shock without recourse to public funds.
The Output Floor and the Capital Shortfall
The European Banking Authority's monitoring of the Basel IV implementation has been exhaustive and, in its data, somewhat sobering. The EBA's most recent capital requirements monitoring report estimated a minimum shortfall of €124.8 billion for EU banks under full CRR3 implementation [3]. Of that total, approximately one third is attributable to the output floor, the remainder arising from revisions to the treatment of operational risk, market risk, and credit valuation adjustment. The figure is not a prediction of actual capital raising; many institutions have been building their Common Equity Tier 1 ratios for precisely this eventuality, and the EBA Risk Dashboard for the second quarter of 2025 recorded the aggregate CET1 ratio for significant institutions at an all-time high of 16.3 per cent [4]. The estimated impact of the output floor as of the same period was a mere two billion euros against total risk-weighted assets, reflecting the slow phase-in schedule and the current dominance of the standardised approach in certain portfolios [4].
The phase-in schedule reveals the temporal logic of the lobbying effort. At fifty per cent in 2025, the floor bites only modestly, as the EBA data confirms. Rising by five percentage points each year, the pressure intensifies with each successive step, reaching its full weight at seventy-two point five per cent in 2030. The banks lobbying most vigorously for AI-model capital relief are, in the main, those with the largest internal-models portfolios: the institutions whose credit risk-weighted assets under the Internal Ratings Based approach diverge most significantly from standardised approach equivalents. For these institutions, the output floor represents not merely an accounting adjustment but a fundamental alteration of their competitive model, which has depended, since the introduction of Basel II in the mid-2000s, on the ability to hold materially less capital against internally assessed exposures than their standardised-approach competitors are required to hold against nominally identical ones.
Internal models currently cover approximately sixty per cent of significant banks' credit risk [2], a figure that understates the concentration of the effect: the institutions most dependent on internal models are precisely the largest, most systemically important ones, for whom the capital implications of a tightening floor are most acute. It is therefore not surprising that the lobbying has been organised and persistent. What is noteworthy is the particular form the argument has taken: not a frontal assault on the output floor itself, which enjoys the imprimatur of the Basel Committee and the political support of the European Commission's post-crisis regulatory architecture, but rather a more technically sophisticated claim that the internal models themselves have been transformed, by machine learning, into instruments of such precision that their outputs should be afforded greater regulatory credibility than the Basel Committee's framework currently allows.
The Banks' Argument: Precision as a Prudential Virtue
The case being advanced, in various forms, in consultation responses, supervisory dialogue, and the occasional published position paper, runs as follows. Traditional internal ratings-based models, including those that were the subject of the post-2008 critique, were built on statistical techniques, principally logistic regression and scorecards, that imposed substantial linearity and simplifying assumptions on inherently complex credit dynamics. Machine-learning models, specifically gradient-boosted trees, neural network architectures, and ensemble methods applied to granular transaction and behavioural data, do not labour under the same constraints. They capture non-linear relationships between borrower characteristics and default probability that their predecessors could not resolve. The consequence, the argument continues, is not merely improved prediction accuracy at the portfolio level but a measurable reduction in the variance of unexpected losses: the category of loss against which regulatory capital is, in the formal Basel framework, explicitly designed to provide protection.
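The notion of "unexpected loss" here is concrete: the Basel IRB risk-weight function converts a model's PD estimate into a capital charge equal to the conditional loss in a 99.9th-percentile systematic downturn, minus the expected loss. A minimal sketch of the corporate formula (Basel framework, CRE31), using only the Python standard library; the parameter values at the end are illustrative:

```python
from math import exp, log, sqrt
from statistics import NormalDist

N01 = NormalDist()  # standard normal distribution

def irb_capital(pd: float, lgd: float, maturity: float = 2.5) -> float:
    """Basel IRB capital requirement K per unit of exposure for
    corporate portfolios (supervisory formula, CRE31)."""
    # Supervisory asset correlation, decreasing in PD.
    w = (1 - exp(-50 * pd)) / (1 - exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # Conditional PD in a 99.9th-percentile downturn of the single
    # systematic risk factor.
    cond_pd = N01.cdf((N01.inv_cdf(pd) + sqrt(r) * N01.inv_cdf(0.999))
                      / sqrt(1 - r))
    # Capital covers unexpected loss only: conditional loss minus
    # expected loss.
    k = lgd * cond_pd - lgd * pd
    # Maturity adjustment.
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    return k * (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)

# A borrower with a 1% PD and 45% LGD attracts a capital charge on
# the order of seven per cent of exposure; RWA is 12.5 times K.
print(irb_capital(0.01, 0.45))
```

The banks' claim, restated in these terms, is that machine-learning models produce PD estimates accurate enough that the gap between `cond_pd` and `pd`, the unexpected-loss component, is genuinely smaller than the supervisory formula assumes.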
The argument that better models justify lower capital is not new. In 2007, the industry made the same case with considerable sophistication. In 2008, the answer arrived.
This is not a frivolous argument, and one should resist the temptation to dismiss it on grounds of institutional suspicion alone. The predictive improvements documented in academic literature and in the ECB's own supervisory data are real. The ECB's revised guide to internal models, published in July 2025, acknowledges that machine-learning techniques "can improve model performance" and dedicates a new section to their permissible use within the internal models framework [2]. The supervisory guidance does not prohibit such techniques. It imposes conditions: explainability requirements, governance standards, documentation obligations, and the requirement that the added complexity of a machine-learning architecture be justified by a demonstrable improvement in model performance relative to simpler alternatives [2]. These conditions are not unreasonable. They are, in fact, precisely what one would expect a cautious regulator to require when confronted with a powerful new technique whose failure modes are not fully understood.
The Regulator's Counter: Explainability and Systemic Correlation
The ECB's position, as articulated through its supervisory guidance and the observations of senior officials, rests on two distinct concerns. The first is explainability: a machine-learning model that cannot provide a comprehensible account of how a credit decision was reached cannot be adequately challenged, audited, or validated by either supervisory staff or the institution's own risk governance function [2]. This is not a merely philosophical objection. Supervisory validation of internal models is the mechanism by which the regulatory system verifies that the models banks use to calculate their own capital requirements are not systematically optimistic; without the ability to interrogate that mechanism, the supervisory oversight that gives internal models their regulatory legitimacy collapses into a form of deference that is inconsistent with the lesson, learnt at considerable public expense in 2008 and 2009, that bank self-assessment of capital adequacy requires rigorous external scrutiny.
The second concern is structural and, in my view, the more consequential of the two. The Basel Committee on Banking Supervision's working group on the use of artificial intelligence and machine learning in financial institutions has noted that the widespread adoption of similar machine-learning architectures across the banking sector creates a risk of correlated model behaviour that has no analogue in the era of bespoke, institution-specific statistical models [5]. When a significant proportion of the European banking system employs gradient-boosted tree models trained on broadly similar datasets, and those models are calibrated against a decade of relatively benign credit experience between 2015 and 2024, the possibility that they share similar blind spots, and that those blind spots manifest simultaneously under conditions of economic stress, is not merely theoretical. It is a structural feature of the new architecture that deserves to be weighed against the undoubted improvements in average-case predictive performance that the models deliver.
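The structural point can be made with a deliberately stylised simulation, with entirely hypothetical parameters: hold each model's standalone probability of missing a stress event fixed, and vary only the correlation of errors across institutions, here represented by a single shared factor standing in for common training data and common architectures:

```python
import random
from math import sqrt
from statistics import NormalDist

N01 = NormalDist()

def p_all_miss(n_banks: int, p_miss: float, rho: float,
               trials: int = 100_000, seed: int = 42) -> float:
    """Monte Carlo estimate of the probability that all n_banks
    models miss a stress event simultaneously, under a one-factor
    error model: each bank's latent error is
    z_i = sqrt(rho)*m + sqrt(1-rho)*e_i, and a 'miss' occurs when
    z_i exceeds the threshold giving marginal miss rate p_miss."""
    rng = random.Random(seed)
    threshold = N01.inv_cdf(1 - p_miss)
    hits = 0
    for _ in range(trials):
        m = rng.gauss(0.0, 1.0)  # shared blind spot / common factor
        if all(sqrt(rho) * m + sqrt(1 - rho) * rng.gauss(0.0, 1.0)
               > threshold for _ in range(n_banks)):
            hits += 1
    return hits / trials

# Ten banks, each model missing a given stress scenario 10% of the
# time. Only the error correlation differs between the two runs.
print(p_all_miss(10, 0.10, rho=0.0))  # independent errors
print(p_all_miss(10, 0.10, rho=0.7))  # heavily correlated errors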
The BIS working group's concern is reinforced by a consideration that the lobbying effort tends not to dwell upon: the output floor is not merely a constraint on banks that have good models. It is a constraint on banks that claim to have good models. The history of prudential regulation is substantially a history of the difficulty of distinguishing between these two categories in real time. The Basel Committee's introduction of internal models in Basel II was accompanied by its own sophisticated supervisory framework, including stress testing, back-testing, and validation requirements that were, in the abstract, more than adequate to the task. In practice, as the events of 2007 and 2008 demonstrated, the complexity of the models outpaced the capacity of supervisors to validate them, and the incentive structures within banks themselves were not aligned with the production of conservatively calibrated risk estimates.
The Echo of Basel II
I have read the consultations that preceded the introduction of the Internal Ratings Based approach under Basel II with the advantage of retrospect, and I confess that the family resemblance between those documents and the current arguments for ML-model capital relief is more than superficial. The Basel II consultation papers of 2003 argued, with considerable analytical sophistication, that banks with advanced internal models were better placed to assess their own credit risk than any standardised approach could be, that the granularity of internal assessment reduced unexpected loss variance, and that rewarding this analytical sophistication with lower capital requirements would simultaneously improve the efficiency of capital allocation and incentivise the development of better risk management across the system [6]. Each of these claims was, in isolation, defensible. In aggregate, the framework they supported proved insufficiently robust to the conditions that emerged between 2007 and 2009, when the complexity and opacity of the models interacted with incentive misalignments and correlated exposures to produce outcomes that no individual institution's model had predicted.
The current argument for machine-learning capital relief is, in important respects, more technically sophisticated than its predecessor. The models are genuinely better at predicting individual default in the conditions in which they were trained. The explainability requirements that the ECB is insisting upon are a meaningful improvement over the supervisory framework of the early 2000s. The phase-in structure of the output floor provides time for the regulatory community to develop more refined tools for assessing machine-learning model quality. These are real differences, and they deserve acknowledgement. What they do not do is resolve the fundamental tension between the efficiency case for capital optimisation and the systemic stability case for conservatism, a tension that is as old as the relationship between private banking and public regulation, and that has never been resolved definitively in favour of the banks' preferred position without eventual cost to the public purse.
What to Watch
The next three years will determine whether the lobbying effort succeeds in any meaningful form. The EBA's ongoing monitoring of CRR3 implementation will produce capital shortfall estimates that grow as the output floor rises; by 2027 and 2028, when the floor reaches sixty-five per cent, the aggregate impact will be considerably more visible than the two billion euros recorded in mid-2025. This will intensify pressure on the European Commission to introduce some form of recognition of machine-learning model quality within the internal models framework, whether through a formal amendment to CRR3 or through supervisory guidance that effectively softens the floor's application to institutions whose models meet a defined standard of explainability and governance. The ECB's willingness to accommodate such a development will depend in part on the outcome of its ongoing model review programme and in part on the degree to which the BIS's systemic correlation concerns can be addressed by the diversification of model architectures across institutions.
The outcome is genuinely uncertain, which is a rarer condition in regulatory affairs than the confident pronouncements of either side would suggest. What is not uncertain is the character of the question being asked. It is the same question that has been asked, in different technical languages, since the invention of the internal models approach: how much should regulators trust banks to assess their own risk? The answer that the Basel Committee arrived at, after 2008, was: less than they did before, and the output floor is the institutional expression of that answer. Those who would revise it bear a burden of proof that is not met merely by demonstrating that the models have improved. They must also demonstrate that the system of oversight that governs those models is adequate to the task of distinguishing genuine improvement from the sophisticated appearance of it, and that the architecture of the new risk system does not carry within it the correlated fragilities that its individual components, taken separately, do not reveal. That is a considerably more demanding case to make, and as yet it has not been made to the satisfaction of those who must ultimately answer for the consequences if it proves wrong.
- European Parliament and Council of the European Union. "Regulation (EU) 2024/1623 of the European Parliament and of the Council of 31 May 2024 amending Regulation (EU) No 575/2013 (CRR3)." Official Journal of the European Union. 19 June 2024. eur-lex.europa.eu
- European Central Bank. "ECB Guide to Internal Models." European Central Bank Banking Supervision. Published 28 July 2025. bankingsupervision.europa.eu
- European Banking Authority. "Basel III Monitoring Report." European Banking Authority. 2025. eba.europa.eu
- European Banking Authority. "EBA Risk Dashboard: Data as of Q2 2025." European Banking Authority. September 2025. eba.europa.eu
- Basel Committee on Banking Supervision. "Newsletter on the Use of Artificial Intelligence and Machine Learning in Financial Institutions." Bank for International Settlements. 2024. bis.org
- Basel Committee on Banking Supervision. "The New Basel Capital Accord: Consultative Document." Bank for International Settlements. April 2003. bis.org
- European Banking Authority. "CRR3 Implementation: Impact Assessment." European Banking Authority. 2024. eba.europa.eu
- Bank for International Settlements. "Basel Framework: CRE50 — Overview of the Output Floor." BIS Basel Framework. Updated 2025. bis.org