On the seventh of December 2023, the Court of Justice of the European Union issued a judgment that, outside specialist data protection circles, received considerably less attention than it deserved: it established, for the first time in European law, that a credit score calculated by a reference agency constitutes an automated individual decision within the meaning of Article 22 of the General Data Protection Regulation, even when it is a third party, not the scoring agency itself, that ultimately refuses the loan [2]. The case, brought by a German applicant identified as OQ against SCHUFA, the dominant German credit reference bureau, had a quality that regulatory milestones often lack: it began with a specific and comprehensible human situation, namely a woman who was refused credit and was told, when she asked why, that SCHUFA's methodology was a trade secret. The Court held that this answer was not adequate. The draft regulation published by the Commission this week is, at its core, an attempt to codify that holding at scale, and to answer OQ's question not merely for German courts but for every lender deploying AI across a continent.
The legal architecture that the Commission is now building upon is layered and, to be candid, still somewhat contradictory. The GDPR's Article 22 provides individuals subject to purely automated decisions with a right to obtain human intervention and to express a point of view, and Article 15(1)(h) entitles them to "meaningful information about the logic involved." What "meaningful information" requires in practice has been the subject of litigation and scholarly dispute since the Regulation became applicable in 2018; the SCHUFA judgment clarified that a credit score which a lender follows "in almost all cases" is itself a decision, not a preparatory act, and therefore falls within Article 22's scope, but it did not resolve the deeper question of what information is operationally meaningful when the model generating the score has hundreds of thousands of parameters [2]. Into this gap, which the GDPR's architects were aware of but chose not to fill with technical specificity, the EU AI Act of 2024 now arrives, and the Commission's draft regulation is supplementary to both.
Under Annex III of the AI Act (Regulation (EU) 2024/1689, which entered into force on the first of August 2024), AI systems used to evaluate the creditworthiness of natural persons, or to establish their credit score, are classified as high-risk, without any monetary threshold [1]. This classification was not contested in the legislative process; the concern it addresses, namely the risk of algorithmic discrimination against protected groups, is well-evidenced and broadly accepted. What the Act does not specify in operative terms is what a compliant explanation looks like. Article 13 requires that high-risk AI systems "be designed and developed in such a way as to ensure that their operation is sufficiently transparent" to enable deployers to interpret the outputs, and that deployers of systems affecting natural persons inform those persons that a high-risk AI system is involved, together with information about "the main features of the high-risk AI system" [1]. These are framework obligations. The Commission's draft regulation attempts to translate them into an audit requirement, specifying not merely that an explanation must be provided but that it must be independently verified. The January 2027 deadline for the audit requirement, which sits beyond the Act's own August 2026 high-risk compliance date, presumably reflects the Commission's recognition that the verification infrastructure does not yet exist.
The European Banking Federation's objection, that the explainability requirement is technically infeasible for transformer-based models and will force institutions to retreat to simpler, less accurate heuristics, deserves a precise rather than a dismissive response. It is correct that post-hoc explanation techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) produce approximations of a model's behaviour rather than true representations of its internal reasoning. The SHAP value assigned to an input feature for a given prediction is that feature's marginal contribution to the output, averaged over the possible combinations of the remaining features; it does not reveal how the model processes that feature, and two models producing identical credit scores on the data at hand may produce entirely different SHAP profiles. The EBA noted this explicitly in its 2023 follow-up report on the use of machine learning for internal ratings-based models, observing that "more complex models may yield better performance" while being "more difficult to explain or comprehend," and that post-hoc interpretability tools "can mitigate explainability issues" without resolving them [3]. What the Federation does not acknowledge with equal candour is that simpler models of the logistic regression variety, which have been subjected to explainability requirements under supervisory guidance for decades, also produce explanations that can be formally correct without being operationally meaningful to an applicant. The question of what constitutes a genuine explanation of a credit decision is older and harder than the Federation's position implies.
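To make that definition concrete, the sketch below computes exact Shapley values for a toy scoring function. Everything in it, from the feature names to the background profile, is a hypothetical illustration rather than any lender's actual model.

```python
# A minimal, self-contained sketch of the exact Shapley computation that
# SHAP approximates. Feature names, applicant, background profile, and the
# scoring function are all hypothetical.
from itertools import combinations
from math import factorial

FEATURES = ["income", "utilisation", "age_of_file", "recent_inquiries"]

applicant  = {"income": 52_000, "utilisation": 0.85, "age_of_file": 3.0, "recent_inquiries": 4}
background = {"income": 40_000, "utilisation": 0.30, "age_of_file": 8.0, "recent_inquiries": 1}

def score(x):
    """A toy, deliberately non-linear score; note the interaction term."""
    s = 620.0
    s += 0.002 * (x["income"] - 40_000)   # linear income effect
    s -= 180.0 * x["utilisation"] ** 2    # convex utilisation penalty
    s += 6.0 * x["age_of_file"]           # reward for a long file
    s -= 15.0 * x["recent_inquiries"]     # penalty per recent inquiry
    if x["age_of_file"] < 5:              # utilisation hurts thin files more
        s -= 40.0 * x["utilisation"]
    return s

def v(coalition):
    """Score when features in `coalition` take the applicant's values and
    every other feature is filled in with the background value."""
    x = {f: (applicant[f] if f in coalition else background[f]) for f in FEATURES}
    return score(x)

def shapley(feature):
    """phi_i = sum over subsets S not containing i of
       |S|! (n - |S| - 1)! / n!  *  (v(S + {i}) - v(S))."""
    others = [f for f in FEATURES if f != feature]
    n = len(FEATURES)
    phi = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi += weight * (v(set(subset) | {feature}) - v(set(subset)))
    return phi

phis = {f: shapley(f) for f in FEATURES}
for f, p in sorted(phis.items(), key=lambda kv: kv[1]):
    print(f"{f:>18}: {p:+7.1f} points")

# Efficiency property: the attributions sum exactly to the score gap
# between this applicant and the background profile.
assert abs(sum(phis.values()) - (v(set(FEATURES)) - v(set()))) < 1e-6
```

The exact computation enumerates every coalition of the remaining features, which is exponential in the feature count; for models with hundreds of inputs, SHAP falls back on sampling and kernel approximations, which is precisely the sense in which its output approximates the model's behaviour rather than reading its internals. Note also that the attributions depend on the chosen background profile: change it and the same model yields a different explanation.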
The ECB addressed part of this problem directly in its revised guide to internal models published in July 2025, which contains a new section on machine learning techniques specifying that models "using these techniques" must be "adequately explainable" and that explainability must be demonstrated through "specific explainability tools," with banks required to ensure that staff have "sufficient training to interpret model outputs" [5]. The ECB's formulation carefully avoids specifying which tools are adequate, a choice that is prudent given the state of the science but that leaves open precisely the question the Commission's audit requirement raises: who decides whether an explanation is sufficient, and by what standard. The technical standards that are expected to operationalise the AI Act's vague terms, with drafts anticipated in the spring and summer of this year, will determine whether the audit requirement is workable or merely aspirational. Until those standards are published, the January 2027 deadline for the Commission's draft regulation is a target for a compliance exercise whose specifications are still being written.
The most honest formulation of the position is probably this: explainability for high-complexity credit models is technically difficult but not technically impossible; it is possible for institutions, using a combination of model design choices, post-hoc approximation methods, and structured documentation of training data and validation processes (as required under the Act's Annex IV), to produce accounts of their credit decisions that are meaningfully more informative than "our methodology is confidential." What is not yet possible, and what no regulatory text currently in force or in consultation requires, is a complete causal account of why a particular model assigned a particular score to a particular individual: the information content of a trained neural network is not reducible to a human-readable narrative without loss of fidelity. The Commission's draft regulation, read carefully, does not appear to require the impossible; it requires an audit verifying that the possible has been done. The Federation's objection may be aimed more at the compliance cost, and at the competitive disadvantage that falls on European lenders relative to their counterparts in jurisdictions where no equivalent requirement exists, than at the technical impossibility of any explanation whatsoever.
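What an account "meaningfully more informative than confidentiality" might look like can be sketched in a few lines: per-decision attributions, such as the Shapley values computed above, translated into a ranked, plain-language statement of the principal negative factors. The thresholds, wording, and inputs below are hypothetical illustrations, not a compliance template.

```python
# A hedged sketch of turning per-decision attributions into a structured,
# applicant-facing account. All values and labels are hypothetical.
from typing import Dict, List

def principal_reasons(attributions: Dict[str, float],
                      labels: Dict[str, str],
                      max_reasons: int = 3) -> List[str]:
    """List the largest negative contributions in plain language, ordered
    by impact. This reports what the attribution method says pushed the
    score down; it is not a causal account of the model's internals."""
    negative = sorted(
        ((feat, val) for feat, val in attributions.items() if val < 0),
        key=lambda fv: fv[1],  # most negative first
    )
    return [f"{labels[feat]} (approx. {val:+.0f} points)"
            for feat, val in negative[:max_reasons]]

# Hypothetical attributions for one decision, e.g. Shapley values.
attrs = {"income": +24.0, "utilisation": -142.0,
         "age_of_file": -42.0, "recent_inquiries": -45.0}
labels = {"income": "Income relative to the reference population",
          "utilisation": "High revolving credit utilisation",
          "age_of_file": "Short credit history",
          "recent_inquiries": "Several recent credit applications"}
for reason in principal_reasons(attrs, labels):
    print("-", reason)
```

The design choice worth noticing is the caveat built into the docstring: the output documents what the approximation method reports, not what the model "thought", which is exactly the line drawn above between the possible and the impossible.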
That competitive dimension is real and worth tracking. The EBA's November 2025 factsheet on the AI Act's implications for EU banking found no significant contradictions between the Act and existing banking legislation, which is reassuring as a formal matter [4]. What it does not address is the compliance burden differential: a European bank deploying a credit model in 2027 will be subject to the AI Act's Annex III documentation requirements, the GDPR's Article 22 obligations as interpreted in SCHUFA and subsequent case law, the ECB's ML explainability expectations under its internal models guide, and the Commission's new audit requirement. An American bank deploying an equivalent model in the same year will be subject to SR 11-7, the Equal Credit Opportunity Act's adverse action notice requirements, and whatever the Consumer Financial Protection Bureau has said most recently about algorithmic credit models, which is a lighter and less technically prescriptive set of obligations. Whether that asymmetry produces better-governed credit systems in Europe or merely less-automated ones is the question on which the regulation's long-term assessment will turn.
References
- [1] European Parliament and Council. Regulation (EU) 2024/1689 of the European Parliament and of the Council (EU AI Act). Official Journal of the European Union, L 2024/1689, 12 July 2024. eur-lex.europa.eu
- [2] Court of Justice of the European Union. SCHUFA Holding and Others (Scoring), Case C-634/21, judgment of 7 December 2023. eur-lex.europa.eu
- [3] European Banking Authority. "Follow-up Report on the Use of Machine Learning for IRB Models." 2023. eba.europa.eu
- [4] European Banking Authority. "AI Act: Implications for the EU Banking and Payments Sector" (factsheet). November 2025. eba.europa.eu
- [5] European Central Bank. "Guide to Internal Models" (revised). July 2025. bankingsupervision.europa.eu
- [6] Kozyreva, A., and Liefooghe, B. "The Future of Credit Underwriting and Insurance Under the EU AI Act." Harvard Data Science Review, Issue 7.3 (Summer 2025). hdsr.mitpress.mit.edu