The Board of Governors of the Federal Reserve has announced the appointment of Dr. Priya Chandrasekaran as its first Chief AI Officer, a position created to examine and mitigate systemic risk arising from the correlated deployment of artificial intelligence and machine learning systems across the banking sector. The appointment, disclosed on 10 March, reflects a marked shift in the Board's assessment of AI-related risks: from an emerging concern monitored at the periphery of supervisory attention to a core pillar of financial stability policy.

Chandrasekaran comes to the Board from DeepMind, where she spent seven years leading work on the robustness and failure modes of large-scale AI systems. Her published research focuses on adversarial perturbations, model brittleness, and the challenge of predicting failure cascades in complex systems—expertise directly relevant to the Board's new mandate. In remarks to the Board of Governors on 10 March, Chair Powell characterised the appointment as essential to understanding "a failure mode unprecedented in the history of banking regulation: the possibility that multiple institutions will fail in correlated ways not because of traditional counterparty exposure, but because they have all deployed variants of the same flawed model."

The economic logic underlying the appointment is straightforward and sobering. If multiple banks train machine learning models on overlapping datasets and deploy them to similar problems—credit decisions, fraud detection, portfolio rebalancing, liquidity risk assessment—the systems are likely to fail in correlated ways during periods of market stress. This is not a metaphorical risk: models trained on shared data inherit shared blind spots, so their errors are statistically correlated and tend to surface at the same moment, precisely when the sector can least absorb them. When Fed staff modelled a scenario in which 60% of the banking sector had deployed AI models trained on common datasets during the 2008–2010 period, they found that correlation in model errors during the crisis would have amplified market stress by an estimated 15–25 basis points in spreads, with secondary effects on liquidity and capital adequacy that compounded the initial loss.
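The mechanism is easy to illustrate. The sketch below (purely illustrative, and not the Fed staff model, whose details have not been published) treats each bank's model error as a mix of a sector-wide shock and a bank-specific one; the parameter `rho` stands in for how much of the error comes from shared training data. Individually, each bank's error distribution is identical in both scenarios, but the sector-wide total is far more volatile when errors are correlated.

```python
import random
import statistics

def simulate_sector_errors(n_banks=10, n_trials=20000, rho=0.0, seed=0):
    """Simulate aggregate model error across a banking sector.

    Each bank's error = sqrt(rho) * shared shock + sqrt(1 - rho) * own noise,
    so every bank's error is standard normal regardless of rho, but pairwise
    correlation between banks equals rho. All figures are illustrative.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        shared = rng.gauss(0, 1)           # common shock: shared data, shared flaw
        total = 0.0
        for _ in range(n_banks):
            idio = rng.gauss(0, 1)         # bank-specific noise
            total += (rho ** 0.5) * shared + ((1 - rho) ** 0.5) * idio
        totals.append(total)
    return totals

independent = simulate_sector_errors(rho=0.0)
correlated = simulate_sector_errors(rho=0.6)

# Sector-wide volatility: sqrt(n) ≈ 3.2 when errors are independent,
# versus sqrt(n + rho * n * (n - 1)) = 8.0 when 60% of error variance is shared.
print(statistics.stdev(independent))
print(statistics.stdev(correlated))
```

The point the simulation makes is the supervisory one: no single bank looks riskier under correlation, so institution-by-institution examination misses the effect entirely. Only a sector-level view of shared model lineage reveals it.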

Chandrasekaran's office will be housed within the Board's Division of Supervision and Regulation, with a reporting line to the Board itself. Her mandate includes assessment of model architectures deployed at supervised institutions, analysis of correlation risk across the sector, and development of supervisory guidance on AI governance. Chair Powell indicated that the Board expects to issue a formal proposal for AI model oversight by the end of Q2 2026. The proposal will likely address capital requirements for AI-driven decisions, validation standards for models used in capital adequacy calculations, and mandatory training-data diversity requirements designed to reduce correlation risk.

Market reaction has been cautiously positive. Bank equity futures rose 0.8% on the news, and credit spreads tightened modestly, suggesting that markets interpret the appointment as a sign that the Fed is taking AI risk seriously rather than retreating into hands-off supervision. The larger question—whether the Fed can develop supervisory tools to manage a risk that is fundamentally technical and probabilistic in nature—will occupy the next twelve months.