On the nineteenth of March 2024, the Policy Board of the Bank of Japan voted seven to two to raise its short-term policy interest rate from negative ten basis points to a target range of zero to ten basis points, ending eight years of negative interest rate policy and seventeen years since the institution had last increased borrowing costs [1]. The announcement attracted considerable comment in the financial press, organised for the most part around the historic significance of Japan's departure from the final negative-rate regime in the developed world, the implications for the yen carry trade, and the question of what would follow. By December of 2025, the Bank had raised rates to seventy-five basis points, a level not seen since 1995, as consumer prices sustained their advance above the two per cent target for a forty-fourth consecutive month [2]. The policy normalisation, viewed in the context of the preceding three decades, was extraordinary.
What received rather less attention, in the understandable excitement over a rate cycle that would have been dismissed as fantasy as recently as 2021, was a quieter revolution that had been proceeding within the Bank's Research and Statistics Department since approximately 2018: the systematic construction of an analytical framework built on alternative data and machine learning, developed in direct response to the intellectual failure of the conventional tools that had guided the institution through three decades of near-zero inflation and near-zero rates. The policy revolution and the analytical revolution are connected, though the connection is less obvious than it might appear, and understanding it requires some patience with the history of Japanese macroeconomic forecasting, which is not an uplifting subject but is, I think, an instructive one.
The Forecasting Problem That Preceded the Solution
The Bank of Japan's quarterly Outlook for Economic Activity and Prices, the document in which the Policy Board publishes its projections for growth and inflation, has a long and, in retrospect, somewhat melancholy history as a document of systematic optimism. Throughout the 2010s and into the early 2020s, successive editions of the Outlook projected a return of inflation to the two per cent target that consistently failed to materialise. The projections were not dishonest, but they erred persistently in the same optimistic direction, and the error reflected a genuine intellectual difficulty: the analytical frameworks being applied were calibrated on historical relationships between economic variables that had operated under entirely different monetary and pricing conditions. The Bank's own retrospective, its December 2024 "Review of Monetary Policy from a Broad Perspective," acknowledged candidly that the effect of large-scale monetary easing on inflation expectations had been more limited than anticipated, and that "adaptive expectations formation" (the tendency of households and businesses to project recent experience into the future rather than anchor to a target) had played a more dominant role than the models had assigned to it [3].
This is not a criticism that can fairly be levelled specifically at the Bank of Japan; the difficulty of forecasting inflation in persistently low-inflation economies confounded institutions with considerably greater analytical resources and international reputations. What is distinctive about the Japanese case is the duration and depth of the experience: the Bank had operated in a near-zero rate environment for so long, and had calibrated its analytical apparatus so thoroughly to that environment, that the tools it possessed at the onset of the post-pandemic inflation episode were, in some respects, particularly poorly suited to the challenge of understanding a regime change. Standard econometric models trained on thirty years of deflationary data cannot easily be asked to estimate the dynamics of an inflationary episode; the parameters simply do not transfer across regimes. An International Monetary Fund working paper published in September 2024, examining inflation forecasting for Japan in the post-pandemic period, found that sparse machine-learning methods, specifically a LASSO-regularised model incorporating household inflation expectations, inbound tourism data, exchange rates, and the output gap, outperformed both traditional econometric benchmarks and professional consensus forecasters, which the authors observed had exhibited "a consistent downward bias" throughout 2022 and 2023 [4]. The conventional apparatus had underestimated the inflation that had finally arrived.
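For readers unfamiliar with the mechanics, the appeal of LASSO for this kind of problem can be shown in a few lines. The sketch below is a minimal cyclic coordinate-descent LASSO on toy, made-up data; the column labels (expectations, an irrelevant series, the exchange rate) are illustrative stand-ins, not the IMF paper's actual variables or code. The point is the behaviour: the L1 penalty drives the coefficient on the uninformative column to exactly zero, which is what makes the method "sparse".

```python
def soft_threshold(rho, lam):
    """Soft-thresholding: the closed-form LASSO update for one coefficient."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso(X, y, lam, n_iter=100):
    """Cyclic coordinate descent on (1/2)||y - Xb||^2 + lam * ||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j's own contribution removed
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Toy standardized design: column 0 stands in for household inflation
# expectations, column 2 for the yen exchange rate; column 1 is constructed
# orthogonal to the target, i.e. pure noise with respect to y.
X = [[1, 1, 1], [-1, -1, 1], [1, -1, -1], [-1, 1, -1]]
y = [3, -1, 1, -3]   # = 2 * column0 + 1 * column2 exactly

beta = lasso(X, y, lam=0.4)
print(beta)  # roughly [1.9, 0.0, 0.9]: the noise column is zeroed out
```

The shrinkage is visible in the fitted values: the true coefficients are 2 and 1, and the penalty pulls them slightly toward zero while eliminating the irrelevant series entirely, which is the trade the method makes for variable selection in high dimensions.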
The Alternative Data Programme: A Documented History
The Bank of Japan's formal engagement with alternative data and machine-learning techniques predates the post-pandemic inflation episode, which is significant: the programme was not a reactive scramble but a considered methodological evolution that had been building institutional depth for several years before the analytical challenge it would be needed to address became apparent. A review article published by the Bank's Research and Statistics Department in January 2022 (authored by Kameda Seisaku) documented four distinct categories of alternative data then in active use within the institution: high-frequency mobility data derived from mobile phone location records, textual data from surveys and public sources, granular transaction-level data obtained from financial institutions and payment processors, and climate-related data for financial stability analysis [5]. Each category represented a departure from the official statistics that had been the traditional raw material of central bank research, and each carried with it new methodological requirements, the most significant of which was the need for statistical techniques capable of handling high-dimensional, frequently unstructured inputs.
The specific machine-learning applications that emerged from this programme were, by 2022, sufficiently numerous and varied to constitute something approaching a systematic toolkit. Working papers published by the Bank in the second half of that year covered the construction of a nowcasting model for industrial production using gradient-boosted machine-learning techniques applied to mobility and electricity demand data [6]; the development of a GDP nowcasting framework employing elastic net sparse estimation applied to hundreds of daily internet search series combined with weekly retail sales data [7]; the construction of an "Alternative Data Consumption Index" incorporating JCB credit card transaction records, point-of-sale data, and personal spending records, which became available approximately three weeks before the Bank's own official Consumption Activity Index [8]; and a nowcasting model for exports constructed from Automatic Identification System vessel-tracking data processed with machine-learning methods, which successfully captured movements in Japanese export volumes even during the supply chain disruptions of 2020 and 2021 [9]. This is not the output of an institution experimenting with new methods at the margins. It is the output of a department that has made a deliberate decision to rebuild its analytical infrastructure.
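The gradient-boosting idea behind nowcasting models of this kind can be sketched with a toy one-feature regressor: each round fits a small tree (here, a single-split stump) to the current residuals and adds a damped copy of it to the ensemble. Everything below, the data, the single "mobility-style" feature, and the learning rate, is an illustrative assumption; the Bank's actual model, inputs, and software are not reproduced here.

```python
def fit_stump(x, resid):
    """Find the single split on x that minimizes squared error on the residuals."""
    best = None
    xs = sorted(set(x))
    for a, b in zip(xs, xs[1:]):
        t = (a + b) / 2.0
        left = [r for xi, r in zip(x, resid) if xi <= t]
        right = [r for xi, r in zip(x, resid) if xi > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1:]

def boost(x, y, rounds=25, lr=0.5):
    """Gradient boosting for squared loss: repeated residual fitting with shrinkage."""
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        t, lm, rm = fit_stump(x, resid)
        stumps.append((t, lm, rm))
        pred = [p + lr * (lm if xi <= t else rm) for p, xi in zip(pred, x)]
    return base, stumps

def predict(x_new, base, stumps, lr=0.5):
    return base + sum(lr * (lm if x_new <= t else rm) for t, lm, rm in stumps)

# Toy series: a mobility-style indicator (x) and an output-style target (y)
# with a sharp regime break, the kind of non-linearity a linear model misses.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [0, 0, 0, 0, 10, 10, 10, 10]
base, stumps = boost(x, y)
```

Each round removes half of the remaining error, so the ensemble converges geometrically on the step function; a linear regression fitted to the same data would smear the break across the whole range.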
The models built during the deflation decades were calibrated to conditions that no longer obtain. The BoJ's machine learning programme is, in its essence, a response to that problem.
Text, Sentiment, and the Economy Watchers Survey
Among the more revealing specific applications is the Bank's use of Naive Bayes text classification applied to free-text responses from the Cabinet Office's Economy Watchers Survey, a monthly questionnaire in which approximately 2,000 people with direct exposure to current economic conditions, including taxi drivers, supermarket managers, and restaurant proprietors, comment in their own words on what they are observing [10]. The Bank's researchers trained a text classifier to sort these comments into four categories: those implying an increase in prices, those implying a decrease, those implying stability, and those not referring to prices at all. The resulting "Price Sentiment Index" was found to precede official consumer price data by several months and to capture both demand-side and cost-side price pressures simultaneously, including the effects of raw material price movements and exchange rate changes on domestic retail prices. This is a methodologically modest application (Naive Bayes is among the simpler text classification algorithms), but its practical value is considerable: it converts a qualitative data source that had previously been summarised by human analysts into a quantitative, reproducible, real-time indicator that can be tracked systematically across time.
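The mechanics of such a classifier fit in a page. The sketch below is a from-scratch multinomial Naive Bayes with Laplace smoothing, trained on invented English stand-ins for the survey's Japanese free-text responses; the Bank's actual corpus, features, and implementation are not public in this form, so treat every training comment as a hypothetical.

```python
import math
from collections import Counter, defaultdict

# Hypothetical comments labelled with the four price categories from the text.
TRAIN = [
    ("suppliers raised wholesale prices again", "increase"),
    ("we passed higher ingredient costs to menu prices", "increase"),
    ("discounting deepened and shelf prices fell", "decrease"),
    ("we cut prices to move unsold stock", "decrease"),
    ("prices have stayed flat for months", "stable"),
    ("no change in our price list this quarter", "stable"),
    ("customer traffic was heavy over the holiday", "none"),
    ("hiring part-time staff is getting harder", "none"),
]

def train_nb(examples):
    """Fit multinomial Naive Bayes with add-one (Laplace) smoothing."""
    class_counts = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab, len(examples)

def classify(text, model):
    """Return the class with the highest log posterior for the comment."""
    class_counts, word_counts, vocab, n = model
    best, best_lp = None, -math.inf
    for label, count in class_counts.items():
        lp = math.log(count / n)  # log prior
        total = sum(word_counts[label].values())
        for w in text.split():
            if w in vocab:  # out-of-vocabulary words carry no signal; skip them
                lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train_nb(TRAIN)
print(classify("suppliers raised prices on ingredient deliveries", model))  # "increase"
```

Aggregating the daily share of "increase" versus "decrease" classifications over the survey's monthly responses is, in essence, how a qualitative corpus becomes the kind of quantitative index the Bank's researchers describe.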
The foundational work on alternative data for price measurement dates to 2018, when researchers at the Bank published a paper on the use of support vector machine algorithms to construct experimental price indices from web-scraped pricing data, using Kakaku.com (Japan's largest price comparison platform) to track quality-adjusted prices for home appliances and electronics [11]. The paper demonstrated that the traditional Matched-Model Method for quality adjustment showed significant downward bias, and that the machine-learning approach to product matching was more cost-effective than the standard hedonic regression method. It was, in retrospect, the first public indication that the Bank's Research Department had decided that the task of understanding Japanese prices required methods fundamentally different from those that had served it during the era of stable, near-zero inflation.
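The product-matching step in that line of work rests on a classifier deciding whether two listings describe the same underlying product. The sketch below is a toy linear SVM trained by batch subgradient descent on the L2-regularised hinge loss; the pair features (name-token overlap, normalised price gap) and all data are invented for illustration, and the 2018 paper's actual feature set and kernel choice are not reproduced here.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def train_linear_svm(data, lam=0.01, lr=0.1, epochs=300):
    """Batch subgradient descent on the L2-regularized hinge loss."""
    dim = len(data[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        grad = [lam * wi for wi in w]      # gradient of the regularizer
        for x, y in data:
            if y * dot(w, x) < 1:          # margin violated: hinge subgradient
                for j in range(dim):
                    grad[j] -= y * x[j] / len(data)
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

# Hypothetical pair features: [name-token overlap, normalized price gap, bias].
# Label +1 means "same underlying product", -1 means "different product".
pairs = [
    ([0.90, 0.10, 1.0], +1),
    ([0.80, 0.20, 1.0], +1),
    ([0.95, 0.05, 1.0], +1),
    ([0.20, 0.80, 1.0], -1),
    ([0.10, 0.90, 1.0], -1),
    ([0.30, 0.70, 1.0], -1),
]
w = train_linear_svm(pairs)
```

Once listings are matched across time, the price of the "same" product can be followed as models enter and exit the catalogue, which is what allows a web-scraped index to sidestep the downward bias the paper identified in the Matched-Model Method.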
The Institutional Signal of the 2025 Workshop
Institutional commitment to new methodologies can be inferred from working papers and academic output, but it is more reliably read in the priorities set for the institution's own internal forums. The Bank of Japan's Institute for Monetary and Economic Studies held its eleventh annual Finance Workshop on the twenty-eighth of November 2025, choosing as its explicit theme "Applications of Machine Learning and AI to Financial Analysis" [12]. Director-General Shingo Watanabe's opening remarks noted that machine-learning and AI technologies had been utilised in finance "from an early stage" and expressed the Institute's interest in exploring broader applications of large language models in the analysis of financial markets. The three presentations delivered at the workshop covered the use of machine-learning techniques to identify non-linear determinants of Japanese government bond market liquidity, the application of SHAP interpretability methods to explain cross-sectional stock return predictions, and a comparison of pre-trained large language models against traditional dictionary-based approaches in analysing the sentiment of Japanese securities reports, in which the former were found to predict stock market reactions with greater accuracy [12]. The choice of this theme for a formal Institute workshop, attended by external academics and Bank staff alike, is a deliberate institutional statement. It is not the kind of event one organises around a methodology one regards as peripheral.
At a separate research workshop held in February 2025, a Bank economist presented findings from a study on Japanese recession forecasting using mixed-frequency data and machine-learning methods, comparing support vector machines, random forests, and LightGBM against a logistic regression benchmark [13]. The machine-learning models demonstrated greater forecasting accuracy than the benchmark, and the incorporation of newspaper text data as an alternative data input was found to improve short-term recession prediction. A paper published by the Bank's Research Laboratory in November 2025 described an experimental application of the GPT-4o mini language model to economic simulations, using the model to generate synthetic consumer responses to wage and price changes [14]. The authors were appropriately cautious about the practical implications of the results, noting that reproducibility remained challenging due to the inherent randomness in large language model outputs and that further verification would be required before widespread application. That caveat itself is characteristic of the institution: the Bank of Japan does not announce findings it cannot stand behind.
What Governor Ueda Said Seven Days Ago
On the third of March 2026, seven days before this article is published, Governor Kazuo Ueda addressed the FIN/SUM 2026 conference in Tokyo. His remarks on artificial intelligence are worth quoting with care, because they represent the most senior public endorsement of machine-learning applications within the Bank's analytical work to date. Ueda stated that "generative AI is utilized to sort, by type of goods, large volumes of price data collected from markets, which allows users to generate real-time price indices and gain an accurate grasp of inflationary developments," and described this as an instance of "nowcasting," an analytical method that he characterised as emerging and significant [15]. He also observed, more broadly, that AI "enables the analysis and processing of a variety of big data, whether it be with speed and precision or by taking an unconventional approach." These are carefully chosen words from a careful speaker, and they confirm what the working paper record had already suggested: the Bank's leadership regards the machine-learning analytical programme as a legitimate and institutionally endorsed part of its toolkit, not as a speculative research exercise.
The caveat that must accompany this account is equally important. The Bank of Japan's formal policy framework, as it appears in the quarterly Outlook for Economic Activity and Prices and in the deliberations of the Policy Board, continues to rest on conventional econometric analysis and the judgment of the Board members themselves. The machine-learning and alternative data tools are, at present, inputs to the analysis that staff economists provide to the Board, not to the formal projection methodology published in the Outlook. The BIS survey of central bank practices, submitted to the G20 in October 2025, confirmed this pattern across institutions: the primary deployment of machine learning in central banks is in staff research, real-time monitoring, and economic surveillance, rather than in the formal models that underpin official policy projections [16]. This distinction matters, because it means that the analytical revolution is less complete than a reading of the working paper record alone might suggest.
The Question That Remains
The Bank of Japan's machine-learning programme was built, over approximately seven years of patient institutional development, as a response to the analytical failures of the deflation decades: the failure of conventional models to predict when inflation would return, to track the speed of its return when it came, and to disentangle the relative contributions of cost-push and demand-pull factors in an economy where both were operating simultaneously. The tools are better than what preceded them. The nowcasting infrastructure developed by the Research Department can track the Japanese economy in something closer to real time, with a granularity of observation that would have been impossible ten years ago. The text analysis of the Economy Watchers Survey provides a qualitative signal on price expectations that precedes the official data by months.
What remains unresolved is whether tools that were built and validated during the return of inflation are adequate to guide analysis through what may prove to be a more complex and uncertain period: the question of whether Japan's inflation above two per cent represents a durable regime change or a transitory response to cost shocks that will eventually dissipate as those shocks unwind. The Bank's own December 2024 Broad-Perspective Review found that the inflation dynamics of the past three years were driven in significant part by factors that its conventional models had not anticipated [3]. The machine-learning tools now in place are better equipped to track those dynamics in real time. They are not, by their nature, better equipped to determine whether the dynamics will persist, which is the question that will determine whether Japan's monetary normalisation was appropriately calibrated or, in a different and equally uncomfortable sense from its predecessor, premature.
An institution that spent thirty years underestimating how persistent low inflation would be is now asking its new analytical tools to help it avoid the opposite error. That is a considerably more demanding use case than the one for which most of the tools were designed. I do not raise this as a criticism: the tools are the best available, and the people deploying them are serious and capable. I raise it because it is the right question to ask, and because the history of central banking suggests that the analytical frameworks most thoroughly validated by experience are precisely the ones most likely to mislead when the regime changes. The Bank of Japan has lived through that lesson once already. Whether its new tools are adequate to prevent a repetition is the question that will define the next decade of its history.
[1] Bank of Japan. "Statement on Monetary Policy." Monetary Policy Meeting, 19 March 2024. boj.or.jp
[2] Bank of Japan. "Statement on Monetary Policy." Monetary Policy Meeting, 19 December 2025. boj.or.jp
[3] Bank of Japan. "Review of Monetary Policy from a Broad Perspective." December 2024. boj.or.jp
[4] Liu, Yang, Ran Pan, and Rui Xu. "Mending the Crystal Ball: Enhanced Inflation Forecasts with Machine Learning." IMF Working Paper 2024/206. International Monetary Fund, September 2024. imf.org
[5] Kameda, Seisaku. "Use of Alternative Data in the Bank of Japan's Research Activities." Bank of Japan Review Series 2022-E-1, 21 January 2022. boj.or.jp
[6] Furukawa, Kakuho, et al. "A Nowcasting Model of Industrial Production using Alternative Data and Machine Learning Approaches." Bank of Japan Working Paper 22-E-16, November 2022. boj.or.jp
[7] Nakazawa, Takashi. "Constructing GDP Nowcasting Models Using Alternative Data." Bank of Japan Working Paper 22-E-9, July 2022. boj.or.jp
[8] Okubo, Tomohiro, et al. "Development of ‘Alternative Data Consumption Index’: Nowcasting Private Consumption Using Alternative Data." Bank of Japan Working Paper 22-E-8, July 2022. boj.or.jp
[9] Furukawa, Kakuho. "A Nowcasting Model of Exports Using Maritime Big Data." Bank of Japan Working Paper 22-E-19, December 2022. boj.or.jp
[10] Nakajima, Jouchi, et al. "Extracting Firms’ Short-Term Inflation Expectations from the Economy Watchers Survey Using Text Analysis." Bank of Japan Working Paper 21-E-12, 2021. boj.or.jp
[11] Abe, Nobuhiro, and Kimiaki Shinozaki. "Compilation of Experimental Price Indices Using Big Data and Machine Learning." Bank of Japan Working Paper 18-E-13, August 2018. boj.or.jp
[12] Bank of Japan Institute for Monetary and Economic Studies. "11th Annual Finance Workshop: Applications of Machine Learning and AI to Financial Analysis." IMES Newsletter, February 2026. imes.boj.or.jp
[13] Bank of Japan Institute for Monetary and Economic Studies. "BOK/ERI – BOJ/IMES Joint Research Workshop, February 2025." IMES Newsletter, May 2025. imes.boj.or.jp
[14] Takahashi, Yusuke, Kazuki Otaka, and Naoya Kato. "Potential Applications of Generative AI in Economic Simulations." Bank of Japan Research Laboratory Series 25-E-1, 13 November 2025. boj.or.jp
[15] Ueda, Kazuo. "Remarks at FIN/SUM 2026." Bank of Japan, 3 March 2026. boj.or.jp
[16] Bank for International Settlements. "The Use of Artificial Intelligence for Policy Purposes." Report to G20 Finance Ministers and Central Bank Governors, October 2025. bis.org