BiHRNN – Bi-Directional Hierarchical Recurrent Neural Network for Inflation Forecasting
ArXiv ID: 2503.01893
Authors: Unknown
Abstract
Inflation prediction guides decisions on interest rates, investments, and wages, playing a key role in economic stability. Yet accurate forecasting is challenging due to dynamic factors and the layered structure of the Consumer Price Index, which organizes goods and services into multiple categories. We propose the Bi-directional Hierarchical Recurrent Neural Network (BiHRNN) model to address these challenges by leveraging the hierarchical structure to enable bidirectional information flow between levels. Informative constraints on the RNN parameters enhance predictive accuracy at all levels without the inefficiencies of a unified model. We validated BiHRNN on inflation datasets from the United States, Canada, and Norway by training, tuning hyperparameters, and experimenting with various loss functions. Our results demonstrate that BiHRNN significantly outperforms traditional RNN models, with its bidirectional architecture playing a pivotal role in achieving improved forecasting accuracy.
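The abstract describes two mechanisms: recurrent units at each level of the CPI hierarchy, and bidirectional information flow between levels. As a rough illustration (not the authors' implementation: the cell type, aggregation by mean, and the `W_up`/`W_down` mixing matrices are all assumptions), a single forward pass over a two-level hierarchy might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not taken from the paper)
T, H = 12, 8          # time steps, hidden size
n_children = 3        # e.g. CPI sub-categories under one aggregate

def rnn_step(x, h, Wx, Wh, b):
    """Vanilla RNN cell: h' = tanh(Wx x + Wh h + b)."""
    return np.tanh(Wx @ x + Wh @ h + b)

# Hypothetical parameters: one cell per hierarchy level, plus
# cross-level matrices carrying information up and down.
Wx_p, Wh_p, b_p = rng.standard_normal((H, 1)), 0.1 * rng.standard_normal((H, H)), np.zeros(H)
Wx_c, Wh_c, b_c = rng.standard_normal((H, 1)), 0.1 * rng.standard_normal((H, H)), np.zeros(H)
W_up = 0.1 * rng.standard_normal((H, H))     # children -> parent
W_down = 0.1 * rng.standard_normal((H, H))   # parent -> children

# Synthetic series: one aggregate index and three sub-indices
parent_x = rng.standard_normal((T, 1))
child_x = rng.standard_normal((n_children, T, 1))

h_p = np.zeros(H)
h_c = np.zeros((n_children, H))
for t in range(T):
    # Bottom-up flow: summarize child states and mix into the parent update
    up = np.tanh(W_up @ h_c.mean(axis=0))
    h_p = rnn_step(parent_x[t], h_p + up, Wx_p, Wh_p, b_p)
    # Top-down flow: broadcast the parent state into each child update
    down = np.tanh(W_down @ h_p)
    for i in range(n_children):
        h_c[i] = rnn_step(child_x[i, t], h_c[i] + down, Wx_c, Wh_c, b_c)

print(h_p.shape, h_c.shape)   # (8,) (3, 8)
```

In this sketch the "informative constraints" mentioned in the abstract are not modeled; in practice they would restrict how the per-level parameters may differ, so each level keeps its own forecaster without the cost of one unified model.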
Keywords: Inflation Prediction, RNN (Recurrent Neural Network), Time Series Forecasting, Consumer Price Index, Deep Learning
Complexity vs Empirical Score
- Math Complexity: 7.5/10
- Empirical Rigor: 8.0/10
- Quadrant: Holy Grail
- Why: The paper introduces a novel neural architecture with mathematically substantive modifications, namely bidirectional hierarchical information flow and constraints on the RNN parameters. It is also highly rigorous empirically: the authors train the model, tune hyperparameters, experiment with several loss functions, backtest on three real-world inflation datasets (US, Canada, Norway), and release code on GitHub.
```mermaid
flowchart TD
A["Research Goal<br>Inflation Prediction using CPI Hierarchy"] --> B["Data Input<br>US, Canada, Norway CPI Datasets"]
B --> C["Methodology<br>Bi-directional Hierarchical RNN (BiHRNN)"]
C --> D["Computational Process<br>Hyperparameter Tuning & Loss Function Experiments"]
D --> E["Key Outcome<br>Significantly Outperforms Traditional RNNs"]
```