The Memorization Problem: Can We Trust LLMs' Economic Forecasts?
ArXiv ID: 2504.14765

Abstract

Large language models (LLMs) cannot be trusted for economic forecasts during periods covered by their training data. Counterfactual forecasting ability is non-identified when the model has seen the realized values: any observed output is consistent with both genuine skill and pure memorization, so any evidence of memorization represents only a lower bound on encoded knowledge. We demonstrate that LLMs have memorized economic and financial data, recalling exact values before their knowledge cutoff. Instructions to respect historical boundaries fail to prevent recall-level accuracy, and masking fails because LLMs reconstruct entities and dates from minimal context. Post-cutoff, we observe no recall. Memorization extends to embeddings.
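The pre/post-cutoff comparison described above can be sketched as follows. This is an illustrative toy, not the paper's code: the `recall_rate` helper and the four-point series are hypothetical, standing in for model "forecasts" compared against realized values on either side of the knowledge cutoff.

```python
def recall_rate(forecasts, realized, tol=1e-6):
    """Fraction of forecasts that reproduce the realized value exactly
    (within a numerical tolerance). A rate near 1.0 on pre-cutoff data
    is consistent with memorization rather than forecasting skill."""
    hits = [abs(f - r) <= tol for f, r in zip(forecasts, realized)]
    return sum(hits) / len(hits)

# Toy data: pre-cutoff "forecasts" match realized values exactly
# (consistent with memorization); post-cutoff forecasts do not.
pre_realized  = [3.1, 2.7, 4.0, 1.5]
pre_forecast  = [3.1, 2.7, 4.0, 1.5]   # exact recall
post_realized = [2.2, 3.8, 1.9, 2.5]
post_forecast = [2.0, 3.5, 2.4, 2.1]   # imperfect, genuine-looking forecasts

print(recall_rate(pre_forecast, pre_realized))    # → 1.0
print(recall_rate(post_forecast, post_realized))  # → 0.0
```

The non-identification point follows directly: on pre-cutoff data, a recall rate of 1.0 is observationally equivalent to a perfect forecaster, so skill cannot be separated from memorization there; only the post-cutoff drop to 0.0 is informative.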