When Dimensionality Hurts: The Role of LLM Embedding Compression for Noisy Regression Tasks
arXiv ID: 2502.02199
Authors: Unknown
Abstract
Large language models (LLMs) have shown remarkable success in language modelling due to scaling laws found in model size and the hidden dimension of the model’s text representation. Yet, we demonstrate that compressed representations of text can yield better performance in LLM-based regression tasks. In this paper, we compare the relative performance of embedding compression in three different signal-to-noise contexts: financial return prediction, writing quality assessment and review scoring. Our results show that compressing embeddings, in a minimally supervised manner using an autoencoder’s hidden representation, can mitigate overfitting and improve performance on noisy tasks, such as financial return prediction; but that compression reduces performance on tasks that have high causal dependencies between the input and target data. Our results suggest that the success of interpretable compressed representations such as sentiment may be due to a regularising effect.
Keywords: Large Language Models, Embedding Compression, Financial Prediction, Autoencoders, Signal-to-Noise Ratio
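The compression mechanism described in the abstract is an autoencoder whose hidden (bottleneck) representation replaces the raw LLM embedding before regression. A minimal sketch of such a compressor is given below; PyTorch, the 768-dimensional input (matching all-mpnet-base-v2 embeddings), the bottleneck width, and the full-batch training loop are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch (assumptions: PyTorch; 768-dim inputs; bottleneck width and
# training loop are illustrative, not the paper's reported configuration).
import torch
import torch.nn as nn


class EmbeddingAutoencoder(nn.Module):
    """Compress a sentence embedding to a low-dimensional latent code."""

    def __init__(self, input_dim: int = 768, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def train_autoencoder(embeddings: torch.Tensor, latent_dim: int = 32,
                      epochs: int = 50, lr: float = 1e-3) -> EmbeddingAutoencoder:
    """Fit by reconstruction loss only; downstream tasks keep just the encoder."""
    model = EmbeddingAutoencoder(embeddings.shape[1], latent_dim)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(embeddings), embeddings)
        loss.backward()
        opt.step()
    return model
```

Training only on reconstruction and keeping the encoder is what makes the compression "minimally supervised" in the abstract's sense: the regression target is never used to fit the compressor.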
Complexity vs Empirical Score
- Math Complexity: 4.0/10
- Empirical Rigor: 7.5/10
- Quadrant: Street Traders
- Why: The paper relies on standard machine learning techniques (autoencoders, regression) with minimal advanced mathematical derivation, but it demonstrates strong empirical rigor through three distinct real-world datasets (including financial news and stock returns), a specific embedding model (all-mpnet-base-v2), and clear performance benchmarking, positioning it as a practical, data-heavy investigation.
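Since the embedding model is named above, a short sketch of producing the input embeddings follows; the use of the sentence-transformers library and the example texts are assumptions, as the summary only names the model.

```python
# Minimal sketch (assumption: the sentence-transformers library is how
# all-mpnet-base-v2 embeddings are produced; the paper only names the model).
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-mpnet-base-v2")
texts = [
    "Shares fell sharply after the earnings call.",  # hypothetical financial headline
    "Guidance for the next quarter was raised.",
]
embeddings = encoder.encode(texts)  # numpy array of shape (n_texts, 768)
print(embeddings.shape)
```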
```mermaid
flowchart TD
    A["Research Goal<br/>Does LLM embedding compression<br/>improve regression performance?"]
    B["Datasets & Signal-to-Noise Contexts<br/>Financial (Low SNR)<br/>Writing Quality (High SNR)<br/>Review Scoring (Medium SNR)"]
    C["Methodology<br/>Train Autoencoder on LLM Embeddings<br/>Compress to latent space representation"]
    D["Computation<br/>Regression Task: Predict Target<br/>Compare Raw vs. Compressed Embeddings"]
    E{"Key Findings & Outcomes"}
    A --> B
    B --> C
    C --> D
    D --> E
    E --> F["Low SNR (Financial)<br/>Compression improves performance<br/>Reduces overfitting"]
    E --> G["High SNR (Writing/Review)<br/>Compression reduces performance<br/>Removes causal dependencies"]
    E --> H["Interpretability<br/>Compressed features act as<br/>effective regularization"]
```
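The "Compare Raw vs. Compressed Embeddings" step in the diagram could look like the sketch below, which fits the same regressor on raw embeddings and on the autoencoder's latent codes. The `compare_raw_vs_compressed` helper, ridge regression, the R² metric, and the train/test split are assumptions, not the paper's exact setup, and `autoencoder` refers to the hypothetical `EmbeddingAutoencoder` sketched earlier.

```python
# Minimal sketch (assumptions: scikit-learn Ridge as the downstream regressor and
# R^2 as the comparison metric; the paper's exact regressor and metric may differ).
import numpy as np
import torch
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split


def compare_raw_vs_compressed(X: np.ndarray, y: np.ndarray, autoencoder) -> dict:
    """Fit the same regressor on raw embeddings and on autoencoder latent codes."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    def to_latent(a: np.ndarray) -> np.ndarray:
        # Encode with the (hypothetical) EmbeddingAutoencoder sketched above.
        with torch.no_grad():
            return autoencoder.encoder(torch.from_numpy(a).float()).numpy()

    scores = {}
    for name, transform in {"raw": lambda a: a, "compressed": to_latent}.items():
        reg = Ridge(alpha=1.0).fit(transform(X_tr), y_tr)
        scores[name] = r2_score(y_te, reg.predict(transform(X_te)))
    return scores
```

Per the findings above, the compressed path would be expected to score higher on the low-SNR financial task and lower on the high-SNR writing-quality and review-scoring tasks, consistent with compression acting as regularisation.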