
Adaptive Market Intelligence: A Mixture of Experts Framework for Volatility-Sensitive Stock Forecasting

ArXiv ID: 2508.02686 · [View on arXiv](https://arxiv.org/abs/2508.02686)

Authors: Diego Vallarino

Abstract: This study develops and empirically validates a Mixture of Experts (MoE) framework for stock price prediction across heterogeneous volatility regimes using real market data. The proposed model combines a Recurrent Neural Network (RNN) optimized for high-volatility stocks with a linear regression model tailored to stable equities. A volatility-aware gating mechanism dynamically weights the contribution of each expert based on asset classification. Using a dataset of 30 publicly traded U.S. stocks spanning diverse sectors, the MoE approach consistently outperforms both standalone models. Specifically, it achieves up to a 33% improvement in MSE for volatile assets and 28% for stable assets relative to their respective baselines. Stratified evaluation across volatility classes demonstrates the model's ability to adapt complexity to the underlying market dynamics. These results confirm that no single model suffices across market regimes and highlight the advantage of adaptive architectures in financial prediction. Future work should explore real-time gate learning, dynamic volatility segmentation, and applications to portfolio optimization. ...
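The gating idea sketches naturally in code. Below is a minimal PyTorch illustration of the kind of two-expert, volatility-weighted mixture the abstract describes; the layer sizes, the sigmoid gate, and the rolling-volatility input are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class VolatilityGatedMoE(nn.Module):
    """Two-expert mixture: an RNN for volatile assets, a linear model for stable ones."""

    def __init__(self, n_features: int, hidden_size: int = 32):
        super().__init__()
        # Expert 1: recurrent network for high-volatility dynamics.
        self.rnn = nn.RNN(n_features, hidden_size, batch_first=True)
        self.rnn_head = nn.Linear(hidden_size, 1)
        # Expert 2: linear regression over the latest features for stable assets.
        self.linear = nn.Linear(n_features, 1)
        # Gate: maps a per-asset volatility estimate to a weight in (0, 1).
        self.gate = nn.Sequential(nn.Linear(1, 1), nn.Sigmoid())

    def forward(self, window: torch.Tensor, volatility: torch.Tensor) -> torch.Tensor:
        # window: (batch, seq_len, n_features); volatility: (batch, 1),
        # e.g. a rolling standard deviation of returns.
        rnn_out, _ = self.rnn(window)
        y_rnn = self.rnn_head(rnn_out[:, -1, :])  # RNN prediction from last hidden state
        y_lin = self.linear(window[:, -1, :])     # linear prediction from latest features
        w = self.gate(volatility)                 # weight on the RNN expert
        return w * y_rnn + (1.0 - w) * y_lin

# Toy usage: 8 assets, 20-day windows of 5 features each.
model = VolatilityGatedMoE(n_features=5)
x = torch.randn(8, 20, 5)
vol = torch.rand(8, 1)
print(model(x, vol).shape)  # torch.Size([8, 1])
```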

July 22, 2025 · 2 min · Research Team

LLM-Based Routing in Mixture of Experts: A Novel Framework for Trading

ArXiv ID: 2501.09636 · [View on arXiv](https://arxiv.org/abs/2501.09636)

Authors: Unknown

Abstract: Recent advances in deep learning and large language models (LLMs) have facilitated the deployment of the mixture-of-experts (MoE) mechanism in the stock investment domain. While these models have demonstrated promising trading performance, they are often unimodal, neglecting the wealth of information available in other modalities, such as textual data. Moreover, the traditional neural-network-based router selection mechanism fails to consider contextual and real-world nuances, resulting in suboptimal expert selection. To address these limitations, we propose LLMoE, a novel framework that employs LLMs as the router within the MoE architecture. Specifically, we replace the conventional neural-network-based router with LLMs, leveraging their extensive world knowledge and reasoning capabilities to select experts based on historical price data and stock news. This approach provides a more effective and interpretable selection mechanism. Our experiments on multimodal real-world stock datasets demonstrate that LLMoE outperforms state-of-the-art MoE models and other deep neural network approaches. Additionally, the flexible architecture of LLMoE allows for easy adaptation to various downstream tasks. ...
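The core change here is replacing a learned gating network with an LLM call. Below is a minimal, framework-agnostic sketch of that routing step; the prompt wording, the `llm_route` helper, and the expert names are hypothetical illustrations, not the paper's implementation.

```python
from typing import Callable, Dict

def llm_route(
    llm: Callable[[str], str],
    price_summary: str,
    news_headlines: list[str],
    experts: Dict[str, Callable],
):
    """Ask an LLM to pick the expert best suited to the current market context.

    `llm` is any text-in/text-out completion function; the prompt format and
    expert names are illustrative, not taken from the LLMoE paper.
    """
    prompt = (
        "Given the recent prices and news below, reply with exactly one word, "
        f"the name of the better-suited expert: {', '.join(experts)}.\n\n"
        f"Prices: {price_summary}\nNews: {'; '.join(news_headlines)}"
    )
    choice = llm(prompt).strip().lower()
    # Fall back to the first expert if the LLM replies with something unexpected.
    return experts.get(choice, next(iter(experts.values())))

# Toy usage with a stub LLM and two dummy experts.
experts = {"optimistic": lambda: "buy", "pessimistic": lambda: "sell"}
stub_llm = lambda prompt: "pessimistic"
expert = llm_route(stub_llm, "AAPL fell 4% this week", ["Rate hike expected"], experts)
print(expert())  # sell
```

Because the router's decision is a plain-language response, it can be logged and inspected directly, which is one way to read the abstract's claim of a more interpretable selection mechanism.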

January 16, 2025 · 2 min · Research Team