Multimodal Stock Price Prediction: A Case Study of the Russian Securities Market

ArXiv ID: 2503.08696

Authors: Unknown

Abstract

Classical asset price forecasting methods rely primarily on numerical data such as price time series, trading volumes, limit order book data, and technical analysis indicators. However, the news flow plays a significant role in price formation, making multimodal approaches that combine textual and numerical data highly relevant for improving prediction accuracy. This paper addresses the problem of forecasting financial asset prices using a multimodal approach that combines candlestick time series with a textual news flow. A unique dataset was collected for the study, comprising time series for 176 Russian stocks traded on the Moscow Exchange and 79,555 financial news articles in Russian. Textual data were processed with the pre-trained models RuBERT and Vikhr-Qwen2.5-0.5b-Instruct (a large language model), while the time series and vectorized text were processed with an LSTM recurrent neural network. The experiments compared single-modality models (time series only) with two-modality models, as well as various methods for aggregating text vector representations. Prediction quality was evaluated with two key metrics: Accuracy (correctness of the predicted direction of price movement, up or down) and Mean Absolute Percentage Error (MAPE), which measures the deviation of the predicted price from the true price. The experiments showed that incorporating the textual modality reduced MAPE by 55%. The resulting multimodal dataset is valuable for further adaptation of language models to the financial sector. Future research directions include optimizing textual-modality parameters such as the time window, sentiment, and chronological ordering of news messages.
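The two evaluation metrics named in the abstract can be sketched as follows. This is an illustrative implementation, not the authors' exact code; the function names and the use of the previous close as the direction reference are assumptions:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent:
    mean(|y_true - y_pred| / |y_true|) * 100."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def direction_accuracy(prev_close, y_true, y_pred):
    """Share of samples where the predicted up/down movement
    (relative to the previous close) matches the true movement."""
    prev = np.asarray(prev_close, dtype=float)
    true_up = np.asarray(y_true, dtype=float) > prev
    pred_up = np.asarray(y_pred, dtype=float) > prev
    return float(np.mean(true_up == pred_up))
```

For example, predicting 110 and 180 against true prices 100 and 200 gives a MAPE of 10%, while the direction score only checks whether each prediction lands on the correct side of the prior close.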

Keywords: Multimodal Learning, Price Forecasting, LSTM, Large Language Models (LLM), RuBERT, Equities (Russian Stocks)

Complexity vs Empirical Score

  • Math Complexity: 7.0/10
  • Empirical Rigor: 9.0/10
  • Quadrant: Holy Grail
  • Why: The paper employs advanced deep learning architectures (LSTMs, pre-trained LLMs like RuBERT/Qwen) for multimodal fusion, representing substantial mathematical modeling complexity, and demonstrates empirical rigor with a custom dataset of 176 stocks and 79,555 news articles, rigorous backtesting metrics (Accuracy and MAPE), and a clear 55% MAPE improvement claim.
```mermaid
flowchart TD
    A["Research Goal: Forecast Russian stock prices using multimodal data"] --> B["Data Collection"]
    B --> C["Modality Processing"]
    C --> D["Model Training & Prediction"]
    D --> E["Evaluation & Findings"]

    B --> B1["176 Stocks: Candlestick Time Series"]
    B --> B2["79,555 News Articles: Textual Data"]

    C --> C1["Text Embeddings: RuBERT & Vikhr-Qwen2.5 LLM"]
    C --> C2["Time Series: LSTM Processing"]

    D --> D1["Multimodal Fusion"]
    D1 --> D2["LSTM Model Training"]

    E --> E1["Metrics: Accuracy & MAPE"]
    E --> E2["Key Result: 55% MAPE Reduction"]
    E --> E3["Outcome: Validated Multimodal Dataset"]
```
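One simple way to realize the "Multimodal Fusion" step above is to mean-pool each trading day's news embeddings and concatenate the result with that day's candlestick features, producing one fused vector per time step for the LSTM. This is a sketch under assumed conventions (mean pooling, zero vector on news-free days, an arbitrary embedding dimension), not necessarily the paper's exact aggregation method:

```python
import numpy as np

def fuse_day_features(candle, news_embeddings, emb_dim=8):
    """Concatenate one day's OHLCV features with the mean-pooled
    embedding of that day's news articles (zeros if no news)."""
    candle = np.asarray(candle, dtype=float)
    if news_embeddings:  # non-empty list of per-article vectors
        text_vec = np.mean(np.asarray(news_embeddings, dtype=float), axis=0)
    else:
        text_vec = np.zeros(emb_dim)
    return np.concatenate([candle, text_vec])

def build_sequence(candles, news_by_day, emb_dim=8):
    """Stack fused vectors into an (T, candle_dim + emb_dim) array,
    the per-stock input sequence for a recurrent model."""
    return np.stack([fuse_day_features(c, n, emb_dim)
                     for c, n in zip(candles, news_by_day)])
```

With 5 candlestick features and an 8-dimensional text embedding, a 30-day window yields a (30, 13) sequence, which can be fed to any standard LSTM layer.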