Generalized Exponentiated Gradient Algorithms and Their Application to On-Line Portfolio Selection

ArXiv ID: 2406.00655

Authors: Unknown

Abstract

This paper introduces a novel family of generalized exponentiated gradient (EG) updates derived from an Alpha-Beta divergence regularization function. Collectively referred to as EGAB, the proposed updates belong to the category of multiplicative gradient algorithms for positive data and offer considerable flexibility, with iteration behavior and performance controlled by three hyperparameters: $\alpha$, $\beta$, and the learning rate $\eta$. To enforce a unit $l_1$ norm constraint on nonnegative weight vectors within the generalized EGAB algorithms, we develop two slightly distinct approaches: one exploits scale-invariant loss functions, while the other relies on gradient projections onto the feasible domain. As an illustration of their applicability, we evaluate the proposed updates on the online portfolio selection (OLPS) problem using gradient-based methods. Here, they not only offer a unified perspective on the search directions of various OLPS algorithms (including the standard exponentiated gradient and diverse mean-reversion strategies), but also facilitate smooth interpolation between and extension of these updates thanks to the flexibility of hyperparameter selection. Simulation results confirm that the adaptability of these generalized gradient updates can effectively enhance performance for some portfolios, particularly in scenarios involving transaction costs.
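The abstract names the standard exponentiated gradient as one special case unified by the EGAB family. As background, here is a minimal sketch of that standard EG portfolio update (multiplicative step on the log-wealth gradient, then renormalization onto the simplex); the paper's generalized $\alpha$, $\beta$ form is not reproduced here, and the learning rate value is an arbitrary illustrative choice.

```python
import numpy as np

def eg_update(w, x, eta=0.05):
    """Standard exponentiated-gradient portfolio update.

    w   : current portfolio weights (nonnegative, summing to 1)
    x   : price relatives for the period (one per asset)
    eta : learning rate

    The gradient of the log-wealth log(w @ x) with respect to w is
    x / (w @ x); EG multiplies each weight by exp(eta * grad) and
    renormalizes, which keeps w on the probability simplex.
    """
    grad = x / (w @ x)
    w_new = w * np.exp(eta * grad)
    return w_new / w_new.sum()

w = np.full(3, 1.0 / 3)              # uniform starting portfolio
x = np.array([1.02, 0.98, 1.05])     # one period of price relatives
w = eg_update(w, x)
```

Because the update is multiplicative with a positive factor, nonnegativity is preserved automatically, and the final division enforces the unit $l_1$ norm exactly.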

Keywords: Exponentiated Gradient, Alpha-Beta Divergence, Online Portfolio Selection, Multiplicative Gradient Algorithms, Nonnegative Constraints

Complexity vs Empirical Score

  • Math Complexity: 8.5/10
  • Empirical Rigor: 6.5/10
  • Quadrant: Holy Grail
  • Why: The paper introduces a novel generalized family of exponentiated gradient algorithms derived from Alpha-Beta divergence, involving advanced mathematical formulations and derivations. It also demonstrates empirical validation through extensive simulation experiments in online portfolio selection, including transaction costs and hyperparameter optimization, making it both mathematically dense and backtest-ready.
```mermaid
flowchart TD
  A["Research Goal<br>Develop flexible EG updates via<br>Alpha-Beta divergence for OLPS"] --> B["Derive EGAB Update<br>Family from Alpha-Beta<br>Divergence Regularization"]
  B --> C{"Enforce Unit l1 Norm?"}
  C -->|Method 1| D["Scale-Invariant<br>Loss Functions"]
  C -->|Method 2| E["Gradient Projection<br>onto Feasible Domain"]
  D --> F["Apply to Online Portfolio<br>Selection with Gradient Methods"]
  E --> F
  F --> G["Simulation on<br>Portfolio Data"]
  G --> H["Key Findings<br>Unified view of OLPS algorithms<br>Hyperparameters enable interpolation<br>Improved performance with transaction costs"]
```
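Method 2 in the flowchart (gradient projection onto the feasible domain) can be sketched generically: projecting a gradient onto the hyperplane $\{d : \sum_i d_i = 0\}$ amounts to subtracting its mean, so a small step along the projected direction preserves the $l_1$ norm to first order. This is a common construction for simplex-constrained updates, not the paper's exact derivation; the step size and clipping floor below are illustrative assumptions.

```python
import numpy as np

def project_gradient(grad):
    """Project a gradient onto the hyperplane {d : sum(d) = 0}.

    Subtracting the mean removes the component normal to the
    constraint sum(w) = 1, leaving the tangential direction.
    """
    return grad - grad.mean()

def multiplicative_step(w, grad, eta=0.05, eps=1e-12):
    """One multiplicative update along the projected gradient,
    with clipping to keep weights strictly positive and an exact
    renormalization back onto the simplex."""
    d = project_gradient(grad)
    w_new = np.clip(w * np.exp(eta * d), eps, None)
    return w_new / w_new.sum()

w = np.array([0.5, 0.3, 0.2])
grad = np.array([1.2, 0.9, 1.0])
d = project_gradient(grad)        # sums to zero by construction
w = multiplicative_step(w, grad)
```

The final renormalization makes the $l_1$ constraint exact rather than first-order, which is why multiplicative schemes of this kind pair the projection with a closing normalization step.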