
Optimizing Sharpe Ratio: Risk-Adjusted Decision-Making in Multi-Armed Bandits

Optimizing Sharpe Ratio: Risk-Adjusted Decision-Making in Multi-Armed Bandits ArXiv ID: 2406.06552 View on arXiv Authors: Unknown Abstract The Sharpe Ratio (SR) is a critical parameter in characterizing financial time series as it jointly considers the reward and the volatility of any stock/portfolio through its variance. Deriving online algorithms for optimizing the SR is particularly challenging since even offline policies experience constant regret with respect to the best expert (Even-Dar et al., 2006). Thus, instead of optimizing the usual definition of the SR, we optimize the regularized square SR (RSSR). We consider two settings for the RSSR: Regret Minimization (RM) and Best Arm Identification (BAI). For RM, we propose a novel multi-armed bandit (MAB) algorithm, UCB-RSSR, for RSSR maximization. We derive a path-dependent concentration bound for the estimate of the RSSR and, based on it, derive the regret guarantees of UCB-RSSR, showing that the regret grows as O(log n) for the two-armed bandit case played over a horizon n. We also consider a fixed-budget setting for well-known BAI algorithms, i.e., sequential halving and successive rejects, and propose the SHVV, SHSR, and SuRSR algorithms. We derive upper bounds on the error probability of all proposed BAI algorithms. We demonstrate that UCB-RSSR outperforms the only other known SR-optimizing bandit algorithm, U-UCB (Cassel et al., 2023), and establish its efficacy with respect to other benchmarks derived from the GRA-UCB and MVTS algorithms. We further demonstrate the performance of the proposed BAI algorithms for multiple different setups. Our research highlights that the proposed algorithms will find extensive applications in risk-aware portfolio management problems. ...
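
To make the regret-minimization side concrete, below is a minimal sketch of a UCB-style rule driven by a regularized squared Sharpe ratio estimate of each arm. The index form $\hat{\mu}^2 / (\hat{\sigma}^2 + \rho)$, the regularizer $\rho$, and the exploration bonus used here are illustrative assumptions rather than the exact UCB-RSSR index and constants from the paper.

```python
import numpy as np

def ucb_rssr_sketch(arm_means, arm_stds, horizon, rho=0.1, c=1.0, seed=0):
    """UCB-style bandit driven by a regularized squared Sharpe ratio.

    Index: mu_hat**2 / (var_hat + rho) + c * sqrt(log(t) / n_i).
    The index form, regularizer rho and bonus constant c are illustrative
    stand-ins, not the exact UCB-RSSR index from the paper.
    """
    rng = np.random.default_rng(seed)
    k = len(arm_means)
    counts = np.zeros(k, dtype=int)
    sums = np.zeros(k)
    sq_sums = np.zeros(k)

    def pull(i):
        x = rng.normal(arm_means[i], arm_stds[i])
        counts[i] += 1
        sums[i] += x
        sq_sums[i] += x * x

    # Pull every arm twice so the empirical variance is well defined.
    for i in range(k):
        pull(i)
        pull(i)

    for t in range(2 * k, horizon):
        mu = sums / counts
        var = np.maximum(sq_sums / counts - mu ** 2, 1e-12)
        rssr = mu ** 2 / (var + rho)                 # regularized squared SR estimate
        bonus = c * np.sqrt(np.log(t + 1) / counts)  # exploration bonus
        pull(int(np.argmax(rssr + bonus)))

    return counts

if __name__ == "__main__":
    # Arm 0 has the lower mean but far lower volatility, so it is the better
    # risk-adjusted arm and should receive most of the pulls.
    print(ucb_rssr_sketch(arm_means=[0.5, 0.7], arm_stds=[0.2, 1.0], horizon=5000))
```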

May 28, 2024 · 2 min · Research Team

Trading Volume Maximization with Online Learning

Trading Volume Maximization with Online Learning ArXiv ID: 2405.13102 View on arXiv Authors: Unknown Abstract We explore brokerage between traders in an online learning framework. At any round $t$, two traders meet to exchange an asset, provided the exchange is mutually beneficial. The broker proposes a trading price, and each trader tries to sell their asset or buy the asset from the other party, depending on whether the price is higher or lower than their private valuations. A trade happens if one trader is willing to sell and the other is willing to buy at the proposed price. Previous work provided guidance to a broker aiming at enhancing traders’ total earnings by maximizing the gain from trade, defined as the sum of the traders’ net utilities after each interaction. In contrast, we investigate how the broker should behave to maximize the trading volume, i.e., the total number of trades. We model the traders’ valuations as an i.i.d. process with an unknown distribution. If the traders’ valuations are revealed after each interaction (full-feedback), and the traders’ valuations cumulative distribution function (cdf) is continuous, we provide an algorithm achieving logarithmic regret and show its optimality up to constant factors. If only their willingness to sell or buy at the proposed price is revealed after each interaction ($2$-bit feedback), we provide an algorithm achieving poly-logarithmic regret when the traders’ valuations cdf is Lipschitz and show that this rate is near-optimal. We complement our results by analyzing the implications of dropping the regularity assumptions on the unknown traders’ valuations cdf. If we drop the continuous cdf assumption, the regret rate degrades to $\Theta(\sqrt{T})$ in the full-feedback case, where $T$ is the time horizon. If we drop the Lipschitz cdf assumption, learning becomes impossible in the $2$-bit feedback case. ...
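
As a concrete illustration of the full-feedback setting: with i.i.d. valuations and a posted price $p$, a trade occurs with probability $2F(p)(1-F(p))$, which is maximized at the median of the valuation cdf $F$. The sketch below therefore posts the running empirical median of all revealed valuations; the Beta valuation distribution and the handling of early rounds are assumptions, and the paper's algorithm and regret analysis are more careful than this.

```python
import numpy as np

def run_broker(horizon=10_000, seed=0):
    """Full-feedback broker that posts the running empirical median.

    With i.i.d. valuations and price p, a trade occurs when p falls between
    the two valuations, which happens with probability 2*F(p)*(1-F(p)) and
    is maximized at the median of F. Posting the empirical median is a
    minimal sketch of the full-feedback strategy, not the paper's exact
    algorithm; the valuation distribution below is a placeholder.
    """
    rng = np.random.default_rng(seed)
    seen = []          # all valuations revealed so far (full feedback)
    trades = 0
    for _ in range(horizon):
        price = float(np.median(seen)) if seen else 0.5
        v1, v2 = rng.beta(2.0, 5.0, size=2)   # unknown valuation distribution
        if min(v1, v2) <= price <= max(v1, v2):
            trades += 1
        seen.extend([v1, v2])
    return trades

if __name__ == "__main__":
    print(run_broker())
```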

May 21, 2024 · 3 min · Research Team

$ε$-Policy Gradient for Online Pricing

$ε$-Policy Gradient for Online Pricing ArXiv ID: 2405.03624 View on arXiv Authors: Unknown Abstract Combining model-based and model-free reinforcement learning approaches, this paper proposes and analyzes an $ε$-policy gradient algorithm for the online pricing learning task. The algorithm extends the $ε$-greedy algorithm by replacing greedy exploitation with a gradient descent step and facilitates learning via model inference. We optimize the regret of the proposed algorithm by quantifying the exploration cost in terms of the exploration probability $ε$ and the exploitation cost in terms of the gradient descent optimization and gradient estimation errors. The algorithm achieves an expected regret of order $\mathcal{O}(\sqrt{T})$ (up to a logarithmic factor) over $T$ trials. ...
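
A rough sketch of the loop the abstract describes, under stated assumptions: with probability $ε_t$ the posted price is perturbed and the observed revenue yields a one-point estimate of the gradient of expected revenue; otherwise the current price is posted and nudged by a gradient ascent step. The logistic demand curve, the step-size and exploration schedules, and the specific gradient estimator are placeholders, not the paper's specification.

```python
import numpy as np

def eps_policy_gradient_pricing(horizon=20_000, seed=0):
    """Illustrative epsilon-policy-gradient loop for online pricing.

    Exploration rounds perturb the price and form a one-point gradient
    estimate of the (smoothed) expected revenue; exploitation rounds post
    the current price and take a gradient ascent step. All schedules and
    the demand model are placeholder assumptions.
    """
    rng = np.random.default_rng(seed)

    def buy_prob(p):  # environment's demand curve, unknown to the learner
        return 1.0 / (1.0 + np.exp(3.0 * (p - 1.0)))

    price, grad_est, total_revenue, delta = 0.6, 0.0, 0.0, 0.1
    for t in range(1, horizon + 1):
        eps = min(1.0, t ** (-1.0 / 3.0))   # decaying exploration probability
        lr = 0.5 * t ** (-0.5)               # gradient ascent step size
        if rng.random() < eps:
            u = rng.choice([-1.0, 1.0])
            posted = price + delta * u
            revenue = posted * (rng.random() < buy_prob(posted))
            grad_est = revenue * u / delta    # one-point gradient estimate
        else:
            posted = price
            revenue = posted * (rng.random() < buy_prob(posted))
            price = float(np.clip(price + lr * grad_est, 0.05, 2.0))
        total_revenue += revenue
    return price, total_revenue

if __name__ == "__main__":
    print(eps_policy_gradient_pricing())
```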

May 6, 2024 · 2 min · Research Team

A Game of Competition for Risk

A Game of Competition for Risk ArXiv ID: 2305.18941 View on arXiv Authors: Unknown Abstract In this study, we present models where participants strategically select their risk levels and earn corresponding rewards, mirroring real-world competition across various sectors. Our analysis starts with a normal form game involving two players in a continuous action space, confirming the existence and uniqueness of a Nash equilibrium and providing an analytical solution. We then extend this analysis to multi-player scenarios, introducing a new numerical algorithm for its calculation. A key novelty of our work lies in using regret minimization algorithms to solve continuous games through discretization. This groundbreaking approach enables us to incorporate additional real-world factors like market frictions and risk correlations among firms. We also experimentally validate that the Nash equilibrium in our model also serves as a correlated equilibrium. Our findings illuminate how market frictions and risk correlations affect strategic risk-taking. We also explore how policy measures can impact risk-taking and its associated rewards, with our model providing broader applicability than the Diamond-Dybvig framework. We make our methodology and open-source code available at https://github.com/louisabraham/cfrgame. Finally, we contribute methodologically by advocating the use of algorithms in economics, shifting focus from finite games to games with continuous action sets. Our study provides a solid framework for analyzing strategic interactions in continuous action games, emphasizing the importance of market frictions, risk correlations, and policy measures in strategic risk-taking dynamics. ...
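
To illustrate the discretization-plus-regret-minimization idea, the sketch below runs regret matching for two players over a discretized grid of risk levels, so that the empirical play approximates a coarse correlated equilibrium of the discretized game. The contest-style payoff (market share proportional to risk, discounted by failure probability) is a placeholder, not the paper's model, which additionally handles market frictions and risk correlations.

```python
import numpy as np

def regret_matching_risk_game(grid_size=51, iters=20_000, seed=0):
    """Regret matching on a discretized two-player risk-competition game.

    The continuous action set [0, 1] of risk levels is discretized and each
    player runs regret matching; the averaged strategies approximate a
    (coarse) correlated equilibrium of the discretized game. The payoff
    below is a placeholder, not the paper's exact model.
    """
    rng = np.random.default_rng(seed)
    actions = np.linspace(0.01, 0.99, grid_size)

    def payoff(a, b):
        return (1.0 - a) * a / (a + b)   # placeholder risk-vs-reward trade-off

    # Payoff matrix for the row player; the game is symmetric.
    U = np.array([[payoff(a, b) for b in actions] for a in actions])

    regrets = [np.zeros(grid_size), np.zeros(grid_size)]
    avg = [np.zeros(grid_size), np.zeros(grid_size)]
    for _ in range(iters):
        strat = []
        for r in regrets:
            pos = np.maximum(r, 0.0)
            strat.append(pos / pos.sum() if pos.sum() > 0
                         else np.full(grid_size, 1.0 / grid_size))
        a0 = rng.choice(grid_size, p=strat[0])
        a1 = rng.choice(grid_size, p=strat[1])
        # Regret against the opponent's realized action (symmetric game).
        regrets[0] += U[:, a1] - U[a0, a1]
        regrets[1] += U[:, a0] - U[a1, a0]
        avg[0] += strat[0]
        avg[1] += strat[1]

    avg = [s / iters for s in avg]
    return actions, avg

if __name__ == "__main__":
    actions, (p0, p1) = regret_matching_risk_game()
    print("player 0 modal risk level:", actions[np.argmax(p0)])
    print("player 1 modal risk level:", actions[np.argmax(p1)])
```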

May 30, 2023 · 2 min · Research Team