Market efficiency, informational asymmetry and pseudo-collusion of adaptively learning agents

ArXiv ID: 2411.05032 · Authors: Unknown

Abstract: We examine the dynamics of informational efficiency in a market with asymmetrically informed, boundedly rational traders who adaptively learn optimal strategies using simple multi-armed bandit (MAB) algorithms. The strategies available to the traders have two dimensions: on the one hand, traders must endogenously choose whether to acquire a costly information signal; on the other, they must determine how aggressively they trade by choosing the share of their wealth invested in the risky asset. Our study contributes to two strands of literature: the literature comparing the effects of competitive and strategic behavior on asset price efficiency under costly information, and the actively growing literature on algorithmic tacit collusion and pseudo-collusion in financial markets. We find that for certain market environments (with low information costs) our model reproduces the results of Kyle (1989), in that the ability of traders to trade strategically leads to worse price efficiency than in the purely competitive case. For other environments (with high information costs), on the other hand, our results show that a market with strategically acting traders can be more efficient than a purely competitive one. Furthermore, we obtain novel results on the ability of independently learning traders to coordinate on pseudo-collusive behavior, leading to non-competitive pricing. Contrary to some recent contributions (see e.g. Cartea et al. 2022), we find that the pseudo-collusive behavior in our model is robust to a large number of agents, demonstrating that even in financial markets with a large number of independently learning traders, non-competitive pricing and pseudo-collusive behavior can frequently arise.
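The abstract describes traders who learn over a two-dimensional strategy space (acquire a costly signal or not, and what share of wealth to invest) via simple bandit algorithms. As a minimal sketch of that setup, here is an epsilon-greedy bandit agent whose arms enumerate (signal choice, investment share) pairs. All class names, arm grids, and parameters below are illustrative assumptions, not the paper's actual model.

```python
import random

class EpsilonGreedyTrader:
    """Illustrative epsilon-greedy bandit over a 2-D strategy space:
    (buy costly signal? yes/no) x (share of wealth in the risky asset)."""

    def __init__(self, shares=(0.0, 0.25, 0.5, 0.75, 1.0), epsilon=0.1):
        # Each arm is a (buy_signal, invested_share) pair.
        self.arms = [(buy, s) for buy in (False, True) for s in shares]
        self.counts = [0] * len(self.arms)
        self.values = [0.0] * len(self.arms)  # running mean payoff per arm
        self.epsilon = epsilon

    def select_arm(self):
        # With prob. epsilon explore uniformly, otherwise exploit.
        if random.random() < self.epsilon:
            return random.randrange(len(self.arms))
        return max(range(len(self.arms)), key=lambda i: self.values[i])

    def update(self, arm, payoff):
        # Incremental mean update: v <- v + (payoff - v) / n
        self.counts[arm] += 1
        self.values[arm] += (payoff - self.values[arm]) / self.counts[arm]
```

In a market simulation, each trader would call `select_arm()` every period, trade according to the chosen (signal, share) pair, and feed the realized wealth change back through `update()`; the pseudo-collusion question is then whether many such independent learners converge on non-competitive strategies.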

November 6, 2024 · 2 min · Research Team

Optimizing Sharpe Ratio: Risk-Adjusted Decision-Making in Multi-Armed Bandits

ArXiv ID: 2406.06552 · Authors: Unknown

Abstract: The Sharpe Ratio (SR) is a critical parameter in characterizing financial time series, as it jointly considers the reward and the volatility of any stock/portfolio through its variance. Deriving online algorithms for optimizing the SR is particularly challenging, since even offline policies experience constant regret with respect to the best expert (Even-Dar et al., 2006). Thus, instead of optimizing the usual definition of the SR, we optimize the regularized squared SR (RSSR). We consider two settings for the RSSR: Regret Minimization (RM) and Best Arm Identification (BAI). For RM, we propose a novel multi-armed bandit (MAB) algorithm, UCB-RSSR, for RSSR maximization. We derive a path-dependent concentration bound for the estimate of the RSSR and, based on it, derive regret guarantees for UCB-RSSR, showing that its regret grows as O(log n) in the two-armed bandit case played over a horizon n. We also consider a fixed-budget setting for well-known BAI algorithms, i.e., sequential halving and successive rejects, and propose the SHVV, SHSR, and SuRSR algorithms. We derive upper bounds on the error probability of all proposed BAI algorithms. We demonstrate that UCB-RSSR outperforms the only other known SR-optimizing bandit algorithm, U-UCB (Cassel et al., 2023), and establish its efficacy against other benchmarks derived from the GRA-UCB and MVTS algorithms. We further demonstrate the performance of the proposed BAI algorithms across multiple setups. Our research highlights that the proposed algorithms will find extensive applications in risk-aware portfolio management problems.
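To make the RSSR objective concrete, here is a small sketch of a UCB-style bandit that maximizes an empirical regularized squared Sharpe ratio, mean² / (variance + λ). This is an illustrative stand-in only: the exploration bonus below is the generic UCB1 term, not the path-dependent concentration bound the paper derives for UCB-RSSR, and the function names and λ regularizer are assumptions.

```python
import math

def empirical_rssr(rewards, lam=1.0):
    """Empirical regularized squared Sharpe ratio: mean^2 / (var + lam).
    The regularizer lam keeps the ratio finite when variance is near zero."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    return mean ** 2 / (var + lam)

def ucb_rssr_sketch(arms, horizon, lam=1.0):
    """arms: list of zero-argument callables returning a reward sample.
    Plays a UCB1-style index over the empirical RSSR and returns the
    arm with the highest empirical RSSR after `horizon` pulls."""
    history = [[arm()] for arm in arms]  # pull each arm once to initialize
    for t in range(len(arms), horizon):
        def index(i):
            # Generic UCB1 exploration bonus (not the paper's bound).
            bonus = math.sqrt(2 * math.log(t + 1) / len(history[i]))
            return empirical_rssr(history[i], lam) + bonus
        best = max(range(len(arms)), key=index)
        history[best].append(arms[best]())
    return max(range(len(arms)), key=lambda i: empirical_rssr(history[i], lam))
```

Squaring the SR and adding λ to the variance sidesteps the instability of dividing by a small estimated standard deviation, which is what makes online SR optimization hard in the first place.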

May 28, 2024 · 2 min · Research Team