
Graph Learning for Foreign Exchange Rate Prediction and Statistical Arbitrage

ArXiv ID: 2508.14784 Authors: Yoonsik Hong, Diego Klabjan Abstract We propose a two-step graph learning approach for foreign exchange statistical arbitrages (FXSAs), addressing two key gaps in prior studies: the absence of graph-learning methods for foreign exchange rate prediction (FXRP) that leverage multi-currency and currency-interest rate relationships, and the disregard of the time lag between price observation and trade execution. In the first step, to capture complex multi-currency and currency-interest rate relationships, we formulate FXRP as an edge-level regression problem on a discrete-time spatiotemporal graph. This graph consists of currencies as nodes and exchanges as edges, with interest rates and foreign exchange rates serving as node and edge features, respectively. We then introduce a graph-learning method that leverages the spatiotemporal graph to address the FXRP problem. In the second step, we present a stochastic optimization problem to exploit FXSAs while accounting for the observation-execution time lag. To address this problem, we propose a graph-learning method that enforces constraints through projection and ReLU, maximizes risk-adjusted return by leveraging a graph with exchanges as nodes and influence relationships as edges, and utilizes the predictions from the FXRP method for the constraint parameters and node features. Moreover, we prove that our FXSA method satisfies empirical arbitrage constraints. The experimental results demonstrate that our FXRP method yields statistically significant improvements in mean squared error, and that the FXSA method achieves a 61.89% higher information ratio and a 45.51% higher Sortino ratio than a benchmark. Our approach provides a novel perspective on FXRP and FXSA within the context of graph learning. ...
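To make the edge-level regression framing concrete, here is a minimal NumPy sketch (our illustration, not the authors' graph-learning model; all names, sizes, and the linear regressor are stand-ins): currencies are nodes carrying interest rates, directed edges carry log FX rates, and each edge's next-step rate is regressed on its two endpoint node features plus its current value.

```python
import numpy as np

# Hypothetical toy data: 4 currencies over 200 time steps.
rng = np.random.default_rng(0)
n_currencies, n_steps = 4, 200
rates = rng.normal(0.02, 0.005, size=(n_steps, n_currencies))  # node features: interest rates
fx = np.cumsum(rng.normal(0, 0.01, size=(n_steps, n_currencies, n_currencies)),
               axis=0)                                         # edge features: log FX rates

# Design matrix: for each directed edge (i, j) at time t,
# features = [r_i(t), r_j(t), fx_ij(t)], target = fx_ij(t+1).
X, y = [], []
for t in range(n_steps - 1):
    for i in range(n_currencies):
        for j in range(n_currencies):
            if i != j:
                X.append([rates[t, i], rates[t, j], fx[t, i, j]])
                y.append(fx[t + 1, i, j])
X, y = np.array(X), np.array(y)

# Ordinary least squares stands in for the learned graph model.
A = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
mse = np.mean((A @ coef - y) ** 2)
print(round(float(mse), 6))
```

A real graph-learning method would replace the least-squares step with message passing over the currency graph; the data layout above is the part the abstract pins down.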

August 20, 2025 · 2 min · Research Team

Automated Market Makers: A Stochastic Optimization Approach for Profitable Liquidity Concentration

ArXiv ID: 2504.16542 Authors: Simon Caspar Zeller, Paul-Niklas Ken Kandora, Daniel Kirste, Niclas Kannengießer, Steffen Rebennack, Ali Sunyaev Abstract Concentrated liquidity automated market makers (AMMs), such as Uniswap v3, enable liquidity providers (LPs) to earn liquidity rewards by depositing tokens into liquidity pools. However, LPs often face significant financial losses driven by poorly selected liquidity provision intervals and high costs associated with frequent liquidity reallocation. To support LPs in achieving more profitable liquidity concentration, we developed a tractable stochastic optimization problem that can be used to compute optimal liquidity provision intervals. The problem accounts for the relationships between liquidity rewards, divergence loss, and reallocation costs; formalizing optimal liquidity provision in this way supports a better understanding of those relationships and offers a foundation for more profitable liquidity concentration. ...
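The trade-off the abstract describes can be sketched numerically (our construction, not the paper's optimization problem; fee rates, costs, and the profit form are hypothetical, and divergence loss is omitted for brevity): a narrower interval concentrates liquidity and earns more fees per in-range step, but the price exits it more often, triggering reallocation costs.

```python
import numpy as np

def expected_profit(p_a, p_b, price_paths, fee_rate=0.003, realloc_cost=1.0):
    """Illustrative LP objective over interval [p_a, p_b] (not Uniswap v3's exact math)."""
    width = np.log(p_b / p_a)
    in_range = (price_paths >= p_a) & (price_paths <= p_b)
    # Narrower intervals concentrate liquidity: more fees per in-range step.
    rewards = fee_rate * in_range.sum() / width
    # ...but the price exits a narrow interval more often, forcing reallocation.
    exits = np.sum(in_range[:, :-1] & ~in_range[:, 1:])
    return rewards - realloc_cost * exits

# 200 simulated geometric-random-walk price paths of 50 steps, starting at 100.
rng = np.random.default_rng(1)
paths = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=(200, 50)), axis=1))
wide = expected_profit(90.0, 110.0, paths)
narrow = expected_profit(99.0, 101.0, paths)
print(wide, narrow)
```

The paper's contribution is to make exactly this kind of trade-off a tractable stochastic optimization problem rather than a simulation heuristic.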

April 23, 2025 · 2 min · Research Team

Battery valuation on electricity intraday markets with liquidity costs

ArXiv ID: 2412.15959 Authors: Unknown Abstract In this paper, we propose a complete modelling framework to value several batteries in the electricity intraday market at the trading session scale. The model consists of a stochastic model for the 24 mid-prices (one price per delivery hour) combined with a deterministic model for the liquidity costs (representing the cost of going deeper in the order book). A stochastic optimisation framework based on dynamic programming is used to calculate the value of the batteries. We carry out a backtest for the years 2021, 2022 and 2023 for the German and French markets. We show that it is essential to take liquidity into account, especially when the number of batteries is large: our liquidity model allows much higher profits and avoids heavy losses. The use of our stochastic model for the mid-price also significantly improves the results, compared to a deterministic framework where the mid-price forecast is the spot price. ...
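The dynamic-programming valuation can be sketched in miniature (a toy, not the paper's framework: prices, grid sizes, and the quadratic liquidity cost are hypothetical): one battery trades against hourly mid-prices, and a convex liquidity cost penalizes large trades that go deeper into the order book.

```python
import numpy as np

prices = np.array([50.0, 45.0, 60.0, 40.0, 70.0, 55.0])  # hourly mid-prices, EUR/MWh
levels = np.arange(0, 5)                                  # state of charge grid, MWh

def liq_cost(q):
    # Deterministic, convex liquidity cost: trading more costs disproportionately more.
    return 0.5 * q * q

# Backward recursion: V[h, s] = best profit from hour h onward with charge s.
V = np.zeros((len(prices) + 1, len(levels)))
for h in range(len(prices) - 1, -1, -1):
    for s in levels:
        best = -np.inf
        for s_next in levels:
            q = s - s_next                                # q > 0 means selling q MWh
            best = max(best, q * prices[h] - liq_cost(abs(q)) + V[h + 1, s_next])
        V[h, s] = best

print(V[0, 0])  # value of an initially empty battery over the session
```

With a quadratic cost, the optimizer spreads trades across hours instead of dumping the full charge at the single best price, which is the qualitative effect the paper quantifies at scale.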

December 20, 2024 · 2 min · Research Team

A Joint Energy and Differentially-Private Smart Meter Data Market

ArXiv ID: 2412.07688 Authors: Unknown Abstract Given the vital role that smart meter data could play in handling uncertainty in energy markets, data markets have been proposed as a means to enable increased data access. However, most extant literature considers energy markets and data markets separately, which ignores the interdependence between them. In addition, existing data market frameworks rely on a trusted entity to clear the market. This paper proposes a joint energy and data market focusing on the day-ahead retailer energy procurement problem with uncertain demand. The retailer can purchase differentially-private smart meter data from consumers to reduce uncertainty. The problem is modelled as an integrated forecasting and optimisation problem providing a means of valuing data directly rather than valuing forecasts or forecast accuracy. Value is determined by the Wasserstein distance, enabling privacy to be preserved during the valuation and procurement process. The value of joint energy and data clearing is highlighted through numerical case studies using both synthetic and real smart meter data. ...
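The Wasserstein-based valuation can be illustrated in one dimension (our sketch, not the paper's mechanism; the distributions and the "value = distance" reading are illustrative): the further a consumer's empirical demand distribution sits from the retailer's prior, the more that data shifts the retailer's beliefs.

```python
import numpy as np

def wasserstein_1d(a, b):
    # For equal-size 1-D samples, the 1-Wasserstein distance is the mean
    # absolute difference of sorted order statistics.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

rng = np.random.default_rng(2)
prior = rng.normal(1.0, 0.3, size=1000)   # retailer's prior demand belief, kWh
meter = rng.normal(1.4, 0.3, size=1000)   # (differentially-private) smart-meter sample
value = wasserstein_1d(prior, meter)      # larger distance -> more informative data
print(round(float(value), 3))
```

Because the distance is computed between distributions rather than raw readings, noise added for differential privacy perturbs the valuation without exposing individual observations, which is the privacy-preserving property the abstract emphasizes.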

December 10, 2024 · 2 min · Research Team

Higher order measures of risk and stochastic dominance

ArXiv ID: 2402.15387 Authors: Unknown Abstract Higher order risk measures are stochastic optimization problems by design, and for this reason they enjoy valuable properties in optimization under uncertainty. They integrate naturally with stochastic optimization problems, as observed, for example, in the intriguing concept of the risk quadrangle. Stochastic dominance is a binary relation on random variables used to compare random outcomes. It is demonstrated that the concepts of higher order risk measures and stochastic dominance are equivalent: each can be employed to characterize the other. The paper explores these relations and connects stochastic orders, higher order risk measures and the risk quadrangle. Expectiles are employed to exemplify the relations obtained. ...
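Expectiles, which the paper uses to exemplify its relations, are easy to compute: the tau-expectile minimizes an asymmetrically weighted squared loss, just as quantiles minimize an asymmetric absolute loss. A standard fixed-point iteration (our minimal implementation, not the paper's):

```python
import numpy as np

def expectile(x, tau, tol=1e-10):
    """tau-expectile of a sample via the weighted-mean fixed-point iteration."""
    e = np.mean(x)
    while True:
        # Observations above the current estimate get weight tau, others 1 - tau.
        w = np.where(x > e, tau, 1 - tau)
        e_new = np.sum(w * x) / np.sum(w)
        if abs(e_new - e) < tol:
            return e_new
        e = e_new

x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
print(expectile(x, 0.5))   # tau = 0.5 recovers the plain mean
print(expectile(x, 0.9))   # higher tau weights the upper tail more heavily
```

Unlike quantiles, expectiles depend on the magnitude of all outcomes, not just their order, which is what makes them a natural bridge to the higher order risk measures the paper studies.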

February 23, 2024 · 2 min · Research Team

Portfolio Optimization under Transaction Costs with Recursive Preferences

ArXiv ID: 2402.08387 Authors: Unknown Abstract The Merton investment-consumption problem is fundamental, both in the field of finance, and in stochastic control. An important extension of the problem adds transaction costs, which is highly relevant from a financial perspective but also challenging from a control perspective because the solution now involves singular control. A further significant extension takes us from additive utility to stochastic differential utility (SDU), which allows time preferences and risk preferences to be disentangled. In this paper, we study this extended version of the Merton problem with proportional transaction costs and Epstein-Zin SDU. We fully characterise all parameter combinations for which the problem is well posed (which may depend on the level of transaction costs) and provide a full verification argument that relies on no additional technical assumptions and uses primal methods only. The case with SDU requires new mathematical techniques as duality methods break down. Even in the special case of (additive) power utility, our arguments are significantly simpler, more elegant and more far-reaching than the ones in the extant literature. This means that we can easily analyse aspects of the problem which previously have been very challenging, including comparative statics, boundary cases which heretofore have required separate treatment and the situation beyond the small transaction cost regime. A key and novel idea is to parametrise consumption and the value function in terms of the shadow fraction of wealth, which may be of much wider applicability. ...
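For orientation, the classical frictionless benchmark that the transaction-cost analysis perturbs is standard background (not a result of this paper): with additive power utility $U(c) = c^{1-\gamma}/(1-\gamma)$, a risky asset with drift $\mu$ and volatility $\sigma$, and interest rate $r$, Merton's optimal risky fraction of wealth is constant,

```latex
\pi^{*} \;=\; \frac{\mu - r}{\gamma \sigma^{2}} .
```

With proportional transaction costs this constant target is replaced by a no-trade interval around it, and trading becomes singular control at the interval's boundaries, which is the regime the abstract describes.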

February 13, 2024 · 2 min · Research Team

Representation of forward performance criteria with random endowment via FBSDE and its application to forward optimized certainty equivalent

ArXiv ID: 2401.00103 Authors: Unknown Abstract We extend the notion of forward performance criteria to settings with random endowment in incomplete markets. Building on these results, we introduce and develop the novel concept of the forward optimized certainty equivalent (forward OCE), which offers a genuinely dynamic valuation mechanism that accommodates progressively adaptive market model updates, stochastic risk preferences, and incoming claims with arbitrary maturities. In parallel, we develop a new methodology to analyze the emerging stochastic optimization problems by directly studying the candidate optimal control processes for both the primal and dual problems. Specifically, we derive two new systems of forward-backward stochastic differential equations (FBSDEs) and establish necessary and sufficient conditions for optimality, and various equivalences between the two problems. This new approach is general and complements the existing one for forward performance criteria with random endowment based on backward stochastic partial differential equations (backward SPDEs) for the related value functions. We also consider representative examples for both forward performance criteria with random endowment and for forward OCE. Furthermore, for the case of exponential criteria, we investigate the connection between forward OCE and forward entropic risk measures. ...
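As background (standard material, not this paper's contribution), the static optimized certainty equivalent of Ben-Tal and Teboulle, which the forward notion dynamizes, assigns to a claim $X$ and a concave utility $u$ the value

```latex
\mathrm{OCE}_{u}(X) \;=\; \sup_{\eta \in \mathbb{R}} \Big\{ \eta + \mathbb{E}\big[\, u(X - \eta) \,\big] \Big\},
```

interpreted as the best split of $X$ between a sure amount $\eta$ consumed today and the utility of the uncertain remainder. The forward OCE replaces the fixed $u$ with a forward performance criterion that evolves with the market.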

December 29, 2023 · 2 min · Research Team

Onflow: an online portfolio allocation algorithm

ArXiv ID: 2312.05169 Authors: Unknown Abstract We introduce Onflow, a reinforcement learning technique that enables online optimization of portfolio allocation policies based on gradient flows. We devise dynamic allocations of an investment portfolio to maximize its expected log return while taking into account transaction fees. The portfolio allocation is parameterized through a softmax function, and at each time step, the gradient flow method leads to an ordinary differential equation whose solutions correspond to the updated allocations. This algorithm belongs to the large class of stochastic optimization procedures; we measure its efficiency by comparing our results to the mathematical theoretical values in a log-normal framework and to standard benchmarks from the ‘old NYSE’ dataset. For log-normal assets, the strategy learned by Onflow, with transaction costs at zero, mimics Markowitz’s optimal portfolio and thus the best possible asset allocation strategy. Numerical experiments on the ‘old NYSE’ dataset show that Onflow leads to dynamic asset allocation strategies whose performances are: a) comparable to benchmark strategies such as Cover’s Universal Portfolio or Helmbold et al.’s “multiplicative updates” approach when transaction costs are zero, and b) better than previous procedures when transaction costs are high. Onflow can even remain efficient in regimes where other dynamical allocation techniques no longer work. Therefore, as far as tested, Onflow appears to be a promising dynamic portfolio management strategy based on observed prices only and without any assumption on the laws of distributions of the underlying assets’ returns. In particular, it could avoid model risk when building a trading strategy. ...
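The softmax parameterization from the abstract can be sketched as follows (our simplification: plain stochastic gradient ascent on the log return stands in for the paper's ODE gradient flow, and transaction fees are ignored). Because the weights are a softmax of unconstrained parameters, they stay on the probability simplex by construction.

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())   # shift for numerical stability
    return z / z.sum()

rng = np.random.default_rng(3)
n_assets, lr = 3, 0.5
theta = np.zeros(n_assets)                          # unconstrained allocation parameters
mu = np.array([0.0, 0.001, 0.002])                  # hypothetical per-step excess drifts

for t in range(2000):
    r = 1.0 + mu + rng.normal(0, 0.01, n_assets)    # gross returns this step
    w = softmax(theta)
    growth = w @ r                                  # portfolio gross return
    # Gradient of log(w @ r) w.r.t. theta, using d w_k / d theta_j = w_k (1{k=j} - w_j):
    grad = w * (r / growth - 1.0)
    theta += lr * grad                              # ascend the realized log return

print(np.round(softmax(theta), 3))                  # weights drift toward the best asset
```

The learner uses observed prices only, with no distributional assumption, which is the model-risk point the abstract closes on.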

December 8, 2023 · 2 min · Research Team

Quantum-inspired nonlinear Galerkin ansatz for high-dimensional HJB equations

ArXiv ID: 2311.12239 Authors: Unknown Abstract Neural networks are increasingly recognized as a powerful numerical solution technique for partial differential equations (PDEs) arising in diverse scientific computing domains, including quantum many-body physics. In the context of time-dependent PDEs, the dominant paradigm involves casting the approximate solution in terms of stochastic minimization of an objective function given by the norm of the PDE residual, viewed as a function of the neural network parameters. Recently, advancements have been made in the direction of an alternative approach which shares aspects of nonlinearly parametrized Galerkin methods and variational quantum Monte Carlo, especially for high-dimensional, time-dependent PDEs that extend beyond the usual scope of quantum physics. This paper is inspired by the potential of solving Hamilton-Jacobi-Bellman (HJB) PDEs using Neural Galerkin methods and commences the exploration of nonlinearly parametrized trial functions for which the evolution equations are analytically tractable. As a precursor to the Neural Galerkin scheme, we present trial functions with evolution equations that admit closed-form solutions, focusing on time-dependent HJB equations relevant to finance. ...
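For reference (textbook material, not a result of this paper), the class of equations in question is the HJB equation of a controlled diffusion: for value function $V$, drift $b$, diffusion $\sigma$, running reward $f$, and terminal reward $g$,

```latex
\partial_t V(t,x) \;+\; \sup_{a \in A} \Big\{ b(t,x,a) \cdot \nabla_x V
\;+\; \tfrac{1}{2} \operatorname{Tr}\!\big( \sigma \sigma^{\top}(t,x,a)\, \nabla_x^2 V \big)
\;+\; f(t,x,a) \Big\} \;=\; 0, \qquad V(T,x) = g(x).
```

The nonlinearity inside the supremum and the dimension of $x$ are what make grid-based solvers infeasible and motivate parametrized trial functions whose evolution is analytically tractable.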

November 20, 2023 · 2 min · Research Team

Uses of Sub-sample Estimates to Reduce Errors in Stochastic Optimization Models

ArXiv ID: 2310.07052 Authors: Unknown Abstract Optimization software enables the solution of problems with millions of variables and associated parameters. These parameters are, however, often uncertain and represented with an analytical description of the parameter’s distribution or with some form of sample. With large numbers of such parameters, optimization of the resulting model is often driven by mis-specifications or extreme sample characteristics, resulting in solutions that are far from a true optimum. This paper describes how asymptotic convergence results may not be useful in large-scale problems and how the optimization of problems based on sub-sample estimates may achieve improved results over models using full-sample solution estimates. A motivating example and numerical results from a portfolio optimization problem demonstrate the potential improvement. A theoretical analysis also provides insight into the structure of problems where sub-sample optimization may be most beneficial. ...
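The core phenomenon is easy to reproduce in a toy setting (our construction, not the paper's example): with many candidate decisions, the full-sample optimum is driven by extreme sample draws. Here every asset's true mean return is zero, yet the in-sample "optimal" value looks strictly positive, while a sub-sample scheme that optimizes on one half and evaluates on the other is honest.

```python
import numpy as np

rng = np.random.default_rng(4)
n_assets, n_obs = 50, 200
returns = rng.normal(0.0, 0.05, size=(n_obs, n_assets))  # true mean is 0 for every asset

# Full-sample "optimization": the best-looking sample mean is biased upward,
# because the max over 50 noisy estimates chases extreme sample characteristics.
in_sample_value = returns.mean(axis=0).max()

# Sub-sample scheme: pick the decision on one half, evaluate it on the other.
halves = np.array_split(rng.permutation(n_obs), 2)
vals = []
for fit, hold in (halves, halves[::-1]):
    pick = np.argmax(returns[fit].mean(axis=0))          # optimize on the sub-sample
    vals.append(returns[hold, pick].mean())              # honest held-out evaluation
sub_sample_value = np.mean(vals)

print(round(float(in_sample_value), 4), round(float(sub_sample_value), 4))
```

The held-out estimate fluctuates around the true value of zero, while the in-sample maximum is systematically optimistic, which is the gap between full-sample and sub-sample estimates that the paper analyzes.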

October 10, 2023 · 2 min · Research Team