
Optimal Capital Deployment Under Stochastic Deal Arrivals: A Continuous-Time ADP Approach

Optimal Capital Deployment Under Stochastic Deal Arrivals: A Continuous-Time ADP Approach ArXiv ID: 2508.10300 “View on arXiv” Authors: Kunal Menda, Raphael S Benarrosh Abstract Suppose you are a fund manager with $100 million to deploy and two years to invest it. A deal comes across your desk that looks appealing but costs $50 million – half of your available capital. Should you take it, or wait for something better? The decision hinges on the trade-off between current opportunities and uncertain future arrivals. This work formulates the problem of capital deployment under stochastic deal arrivals as a continuous-time Markov decision process (CTMDP) and solves it numerically via an approximate dynamic programming (ADP) approach. We model deal economics using correlated lognormal distributions for multiples on invested capital (MOIC) and deal sizes, and model arrivals as a nonhomogeneous Poisson process (NHPP). Our approach uses quasi-Monte Carlo (QMC) sampling to efficiently approximate the continuous-time Bellman equation for the value function over a discretized capital grid. We present an interpretable acceptance policy, illustrating how selectivity evolves over time and as capital is consumed. We show in simulation that this policy outperforms a baseline that accepts any affordable deal exceeding a fixed hurdle rate. ...
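
The backward Bellman recursion over a discretized capital grid lends itself to a compact sketch. The snippet below is a hypothetical, simplified discrete-time approximation of the approach the abstract describes: plain Monte Carlo is used in place of QMC, and the arrival intensity, lognormal parameters, and grid are illustrative choices, not the authors' calibration.

```python
# Simplified sketch: backward induction for V(t, capital) with Poisson deal arrivals.
# Deal (size, MOIC) pairs are drawn from a correlated lognormal; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, dt = 2.0, 1.0 / 52.0                          # two-year horizon, weekly steps
capital_grid = np.linspace(0.0, 100.0, 101)      # available capital ($mm), discretized
n_steps = int(T / dt)

def lam(t):                                      # nonhomogeneous arrival intensity (deals/year)
    return 20.0 * (1.0 + 0.5 * np.sin(2 * np.pi * t))

def sample_deals(n):
    # correlated lognormal draws of (size, MOIC); the correlation is an assumed value
    mean, cov = [np.log(20.0), np.log(1.8)], [[0.30, 0.05], [0.05, 0.10]]
    z = rng.multivariate_normal(mean, cov, size=n)
    return np.exp(z[:, 0]), np.exp(z[:, 1])      # sizes ($mm), MOICs

V = np.zeros((n_steps + 1, capital_grid.size))   # terminal value: undeployed capital earns no profit
for k in range(n_steps - 1, -1, -1):
    t = k * dt
    sizes, moics = sample_deals(512)             # plain MC here; the paper uses QMC sampling
    for i, c in enumerate(capital_grid):
        afford = sizes <= c
        # accepting: immediate expected profit plus continuation value with less capital
        accept = (moics - 1.0) * sizes + np.interp(c - sizes, capital_grid, V[k + 1])
        best = np.where(afford, np.maximum(accept, V[k + 1, i]), V[k + 1, i])
        V[k, i] = (1 - lam(t) * dt) * V[k + 1, i] + lam(t) * dt * best.mean()

print("V(0, $100mm) ≈", round(V[0, -1], 2))
```

The acceptance policy falls out of the same comparison: a deal is taken when its immediate profit plus the continuation value at the reduced capital level exceeds the continuation value of waiting.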

August 14, 2025 · 2 min · Research Team

CATNet: A geometric deep learning approach for CAT bond spread prediction in the primary market

CATNet: A geometric deep learning approach for CAT bond spread prediction in the primary market ArXiv ID: 2508.10208 “View on arXiv” Authors: Dixon Domfeh, Saeid Safarveisi Abstract Traditional models for pricing catastrophe (CAT) bonds struggle to capture the complex, relational data inherent in these instruments. This paper introduces CATNet, a novel framework that applies a geometric deep learning architecture, the Relational Graph Convolutional Network (R-GCN), to model the CAT bond primary market as a graph, leveraging its underlying network structure for spread prediction. Our analysis reveals that the CAT bond market exhibits the characteristics of a scale-free network, a structure dominated by a few highly connected and influential hubs. CATNet demonstrates high predictive performance, significantly outperforming a strong Random Forest benchmark. The inclusion of topological centrality measures as features provides a further, significant boost in accuracy. Interpretability analysis confirms that these network features are not mere statistical artifacts; they are quantitative proxies for long-held industry intuition regarding issuer reputation, underwriter influence, and peril concentration. This research provides evidence that network connectivity is a key determinant of price, offering a new paradigm for risk assessment and proving that graph-based models can deliver both state-of-the-art accuracy and deeper, quantifiable market insights. ...
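
For readers unfamiliar with R-GCNs, the sketch below shows a minimal relational graph regressor in PyTorch Geometric in the spirit of CATNet. The relation types, feature dimensions, and two-layer architecture are placeholders for illustration only, not the authors' model; node features would in practice include the topological centrality measures the abstract mentions.

```python
# Hypothetical R-GCN spread regressor; graph construction and hyperparameters are illustrative.
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv

class SpreadRGCN(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64, num_relations: int = 3):
        super().__init__()
        # e.g. relation 0: bond-issuer, 1: bond-underwriter, 2: bond-peril (assumed edge types)
        self.conv1 = RGCNConv(in_dim, hidden, num_relations)
        self.conv2 = RGCNConv(hidden, hidden, num_relations)
        self.head = torch.nn.Linear(hidden, 1)          # predicted spread per node

    def forward(self, x, edge_index, edge_type):
        h = F.relu(self.conv1(x, edge_index, edge_type))
        h = F.relu(self.conv2(h, edge_index, edge_type))
        return self.head(h).squeeze(-1)

# toy usage on a random graph
x = torch.randn(10, 8)                                   # 10 nodes, 8 features each
edge_index = torch.randint(0, 10, (2, 30))               # 30 directed edges
edge_type = torch.randint(0, 3, (30,))                   # one of 3 relation types per edge
model = SpreadRGCN(in_dim=8)
pred = model(x, edge_index, edge_type)                   # spread prediction for every node
loss = F.mse_loss(pred, torch.randn(10))                 # regression loss on labelled bond nodes
```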

August 13, 2025 · 2 min · Research Team

Language of Persuasion and Misrepresentation in Business Communication: A Textual Detection Approach

Language of Persuasion and Misrepresentation in Business Communication: A Textual Detection Approach ArXiv ID: 2508.09935 “View on arXiv” Authors: Sayem Hossen, Monalisa Moon Joti, Md. Golam Rashed Abstract The digitisation of business communication has reorganised persuasive discourse, enabling greater transparency but also more sophisticated deception. This inquiry synthesises classical rhetoric and communication psychology with linguistic theory and empirical studies in financial reporting, sustainability discourse, and digital marketing to explain how deceptive language can be systematically detected using a persuasive lexicon. In controlled settings, detection accuracies above 99% were achieved using computational textual analysis and personalised transformer models. Reproducing this performance in multilingual settings remains difficult, however, largely because sufficient data are hard to obtain and few multilingual text-processing infrastructures are in place. This evidence points to a widening gap between theoretical representations of communication and their empirical approximations, underscoring the need for robust automatic text-identification systems as AI-based discourse becomes increasingly human-like. ...
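
As a purely illustrative sketch (not the paper's pipeline), the snippet below shows one common way to combine a persuasive-lexicon signal with standard text features in a linear classifier. The lexicon, the two toy documents, and their labels are invented; the reported >99% accuracies refer to the paper's controlled settings, not this toy.

```python
# Toy baseline: TF-IDF features augmented with persuasive-lexicon counts, then logistic regression.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

PERSUASIVE_LEXICON = {"guaranteed", "exclusive", "proven", "unprecedented", "risk-free"}  # assumed lexicon

docs = [
    "Our proven, risk-free strategy delivers guaranteed returns.",   # toy "deceptive" example
    "Quarterly revenue grew 4% on higher service volumes.",          # toy "truthful" example
]
labels = np.array([1, 0])

def lexicon_counts(texts):
    # count persuasive-lexicon hits per document as one extra feature column
    return csr_matrix([[sum(w in PERSUASIVE_LEXICON
                            for w in t.lower().replace(",", " ").split())]
                       for t in texts], dtype=float)

vec = TfidfVectorizer()
X = hstack([vec.fit_transform(docs), lexicon_counts(docs)])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(hstack([vec.transform(docs), lexicon_counts(docs)])))
```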

August 13, 2025 · 2 min · Research Team

Marketron Through the Looking Glass: From Equity Dynamics to Option Pricing in Incomplete Markets

Marketron Through the Looking Glass: From Equity Dynamics to Option Pricing in Incomplete Markets ArXiv ID: 2508.09863 “View on arXiv” Authors: Igor Halperin, Andrey Itkin Abstract The Marketron model, introduced by [Halperin, Itkin, 2025], describes price formation in inelastic markets as the nonlinear diffusion of a quasiparticle (the marketron) in a multidimensional space comprising the log-price $x$, a memory variable $y$ encoding past money flows, and unobservable return predictors $z$. While the original work calibrated the model to S&P 500 time series data, this paper extends the framework to option markets - a fundamentally distinct challenge due to market incompleteness stemming from non-tradable state variables. We develop a utility-based pricing approach that constructs a risk-adjusted measure via the dual solution of an optimal investment problem. The resulting Hamilton-Jacobi-Bellman (HJB) equation, though computationally formidable, is solved using a novel methodology enabling efficient calibration even on standard laptop hardware. We then ask a further question: whether the Marketron model, calibrated to market option prices, can simultaneously reproduce the statistical properties of the underlying asset’s log-returns. We discuss our results in view of the long-standing challenge in quantitative finance of developing a unified framework capable of jointly capturing equity returns, option smile dynamics, and potentially volatility index behavior. ...
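
To make the utility-based pricing idea concrete, here is a deliberately crude Monte Carlo toy: exponential-utility indifference (certainty-equivalent) bounds for a call, with no hedging and none of the Marketron state variables ($y$, $z$). It is not the paper's HJB-based pricer; the dynamics and every parameter are made up purely to show why buyer and seller prices differ in an incomplete market.

```python
# Toy certainty-equivalent pricing under exponential utility (no hedging, GBM dynamics).
import numpy as np

rng = np.random.default_rng(1)
S0, K, T, gamma = 100.0, 105.0, 0.5, 0.01        # spot, strike, maturity, risk aversion (illustrative)
mu, sigma, n = 0.05, 0.25, 200_000

ST = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * rng.standard_normal(n))
payoff = np.maximum(ST - K, 0.0)

# seller's (ask) indifference price: certainty equivalent of the short-option liability
ask = np.log(np.mean(np.exp(gamma * payoff))) / gamma
# buyer's (bid) indifference price
bid = -np.log(np.mean(np.exp(-gamma * payoff))) / gamma
print(f"expected payoff {payoff.mean():.2f}, bid {bid:.2f}, ask {ask:.2f}")
```

The spread between bid and ask shrinks as the risk aversion gamma goes to zero, which is the usual sanity check for utility-based prices.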

August 13, 2025 · 2 min · Research Team

Mitigating Distribution Shift in Stock Price Data via Return-Volatility Normalization for Accurate Prediction

Mitigating Distribution Shift in Stock Price Data via Return-Volatility Normalization for Accurate Prediction ArXiv ID: 2508.20108 “View on arXiv” Authors: Hyunwoo Lee, Jihyeong Jeon, Jaemin Hong, U Kang Abstract How can we address distribution shifts in stock price data to improve stock price prediction accuracy? Stock price prediction has attracted attention from both academia and industry, driven by its potential to uncover complex market patterns and enhance decision-making. However, existing methods often fail to handle distribution shifts effectively, focusing on scaling or representation adaptation without fully addressing distributional discrepancies and shape misalignments between training and test data. We propose ReVol (Return-Volatility Normalization for Mitigating Distribution Shift in Stock Price Data), a robust method for stock price prediction that explicitly addresses the distribution shift problem. ReVol leverages three key strategies to mitigate these shifts: (1) normalizing price features to remove sample-specific characteristics, including return, volatility, and price scale, (2) employing an attention-based module to estimate these characteristics accurately, thereby reducing the influence of market anomalies, and (3) reintegrating the sample characteristics into the predictive process, restoring the traits lost during normalization. Additionally, ReVol combines geometric Brownian motion for long-term trend modeling with neural networks for short-term pattern recognition, unifying their complementary strengths. Extensive experiments on real-world datasets demonstrate that ReVol enhances the performance of state-of-the-art backbone models in most cases, achieving an average improvement of more than 0.03 in IC and over 0.7 in SR across various settings. ...
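
The normalize-predict-reintegrate loop is easy to sketch. Below is only a schematic of that idea: the window's return, volatility, and price scale are removed before prediction and restored afterwards. ReVol estimates these characteristics with an attention module; here they are plain sample statistics, and the "model prediction" is a stand-in.

```python
# Schematic of return/volatility normalization and reintegration (not the ReVol architecture).
import numpy as np

def normalize_window(prices):
    r = np.diff(np.log(prices))                   # log returns within the window
    mu, sigma = r.mean(), r.std() + 1e-8
    z = (r - mu) / sigma                          # scale-free residual returns
    return z, (prices[-1], mu, sigma)             # keep the traits needed to restore the scale

def reintegrate(z_pred, stats, horizon=1):
    last_price, mu, sigma = stats
    r_pred = z_pred * sigma + mu                  # restore the return/volatility traits
    return last_price * np.exp(np.cumsum(np.full(horizon, r_pred)))

prices = np.array([100.0, 101.2, 100.7, 102.3, 103.0, 102.1])
z, stats = normalize_window(prices)
z_next = z.mean()                                 # stand-in for a neural model's prediction
print(reintegrate(z_next, stats))                 # forecast mapped back to the price scale
```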

August 13, 2025 · 2 min · Research Team

Optimal Control of Reserve Asset Portfolios for Pegged Digital Currencies

Optimal Control of Reserve Asset Portfolios for Pegged Digital Currencies ArXiv ID: 2508.09429 “View on arXiv” Authors: Alexander Hammerl, Georg Beyschlag Abstract Stablecoins promise par convertibility, yet issuers must balance immediate liquidity against yield on reserves to keep the peg credible. We study this treasury problem as a continuous-time control task with two instruments: reallocating reserves between cash and short-duration government bills, and setting a spread fee for either minting or burning the coin. Mint and redemption flows follow mutually exciting processes that reproduce clustered order flow; peg deviations arise when redemptions exceed liquid reserves within settlement windows. We develop a stochastic model predictive control framework that incorporates moment closure for event intensities. Using Pontryagin’s Maximum Principle, we demonstrate that the optimal control exhibits a bang-off-bang structure: each asset type is purchased at maximum capacity when the utility difference exceeds the corresponding difference in shadow costs. Introducing settlement windows leads to a sampled-data implementation with a simple threshold (soft-thresholding) structure for rebalancing. We also establish a monotone stress-response property: as expected outflows intensify or windows lengthen, the optimal policy shifts predictably toward cash. In simulations covering various stress test scenarios, the controller preserves most bill carry in calm markets, builds cash quickly when stress emerges, and avoids unnecessary rotations under transitory signals. The proposed policy is implementation-ready and aligns naturally with operational cut-offs. Our results translate empirical flow risk into auditable treasury rules that improve peg quality without sacrificing avoidable carry. ...
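
A minimal sketch of the sampled-data threshold rule described in the abstract is given below, under assumed numbers: at each settlement window the controller targets a cash buffer over expected net redemptions and soft-thresholds small adjustments so that transitory signals do not trigger rotations. It is illustrative only, not the paper's derived optimal policy.

```python
# Threshold (soft-thresholding) rebalancing at a settlement window; all parameters are invented.
import numpy as np

def rebalance(cash, bills, expected_outflow, buffer=1.25, band=0.5):
    """Return (new_cash, trade): trade > 0 buys bills with excess cash, trade < 0 sells bills for cash."""
    cash_target = buffer * max(expected_outflow, 0.0)        # stress-responsive cash floor
    gap = cash_target - cash
    # soft-threshold: ignore adjustments smaller than the no-trade band
    move_to_cash = np.sign(gap) * max(abs(gap) - band, 0.0)
    move_to_cash = np.clip(move_to_cash, -cash, bills)       # cannot sell more bills (or cash) than held
    return cash + move_to_cash, -move_to_cash

# calm window vs. stress window (units: $mm)
print(rebalance(cash=10.0, bills=90.0, expected_outflow=6.0))    # keeps most bill carry
print(rebalance(cash=10.0, bills=90.0, expected_outflow=40.0))   # builds cash quickly under stress
```

The monotone stress response is visible directly: raising `expected_outflow` (or lengthening the window that feeds it) can only push the target toward cash.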

August 13, 2025 · 2 min · Research Team

Prompt-Response Semantic Divergence Metrics for Faithfulness Hallucination and Misalignment Detection in Large Language Models

Prompt-Response Semantic Divergence Metrics for Faithfulness Hallucination and Misalignment Detection in Large Language Models ArXiv ID: 2508.10192 “View on arXiv” Authors: Igor Halperin Abstract The proliferation of Large Language Models (LLMs) is challenged by hallucinations, critical failure modes where models generate non-factual, nonsensical or unfaithful text. This paper introduces Semantic Divergence Metrics (SDM), a novel lightweight framework for detecting Faithfulness Hallucinations – events of severe deviations of LLM responses from input contexts. We focus on a specific implementation of these LLM errors: confabulations, defined as responses that are arbitrary and semantically misaligned with the user’s query. Existing methods like Semantic Entropy test for arbitrariness by measuring the diversity of answers to a single, fixed prompt. Our SDM framework improves upon this by being more prompt-aware: we test for a deeper form of arbitrariness by measuring response consistency not only across multiple answers but also across multiple, semantically-equivalent paraphrases of the original prompt. Methodologically, our approach uses joint clustering on sentence embeddings to create a shared topic space for prompts and answers. A heatmap of topic co-occurrences between prompts and responses can be viewed as a quantified two-dimensional visualization of the user-machine dialogue. We then compute a suite of information-theoretic metrics to measure the semantic divergence between prompts and responses. Our practical score, $\mathcal{S}_H$, combines the Jensen-Shannon divergence and Wasserstein distance to quantify this divergence, with a high score indicating a Faithfulness hallucination. Furthermore, we identify the KL divergence KL(Answer $||$ Prompt) as a powerful indicator of Semantic Exploration, a key signal for distinguishing different generative behaviors. These metrics are further combined into the Semantic Box, a diagnostic framework for classifying LLM response types, including the dangerous, confident confabulation. ...
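
The divergence computations named in the abstract are standard and can be sketched directly. The snippet below operates on two hand-made topic-frequency vectors standing in for a prompt and a response mapped into a shared topic space (obtained in the paper via joint clustering of sentence embeddings); the combination weights used to form an $\mathcal{S}_H$-style score are illustrative, not the paper's.

```python
# Toy prompt/response topic distributions and the divergence metrics discussed in the abstract.
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import entropy, wasserstein_distance

topics = np.arange(5)                                  # shared topic space (5 joint clusters, assumed)
p_prompt = np.array([0.40, 0.30, 0.20, 0.10, 0.00]) + 1e-9
p_answer = np.array([0.05, 0.10, 0.15, 0.30, 0.40]) + 1e-9
p_prompt, p_answer = p_prompt / p_prompt.sum(), p_answer / p_answer.sum()

js = jensenshannon(p_prompt, p_answer) ** 2            # JS divergence (scipy returns its square root)
w1 = wasserstein_distance(topics, topics, p_answer, p_prompt)   # Wasserstein-1 on the topic axis
kl_answer_prompt = entropy(p_answer, p_prompt)         # KL(Answer || Prompt), the "Semantic Exploration" signal

score_H = 0.5 * js + 0.5 * w1                          # illustrative combination, not the paper's weights
print(f"JS={js:.3f}  W1={w1:.3f}  KL(A||P)={kl_answer_prompt:.3f}  S_H~{score_H:.3f}")
```

A large divergence between the answer's and prompt's topic masses is what flags the response as semantically misaligned with the query.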

August 13, 2025 · 2 min · Research Team

A Stream Pipeline Framework for Digital Payment Programming based on Smart Contracts

A Stream Pipeline Framework for Digital Payment Programming based on Smart Contracts ArXiv ID: 2508.21075 “View on arXiv” Authors: Zijia Meng, Victor Feng Abstract Digital payments play a pivotal role in the burgeoning digital economy. Moving forward, enhancing digital payment systems requires programmability, beyond mere efficiency and convenience, to meet evolving needs and complexities. Smart contract platforms such as Central Bank Digital Currency (CBDC) networks and blockchains support programmable digital payments. However, the prevailing paradigm of programming payment logic involves coding smart contracts in programming languages, leading to high costs and significant security challenges. This paper presents a novel and versatile method for payment programming on DLTs: digital currencies are transformed into token streams, and smart contracts are pipelined to authorize, aggregate, lock, direct, and dispatch these streams efficiently from source to target accounts. Using a small set of configurable templates, a handful of specialized smart contracts can be generated and, through configuration and composition, support most payment logics. This approach can substantially reduce the cost of payment programming and enhance security, self-enforcement, adaptability, and controllability, and thus holds the potential to become an essential component of the digital economy's infrastructure. ...
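
Purely as a conceptual sketch (plain Python, not smart-contract code), the snippet below mimics the pipeline idea: each stage stands in for a configurable contract template, and a payment logic is expressed by configuring and composing stages over a token stream. Stage names follow the abstract; everything else is invented for illustration.

```python
# Conceptual pipeline of stages over a token stream: authorize -> aggregate -> lock -> direct -> dispatch.
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Token:
    amount: float
    source: str
    target: str
    locked: bool = False

Stage = Callable[[List[Token]], List[Token]]

def authorize(limit: float) -> Stage:
    # drop transfers above a configured limit (stand-in for an authorization template)
    return lambda ts: [t for t in ts if t.amount <= limit]

def aggregate(ts: List[Token]) -> List[Token]:
    by_pair = {}
    for t in ts:                                            # merge tokens per (source, target) pair
        key = (t.source, t.target)
        by_pair[key] = by_pair.get(key, 0.0) + t.amount
    return [Token(a, s, d) for (s, d), a in by_pair.items()]

def lock(ts: List[Token]) -> List[Token]:
    return [Token(t.amount, t.source, t.target, locked=True) for t in ts]

def direct(routes: dict) -> Stage:
    # reroute targets via a configured routing table
    return lambda ts: [Token(t.amount, t.source, routes.get(t.target, t.target), t.locked) for t in ts]

def dispatch(ts: List[Token]) -> List[Token]:
    for t in ts:
        print(f"settle {t.amount:.2f} from {t.source} to {t.target}")
    return ts

def pipeline(stages: Iterable[Stage], stream: List[Token]) -> List[Token]:
    for stage in stages:                                    # compose the configured templates
        stream = stage(stream)
    return stream

stream = [Token(30.0, "A", "B"), Token(20.0, "A", "B"), Token(500.0, "C", "B")]
pipeline([authorize(limit=100.0), aggregate, lock, direct({"B": "B-settlement"}), dispatch], stream)
```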

August 12, 2025 · 2 min · Research Team

Artificially Intelligent, Naturally Inefficient? Service Quality Investments and the Efficiency Trap in Australian Banking

Artificially Intelligent, Naturally Inefficient? Service Quality Investments and the Efficiency Trap in Australian Banking ArXiv ID: ssrn-5379457 “View on arXiv” Authors: Unknown Abstract This paper questions whether the current surge in artificial intelligence (AI) investment within the Australian banking sector will achieve the efficiency gains ... Keywords: Artificial Intelligence, Banking Efficiency, AI Investment, Digital Transformation, Equities. Complexity vs Empirical Score: Math Complexity 1.0/10; Empirical Rigor 2.0/10; Quadrant: Philosophers. Why: The paper focuses on economic theory and qualitative assessment of AI investments in banking, with no advanced mathematics or quantitative modeling presented. Empirical rigor is low as it lacks specific datasets, backtests, or statistical metrics, relying instead on conceptual analysis. [Methodology flowchart: research question (will AI investments in Australian banks achieve expected efficiency gains?); data: ASX-listed banks, 2015-2023; computational analysis: DEA and regression models; key findings: diminishing returns on AI investment, an identified efficiency trap, and a quality-service trade-off offsetting automation gains.]

August 12, 2025 · 1 min · Research Team

Deep Reinforcement Learning for Optimal Asset Allocation Using DDPG with TiDE

Deep Reinforcement Learning for Optimal Asset Allocation Using DDPG with TiDE ArXiv ID: 2508.20103 “View on arXiv” Authors: Rongwei Liu, Jin Zheng, John Cartlidge Abstract The optimal asset allocation between risky and risk-free assets is a persistent challenge due to the inherent volatility in financial markets. Conventional methods rely on strict distributional assumptions or non-additive reward ratios, which limit their robustness and applicability to investment goals. To overcome these constraints, this study formulates the optimal two-asset allocation problem as a sequential decision-making task within a Markov Decision Process (MDP). This framework enables the application of reinforcement learning (RL) mechanisms to develop dynamic policies based on simulated financial scenarios, regardless of prerequisites. We use the Kelly criterion to balance immediate reward signals against long-term investment objectives, and we take the novel step of integrating the Time-series Dense Encoder (TiDE) into the Deep Deterministic Policy Gradient (DDPG) RL framework for continuous decision-making. We compare DDPG-TiDE with a simple discrete-action Q-learning RL framework and a passive buy-and-hold investment strategy. Empirical results show that DDPG-TiDE outperforms Q-learning and generates higher risk-adjusted returns than buy-and-hold. These findings suggest that tackling the optimal asset allocation problem by integrating TiDE within a DDPG reinforcement learning framework is a fruitful avenue for further exploration. ...
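
A minimal sketch of the Kelly-style reward shaping for the two-asset MDP is shown below, assuming the action is the risky-asset fraction and the per-step reward is log wealth growth. The DDPG actor-critic and TiDE encoder are omitted (the constant action is a stand-in for the actor's output), and all numbers are illustrative.

```python
# Kelly / log-growth per-step reward in a toy two-asset environment (not the paper's full agent).
import numpy as np

def step(wealth, risky_fraction, risky_return, risk_free_rate=0.0001):
    # portfolio growth over one period for a fraction `risky_fraction` in [0, 1] held in the risky asset
    growth = 1.0 + risky_fraction * risky_return + (1.0 - risky_fraction) * risk_free_rate
    reward = np.log(max(growth, 1e-12))           # Kelly / log-utility reward signal
    return wealth * growth, reward

rng = np.random.default_rng(0)
wealth, total_reward = 1.0, 0.0
for _ in range(252):                              # one simulated trading year
    r = rng.normal(0.0004, 0.01)                  # toy daily risky return
    action = 0.6                                  # stand-in for the DDPG-TiDE actor's allocation
    wealth, reward = step(wealth, action, r)
    total_reward += reward

print(f"terminal wealth {wealth:.3f}, cumulative log growth {total_reward:.3f}")
```

Maximizing the sum of these per-step log-growth rewards aligns the episodic return with long-run wealth growth, which is why the Kelly criterion fits naturally as an additive RL reward.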

August 12, 2025 · 2 min · Research Team