
Prompt-Response Semantic Divergence Metrics for Faithfulness Hallucination and Misalignment Detection in Large Language Models

Prompt-Response Semantic Divergence Metrics for Faithfulness Hallucination and Misalignment Detection in Large Language Models ArXiv ID: 2508.10192 “View on arXiv” Authors: Igor Halperin Abstract The proliferation of Large Language Models (LLMs) is challenged by hallucinations, critical failure modes where models generate non-factual, nonsensical, or unfaithful text. This paper introduces Semantic Divergence Metrics (SDM), a novel lightweight framework for detecting Faithfulness Hallucinations, i.e., events of severe deviation of LLM responses from input contexts. We focus on a specific manifestation of these LLM errors, confabulations, defined as responses that are arbitrary and semantically misaligned with the user’s query. Existing methods like Semantic Entropy test for arbitrariness by measuring the diversity of answers to a single, fixed prompt. Our SDM framework improves upon this by being more prompt-aware: we test for a deeper form of arbitrariness by measuring response consistency not only across multiple answers but also across multiple, semantically equivalent paraphrases of the original prompt. Methodologically, our approach uses joint clustering on sentence embeddings to create a shared topic space for prompts and answers. A heatmap of topic co-occurrences between prompts and responses can be viewed as a quantified two-dimensional visualization of the user-machine dialogue. We then compute a suite of information-theoretic metrics to measure the semantic divergence between prompts and responses. Our practical score, $\mathcal{S}_H$, combines the Jensen-Shannon divergence and Wasserstein distance to quantify this divergence, with a high score indicating a Faithfulness hallucination. Furthermore, we identify the KL divergence KL(Answer $||$ Prompt) as a powerful indicator of Semantic Exploration, a key signal for distinguishing different generative behaviors. These metrics are further combined into the Semantic Box, a diagnostic framework for classifying LLM response types, including the dangerous, confident confabulation. ...

August 13, 2025 · 2 min · Research Team
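The abstract describes a concrete pipeline: jointly cluster prompt and answer sentence embeddings into a shared topic space, then compare the resulting topic histograms with Jensen-Shannon divergence and Wasserstein distance. A minimal sketch of that pipeline follows; the clustering method (KMeans), the topic count, and the weight `alpha` combining the two metrics are assumptions, since the abstract does not specify how $\mathcal{S}_H$ weights its components.

```python
# Hypothetical sketch of the SDM divergence computation; not the
# paper's exact method, just the structure the abstract describes.
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import wasserstein_distance
from sklearn.cluster import KMeans

def sdm_score(prompt_embs, answer_embs, n_topics=8, alpha=0.5):
    """prompt_embs, answer_embs: (n, d) sentence-embedding arrays.
    Joint clustering builds a shared topic space; the score combines
    JSD and Wasserstein distance over the two topic histograms."""
    X = np.vstack([prompt_embs, answer_embs])
    labels = KMeans(n_clusters=n_topics, n_init=10).fit_predict(X)
    p_lab = labels[: len(prompt_embs)]
    a_lab = labels[len(prompt_embs):]
    p_hist = np.bincount(p_lab, minlength=n_topics) / len(p_lab)
    a_hist = np.bincount(a_lab, minlength=n_topics) / len(a_lab)
    jsd = jensenshannon(p_hist, a_hist) ** 2  # squared distance = JSD
    topics = np.arange(n_topics)
    wd = wasserstein_distance(topics, topics, p_hist, a_hist)
    return alpha * jsd + (1 - alpha) * wd     # stand-in for S_H
```

A high score flags a response whose topic distribution has drifted far from the prompt's, the abstract's definition of a Faithfulness hallucination.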

Wasserstein Robust Market Making via Entropy Regularization

Wasserstein Robust Market Making via Entropy Regularization ArXiv ID: 2503.04072 “View on arXiv” Authors: Unknown Abstract In this paper, we introduce a robust market making framework based on Wasserstein distance, utilizing a stochastic policy approach enhanced by entropy regularization. We demonstrate that, under mild assumptions, the robust market making problem can be reformulated as a convex optimization problem. Additionally, we outline a methodology for selecting the optimal radius of the Wasserstein ball, further refining our framework’s effectiveness. ...

March 6, 2025 · 1 min · Research Team

Causality Analysis of COVID-19 Induced Crashes in Stock and Commodity Markets: A Topological Perspective

Causality Analysis of COVID-19 Induced Crashes in Stock and Commodity Markets: A Topological Perspective ArXiv ID: 2502.14431 “View on arXiv” Authors: Unknown Abstract The paper presents a comprehensive causality analysis of the US stock and commodity markets during the COVID-19 crash. The dynamics of different sectors are also compared. We use Topological Data Analysis (TDA) on multidimensional time series to identify crashes in stock and commodity markets. The Wasserstein distance ($WD$) shows distinct spikes signaling the crash for both stock and commodity markets. We then compare the persistence diagrams of stock and commodity markets using the $WD$ metric. A significant spike in the $WD$ between stock and commodity markets is observed during the crisis, suggesting significant topological differences between the markets. Similar spikes are observed between the sectors of the US market as well. The observed spikes may be due either to a difference in the magnitude of crashes in the two markets (or sectors), or to a temporal lag between the two markets suggesting information flow. We study the Granger-causality between stock and commodity markets and also between different sectors. The results show a bidirectional Granger-causality between the commodity and stock markets during the crash period, demonstrating the greater interdependence of financial markets during the crash. However, the overall analysis shows that the causal direction is from stock to commodity. A pairwise Granger-causal analysis between US sectors is also conducted. There is a significant increase in the interdependence between the sectors during the crash period. TDA combined with Granger-causality effectively analyzes the interdependence and sensitivity of different markets and sectors. ...

February 20, 2025 · 2 min · Research Team
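Both halves of this pipeline, Wasserstein distances between persistence diagrams and Granger-causality tests, have standard open-source implementations. A minimal sketch using `ripser`, `persim`, and `statsmodels` is below; window length, homology dimension ($H_1$), and the lag count are assumptions, not the paper's settings.

```python
# Hypothetical sketch: WD between persistence diagrams of sliding
# windows, plus a Granger-causality test between two return series.
import numpy as np
from ripser import ripser            # persistent homology
from persim import wasserstein       # WD between persistence diagrams
from statsmodels.tsa.stattools import grangercausalitytests

def wd_series(returns, window=60):
    """returns: (T, n_assets) multidimensional return series.
    Computes WD between H1 diagrams of consecutive windows;
    spikes in this series are read as crash signals."""
    dgms = []
    for t in range(0, len(returns) - window, window):
        point_cloud = returns[t : t + window]
        dgms.append(ripser(point_cloud)["dgms"][1])   # H1 diagram
    return [wasserstein(d1, d2) for d1, d2 in zip(dgms, dgms[1:])]

def granger_p_value(stock, commodity, lags=5):
    """Tests whether the stock series Granger-causes the commodity
    series (second column tested as cause of the first)."""
    data = np.column_stack([commodity, stock])
    res = grangercausalitytests(data, maxlag=lags, verbose=False)
    return res[lags][0]["ssr_ftest"][1]   # p-value at the max lag
```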

A Joint Energy and Differentially-Private Smart Meter Data Market

A Joint Energy and Differentially-Private Smart Meter Data Market ArXiv ID: 2412.07688 “View on arXiv” Authors: Unknown Abstract Given the vital role that smart meter data could play in handling uncertainty in energy markets, data markets have been proposed as a means to enable increased data access. However, most extant literature considers energy markets and data markets separately, which ignores the interdependence between them. In addition, existing data market frameworks rely on a trusted entity to clear the market. This paper proposes a joint energy and data market focusing on the day-ahead retailer energy procurement problem with uncertain demand. The retailer can purchase differentially-private smart meter data from consumers to reduce uncertainty. The problem is modelled as an integrated forecasting and optimisation problem, providing a means of valuing data directly rather than valuing forecasts or forecast accuracy. Value is determined by the Wasserstein distance, enabling privacy to be preserved during the valuation and procurement process. The value of joint energy and data clearing is highlighted through numerical case studies using both synthetic and real smart meter data. ...

December 10, 2024 · 2 min · Research Team
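The abstract pairs two standard ingredients: a differentially-private data release and a Wasserstein-based valuation. The sketch below shows one plausible reading of that combination, using the Laplace mechanism for privacy and the distance from the retailer's prior as the value proxy; the valuation rule, the prior, and all numbers are illustrative assumptions, not the paper's market-clearing mechanism.

```python
# Hypothetical sketch: consumers release Laplace-noised (epsilon-DP)
# readings; the retailer values them by the Wasserstein distance
# between the private sample and its prior demand forecast.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def privatize(readings, epsilon=1.0, sensitivity=1.0):
    """Laplace mechanism: epsilon-DP release of meter readings."""
    noise = rng.laplace(scale=sensitivity / epsilon, size=len(readings))
    return readings + noise

def data_value(prior_samples, private_readings):
    """Larger distance from the prior = more informative data; one
    plausible reading of 'value determined by Wasserstein distance'."""
    return wasserstein_distance(prior_samples, private_readings)

# Usage: value one consumer's noised readings against the prior.
prior = rng.normal(2.0, 0.5, 1000)       # prior demand forecast (kWh)
readings = rng.normal(2.6, 0.4, 200)     # actual consumption
print(data_value(prior, privatize(readings, epsilon=0.5)))
```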

Time-Causal VAE: Robust Financial Time Series Generator

Time-Causal VAE: Robust Financial Time Series Generator ArXiv ID: 2411.02947 “View on arXiv” Authors: Unknown Abstract We build a time-causal variational autoencoder (TC-VAE) for robust generation of financial time series data. Our approach imposes a causality constraint on the encoder and decoder networks, ensuring a causal transport from the real market time series to the generated time series. Specifically, we prove that the TC-VAE loss provides an upper bound on the causal Wasserstein distance between market distributions and generated distributions. Consequently, the TC-VAE loss controls the discrepancy between optimal values of various dynamic stochastic optimization problems under real and generated distributions. To further enhance the model’s ability to approximate the latent representation of the real market distribution, we integrate a RealNVP prior into the TC-VAE framework. Finally, extensive numerical experiments show that TC-VAE achieves promising results on both synthetic and real market data. This is done by comparing real and generated distributions according to various statistical distances, demonstrating the effectiveness of the generated data for downstream financial optimization tasks, and showcasing that the generated data reproduces stylized facts of real financial market data. ...

November 5, 2024 · 2 min · Research Team
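The core architectural idea, a VAE whose encoder and decoder are causal in time, can be sketched with left-padded convolutions. The PyTorch snippet below is a minimal sketch under that assumption; the paper's actual model also adds a RealNVP prior and proves the causal Wasserstein bound, both of which this sketch omits.

```python
# Hypothetical sketch of a causality-constrained VAE for time series.
import torch
import torch.nn as nn

class CausalConv1d(nn.Conv1d):
    """Conv1d with left padding only, so the output at time t
    depends only on inputs up to time t (the causality constraint)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation)
        self.left_pad = (kernel_size - 1) * dilation

    def forward(self, x):
        x = nn.functional.pad(x, (self.left_pad, 0))  # pad the past only
        return super().forward(x)

class TCVAE(nn.Module):
    def __init__(self, channels=1, hidden=32, latent=8):
        super().__init__()
        self.latent = latent
        self.enc = nn.Sequential(
            CausalConv1d(channels, hidden, 3), nn.ReLU(),
            CausalConv1d(hidden, 2 * latent, 3),  # mean and log-variance
        )
        self.dec = nn.Sequential(
            CausalConv1d(latent, hidden, 3), nn.ReLU(),
            CausalConv1d(hidden, channels, 3),
        )

    def forward(self, x):                     # x: (batch, channels, T)
        h = self.enc(x)
        mu, logvar = h[:, :self.latent], h[:, self.latent:]
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    """Standard VAE loss: reconstruction + KL; the paper shows its
    (richer) loss upper-bounds the causal Wasserstein distance."""
    recon = ((x - x_hat) ** 2).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()
    return recon + kl
```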

Identifying Extreme Events in the Stock Market: A Topological Data Analysis

Identifying Extreme Events in the Stock Market: A Topological Data Analysis ArXiv ID: 2405.16052 “View on arXiv” Authors: Unknown Abstract This paper employs Topological Data Analysis (TDA) to detect extreme events (EEs) in the stock market at a continental level. Previous approaches, which analyzed stock indices separately, could not detect EEs for multiple time series in one go. TDA provides a robust framework for such analysis and identifies the EEs during the crashes for different indices. The TDA analysis shows that the $L^1$ and $L^2$ norms and the Wasserstein distance ($W_D$) of the world's leading indices rise abruptly during crashes, surpassing a threshold of $\mu + 4\sigma$, where $\mu$ and $\sigma$ are the mean and the standard deviation of the norm or $W_D$, respectively. Our study identified the stock index crashes of the 2008 financial crisis and the COVID-19 pandemic across continents as EEs. Given that different sectors in an index behave differently, a sector-wise analysis was conducted for the Indian stock market during the COVID-19 pandemic. The sector-wise results show that after the occurrence of an EE, the banking sector exhibits strong crashes surpassing $\mu + 2\sigma$ for an extended period, while no significant spikes were noted for the pharmaceutical sector. Hence, TDA also proves successful in identifying the duration of shocks after the occurrence of EEs. This also indicates that the banking sector continued to face stress and remained volatile even after the crash. This study demonstrates the applicability of TDA as a powerful analytical tool to study EEs in various fields. ...

May 25, 2024 · 3 min · Research Team
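The detection rule in the abstract is explicit: flag windows where the norms or $W_D$ exceed $\mu + 4\sigma$. Below is a minimal sketch of that rule; using the sum of $H_1$ lifetimes as an $L^1$-norm proxy, the window length, and the block-wise (non-overlapping) windowing are assumptions on top of the abstract.

```python
# Hypothetical sketch of the EE detector: sliding-window persistence
# diagrams, then flag windows where the L1 proxy or the WD between
# consecutive diagrams exceeds mu + 4*sigma.
import numpy as np
from ripser import ripser
from persim import wasserstein

def detect_extreme_events(returns, window=50, k=4.0):
    """returns: (T, n_indices) multivariate return series.
    Returns window indices flagged as EEs by each statistic."""
    dgms, l1 = [], []
    for t in range(0, len(returns) - window, window):
        dgm = ripser(returns[t : t + window])["dgms"][1]   # H1 diagram
        dgms.append(dgm)
        lifetimes = dgm[:, 1] - dgm[:, 0]
        l1.append(lifetimes.sum())               # L1-norm proxy (assumed)
    wd = np.array([wasserstein(a, b) for a, b in zip(dgms, dgms[1:])])
    l1 = np.array(l1)
    flag = lambda x: np.where(x > x.mean() + k * x.std())[0]
    return flag(l1), flag(wd)
```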

Neural option pricing for rough Bergomi model

Neural option pricing for rough Bergomi model ArXiv ID: 2402.02714 “View on arXiv” Authors: Unknown Abstract The rough Bergomi (rBergomi) model accurately describes historical and implied volatilities and has gained much attention in the past few years. However, the model contains many hidden unknown parameters, or even unknown functions. In this work, we investigate the potential of learning the forward variance curve in the rBergomi model using a neural SDE. To construct an efficient solver for the neural SDE, we propose a novel numerical scheme for simulating the volatility process using the modified summation of exponentials. Using the Wasserstein 1-distance to define the loss function, we show that the learned forward variance curve is capable of calibrating the price process of the underlying asset and the price of the European-style options simultaneously. Several numerical tests are provided to demonstrate its performance. ...

February 5, 2024 · 2 min · Research Team
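For one-dimensional samples of equal size, the Wasserstein-1 distance reduces to the mean absolute difference of sorted samples, which stays differentiable and can serve directly as a training loss. The sketch below shows that loss; the surrounding calibration loop (`neural_sde_sample`, `market_sample`) is a hypothetical placeholder, not the paper's solver.

```python
# Minimal sketch of a Wasserstein-1 training loss in PyTorch:
# for equal-size 1D samples, W1 = mean |x_(i) - y_(i)| over the
# order statistics, which is differentiable through torch.sort.
import torch

def w1_loss(generated, target):
    """generated, target: 1D tensors of equal length, e.g. terminal
    asset values or option payoffs from model vs. market."""
    g, _ = torch.sort(generated)
    t, _ = torch.sort(target)
    return (g - t).abs().mean()

# Hypothetical usage inside a calibration loop:
#   loss = w1_loss(neural_sde_sample(batch), market_sample)
#   loss.backward(); optimizer.step()
```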

Automated regime detection in multidimensional time series data using sliced Wasserstein k-means clustering

Automated regime detection in multidimensional time series data using sliced Wasserstein k-means clustering ArXiv ID: 2310.01285 “View on arXiv” Authors: Unknown Abstract Recent work has proposed Wasserstein k-means (Wk-means) clustering as a powerful method to identify regimes in time series data, and one-dimensional asset returns in particular. In this paper, we begin by studying in detail the behaviour of the Wasserstein k-means clustering algorithm applied to synthetic one-dimensional time series data. We study the dynamics of the algorithm and investigate how varying different hyperparameters impacts the performance of the clustering algorithm for different random initialisations. We compute simple metrics that we find are useful in identifying high-quality clusterings. Then, we extend the technique of Wasserstein k-means clustering to multidimensional time series data by approximating the multidimensional Wasserstein distance as a sliced Wasserstein distance, resulting in a method we call ‘sliced Wasserstein k-means (sWk-means) clustering’. We apply the sWk-means clustering method to the problem of automated regime detection in multidimensional time series data, using synthetic data to demonstrate the validity of the approach. Finally, we show that the sWk-means method is effective in identifying distinct market regimes in real multidimensional financial time series, using publicly available foreign exchange spot rate data as a case study. We conclude with remarks about some limitations of our approach and potential complementary or alternative approaches. ...

October 2, 2023 · 2 min · Research Team
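The key approximation in the abstract, replacing the multidimensional Wasserstein distance with its sliced version, is simple to state: project both samples onto random directions and average the resulting one-dimensional Wasserstein distances. A minimal from-scratch sketch follows; the projection count and the use of $W_1$ (rather than $W_2$) on each slice are assumptions.

```python
# Hypothetical sketch of the sliced Wasserstein distance used by
# sWk-means: random 1D projections, then order-statistics W1 per slice.
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """X, Y: (n, d) samples of equal size from two distributions."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)           # random unit direction
        x_proj = np.sort(X @ theta)
        y_proj = np.sort(Y @ theta)
        total += np.abs(x_proj - y_proj).mean()  # 1D W1 via order stats
    return total / n_projections
```

In sWk-means this distance replaces the Euclidean metric when assigning windows of multidimensional returns to cluster representatives, which is what makes the regime detection distribution-aware.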