
The Theory of Intrinsic Time: A Primer

ArXiv ID: 2406.07354 · Authors: Unknown

Abstract: The concept of time mostly plays a subordinate role in finance and economics. The assumption is that time flows continuously and that time-series data should be analyzed at regular, equidistant intervals. Nonetheless, the concept of an event-based measure of time was first introduced nearly 60 years ago. This paper expands on that theme by discussing the paradigm of intrinsic time: its origins, history, and modern applications. Departing from traditional, continuous measures of time, intrinsic time proposes an event-based, algorithmic framework that captures the dynamic and fluctuating nature of real-world phenomena more accurately. Unsuspected implications arise for complex systems in general and for financial markets in particular. For instance, novel structures and regularities are revealed that are otherwise obscured by any analysis using equidistant time intervals. Of particular interest is the emergence of a multiplicity of scaling laws, a hallmark signature of an underlying organizational principle in complex systems. Moreover, a central insight from this paradigm is the realization that universal time does not exist; instead, time is observer-dependent, shaped by the intrinsic activity unfolding within complex systems. This research opens up new avenues for economic modeling and forecasting, paving the way for a deeper understanding of the invisible forces that guide the evolution and emergence of market dynamics and financial systems. An exciting and rich landscape of possibilities emerges within the paradigm of intrinsic time. ...
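The abstract does not spell out how events are defined; in the intrinsic-time literature, intrinsic time is commonly operationalized via directional-change events: a new "tick" of intrinsic time occurs whenever the price reverses by a fixed relative threshold from its last extremum. A minimal sketch (the threshold value and this pure-Python formulation are illustrative assumptions, not the paper's specification):

```python
def directional_change_events(prices, delta):
    """Event-based (intrinsic-time) sampling: record an event each time the
    price reverses by relative threshold `delta` from its running extremum."""
    events = []                      # list of (index, direction) pairs
    mode, ext = "up", prices[0]      # assumed initial mode; ext = running extremum
    for i, p in enumerate(prices[1:], start=1):
        if mode == "up":
            if p > ext:
                ext = p                              # new high, trend continues
            elif p <= ext * (1 - delta):
                events.append((i, "down"))           # downward directional change
                mode, ext = "down", p
        else:
            if p < ext:
                ext = p                              # new low, trend continues
            elif p >= ext * (1 + delta):
                events.append((i, "up"))             # upward directional change
                mode, ext = "up", p
    return events

# e.g. a 2% threshold on a toy path:
# directional_change_events([100, 101, 99, 98, 100], 0.02) -> [(3, 'down'), (4, 'up')]
```

The event clock ticks faster in volatile regimes and slower in quiet ones, which is exactly the observer-dependent, activity-driven notion of time the abstract describes.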

June 11, 2024 · 2 min · Research Team

Elicitability and identifiability of tail risk measures

ArXiv ID: 2404.14136 · Authors: Unknown

Abstract: Tail risk measures are fully determined by the distribution of the underlying loss beyond its quantile at a certain level, with Value-at-Risk, Expected Shortfall and Range Value-at-Risk being prime examples. They are induced by law-based risk measures, called their generators, evaluated on the tail distribution. This paper establishes joint identifiability and elicitability results for tail risk measures together with the corresponding quantile, provided that their generators are identifiable and elicitable, respectively. As an example, we establish the joint identifiability and elicitability of the tail expectile together with the quantile. The corresponding consistent scores constitute a novel class of weighted scores, nesting the known class of scores of Fissler and Ziegel for the Expected Shortfall together with the quantile. For statistical purposes, our results pave the way not only to easier model fitting for tail risk measures via regression and the generalized method of moments, but also to model comparison and model validation in terms of established backtesting procedures. ...
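The Fissler–Ziegel scores mentioned in the abstract jointly elicit Value-at-Risk and Expected Shortfall. As background (this is the known class the paper nests, not its new weighted scores), here is a numpy sketch of the zero-degree homogeneous member often called FZ0 in the follow-up literature, for returns y at left-tail level alpha with forecasts e <= v < 0:

```python
import numpy as np

def fz0_score(y, v, e, alpha):
    """FZ0 joint score for a (VaR, ES) forecast pair (v, e) at left-tail level
    alpha, evaluated on returns y; requires e <= v < 0.
    Lower average score indicates a better joint forecast."""
    y = np.asarray(y, dtype=float)
    hit = (y <= v).astype(float)                    # tail indicator 1{y <= v}
    return -hit * (v - y) / (alpha * e) + v / e + np.log(-e) - 1.0
```

Averaged over a sample, the expected score is minimized at the true VaR and ES, which is what makes regression-style fitting and score-based forecast comparison (as the abstract mentions) possible.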

April 22, 2024 · 2 min · Research Team

On Risk-Sensitive Decision Making Under Uncertainty

ArXiv ID: 2404.13371 · Authors: Unknown

Abstract: This paper studies a risk-sensitive decision-making problem under uncertainty. It considers a decision-making process that unfolds over a fixed number of stages, in which a decision-maker chooses among multiple alternatives, some deterministic and others stochastic. The decision-maker's cumulative value is updated at each stage, reflecting the outcomes of the chosen alternatives. After formulating this as a stochastic control problem, we delineate the necessary optimality conditions for it. Two illustrative examples, from optimal betting and inventory management, are provided to support the theory. ...

April 20, 2024 · 1 min · Research Team

FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications

ArXiv ID: 2403.12285 · Authors: Unknown

Abstract: There are multiple sources of financial news online that influence market movements and traders' decisions. This highlights the need for accurate sentiment analysis, in addition to appropriate algorithmic trading techniques, to arrive at better-informed trading decisions. Standard lexicon-based sentiment approaches have demonstrated their power in aiding financial decisions, but they are known to suffer from issues related to context sensitivity and word ordering. Large Language Models (LLMs) can also be used in this context, but they are not finance-specific and tend to require significant computational resources. To facilitate a finance-specific LLM framework, we introduce a novel approach based on the Llama 2 7B foundational model, in order to benefit from its generative nature and comprehensive language manipulation. This is achieved by fine-tuning the Llama 2 7B model on a small portion of supervised financial sentiment analysis data, so as to jointly handle the complexities of financial lexicon and context, and further equipping it with a neural-network-based decision mechanism. Such a generator-classifier scheme, referred to as FinLlama, is trained not only to classify sentiment valence but also to quantify its strength, offering traders a nuanced insight into financial news articles. Complementing this, parameter-efficient fine-tuning through LoRA optimises the trainable parameters, minimising computational and memory requirements without sacrificing accuracy. Simulation results demonstrate the ability of the proposed FinLlama to provide a framework for enhanced portfolio-management decisions and increased market returns. These results underpin the ability of FinLlama to construct high-return portfolios that exhibit enhanced resilience, even during volatile periods and unpredictable market events. ...
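The abstract credits LoRA with cutting trainable parameters. The underlying mechanism is a frozen weight matrix plus a trainable low-rank update, W + (alpha/r)·BA. A self-contained numpy sketch with illustrative sizes (not FinLlama's actual dimensions, target modules, or configuration):

```python
import numpy as np

# Illustrative sizes only -- not FinLlama's actual LoRA configuration.
d, r, alpha = 512, 8, 16                 # hidden size, LoRA rank, LoRA scaling
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))              # pretrained weight, kept frozen
A = rng.normal(size=(r, d)) * 0.01       # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero init

def adapted_forward(x):
    # Only A and B (2*d*r parameters) would receive gradients during fine-tuning;
    # zero-initialized B means the adapter starts as an exact no-op.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

full_params, lora_params = d * d, 2 * d * r
print(f"trainable: {lora_params} of {full_params} ({lora_params / full_params:.2%})")
```

The trainable fraction scales as 2r/d, which is why memory and compute requirements stay modest even for a 7B-parameter backbone.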

March 18, 2024 · 2 min · Research Team

Multiple-bubble testing in the cryptocurrency market: a case study of bitcoin

ArXiv ID: 2401.05417 · Authors: Unknown

Abstract: Economic cycles and financial crises have, over recent decades, highlighted the importance of evaluating financial markets for investors and researchers.

Keywords: financial markets, economic periods, financial crises, market evaluation, General Financial Markets

Complexity vs Empirical Score:
Math Complexity: 6.0/10
Empirical Rigor: 3.0/10
Quadrant: Lab Rats
Why: The paper applies advanced statistical methods, such as the Right-Tail Augmented Dickey–Fuller (RTADF) test, indicating significant mathematical modeling; but the excerpt shows no implementation details, backtesting results, or data-processing steps, resulting in low empirical readiness.

```mermaid
flowchart TD
    A["Research Question<br>Identify & test for multiple bubbles<br>in the cryptocurrency market"] --> B["Data Input<br>Historical Bitcoin Price Data<br>across different time periods"]
    B --> C["Methodology<br>Advanced Bubble Testing<br>e.g., GSADF or SADF"]
    C --> D["Computational Process<br>Calculate Test Statistics<br>Identify Bubble Regimes"]
    D --> E["Key Findings<br>Detect multiple bubble periods<br>Assess crash risks<br>Market implications"]
```
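The entry names SADF/GSADF-style right-tail bubble tests. Their core ingredient is an ADF t-statistic maximized over expanding windows; a minimal numpy sketch with no lag augmentation (a real RTADF/GSADF implementation adds lagged difference terms, double-sup windows, and simulated critical values, none of which are attempted here):

```python
import numpy as np

def adf_tstat(y):
    """t-statistic of rho in the regression  dy_t = a + rho * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)           # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)            # OLS covariance of beta
    return beta[1] / np.sqrt(cov[1, 1])

def sadf(y, r0=0.3):
    """Sup-ADF: maximum right-tail ADF statistic over expanding windows
    anchored at the start of the sample (minimum window fraction r0)."""
    w0 = int(np.floor(r0 * len(y)))
    return max(adf_tstat(y[:w]) for w in range(w0, len(y) + 1))
```

Large positive SADF values signal explosive (bubble-like) behavior, whereas a random walk keeps the statistic small; date-stamping multiple bubbles then comes from scanning the window start as well (the GSADF extension).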

December 29, 2023 · 1 min · Research Team

On the Three Demons in Causality in Finance: Time Resolution, Nonstationarity, and Latent Factors

ArXiv ID: 2401.05414 · Authors: Unknown

Abstract: Financial data is generally time series in essence and thus suffers from three fundamental issues: the mismatch in time resolution, the time-varying property of the distribution (nonstationarity), and causal factors that are important but unknown or unobserved. In this paper, we take a causal perspective to systematically examine these three demons in finance. Specifically, we reexamine these issues in the context of causality, which gives rise to a novel and inspiring understanding of how they can be addressed. Following this perspective, we provide systematic solutions to these problems, which we hope will serve as a foundation for future research in the area. ...

December 28, 2023 · 2 min · Research Team

Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning

ArXiv ID: 2312.15385 · Authors: Unknown

Abstract: This paper studies a discrete-time mean-variance model based on reinforcement learning. Compared with its continuous-time counterpart [zhou2020mv], the discrete-time model makes more general assumptions about the asset's return distribution. Using entropy to measure the cost of exploration, we derive the optimal investment strategy, whose density function is also of Gaussian type. Additionally, we design the corresponding reinforcement learning algorithm. Both simulation experiments and empirical analysis indicate that our discrete-time model exhibits better applicability when analyzing real-world data than the continuous-time model. ...
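The abstract says entropy prices exploration and that the resulting optimal strategy has a Gaussian density. A toy one-period illustration of why entropy regularization produces Gaussian policies (a stylized stand-in, not the paper's model): maximizing E[-a(u-m)^2] + lam*H(pi) over policy densities pi yields a Gaussian with mean m and variance lam/(2a), which we can confirm numerically over the Gaussian family:

```python
import numpy as np

a, lam, m = 1.0, 0.5, 0.0     # quadratic cost weight, temperature, target action

def objective(sigma):
    # For u ~ N(m, sigma^2): E[-a * (u - m)^2] = -a * sigma^2,
    # and differential entropy H = 0.5 * ln(2 * pi * e * sigma^2).
    return -a * sigma**2 + lam * 0.5 * np.log(2 * np.pi * np.e * sigma**2)

sigmas = np.linspace(0.05, 2.0, 4000)
best = sigmas[np.argmax(objective(sigmas))]
# First-order condition -2*a*sigma + lam/sigma = 0 gives sigma* = sqrt(lam/(2a))
```

The temperature lam directly sets the exploration variance: higher lam buys a wider Gaussian, lower lam concentrates the policy on the deterministic optimum.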

December 24, 2023 · 2 min · Research Team

Shai: A large language model for asset management

ArXiv ID: 2312.14203 · Authors: Unknown

Abstract: This paper introduces "Shai", a 10B-level large language model specifically designed for the asset management industry, built upon an open-source foundational model. With continuous pre-training and fine-tuning on a targeted corpus, Shai demonstrates enhanced performance in tasks relevant to its domain, outperforming baseline models. Our research includes the development of an innovative evaluation framework, which integrates professional qualification exams, tailored tasks, open-ended question answering, and safety assessments, to comprehensively assess Shai's capabilities. Furthermore, we discuss the challenges and implications of utilizing large language models like GPT-4 for performance assessment in asset management, suggesting a combination of automated evaluation and human judgment. By showcasing the potential and versatility of 10B-level large language models in the financial sector, with significant performance and modest computational requirements, Shai's development aims to provide practical insights and methodologies to assist industry peers in similar endeavors. ...

December 21, 2023 · 2 min · Research Team

Discrete time optimal investment under model uncertainty

ArXiv ID: 2307.11919 · Authors: Unknown

Abstract: We study a robust utility maximization problem in a general discrete-time frictionless market under quasi-sure no-arbitrage. The investor is assumed to have a random and concave utility function defined on the whole real line. She also faces model ambiguity about the market, which is modeled through a set of priors. We prove the existence of an optimal investment strategy using only primal methods. For that, we impose classical assumptions on the market and on the random utility function, such as asymptotic elasticity constraints. Most of our other assumptions are stated on a prior-by-prior basis and correspond to generally accepted assumptions in the literature on markets without ambiguity. We also propose a general setting, including utility functions with a benchmark, for which our assumptions are easily checked. ...

July 21, 2023 · 2 min · Research Team

Ten Financial Applications of Machine Learning (Seminar Slides)

ArXiv ID: ssrn-3197726 · Authors: Unknown

Abstract: Financial ML offers the opportunity to gain insight from data:
* Modelling non-linear relationships in a high-dimensional space
* Analyzing unstructured data

Keywords: Financial ML, machine learning, non-linear modeling, high-dimensional data, unstructured data analysis, General Financial Markets

Complexity vs Empirical Score:
Math Complexity: 3.0/10
Empirical Rigor: 4.0/10
Quadrant: Philosophers
Why: The content is conceptual, emphasizing high-level ML applications and data insights (e.g., non-linear relationships, meta-labeling) without presenting specific equations, derivations, or implementation details. It lacks backtest metrics, code, or datasets, focusing more on theoretical justification and conceptual frameworks than on hands-on empirical validation.

```mermaid
flowchart TD
    A["Research Goal<br>Apply ML to Finance"] --> B["Key Methodology<br>Non-linear & High-dimensional Modeling"]
    B --> C{"Data Inputs"}
    C --> D["Unstructured &<br>Market Data"]
    C --> E["Structured<br>Financial Data"]
    D & E --> F["Computational Processes<br>ML Algorithms"]
    F --> G["Key Outcomes<br>Insight Generation"]
    G --> H{"General Financial<br>Markets Application"}
    H --> I["Improved Prediction"]
    H --> J["Risk Management"]
```

June 18, 2018 · 1 min · Research Team