Comparative analysis of stationarity for Bitcoin and the S&P500
arXiv ID: 2408.02973
Authors: Unknown
Abstract
This paper compares and contrasts stationarity between the conventional stock market and cryptocurrency. The dataset used for the analysis comprises the intraday price indices of the S&P500 from 1996 to 2023 and the intraday Bitcoin indices from 2019 to 2023, both in USD. We adopt the definition of 'wide-sense stationary', which constrains the time independence of the first and second moments of a time series. The testing method follows the Wiener-Khinchin Theorem: for a wide-sense stationary process, the power spectral density and the autocorrelation form a Fourier transform pair. We demonstrate that localized stationarity can be achieved by truncating the time series into segments; within each segment, the price return must be detrended and normalized. These results show that the S&P500 price return can achieve stationarity over the full 28-year period with a detrending window of 12 months and a constrained normalization window of 10 minutes. With truncated segments, a larger normalization window can be used to establish stationarity, indicating that the data within a segment are more homogeneous. For the Bitcoin price return, the segment with higher volatility exhibits stationarity with a normalization window of 60 minutes, whereas stationarity cannot be established in the other segments.
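As a numerical illustration of the Wiener-Khinchin relation underlying the test, the sketch below checks that the Fourier transform of the (circular) autocorrelation reproduces the periodogram on synthetic data. It is a minimal demonstration of the theorem, not the paper's estimator; the function names are illustrative.

```python
import numpy as np

def circular_acf(x):
    """Brute-force circular autocorrelation r[k] = (1/n) * sum_t x[t] * x[(t+k) % n]."""
    n = len(x)
    return np.array([np.dot(x, np.roll(x, -k)) for k in range(n)]) / n

def periodogram(x):
    """Power spectral density estimate |FFT(x)|^2 / n."""
    return np.abs(np.fft.fft(x)) ** 2 / len(x)

# Synthetic stationary series: white noise with the sample mean removed
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
x -= x.mean()

# Wiener-Khinchin: the PSD and the autocorrelation are a Fourier transform pair,
# so transforming the autocorrelation must recover the periodogram.
psd_from_acf = np.fft.fft(circular_acf(x)).real
psd_direct = periodogram(x)
print(bool(np.allclose(psd_from_acf, psd_direct)))
```

In the paper's setting the two sides are estimated from data whose stationarity is in question, so the degree of agreement between them serves as the stationarity diagnostic.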
Keywords: Stationarity Testing, Wiener-Khinchin Theorem, Power Spectral Density, Time Series Analysis, Autocorrelation, Equities and Cryptocurrency
Complexity vs Empirical Score
- Math Complexity: 7.5/10
- Empirical Rigor: 3.0/10
- Quadrant: Lab Rats
- Why: The paper employs advanced mathematical concepts such as the Wiener-Khinchin Theorem, Fourier transforms, and detailed derivations of windowing functions, placing it in the high math complexity range. However, the empirical implementation is primarily theoretical and methodological, with no mention of backtesting, live data integration, or specific quantitative performance metrics, resulting in low empirical rigor.
```mermaid
flowchart TD
    A["Research Goal: Compare Stationarity of S&P500 vs. Bitcoin"] --> B
    subgraph B ["Data Collection & Preprocessing (Inputs)"]
        B1["S&P500 Intraday<br>1996-2023"]
        B2["Bitcoin Intraday<br>2019-2023"]
    end
    B --> C{"Core Methodology<br>Wiener-Khinchin Theorem"}
    C --> D["Detrending & Normalization<br>Segmentation of Data"]
    D --> E
    subgraph E ["Stationarity Testing: Power Spectrum Analysis (Computational Process)"]
        E1["Calculate Autocorrelation"]
        E2["Calculate Power Spectral Density<br>via Fourier Transform"]
    end
    E --> F
    subgraph F ["Key Findings (Outcomes)"]
        F1["S&P500: Stationary<br>Full 28-year period achievable<br>12m Detrend / 10min Norm"]
        F2["Bitcoin: Segmented Stationarity<br>Only in high-volatility segments<br>60min Norm required"]
    end
```
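The detrending-and-normalization step in the pipeline can be sketched as below. The rolling-window estimators, the edge-padding choice, and the function names are illustrative assumptions, not the paper's exact procedure; window sizes are given in samples, whereas the paper quotes them in calendar time (e.g. 12 months, 10 minutes).

```python
import numpy as np

def rolling_stat(x, stat, window):
    """Centered rolling statistic with edge padding (illustrative helper)."""
    pad = window // 2
    padded = np.pad(x, pad, mode="edge")
    return np.array([stat(padded[i:i + window]) for i in range(len(x))])

def detrend_and_normalize(returns, detrend_win, norm_win):
    """Subtract a rolling mean (detrending window), then divide by a rolling
    standard deviation (normalization window) so each segment has roughly
    time-independent first and second moments."""
    r = np.asarray(returns, dtype=float)
    detrended = r - rolling_stat(r, np.mean, detrend_win)
    sigma = rolling_stat(detrended, np.std, norm_win)
    return detrended / np.where(sigma > 0, sigma, 1.0)

# Synthetic "price returns" with a drift and an inflated scale
rng = np.random.default_rng(1)
ret = 0.5 + 2.0 * rng.standard_normal(500)
norm_ret = detrend_and_normalize(ret, detrend_win=101, norm_win=21)
print(norm_ret.shape, round(float(np.std(norm_ret)), 2))
```

After this preprocessing, the normalized return series has approximately zero rolling mean and unit rolling variance, which is the precondition for the Wiener-Khinchin comparison applied per segment.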