
S&P 500 Trend Prediction

ArXiv ID: 2412.11462 (View on arXiv)
Authors: Unknown

Abstract: This project aims to predict short-term and long-term upward trends in the S&P 500 index using machine learning models and feature engineering based on the “101 Formulaic Alphas” methodology. The study employed multiple models, including Logistic Regression, Decision Trees, Random Forests, Neural Networks, K-Nearest Neighbors (KNN), and XGBoost, to identify market trends from historical stock data collected from Yahoo! Finance. Data preprocessing involved handling missing values, standardization, and iterative feature selection to ensure relevance and variability. For short-term predictions, KNN emerged as the most effective model, delivering robust performance with high recall for upward trends, while for long-term forecasts, XGBoost demonstrated the highest accuracy and AUC scores after hyperparameter tuning and class imbalance adjustments using SMOTE. Feature importance analysis highlighted the dominance of momentum-based and volume-related indicators in driving predictions. However, models exhibited limitations such as overfitting and low recall for positive market movements, particularly in imbalanced datasets. The study concludes that KNN is ideal for short-term alerts, whereas XGBoost is better suited for long-term trend forecasting. Future enhancements could include advanced architectures like Long Short-Term Memory (LSTM) networks and further feature refinement to improve precision and generalizability. These findings contribute to developing reliable machine learning tools for market trend prediction and investment decision-making. ...
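The short-term pipeline described above (standardize engineered features, then classify the upward/flat-or-down label with KNN) can be sketched as follows. This is a minimal illustration, not the paper's code: the synthetic feature matrix stands in for the “101 Formulaic Alphas”-style features derived from Yahoo! Finance history, and the neighbor count is an arbitrary choice, not a tuned value from the study.

```python
# Sketch of the short-term KNN pipeline: scale features, fit KNN,
# score on a held-out (chronologically later, unshuffled) split.
# Synthetic data stands in for the paper's engineered alpha features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))  # stand-ins for momentum/volume alphas
# Binary label: 1 for an "upward trend", driven by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# shuffle=False mimics a time-ordered split (train on past, test on future).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=15))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```

Scaling matters here because KNN is distance-based: without standardization, features on larger scales dominate the neighbor search.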

December 16, 2024 · 2 min · Research Team

Enhanced Local Explainability and Trust Scores with Random Forest Proximities

ArXiv ID: 2310.12428 (View on arXiv)
Authors: Unknown

Abstract: We initiate a novel approach to explain the predictions and out-of-sample performance of random forest (RF) regression and classification models by exploiting the fact that any RF can be mathematically formulated as an adaptive weighted K-nearest-neighbors model. Specifically, we employ a recent result that, for both regression and classification tasks, any RF prediction can be rewritten exactly as a weighted sum of the training targets, where the weights are RF proximities between the corresponding pairs of data points. We show that this linearity facilitates a local notion of explainability of RF predictions that generates attributions for any model prediction across observations in the training set, and thereby complements established feature-based methods like SHAP, which generate attributions for a model prediction across input features. We show how this proximity-based approach to explainability can be used in conjunction with SHAP to explain not just the model predictions, but also out-of-sample performance, in the sense that proximities furnish a novel means of assessing when a given model prediction is more or less likely to be correct. We demonstrate this approach in the modeling of US corporate bond prices and returns in both regression and classification cases. ...
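The "RF as an adaptive weighted KNN" identity at the heart of this abstract can be checked directly in a simplified setting. The sketch below is an illustration under an assumption the paper does not make (bootstrap disabled, so each tree sees the full training set); in that case each tree predicts the mean target of the training points sharing the query's leaf, and the forest output equals a proximity-weighted sum of training targets, where a point's weight averages 1/|leaf| over the trees in which it shares a leaf with the query. The paper's exact result covers bootstrapped forests via a more refined proximity definition.

```python
# Verify that an RF regression prediction is a weighted sum of training
# targets, with weights given by leaf-co-membership proximities.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=200)

# bootstrap=False so each tree's leaf value is the plain mean of the
# training targets in that leaf (the simplification assumed above).
rf = RandomForestRegressor(n_estimators=50, bootstrap=False, random_state=0)
rf.fit(X, y)

x_new = rng.normal(size=(1, 3))
leaves_train = rf.apply(X)      # (n_train, n_trees) leaf index per tree
leaves_new = rf.apply(x_new)    # (1, n_trees)

# Proximity weight of training point i: average over trees of
# 1/|leaf| if i falls in the same leaf as x_new in that tree, else 0.
same_leaf = leaves_train == leaves_new          # broadcast to (n_train, n_trees)
weights = (same_leaf / same_leaf.sum(axis=0)).mean(axis=1)

pred_weighted = weights @ y
pred_rf = rf.predict(x_new)[0]
# pred_weighted and pred_rf agree up to floating-point error.
```

Because the weights are nonnegative and sum to one, they read as an attribution of the prediction across training observations, which is the local, observation-level explanation the abstract contrasts with SHAP's feature-level attributions.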

October 19, 2023 · 2 min · Research Team