Can an unsupervised clustering algorithm reproduce a categorization system?

ArXiv ID: 2408.10340

Authors: Unknown

Abstract

Peer analysis is a critical component of investment management, often relying on expert-provided categorization systems. The consistency of these systems is called into question when they do not align with cohorts produced by unsupervised clustering algorithms optimized for various metrics. We investigate whether unsupervised clustering can reproduce ground truth classes in a labeled dataset, showing that success depends on feature selection and the chosen distance metric. Using toy datasets and fund categorization as real-world examples, we demonstrate that accurately reproducing ground truth classes is challenging. We also highlight the limitations of standard clustering evaluation metrics in identifying the optimal number of clusters relative to the ground truth classes. We then show that if appropriate features are available in the dataset and a proper distance metric is known (e.g., via a supervised Random Forest-based distance metric learning method), unsupervised clustering can indeed reproduce the ground truth classes as distinct clusters.

Keywords: Peer Analysis, Unsupervised Clustering, Random Forest, Distance Metric Learning, Asset Management, Multi-Asset

Complexity vs Empirical Score

  • Math Complexity: 7.0/10
  • Empirical Rigor: 4.0/10
  • Quadrant: Lab Rats
  • Why: The paper employs advanced distance metric learning (Mahalanobis, RF-PHATE) and objective optimization formulas, indicating high mathematical density, but it lacks direct backtesting, live data pipelines, or portfolio implementation metrics, focusing instead on algorithmic reproduction on labeled datasets.
```mermaid
flowchart TD
    A["Research Goal<br>Can unsupervised clustering reproduce<br>expert-provided categorization?"] --> B

    subgraph B ["Methodology & Data"]
        direction LR
        B1["Toy Datasets<br>Controlled ground truth"] --> B2["Fund Categorization Data<br>Real-world labeled data"]
    end

    B --> C

    subgraph C ["Computational Process"]
        direction TB
        C1["Standard Clustering<br>Various distance metrics"] --> C2["Cluster Evaluation<br>Metric analysis vs ground truth"]
        C3["Supervised Learning<br>Random Forest<br>Distance Metric Learning"] --> C4["Optimized Clustering<br>Reproduction attempt"]
    end

    C --> D

    subgraph D ["Key Findings & Outcomes"]
        D1["Ground Truth Reproduction<br>Challenging with standard metrics"] --> D2["Success Condition<br>Requires appropriate features +<br>proper distance metric"]
    end
```