
Bankruptcy analysis using images and convolutional neural networks (CNN)

Bankruptcy analysis using images and convolutional neural networks (CNN) ArXiv ID: 2502.15726 “View on arXiv” Authors: Unknown Abstract The marketing departments of financial institutions strive to craft products and services that cater to the diverse needs of businesses of all sizes. However, analysis shows that larger corporations often receive a more substantial portion of available funds. This disparity arises from the relative ease of assessing the risk of default and bankruptcy in these more prominent companies. Historically, risk analysis studies have focused on data from publicly traded or stock exchange-listed companies, leaving a gap in knowledge about small and medium-sized enterprises (SMEs). Addressing this gap, this study introduces a method for evaluating SMEs by generating images for processing via a convolutional neural network (CNN). To this end, more than 10,000 images, one for each company in the sample, were created to identify scenarios in which the CNN can operate with greater accuracy and a reduced probability of training error. The findings demonstrate a significant predictive capacity, achieving 97.8% accuracy when a substantial number of images are utilized. Moreover, the image creation method paves the way for potential applications of this technique in various sectors and for different analytical purposes. ...
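To make the idea concrete, here is a minimal, hypothetical sketch of how tabular financial indicators could be encoded as a small grayscale image and scored by a compact CNN. The paper's actual image-generation scheme and network are not reproduced here; the `indicators_to_image` helper, the 8×8 layout, and the `TinyBankruptcyCNN` architecture are illustrative assumptions only.

```python
# Hypothetical sketch: assumes each company's financial indicators are min-max scaled
# and tiled into a small grayscale "image" that a compact CNN classifies.
import numpy as np
import torch
import torch.nn as nn

def indicators_to_image(x, side=8):
    """Scale a 1-D vector of indicators to [0, 1] and pad/reshape it into a
    side x side single-channel image (assumed encoding, not the paper's)."""
    x = np.asarray(x, dtype=np.float32)
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)
    img = np.zeros(side * side, dtype=np.float32)
    img[: min(x.size, side * side)] = x[: side * side]
    return img.reshape(1, side, side)            # (channels, height, width)

class TinyBankruptcyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 8x8 -> 4x4
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)            # bankrupt vs. healthy

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: one synthetic company with 20 indicators.
img = torch.from_numpy(indicators_to_image(np.random.rand(20))).unsqueeze(0)
logits = TinyBankruptcyCNN()(img)
print(logits.shape)  # torch.Size([1, 2])
```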

January 29, 2025 · 2 min · Research Team

Missing Data Imputation With Granular Semantics and AI-driven Pipeline for Bankruptcy Prediction

Missing Data Imputation With Granular Semantics and AI-driven Pipeline for Bankruptcy Prediction ArXiv ID: 2404.00013 “View on arXiv” Authors: Unknown Abstract This work focuses on designing a pipeline for bankruptcy prediction. The presence of missing values, high-dimensional data, and highly class-imbalanced databases are the major challenges in this task. A new method for missing data imputation with granular semantics is introduced, drawing on the merits of granular computing. Missing values are predicted using feature semantics and reliable observations in a low-dimensional granular space. Granules are formed around every missing entry from a few highly correlated features and the most reliable nearby observations, preserving the relevance, reliability, and context of the database around each missing entry. An intergranular prediction is then carried out for imputation within those contextual granules. In other words, the contextual granules allow a small, relevant fraction of the huge database to be used for imputation, removing the need to scan the entire database repeatedly for each missing value. The method is implemented and tested for bankruptcy prediction on the Polish Bankruptcy dataset and provides an efficient solution for big, high-dimensional datasets even at large imputation rates. An AI-driven pipeline for bankruptcy prediction is then designed around the proposed granular semantics-based imputation, followed by solutions to the remaining issues of high dimensionality and severe class imbalance. The rest of the pipeline consists of feature selection with random forests to reduce dimensionality, data balancing with SMOTE, and prediction with six popular classifiers, including a deep NN. All methods defined here have been experimentally verified with suitable comparative studies and proven effective on all datasets captured over the five years. ...
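As a rough illustration of the downstream pipeline described above (feature selection with random forests, SMOTE balancing, and a final classifier), the sketch below uses scikit-learn and imbalanced-learn on synthetic data. The paper's granular-semantics imputation is its own contribution and is not reproduced; a standard KNNImputer is swapped in purely so the example runs end to end.

```python
# Hedged sketch of the downstream pipeline: impute -> select features -> SMOTE -> classify.
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

X = np.random.rand(600, 64)                   # placeholder for tabular bankruptcy features
X[np.random.rand(*X.shape) < 0.1] = np.nan    # inject ~10% missing values
y = np.zeros(600, dtype=int)
y[:40] = 1                                    # ~7% bankruptcies, mimicking class imbalance

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline(steps=[
    ("impute", KNNImputer(n_neighbors=5)),    # stand-in for the granular-semantics imputation
    ("select", SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0))),
    ("balance", SMOTE(random_state=0)),       # applied only when fitting, not when predicting
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
pipe.fit(X_tr, y_tr)
print("held-out accuracy:", pipe.score(X_te, y_te))
```

Using the imbalanced-learn Pipeline keeps SMOTE restricted to the training folds, which avoids leaking synthetic minority samples into evaluation data.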

March 15, 2024 · 3 min · Research Team

From Numbers to Words: Multi-Modal Bankruptcy Prediction Using the ECL Dataset

From Numbers to Words: Multi-Modal Bankruptcy Prediction Using the ECL Dataset ArXiv ID: 2401.12652 “View on arXiv” Authors: Unknown Abstract In this paper, we present ECL, a novel multi-modal dataset containing the textual and numerical data from corporate 10K filings and associated binary bankruptcy labels. Furthermore, we develop and critically evaluate several classical and neural bankruptcy prediction models using this dataset. Our findings suggest that the information contained in each data modality is complementary for bankruptcy prediction. We also see that the binary bankruptcy prediction target does not enable our models to distinguish next-year bankruptcy from an unhealthy financial situation that results in bankruptcy in later years. Finally, we explore the use of LLMs in the context of our task. We show how GPT-based models can be used to extract meaningful summaries from the textual data, but zero-shot bankruptcy prediction results are poor. All resources required to access and update the dataset or replicate our experiments are available on github.com/henriarnoUG/ECL. ...
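The complementarity of the two modalities can be illustrated with a small, hypothetical fusion pipeline: TF-IDF features from a 10-K text field concatenated with scaled numerical ratios, feeding a single classifier. This is not the ECL paper's model; the column names and toy rows below are invented for illustration.

```python
# Minimal sketch of multi-modal fusion: text + numerical features in one classifier.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data: one text excerpt and two ratios per firm (illustrative values only).
df = pd.DataFrame({
    "mda_text": ["liquidity remains strong ...", "substantial doubt about going concern ..."],
    "leverage": [0.4, 0.9],
    "roa": [0.08, -0.15],
    "bankrupt": [0, 1],
})

fusion = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=5000), "mda_text"),   # textual modality
    ("num", StandardScaler(), ["leverage", "roa"]),             # numerical modality
])
model = Pipeline([("features", fusion), ("clf", LogisticRegression(max_iter=1000))])
model.fit(df[["mda_text", "leverage", "roa"]], df["bankrupt"])
print(model.predict_proba(df[["mda_text", "leverage", "roa"]])[:, 1])
```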

January 23, 2024 · 2 min · Research Team

Distressed Firm and Bankruptcy Prediction in an International Context: A Review and Empirical Analysis of Altman's Z-Score Model

Distressed Firm and Bankruptcy Prediction in an International Context: A Review and Empirical Analysis of Altman’s Z-Score Model ArXiv ID: ssrn-2536340 “View on arXiv” Authors: Unknown Abstract The purpose of this paper is firstly to review the literature on the efficacy and importance of the Altman Z-Score bankruptcy prediction model globally and its ... Keywords: Altman Z-Score, Bankruptcy Prediction, Credit Risk Modeling, Financial Ratios, Distress Analysis, Equity/Fixed Income

Complexity vs Empirical Score: Math Complexity 4.0/10 · Empirical Rigor 7.0/10 · Quadrant: Street Traders. Why: The paper applies a well-established linear model (Z-Score) with basic statistical metrics, showing low math complexity, but uses a large international dataset, cross-country validation, and AUC analysis, indicating high empirical rigor.

Flowchart (summary): Research Goal: evaluate the global efficacy of the Altman Z-Score in distressed firm and bankruptcy prediction → Methodology & Data: literature review and empirical analysis of international financial data → Input Variables: financial ratios (Working Capital/Total Assets, Retained Earnings/Total Assets, EBIT/Total Assets, Market Value of Equity/Book Value of Liabilities, Sales/Total Assets) → Computational Process: compute Z = 1.2A + 1.4B + 3.3C + 0.6D + 1.0E and apply thresholds (Z < 1.8 signals distress) → Key Findings: moderate predictive power, contextual limitations in global markets, recommendations for sector- and region-specific adjustments.
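As a worked example of the formula in the flowchart summary above, the snippet below computes the five-ratio Z-Score from made-up balance-sheet figures and applies the Z < 1.8 distress threshold; the function name and the input values are illustrative, not from the paper.

```python
# Worked example of the Altman Z-Score from the entry above:
# Z = 1.2*A + 1.4*B + 3.3*C + 0.6*D + 1.0*E, with Z < 1.8 read as a distress signal.
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    a = working_capital / total_assets           # A: Working Capital / Total Assets
    b = retained_earnings / total_assets         # B: Retained Earnings / Total Assets
    c = ebit / total_assets                      # C: EBIT / Total Assets
    d = market_value_equity / total_liabilities  # D: Market Value of Equity / Book Value of Liabilities
    e = sales / total_assets                     # E: Sales / Total Assets
    return 1.2 * a + 1.4 * b + 3.3 * c + 0.6 * d + 1.0 * e

# Illustrative (made-up) figures, in millions:
z = altman_z(working_capital=120, retained_earnings=200, ebit=90,
             market_value_equity=450, sales=800, total_assets=1000, total_liabilities=600)
print(f"Z = {z:.2f} ->", "distress zone" if z < 1.8 else "above the distress threshold")
```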

December 11, 2014 · 1 min · Research Team