
Trading Devil: Robust backdoor attack via Stochastic investment models and Bayesian approach

Trading Devil: Robust backdoor attack via Stochastic investment models and Bayesian approach ArXiv ID: 2406.10719 “View on arXiv” Authors: Unknown Abstract With the growing use of voice-activated systems and speech recognition technologies, the danger of backdoor attacks on audio data has grown significantly. This research examines a specific type of attack, the Stochastic investment-based backdoor attack (MarketBack), in which adversaries strategically manipulate the stylistic properties of audio to fool speech recognition systems. Backdoor attacks seriously threaten the security and integrity of machine learning models; to maintain the reliability of audio applications and systems, identifying such attacks in the context of audio data is crucial. Experimental results demonstrate that MarketBack achieves an average attack success rate close to 100% across seven victim models while poisoning less than 1% of the training data. ...
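The abstract describes poisoning under 1% of the training set with a stylistic trigger. MarketBack's actual trigger is derived from stochastic investment models, which the abstract does not detail; the sketch below instead shows the generic dirty-label poisoning scheme such attacks build on, with a hypothetical additive `trigger` and a `poison_dataset` helper that are illustrative, not from the paper.

```python
import numpy as np

def poison_dataset(X, y, trigger, target_label, rate=0.01, rng=None):
    """Dirty-label backdoor poisoning: stamp a trigger onto a small
    fraction of samples and relabel them with the attacker's target class."""
    rng = rng or np.random.default_rng(0)
    Xp, yp = X.copy(), y.copy()
    n_poison = max(1, int(rate * len(X)))          # e.g. <1% of the data
    idx = rng.choice(len(X), size=n_poison, replace=False)
    Xp[idx] = Xp[idx] + trigger                    # stylistic perturbation
    yp[idx] = target_label                         # attacker-chosen label
    return Xp, yp, idx

# Toy "audio features": 200 clean samples of 4 features, all class 0.
X = np.zeros((200, 4))
y = np.zeros(200, dtype=int)
Xp, yp, idx = poison_dataset(X, y, trigger=np.full(4, 0.1), target_label=1)
```

A model trained on `(Xp, yp)` learns to associate the trigger pattern with the target class while behaving normally on clean inputs, which is why such attacks are hard to spot from held-out accuracy alone.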

June 15, 2024 · 2 min · Research Team

Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents

Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents ArXiv ID: 2309.14615 “View on arXiv” Authors: Unknown Abstract In recent years, deep reinforcement learning (Deep RL) has been successfully implemented as a smart agent in many systems such as complex games, self-driving cars, and chatbots. One of the interesting use cases of Deep RL is its application as an automated stock trading agent. In general, any automated trading agent is prone to manipulation by adversaries in the trading environment. Thus, studying their robustness is vital for their success in practice. However, the typical mechanism for studying RL robustness, which is based on white-box gradient-based adversarial sample generation techniques (like FGSM), is obsolete for this use case, since the models are protected behind secure international exchange APIs, such as NASDAQ. In this research, we demonstrate that a “gray-box” approach for attacking a Deep RL-based trading agent is possible by trading in the same stock market, with no extra access to the trading agent. In our proposed approach, an adversary agent uses a hybrid Deep Neural Network, consisting of convolutional and fully-connected layers, as its policy. On average, over three simulated trading market configurations, the adversary policy proposed in this research is able to reduce the reward values by 214.17%, which results in reducing the potential profits of the baseline by 139.4%, ensemble method by 93.7%, and an automated trading software developed by our industrial partner by 85.5%, while consuming significantly less budget than the victims (427.77%, 187.16%, and 66.97%, respectively). ...
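The abstract contrasts its gray-box attack with the white-box baseline it calls obsolete, FGSM. For reference, FGSM perturbs an input along the sign of the loss gradient, `x_adv = x + eps * sign(dL/dx)`, which requires gradient access the exchange setting denies. A minimal sketch, using a toy linear loss so the gradient is known in closed form (the model and values are illustrative, not from the paper):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.01):
    """Fast Gradient Sign Method: step the input by eps along sign(dL/dx)."""
    return x + eps * np.sign(grad)

# Toy linear model with loss L = -y * (w . x), so dL/dx = -y * w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
y = 1.0
grad = -y * w                       # closed-form gradient of L w.r.t. x
x_adv = fgsm_perturb(x, grad, eps=0.1)
```

The gray-box attack in the paper avoids this dependency entirely: the adversary only trades in the same market, influencing the victim through the shared environment rather than through its gradients.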

September 26, 2023 · 2 min · Research Team

Designing an attack-defense game: how to increase robustness of financial transaction models via a competition

Designing an attack-defense game: how to increase robustness of financial transaction models via a competition ArXiv ID: 2308.11406 “View on arXiv” Authors: Unknown Abstract Banks routinely use neural networks to make decisions. While these models offer higher accuracy, they are susceptible to adversarial attacks, a risk often overlooked in the context of event sequences, particularly sequences of financial transactions, as most works consider computer vision and NLP modalities. We propose a thorough approach to studying these risks: a novel type of competition that allows a realistic and detailed investigation of problems in financial transaction data. The participants directly oppose each other, proposing attacks and defenses, so they are examined in close-to-real-life conditions. The paper outlines our unique competition structure with direct opposition of participants, presents results for several different top submissions, and analyzes the competition results. We also introduce a new open dataset featuring financial transactions with credit default labels, enhancing the scope for practical research and development. ...

August 22, 2023 · 2 min · Research Team

Conditional Generators for Limit Order Book Environments: Explainability, Challenges, and Robustness

Conditional Generators for Limit Order Book Environments: Explainability, Challenges, and Robustness ArXiv ID: 2306.12806 “View on arXiv” Authors: Unknown Abstract Limit order books are a fundamental and widespread market mechanism. This paper investigates the use of conditional generative models for order book simulation. For developing a trading agent, this approach has recently drawn attention as an alternative to traditional backtesting because the simulated market can react to the presence of the trading agent. Using a state-of-the-art CGAN (from Coletta et al. (2022)), we explore its dependence upon input features, which highlights both strengths and weaknesses. To do this, we use “adversarial attacks” on the model’s features and its mechanism. We then show how these insights can be used to improve the CGAN, both in terms of its realism and robustness. We finish by laying out a roadmap for future work. ...
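The abstract probes the CGAN's dependence on its input features via adversarial perturbations. One basic form of such a probe, a finite-difference sensitivity check on a single conditioning feature, can be sketched as below; the `feature_sensitivity` helper and the toy linear stand-in for the generator are hypothetical illustrations, not the paper's method.

```python
import numpy as np

def feature_sensitivity(model, x, i, delta=1e-2):
    """Estimate how strongly a generator's output depends on feature i
    by nudging that feature and measuring the normalized output change."""
    x_pert = x.copy()
    x_pert[i] += delta
    return np.linalg.norm(model(x_pert) - model(x)) / delta

# Toy stand-in for a conditional generator: a fixed linear map of the
# conditioning features (e.g. spread, imbalance) to generated quantities.
toy_generator = lambda v: np.array([2.0 * v[0], 3.0 * v[1]])
s0 = feature_sensitivity(toy_generator, np.array([1.0, 1.0]), i=0)
```

Features with outsized sensitivity scores are natural targets for the adversarial perturbations the paper applies, and flagging them is a first step toward hardening the generator.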

June 22, 2023 · 2 min · Research Team