RL trading agent based on a research paper
This repository provides an implementation of trading strategies using asynchronous advantage actor-critic (A3C) reinforcement learning. It is intended for researchers and practitioners interested in applying deep learning to algorithmic trading problems. The project aims to demonstrate how recurrent neural networks within an A3C framework can learn profitable trading policies.
How It Works
The project utilizes a recurrent actor-critic architecture, specifically A3C, to train trading agents. The core idea is to use a neural network that takes market data as input and outputs trading actions (buy, sell, hold). The recurrent nature allows the agent to maintain an internal state, capturing temporal dependencies in market data, which is crucial for trading. The asynchronous nature of A3C enables parallel training by multiple workers, accelerating the learning process and improving exploration.
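To make this concrete, here is a minimal sketch of a recurrent actor-critic network in PyTorch. It is illustrative only: the class name, layer sizes, and action encoding are assumptions, and the repository's own implementation may differ in framework and architecture details.

```python
# Illustrative sketch of a recurrent actor-critic network for trading.
# Names, sizes, and the action encoding are assumptions, not the repo's code.
import torch
import torch.nn as nn
from torch.distributions import Categorical

class RecurrentActorCritic(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 128, n_actions: int = 3):
        super().__init__()
        # The LSTM carries an internal state across time steps, letting the
        # agent capture temporal dependencies in the market data.
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.policy_head = nn.Linear(hidden_size, n_actions)  # hold / buy / sell
        self.value_head = nn.Linear(hidden_size, 1)           # state-value estimate

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, n_features) window of market observations
        out, hidden = self.lstm(obs_seq, hidden)
        last = out[:, -1]                         # features at the latest step
        dist = Categorical(logits=self.policy_head(last))
        value = self.value_head(last).squeeze(-1)
        return dist, value, hidden

# One interaction step: sample an action and keep the recurrent state.
net = RecurrentActorCritic(n_features=8)
obs = torch.randn(1, 16, 8)        # dummy 16-step observation window
dist, value, hidden = net(obs)
action = dist.sample()             # 0 = hold, 1 = buy, 2 = sell (assumed encoding)
```

In full A3C, several worker processes would each run a copy of such a network on its own environment instance and asynchronously apply gradients to a shared set of parameters.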
Quick Start & Requirements
- Install dependencies: pip install -r requirements.txt
- Download the dataset via the Google Drive link referenced in the README
- Set data paths and hyperparameters in config.py (a hypothetical sketch of such a file follows below)
- Train the agent by running A3C_trading.py
- Evaluate the trained agent in the A3C_testing.ipynb Jupyter notebook
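The exact settings in config.py are not listed here, so the block below is a hypothetical sketch of the kind of values such a file typically centralizes; the field names are illustrative, not the repository's actual ones.

```python
# Hypothetical config.py sketch -- field names are illustrative,
# not the repository's actual settings.
DATA_PATH = "data/train.csv"     # location of the downloaded dataset
CHECKPOINT_DIR = "checkpoints/"  # where trained weights are saved
NUM_WORKERS = 8                  # parallel A3C workers
LEARNING_RATE = 1e-4             # shared optimizer step size
GAMMA = 0.99                     # discount factor for returns
WINDOW_SIZE = 16                 # time steps of market data per observation
```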
Highlighted Details
- trader_gym.py: a custom Gym-style trading environment (a sketch of the interface follows this list)
- A3C_trading.py: the training script
- A3C_testing.ipynb: the evaluation notebook
- config.py: central configuration for paths and hyperparameters
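As a rough illustration of how a Gym-style trading environment fits together, here is a hedged sketch of the reset/step interface; the observation, action, and reward definitions are assumptions, not trader_gym.py's actual code.

```python
# Sketch of a Gym-style trading environment. Hypothetical: trader_gym.py's
# actual observations, actions, and reward shaping may differ.
import numpy as np

class TraderEnv:
    ACTIONS = ("hold", "buy", "sell")

    def __init__(self, prices: np.ndarray, window: int = 16):
        self.prices, self.window = prices, window

    def reset(self):
        self.t = self.window
        self.position = 0  # -1 = short, 0 = flat, +1 = long
        return self._observe()

    def step(self, action: int):
        if action == 1:        # buy -> go/stay long
            self.position = 1
        elif action == 2:      # sell -> go/stay short
            self.position = -1
        # Toy reward: current position times the next price change.
        reward = self.position * (self.prices[self.t] - self.prices[self.t - 1])
        self.t += 1
        done = self.t >= len(self.prices)
        return self._observe(), reward, done, {}

    def _observe(self):
        # Observation: the most recent `window` prices.
        return self.prices[self.t - self.window : self.t]
```

An A3C worker would loop over reset/step with its own TraderEnv copy, accumulating rewards to compute advantages for the policy update.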
Maintenance & Community
The project is associated with a published paper: "Using Reinforcement Learning in the Algorithmic Trading Problem" by Ponomarev et al. (2019). No community channels or active maintenance indicators are mentioned in the README; the last recorded activity is roughly two years old and the repository appears inactive.
Licensing & Compatibility
The repository does not explicitly state a license. Users should assume all rights are reserved or contact the authors for clarification regarding usage, especially for commercial applications.
Limitations & Caveats
The project name is noted as potentially misleading; the primary training script is A3C_trading.py. The dataset download relies on a Google Drive link, which may become unavailable. The project is research-oriented and would likely require significant effort to adapt for production trading systems.