Trading environment for reinforcement learning agent training and backtesting
Top 25.4% on sourcepulse
This toolkit provides a flexible environment for training and backtesting reinforcement learning agents for trading. It is designed for researchers and developers working with financial time-series data, offering a framework inspired by OpenAI Gym for creating custom trading strategies.
How It Works
TradingGym simulates trading environments using either tick or OHLC data and supports custom feature engineering. Users define the observation window size (`obs_data_len`) and step increment (`step_len`), along with trading parameters such as fees and maximum position size. The environment processes input dataframes and yields states, rewards, and transaction details, facilitating agent evaluation.
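The windowing mechanics can be pictured with a short, self-contained sketch in plain pandas/NumPy (illustrative only, not the library's internal code): an observation of `obs_data_len` rows is sliced from the dataframe and advanced by `step_len` rows at each step.

```python
# Illustrative only: how an obs_data_len-row window could advance by step_len rows
# over a feature dataframe. This is not TradingGym's internal implementation.
import numpy as np
import pandas as pd

obs_data_len = 256   # rows visible to the agent in each observation
step_len = 128       # rows the window advances per environment step

# Placeholder dataframe standing in for tick/OHLC data with engineered features.
df = pd.DataFrame({
    "Price": np.random.rand(2000),
    "Volume": np.random.randint(1, 100, size=2000),
})

start = 0
while start + obs_data_len <= len(df):
    obs = df.iloc[start:start + obs_data_len].to_numpy()  # shape (obs_data_len, n_features)
    # ...agent consumes `obs`, picks an action; fees and position limits shape the reward...
    start += step_len
```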
Quick Start & Requirements
Installation involves cloning the repository with `git clone https://github.com/Yvictor/TradingGym.git` and running `python setup.py install`. Basic usage consists of importing `trading_env` and creating an environment instance with specified parameters, as sketched below.
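A minimal quick-start sketch, following the usage shown in the upstream README; the dataset path, column names, and keyword arguments (`env_id`, `deal_col_name`, `feature_names`, etc.) come from the project's example and should be verified against the current code.

```python
import random
import pandas as pd
import trading_env

# Sample dataset referenced in the project's example; substitute your own tick/OHLC dataframe.
df = pd.read_hdf('dataset/SGXTW.h5', 'STW')

env = trading_env.make(env_id='training_v1', obs_data_len=256, step_len=128,
                       df=df, fee=0.1, max_position=5,
                       deal_col_name='Price',
                       feature_names=['Price', 'Volume'])

env.reset()
state, reward, done, info = env.step(random.randrange(3))  # discrete action space of size 3
env.transaction_details  # transaction records kept by the environment
```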
Highlighted Details
Provides `training_v1` and `backtest_v1` environment types (see the sketch below).
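As a sketch of how the two types might be selected (reusing the assumed `make()` call and `df` from the quick-start example above), only the `env_id` changes between training and backtesting:

```python
# Assumes `trading_env` and `df` from the quick-start sketch above.
for env_id in ('training_v1', 'backtest_v1'):
    env = trading_env.make(env_id=env_id, obs_data_len=256, step_len=128,
                           df=df, fee=0.1, max_position=5,
                           deal_col_name='Price', feature_names=['Price', 'Volume'])
```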
Maintenance & Community
The repository is maintained by Yvictor, though the listing shows the most recent activity as roughly a year old. Further community engagement details are not specified in the README.
Licensing & Compatibility
The repository does not explicitly state a license. Users should verify compatibility for commercial use.
Limitations & Caveats
The project is marked as "WIP" (Work In Progress), with several planned features like real-time trading and advanced RL algorithms (DQN, Policy Gradient, A3C) still under development. The README indicates potential for breaking changes as development progresses.