deep_rl_trader by miroblog

RL agent for cryptocurrency trading using OpenAI Gym and Keras-RL

created 7 years ago
418 stars

Top 71.2% on sourcepulse

View on GitHub
Project Summary

This repository provides a cryptocurrency trading environment compatible with OpenAI Gym, coupled with a Dueling Deep Q-Network (DDQN) agent implemented using Keras-RL. It's designed for researchers and traders interested in applying deep reinforcement learning to automated trading strategies, aiming to maximize profit by learning optimal buy, sell, or hold sequences.

How It Works

The core of the project is a custom OpenAI Gym environment that simulates cryptocurrency trading using OHLCV (Open, High, Low, Close, Volume) candle data. The agent observes a configurable window of this data and learns to execute actions (buy, sell, hold) to maximize profit. It employs a Dueling DQN architecture with a sparse reward system, where rewards are granted only upon closing a position or at the end of an episode, encouraging the learning of long-term dependencies. The implementation allows for flexible model definition using Keras.
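To make the environment's mechanics concrete, here is a minimal, self-contained sketch of a Gym-style trading environment with the sparse-reward scheme described above. All names (`TradingEnvSketch`, `window`, the action encoding) are hypothetical illustrations, not the repository's actual API:

```python
import numpy as np

class TradingEnvSketch:
    """Illustrative Gym-style OHLCV trading environment with sparse rewards.
    Hypothetical sketch; names and layout do not match the repo's code."""
    HOLD, BUY, SELL = 0, 1, 2

    def __init__(self, ohlcv, window=10):
        # ohlcv: array of candles, columns = open, high, low, close, volume
        self.ohlcv = np.asarray(ohlcv, dtype=float)
        self.window = window
        self.reset()

    def reset(self):
        self.t = self.window
        self.entry_price = None  # no open position
        return self._obs()

    def _obs(self):
        # Observation: the most recent `window` candles.
        return self.ohlcv[self.t - self.window:self.t]

    def step(self, action):
        price = self.ohlcv[self.t, 3]  # current close price
        reward = 0.0
        if action == self.BUY and self.entry_price is None:
            self.entry_price = price           # open a LONG position
        elif action == self.SELL and self.entry_price is not None:
            reward = price - self.entry_price  # sparse reward: only on close
            self.entry_price = None
        # Invalid sequences (e.g. buy while already long) fall through as hold.
        self.t += 1
        done = self.t >= len(self.ohlcv)
        if done and self.entry_price is not None:
            reward = price - self.entry_price  # settle open position at episode end
        return (self._obs() if not done else None), reward, done, {}
```

Note how the reward stays zero while a position is open and is only granted when the position closes (or the episode ends), which is what forces the agent to learn long-horizon credit assignment.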

Quick Start & Requirements

  • Install dependencies: pip install -r requirements.txt
  • Crucial Step: replace the installed keras-rl library's core.py with the repository's patched version at ./modified/core.py.
  • Requires Python, TensorFlow, Keras, and NumPy.
  • Sample data (5min OHLCV from BitMEX) is provided for training and testing.

Highlighted Details

  • Implements a Dueling DQN agent with configurable dueling_type ('avg', 'max', 'naive').
  • Supports both LONG and SHORT positions, with invalid action sequences (e.g., buy-buy) handled as buy-hold.
  • Offers a sparse reward scheme for learning long-term dependencies.
  • Includes an LSTM-based Keras model architecture.
  • Claims significant initial results (e.g., 29x return) on sample data, with a disclaimer about potential overfitting.
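The three `dueling_type` options correspond to the three standard ways of combining the state-value stream V(s) with the advantage stream A(s, a) in a dueling network. A small NumPy sketch (function name is illustrative, not the repo's API):

```python
import numpy as np

def dueling_q(value, advantages, dueling_type="avg"):
    """Combine state value V(s) and per-action advantages A(s, a) into
    Q-values, mirroring keras-rl's dueling_type options:
      'avg'   : Q = V + (A - mean(A))
      'max'   : Q = V + (A - max(A))
      'naive' : Q = V + A
    """
    a = np.asarray(advantages, dtype=float)
    if dueling_type == "avg":
        return value + (a - a.mean())
    if dueling_type == "max":
        return value + (a - a.max())
    if dueling_type == "naive":
        return value + a
    raise ValueError(f"unknown dueling_type: {dueling_type}")
```

The 'avg' and 'max' variants subtract a baseline so that V and A are identifiable; 'naive' skips the baseline, which is simpler but can make training less stable.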

Maintenance & Community

  • The project appears to be maintained solely by its original author, Lee Hankyol.
  • No explicit links to community channels (Discord, Slack) or a roadmap are provided in the README.

Licensing & Compatibility

  • Licensed under the MIT License.
  • Permissive for commercial use and integration with closed-source projects.

Limitations & Caveats

The README itself cautions that the headline results may reflect overfitting to the sample data. The requirement to patch the keras-rl library's core files is a significant integration hurdle, and the sparse reward scheme can lengthen training times.

Health Check
Last commit

2 years ago

Responsiveness

Inactive

Pull Requests (30d)
0
Issues (30d)
0
Star History
1 star in the last 90 days
