PPO agent for autonomous driving in the CARLA simulator
This project provides a framework for training deep reinforcement learning agents for autonomous driving in the CARLA simulator. It targets researchers and developers looking to experiment with RL-based driving agents, offering a gym-like environment with custom reward functions and analysis tools for faster iteration and hypothesis testing.
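The "gym-like" interface means training code interacts with the environment through the familiar reset/step loop. A minimal sketch of that pattern, using a stand-in stub rather than the project's real environment classes (the observation size, action layout, and reward here are illustrative assumptions, not taken from the project):

```python
import numpy as np

class StubDrivingEnv:
    """Stand-in stub for a gym-like CARLA driving environment."""
    def __init__(self, max_steps=100):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.t = 0
        return np.zeros(67, dtype=np.float32)  # hypothetical observation size

    def step(self, action):
        """Advance one tick; action is a hypothetical [steer, throttle] pair."""
        self.t += 1
        obs = np.random.randn(67).astype(np.float32)
        reward = 1.0 - abs(float(action[0]))  # illustrative: reward centered steering
        done = self.t >= self.max_steps
        return obs, reward, done, {}

env = StubDrivingEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = np.array([0.0, 0.5])  # drive straight at half throttle
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

The real environments follow this loop shape, which is what lets RL training code iterate quickly against them.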
How It Works
The core approach involves training a Variational Autoencoder (VAE) to reconstruct semantic segmentation maps, which are then used to encode the CARLA environment's state. This encoded representation, combined with vehicle measurements (steering, throttle, speed), serves as input to a Proximal Policy Optimization (PPO) agent. This VAE-based state representation is claimed to significantly improve RL agent performance compared to using raw RGB input.
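The agent's input described above can be sketched as a simple concatenation of the VAE's latent code with the scalar vehicle measurements. A minimal numpy sketch (the 64-dimensional latent size and the measurement ordering are assumptions for illustration, not taken from the project):

```python
import numpy as np

LATENT_DIM = 64  # assumed VAE latent size, for illustration only

def encode_observation(z_latent, steering, throttle, speed):
    """Concatenate the VAE latent code with scalar vehicle measurements
    to form the input vector for the PPO agent."""
    measurements = np.array([steering, throttle, speed], dtype=np.float32)
    return np.concatenate([z_latent.astype(np.float32), measurements])

z = np.zeros(LATENT_DIM)  # stand-in for the VAE encoder's output
state = encode_observation(z, steering=0.1, throttle=0.6, speed=8.3)
print(state.shape)  # (67,)
```

The point of the VAE step is that the policy sees a compact, low-dimensional code of the semantic layout of the scene instead of raw pixels, which shrinks the input the PPO network must learn from.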
Quick Start & Requirements
Build CARLA with make package, potentially including the Town07 map. Then evaluate the pretrained agent or train your own:
python run_eval.py --model_name pretrained_agent -start_carla
python train.py --model_name name_of_your_model -start_carla
Highlighted Details
Two environments are provided: CarlaLapEnv for lap following and CarlaRouteEnv for point-to-point navigation.
Maintenance & Community
The project appears to be a master's thesis project from NTNU, with the primary author listed as Marcus Loo Vergara. No active community channels or ongoing maintenance efforts are explicitly mentioned in the README.
Licensing & Compatibility
The README does not explicitly state a license. The code uses TensorFlow and OpenAI Gym, which have permissive licenses. However, the CARLA simulator itself has its own licensing terms.
Limitations & Caveats
The environment is not strictly deterministic, even in synchronous mode. The environment implementation does not strictly adhere to the OpenAI gym API, so standard gym-based algorithms require modifications to use it directly. The project primarily targets CARLA version 0.9.5 and the Town07 map.
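Since the environments deviate from the standard gym API, one common way to reuse off-the-shelf gym algorithms is a thin adapter wrapper. A hypothetical sketch of that pattern (the README does not detail the actual deviations; this only shows the wrapping idea, here normalizing step/reset return shapes):

```python
class GymCompatWrapper:
    """Hypothetical adapter that normalizes a non-standard environment's
    reset/step returns into the classic gym contract:
    reset() -> obs, step(a) -> (obs, reward, done, info).
    The exact deviations of the project's environments are not documented
    in this summary; this sketch only illustrates the wrapping pattern."""

    def __init__(self, env):
        self.env = env

    def reset(self):
        out = self.env.reset()
        # Some envs return (obs, info) from reset; keep only the observation.
        return out[0] if isinstance(out, tuple) else out

    def step(self, action):
        out = self.env.step(action)
        if len(out) == 3:  # e.g. the env omits the info dict
            obs, reward, done = out
            return obs, reward, done, {}
        return out
```

With such a wrapper in place, the environment can be handed to libraries that assume the standard four-tuple interface.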
Last updated roughly 3 years ago; the repository appears inactive.