PyTorch code for the "Driving with LLMs" autonomous driving research paper
This repository provides a PyTorch implementation for "Driving with LLMs," a system that fuses object-level vector data with pre-trained Large Language Models (LLMs) to predict explainable autonomous driving actions. It targets researchers and engineers in autonomous driving, offering a robust and interpretable approach to decision-making.
How It Works
The LLM-Driver takes object-level vector inputs from a driving simulator and feeds them into a pre-trained LLM. From this fused representation, the model predicts steering angle and acceleration/braking commands, generates natural-language justifications for those actions, and answers driving-related questions. Fusing structured vector data with LLM capabilities aims to make the driving system's decisions more explainable and interpretable.
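To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of this fusion pattern. It is not the repository's actual implementation; names such as `VectorLLMDriver`, `object_proj`, and `action_head` are illustrative. Per-object vectors are projected into the LLM's token embedding space and consumed alongside the text prompt, with a small regression head producing the driving actions:

```python
import torch
import torch.nn as nn


class VectorLLMDriver(nn.Module):
    """Hypothetical sketch of fusing object-level vectors with a pre-trained LLM.

    `llm` is any causal transformer exposing `get_input_embeddings()` and
    accepting `inputs_embeds` (e.g. a Hugging Face causal LM). All names and
    dimensions here are illustrative, not the repository's API.
    """

    def __init__(self, llm: nn.Module, object_dim: int = 32):
        super().__init__()
        self.llm = llm
        hidden = llm.get_input_embeddings().embedding_dim
        # Project each object's state vector (position, velocity, ...) into
        # the LLM's token embedding space, so objects act as extra "tokens".
        self.object_proj = nn.Linear(object_dim, hidden)
        # Regression head on the final hidden state: [steering, accel/brake].
        self.action_head = nn.Linear(hidden, 2)

    def forward(self, object_vectors: torch.Tensor, prompt_ids: torch.Tensor):
        # object_vectors: (batch, num_objects, object_dim) from the simulator
        # prompt_ids:     (batch, prompt_len) tokenized instruction/question
        obj_embeds = self.object_proj(object_vectors)
        txt_embeds = self.llm.get_input_embeddings()(prompt_ids)
        inputs = torch.cat([obj_embeds, txt_embeds], dim=1)
        out = self.llm(inputs_embeds=inputs, output_hidden_states=True)
        last_hidden = out.hidden_states[-1][:, -1]  # final token's state
        actions = self.action_head(last_hidden)     # steering, accel/brake
        return actions, out.logits                  # logits decode to text
```

In this sketch, decoding `out.logits` would yield the natural-language justification or answer, while `actions` supplies the control commands.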
Quick Start & Requirements
Install dependencies with `pip install -r requirements.txt.lock`, then extract the provided datasets, `data/vqa_train_10k.tar.gz` and `data/vqa_test_1k.tar.gz`, before running evaluation or training.
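For reference, a minimal Python sketch for unpacking those archives (paths are assumed from the filenames above; adjust to your checkout):

```python
import tarfile
from pathlib import Path

# Unpack the training and evaluation VQA archives in place.
for archive in ("data/vqa_train_10k.tar.gz", "data/vqa_test_1k.tar.gz"):
    path = Path(archive)
    if not path.exists():
        print(f"missing {path}; see the repository for download instructions")
        continue
    with tarfile.open(path, "r:gz") as tar:
        tar.extractall(path.parent)  # contents land alongside the archive
    print(f"extracted {path}")
```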
Highlighted Details
Key specifics: the model outputs steering and acceleration/braking commands with natural-language justifications, and it needs roughly 20GB of VRAM for evaluation and 40GB for training.
Maintenance & Community
The paper was published at ICRA 2024, and the authors have since released a follow-up work, LingoQA. The codebase draws inspiration from the Alpaca LoRA repository.
Licensing & Compatibility
The repository does not explicitly state a license in the provided README.
Limitations & Caveats
The project requires significant VRAM (20GB for evaluation, 40GB for training), which may be a barrier for users with limited hardware. The absence of an explicit license also creates legal uncertainty for commercial use or closed-source integration.