llmtime by ngruver

Research paper code for zero-shot time series forecasting with LLMs

created 1 year ago
789 stars

Top 45.3% on sourcepulse

Project Summary

LLMTime enables zero-shot time series forecasting by encoding numerical data as text and leveraging Large Language Models (LLMs) for extrapolation. It targets researchers and practitioners seeking to forecast time series without task-specific training, offering competitive performance against traditional methods, especially with powerful base LLMs.

How It Works

LLMTime represents time series data as textual prompts, allowing LLMs to predict future values through text completion. This approach bypasses the need for model training on the target dataset. The method's effectiveness scales with LLM capability, though RLHF-aligned models like GPT-4 may underperform base models like GPT-3, likely because alignment changes the models' output distributions and sampling behavior.
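The encoding idea above can be illustrated with a minimal sketch: rescale values, render each one as space-separated digits, and join timesteps with commas so the LLM sees a uniform stream of digit tokens. The helper name, defaults, and exact format below are illustrative assumptions, not the repository's actual API.

```python
def encode_series(values, precision=2, scale=None):
    """Render a numeric series as digit-level text for LLM completion.

    Each value is rescaled, converted to a fixed-point integer, and its
    digits are separated by spaces; timesteps are joined with ' , '.
    """
    if scale is None:
        # Normalize by the largest magnitude so digits stay short.
        scale = max(abs(v) for v in values) or 1.0
    encoded = []
    for v in values:
        as_int = round(abs(v) / scale * 10**precision)
        digits = " ".join(str(as_int))  # e.g. 123 -> "1 2 3"
        encoded.append(("-" if v < 0 else "") + digits)
    return " , ".join(encoded)

# Example: encode_series([1.0, 2.0], precision=1) yields "5 , 1 0"
```

Forecasting then amounts to prompting the LLM with this string and decoding its completion back into numbers with the inverse transform.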

Quick Start & Requirements

  • Install by running source install.sh (creates a conda environment named llmtime).
  • Requires CUDA 11.8 for PyTorch (or adjust install.sh for other CUDA versions).
  • OpenAI API key can be set via export OPENAI_API_KEY=<your key>.
  • Mistral API key can be set via export MISTRAL_KEY=<your key>.
  • Official quick start: the demo notebook demo.ipynb (no GPU required).
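As an alternative to export, the API keys the README documents can be set from Python before running the demo; the variable names below come from the README, while the placeholder values are yours to fill in.

```python
import os

# Same environment variables the README sets via `export`;
# set them before importing or calling the model wrappers.
os.environ["OPENAI_API_KEY"] = "<your key>"  # for GPT-3/3.5/4 backends
os.environ["MISTRAL_KEY"] = "<your key>"     # for the Mistral backend
```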

Highlighted Details

  • Competitive with or better than traditional forecasting methods in zero-shot settings.
  • Performance scales with the base LLM's capability, though RLHF-aligned models can regress (e.g., GPT-4 underperforming GPT-3).
  • Supports GPT-3, GPT-3.5, GPT-4, Mistral, and LLaMA 2.
  • Offers experimental replication scripts for Darts, Monash, Synthetic, Missing Values, and Memorization benchmarks.

Maintenance & Community

  • Authors: Nate Gruver, Marc Finzi, Shikai Qiu, Andrew Gordon Wilson.
  • Paper: NeurIPS 2023.

Licensing & Compatibility

  • License: Not explicitly stated in the README.
  • Compatibility: Can use OpenAI API for forecasting, avoiding local GPU requirements for those models.

Limitations & Caveats

RLHF-aligned models may show degraded performance compared to base models. The README suggests specific temperature tuning for gpt-3.5-turbo-instruct.

Health Check

  • Last commit: 11 months ago
  • Responsiveness: 1 week
  • Pull requests (30d): 0
  • Issues (30d): 0
  • Star history: 19 stars in the last 90 days

Explore Similar Projects

Starred by George Hotz (author of tinygrad; founder of the tiny corp, comma.ai) and Ross Taylor (cofounder of General Reasoning; creator of Papers with Code).

GPT2 by ConnorJL

  • 1k stars
  • GPT2 training implementation, supporting TPUs and GPUs
  • created 6 years ago, updated 2 years ago
  • Starred by George Hotz (author of tinygrad; founder of the tiny corp, comma.ai), Andrej Karpathy (founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), and 5 more.

TinyZero by Jiayi-Pan

  • 12k stars
  • Minimal reproduction of DeepSeek R1 Zero for countdown/multiplication tasks
  • created 6 months ago, updated 3 months ago