Curated list of resources arguing against Transformers for time series forecasting
This repository serves as a curated collection of research papers, code, and articles that critically evaluate the efficacy of Transformer and Large Language Model (LLM) architectures for time series forecasting. It aims to give practitioners and researchers who may be over-relying on these complex models concrete evidence and alternatives, highlighting simpler, non-Transformer methods that match or beat state-of-the-art (SOTA) results.
How It Works
The repository compiles a comprehensive list of academic publications, many with accompanying code, that challenge the prevailing narrative around Transformers and LLMs in time series forecasting. It showcases research demonstrating that simpler models, such as MLPs, CNNs, and linear models, often achieve comparable or superior results with greater efficiency and interpretability, particularly on long-term forecasting tasks. Frequency-domain analysis and series decomposition recur throughout the collection as the key techniques behind these simpler models' strong results; a sketch of one such model follows.
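To make the "simple baseline" idea concrete, here is a minimal, illustrative decomposition-plus-linear forecaster in the spirit of the DLinear baseline from "Are Transformers Effective for Time Series Forecasting?". It assumes PyTorch; the class names and hyperparameters (lookback, horizon, kernel_size) are illustrative choices, not taken from any specific repository in this list.

```python
# Illustrative sketch of a decomposition + linear forecaster (DLinear-style).
# Assumes PyTorch; names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class MovingAverage(nn.Module):
    """Extract the trend component with a simple moving average."""
    def __init__(self, kernel_size: int):
        super().__init__()
        self.kernel_size = kernel_size
        self.avg = nn.AvgPool1d(kernel_size=kernel_size, stride=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback, channels); pad both ends by repeating the
        # boundary values so the smoothed output keeps the input length.
        front = x[:, :1, :].repeat(1, (self.kernel_size - 1) // 2, 1)
        back = x[:, -1:, :].repeat(1, self.kernel_size // 2, 1)
        padded = torch.cat([front, x, back], dim=1)
        # AvgPool1d expects (batch, channels, time).
        return self.avg(padded.transpose(1, 2)).transpose(1, 2)


class DecompositionLinear(nn.Module):
    """Split the series into trend + seasonal parts and forecast each
    with a single linear map along the time axis."""
    def __init__(self, lookback: int, horizon: int, kernel_size: int = 25):
        super().__init__()
        self.trend_extractor = MovingAverage(kernel_size)
        self.linear_trend = nn.Linear(lookback, horizon)
        self.linear_seasonal = nn.Linear(lookback, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback, channels)
        trend = self.trend_extractor(x)
        seasonal = x - trend
        # Linear maps act on the time dimension, independently per channel.
        out = (self.linear_trend(trend.transpose(1, 2))
               + self.linear_seasonal(seasonal.transpose(1, 2)))
        return out.transpose(1, 2)  # (batch, horizon, channels)


if __name__ == "__main__":
    model = DecompositionLinear(lookback=96, horizon=24)
    history = torch.randn(8, 96, 7)   # batch of 8 multivariate series, 7 channels
    forecast = model(history)
    print(forecast.shape)             # torch.Size([8, 24, 7])
```

Models of roughly this size and shape are what several listed papers report outperforming Transformer-based forecasters on long-horizon benchmarks, which is the central claim the collection documents.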
Quick Start & Requirements
There is no single package to install for the list itself; each listed paper ships its own code, so clone the relevant repository and install its dependencies, typically with:
pip install -r requirements.txt
Highlighted Details
Maintenance & Community
Last updated 1 week ago; the project is currently flagged as inactive.
Licensing & Compatibility
Limitations & Caveats
This repository is a curated list of research and does not provide a unified framework or single executable. Users must individually assess and implement the code for each paper. The "best" model is context-dependent, and direct comparisons across all listed papers may require significant effort.