Paper collection for large language models (LLMs)
This repository serves as a curated collection of papers and resources related to Large Language Models (LLMs), focusing on foundational research and evaluation methodologies. It is intended for researchers, engineers, and practitioners in the NLP and AI fields seeking a comprehensive overview of LLM advancements, particularly concerning models like OpenAI's GPT series and Meta's Llama.
How It Works
The repository organizes a large body of academic papers, categorized by LLM capability and research area, such as reasoning, instruction tuning, retrieval augmentation, and multimodal applications. An automated script built on Auto-Bibfile regenerates the README from BibTeX entries and JSON data, keeping the compilation structured and up to date.
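As a rough illustration of this regeneration step, the sketch below groups paper entries from a JSON file into per-category markdown sections. It is a minimal, hypothetical example: the file name papers.json, the field names (title, url, venue, category), and the output layout are all assumptions, and the actual Auto-Bibfile script may work differently.

```python
# Minimal, hypothetical sketch of a README-regeneration step.
# The real Auto-Bibfile tooling and data layout may differ.
import json
from collections import defaultdict
from pathlib import Path


def render_readme(papers_path: str = "papers.json", out_path: str = "README.md") -> None:
    """Group paper entries by category and emit one markdown section per category."""
    papers = json.loads(Path(papers_path).read_text(encoding="utf-8"))

    # Assumed entry fields: title, url, venue (optional), category (optional).
    by_category = defaultdict(list)
    for paper in papers:
        by_category[paper.get("category", "Uncategorized")].append(paper)

    lines = ["# Paper collection for large language models (LLMs)", ""]
    for category in sorted(by_category):
        lines.append(f"## {category}")
        for p in sorted(by_category[category], key=lambda x: x.get("title", "")):
            venue = f" ({p['venue']})" if p.get("venue") else ""
            lines.append(f"- [{p['title']}]({p['url']}){venue}")
        lines.append("")

    Path(out_path).write_text("\n".join(lines), encoding="utf-8")


if __name__ == "__main__":
    render_readme()
```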
Maintenance & Community
The repository is maintained by Runzhe Wang and Shenyu Zhang, with contributions from a dedicated team of paper collectors and organizers, including Guilin Qi and Xiaofang Qi.
Licensing & Compatibility
The repository itself does not specify a license, but it links to external academic papers, each with its own licensing and usage terms.
Limitations & Caveats
This repository is a collection of links to external research papers and does not provide any executable code or models. The sheer volume of papers means it is a starting point for exploration rather than an exhaustive, annotated guide.