Workshop for training and fine-tuning large language models
This repository provides comprehensive materials for a full-day workshop on training and fine-tuning Large Language Models (LLMs), targeting data scientists and ML engineers. It offers hands-on notebooks and presentations covering essential LLM concepts, from basic embeddings and prompt engineering to advanced techniques like parameter-efficient fine-tuning (PEFT) and reinforcement learning from human feedback (RLHF).
How It Works
The workshop is structured into five modules, progressing from foundational knowledge to advanced alignment techniques. It utilizes Hugging Face's Transformers and PEFT libraries, demonstrating practical applications with models like Phi-3 Mini, Llama 3.1, and GPT-2. The approach emphasizes hands-on coding within Jupyter notebooks, complemented by conceptual explanations in presentation slides, enabling participants to build and adapt LLMs for various tasks.
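Since the workshop builds on Transformers and PEFT, a minimal sketch of the kind of LoRA setup it teaches may help orient readers. The model name, rank, and target modules below are illustrative assumptions, not the workshop's exact configuration.

```python
# Minimal LoRA fine-tuning setup sketch with Hugging Face Transformers and PEFT.
# Hyperparameters here are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # one of the workshop's models; Phi-3 Mini or Llama 3.1 follow the same pattern
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=8,                        # low-rank adapter dimension
    lora_alpha=16,              # scaling factor for adapter updates
    target_modules=["c_attn"],  # GPT-2's attention projection; differs per architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # LoRA trains only a small fraction of the weights
```

With the adapter attached, the model can be passed to a standard Transformers `Trainer` loop; only the adapter weights receive gradient updates, which is what makes the approach parameter-efficient.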
Quick Start & Requirements
Run the *_Install_Requirements.ipynb notebook to install the required libraries, along the lines of the sketch below.
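For a rough idea of what such a notebook installs: the package list below is an assumption based on the libraries the workshop uses, not the repository's pinned requirements.

```python
# Hypothetical notebook cell; the actual *_Install_Requirements.ipynb may pin
# different packages and versions.
%pip install transformers peft datasets accelerate trl
```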
Highlighted Details
Maintenance & Community
The repository was last updated 5 months ago and is currently marked inactive.
Licensing & Compatibility
Limitations & Caveats