NLP library based on HuggingFace Transformers
HugNLP is a comprehensive NLP library built on the Hugging Face Transformers ecosystem, designed to make NLP research more convenient and effective. It offers a unified framework for developing and applying a wide range of NLP tasks, including sequence classification, information extraction, and code understanding, with a focus on knowledge-enhanced pre-training, prompt-based fine-tuning, and instruction tuning.
How It Works
HugNLP integrates several advanced NLP paradigms. It introduces KP-PLM for knowledge-enhanced pre-training by decomposing knowledge sub-graphs into language prompts. For fine-tuning, it supports prompt-based methods like PET and P-tuning, and parameter-efficient techniques such as LoRA and Adapter-tuning. The library also unifies tasks into extractive, inference, or generative formats for instruction tuning and in-context learning, enabling few/zero-shot capabilities. Additionally, it implements uncertainty-aware self-training to mitigate noise in semi-supervised learning.
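As a minimal sketch of the prompt-based paradigm, the snippet below shows PET-style classification using plain Hugging Face Transformers rather than HugNLP's own trainers: a cloze template turns classification into masked-token prediction, and a verbalizer maps labels to vocabulary tokens. The template, verbalizer, and checkpoint name are illustrative assumptions, not HugNLP defaults.

# PET-style prompt classification sketch (plain Transformers, not HugNLP's API).
# The template, verbalizer, and model name are assumptions for illustration.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Cloze template: classification becomes predicting the [MASK] token.
text = "The movie was a delight from start to finish."
prompt = f"{text} It was [MASK]."

# Verbalizer: map each label to a single vocabulary token.
verbalizer = {"positive": "great", "negative": "terrible"}
label_token_ids = {
    label: tokenizer.convert_tokens_to_ids(tok)
    for label, tok in verbalizer.items()
}

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    # Logits over the vocabulary at the [MASK] position.
    mask_logits = model(**inputs).logits[0, mask_pos]

# Score each label by its verbalizer token's logit and pick the best.
scores = {label: mask_logits[0, tid].item() for label, tid in label_token_ids.items()}
print(max(scores, key=scores.get))  # expected: "positive"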
Quick Start & Requirements
git clone https://github.com/HugAILab/HugNLP.git
cd HugNLP
python3 setup.py install
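After installation, the upstream README describes a HugIE API for prompt-based entity and relation extraction. The sketch below follows that pattern, but the import path, checkpoint name, and request signature are assumptions that should be verified against the current repository.

# Hypothetical usage following the HugIE pattern from the upstream README;
# verify the import path, checkpoint, and method signature before relying on it.
from applications.information_extraction.HugIE.api_test import HugIEAPI

model_type = "bert"
hugie_model_name_or_path = "wjn1996/wjn1996-hugnlp-hugie-large-zh"  # assumed checkpoint
hugie = HugIEAPI(model_type, hugie_model_name_or_path)

text = "HugNLP was released by HugAILab in 2023."
entity = "HugNLP"          # query: extract information about this entity
relation = "release year"  # optional relation to constrain the extraction
predictions, topk_predictions = hugie.request(text, entity, relation=relation)
print(predictions)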
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project is still under development and may contain bugs. Users are encouraged to report issues and contribute pull requests.