pet by timoschick

Code for the pattern-exploiting training (PET) research paper

created 5 years ago
1,627 stars

Top 25.8% on SourcePulse

View on GitHub
Project Summary

Pattern-Exploiting Training (PET) offers a semi-supervised approach for few-shot text classification and natural language inference by reformulating examples as cloze questions. It targets researchers and practitioners working with limited labeled data, demonstrating significant performance gains over standard supervised methods and even larger models like GPT-3, while requiring substantially fewer parameters.

How It Works

PET reformulates tasks into cloze-style questions, where a language model predicts a masked token based on a pattern and a verbalizer mapping labels to words. The iPET variant iteratively refines models and can operate with zero training examples. This approach leverages the masked language modeling objective to adapt models to specific tasks with minimal labeled data, outperforming traditional fine-tuning in low-resource scenarios.
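As a rough sketch of that reformulation (illustrative only: the model, pattern, and verbalizer below are arbitrary choices, not the repository's API), a masked language model can score verbalized labels directly:

    # Illustrative cloze-style classification, assuming Hugging Face Transformers
    # and a generic BERT checkpoint; this mirrors the PET idea, not the repo's API.
    import torch
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

    def pattern(text):
        # Pattern: wrap the input in a cloze template with a single mask token.
        return f"{text} It was {tokenizer.mask_token}."

    # Verbalizer: map each label to one word the masked LM can predict.
    verbalizer = {"positive": "great", "negative": "terrible"}

    text = "The pizza was cold and the service was slow."
    inputs = tokenizer(pattern(text), return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1].item()

    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]

    # Score each label by the logit of its verbalized word at the mask position.
    scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
              for label, word in verbalizer.items()}
    print(max(scores, key=scores.get))

In PET, models fine-tuned this way on a few labeled examples per pattern are then used as an ensemble to soft-label unlabeled data, on which a standard classifier is trained.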

Quick Start & Requirements

  • Install dependencies: pip install -r requirements.txt
  • Run training/evaluation: python3 cli.py --method <pet|ipet|sequence_classifier> ...
  • Requires Python, PyTorch, and Hugging Face Transformers. Specific model requirements (e.g., XLNet for long sequences) apply.
  • See CLI Usage for detailed command arguments; a fuller example invocation is sketched below.
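A representative run looks like the following sketch; the flag names follow the repository's cli.py/README, while the model, task name, and paths are placeholders to adapt to your own setup:

    python3 cli.py \
      --method pet \
      --pattern_ids 0 1 2 3 \
      --data_dir /path/to/task/data \
      --model_type albert \
      --model_name_or_path albert-xxlarge-v2 \
      --task_name yelp-polarity \
      --output_dir /path/to/output \
      --do_train \
      --do_eval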

Highlighted Details

  • Achieves state-of-the-art few-shot performance on SuperGLUE tasks.
  • Outperforms GPT-3 in low-resource settings with 99.9% fewer parameters.
  • Supports custom tasks by defining DataProcessors and Pattern-Verbalizers (PVPs); see the sketch after this list.
  • Iterative PET (iPET) can train models with zero labeled examples.
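For the custom-task bullet above, the pieces to supply are a data processor that yields labeled examples and a pattern-verbalizer pair (PVP). The outline below sketches that structure in plain Python; it is not the repository's actual base classes or registration mechanism, so consult its README for the real interfaces.

    # Hypothetical outline of the two components a custom PET task needs;
    # class and method names here are illustrative, not the repo's API.
    class MyTaskDataProcessor:
        """Loads the labeled examples for the custom task."""

        def get_labels(self):
            return ["positive", "negative"]

        def get_train_examples(self, data_dir):
            # A real implementation would read files under data_dir.
            return [("Great value and friendly staff.", "positive"),
                    ("Never coming back here.", "negative")]

    class MyTaskPVP:
        """Turns an example into a cloze question and maps labels to words."""

        VERBALIZER = {"positive": "great", "negative": "terrible"}

        def get_parts(self, text, mask_token):
            # Pattern: "<text> It was <mask>."
            return f"{text} It was {mask_token}."

        def verbalize(self, label):
            return self.VERBALIZER[label]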

Maintenance & Community

The project is associated with Timo Schick and Hinrich Schütze. No specific community channels (Discord/Slack) or active development signals are mentioned in the README.

Licensing & Compatibility

The repository does not explicitly state a license. Users should verify licensing for commercial use or integration into closed-source projects.

Limitations & Caveats

The experimental MultiMaskTaskHelper for multi-token verbalizers has limitations, including a batch size of 1 for evaluation and potential scaling issues with long verbalizers. It has only been tested with PET, not iPET.

Health Check

  • Last commit: 2 years ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 1 star in the last 30 days
