Python framework for NLP adversarial attacks, data augmentation, and model training
TextAttack is a Python framework for researchers and practitioners in Natural Language Processing (NLP) to generate adversarial examples, augment datasets, and train NLP models. It provides a unified interface for understanding, developing, and benchmarking adversarial attack methods against NLP models, with the goal of improving model robustness and interpretability.
How It Works
TextAttack modularizes adversarial attacks into four key components: Goal Functions (defining attack success), Constraints (validating perturbations), Transformations (generating modifications), and Search Methods (navigating the perturbation space). This design allows existing attacks from the literature to be assembled and novel attacks to be created by recombining these components, enabling model-agnostic analysis of any NLP model that accepts string inputs.
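As an illustration of how the four components fit together, the sketch below assembles a word-substitution attack from TextAttack's building blocks. The particular goal function, constraints, transformation, search method, and HuggingFace checkpoint chosen here are illustrative, not the only possible combination.

```python
import transformers
from textattack import Attack
from textattack.models.wrappers import HuggingFaceModelWrapper
from textattack.goal_functions import UntargetedClassification
from textattack.constraints.pre_transformation import RepeatModification, StopwordModification
from textattack.constraints.semantics import WordEmbeddingDistance
from textattack.transformations import WordSwapEmbedding
from textattack.search_methods import GreedyWordSwapWIR

# Wrap any model that maps strings to predictions; here, a HuggingFace classifier.
model = transformers.AutoModelForSequenceClassification.from_pretrained(
    "textattack/bert-base-uncased-imdb"
)
tokenizer = transformers.AutoTokenizer.from_pretrained("textattack/bert-base-uncased-imdb")
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Goal function: the attack succeeds when the predicted label changes.
goal_function = UntargetedClassification(model_wrapper)
# Constraints: reject perturbations that re-edit the same word, modify stopwords,
# or swap in words whose embeddings are too dissimilar from the original.
constraints = [
    RepeatModification(),
    StopwordModification(),
    WordEmbeddingDistance(min_cos_sim=0.8),
]
# Transformation: replace words with nearest neighbors in embedding space.
transformation = WordSwapEmbedding(max_candidates=50)
# Search method: greedy word swaps ordered by word importance ranking.
search_method = GreedyWordSwapWIR(wir_method="delete")

attack = Attack(goal_function, constraints, transformation, search_method)
print(attack)  # prints a summary of the assembled attack
```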
Quick Start & Requirements
pip install textattack
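After installation, a prebuilt attack recipe can be run against a model and dataset in a few lines. The sketch below is one possible quick start, assuming the textattack/bert-base-uncased-imdb checkpoint and the IMDB dataset are available from the HuggingFace Hub; any recipe, model wrapper, and dataset could be substituted.

```python
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Load a fine-tuned classifier and wrap it for TextAttack.
model = transformers.AutoModelForSequenceClassification.from_pretrained(
    "textattack/bert-base-uncased-imdb"
)
tokenizer = transformers.AutoTokenizer.from_pretrained("textattack/bert-base-uncased-imdb")
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build the TextFooler recipe and attack 10 test examples, logging results to CSV.
attack = TextFoolerJin2019.build(model_wrapper)
dataset = HuggingFaceDataset("imdb", split="test")
attacker = Attacker(attack, dataset, AttackArgs(num_examples=10, log_to_csv="results.csv"))
attacker.attack_dataset()
```

The command-line interface exposes the same workflow, for example: textattack attack --model bert-base-uncased-mr --recipe textfooler --num-examples 100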
Highlighted Details
Maintenance & Community
Contribution guidelines are provided in CONTRIBUTING.md.
Licensing & Compatibility
TextAttack is released under the MIT License.
Limitations & Caveats