Model scanner for detecting malicious code in ML models
ModelScan protects machine learning workflows from model serialization attacks, in which malicious code is embedded inside model files and executed when the model is loaded. It is designed for ML engineers, data scientists, and MLOps professionals who need to secure their model supply chain.
How It Works
ModelScan statically analyzes model files byte-by-byte, identifying known unsafe code signatures without ever executing the model's code. This keeps scanning fast and safe, since the exploit in a malicious model only fires at load time. Detected risks are categorized from CRITICAL to LOW, enabling informed decisions about whether to use a model.
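To see why static scanning matters, the sketch below builds (but never loads) a classic pickle serialization attack: `__reduce__` instructs the unpickler to call `os.system` on load. The `MaliciousModel` class and the `UNSAFE_SIGNATURES` list are illustrative inventions, not ModelScan internals; they only show that the dangerous reference is visible in the raw bytes, which is the kind of signature a byte-level scanner can flag.

```python
import pickle


class MaliciousModel:
    """Stand-in for a poisoned model artifact (illustrative only)."""

    # pickle calls __reduce__ at serialization time; the tuple it
    # returns tells the *loader* to call os.system("echo pwned").
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))


# Serializing is harmless; the attack fires only on pickle.loads().
payload = pickle.dumps(MaliciousModel())

# A byte-level scan, in the spirit of ModelScan's approach, can flag
# the unsafe call reference without deserializing the file.
UNSAFE_SIGNATURES = [b"system", b"exec", b"eval"]
flagged = [sig for sig in UNSAFE_SIGNATURES if sig in payload]
print(flagged)
```

Crucially, the scan never calls `pickle.loads`, so the embedded command is never executed.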
Quick Start & Requirements
pip install modelscan
pip install 'modelscan[tensorflow, h5py]'  # optional extras for TensorFlow and Keras (.h5) support
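Once installed, scanning a model is a single command; the path below is a placeholder for your own model file:

```shell
# Scan a model file; findings are reported by severity
# (CRITICAL through LOW).
modelscan -p ./model.pkl
```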
Limitations & Caveats
The project is inspired by and extends PickleScan, and remains under active development. While it already supports multiple serialization formats, coverage of emerging ML frameworks and newer serialization methods may lag behind the ecosystem as it evolves.