lit by PAIR-code

Interactive ML model analysis tool for understanding model behavior

created 5 years ago
3,580 stars

Top 13.8% on sourcepulse

Project Summary

The Learning Interpretability Tool (LIT) provides an interactive, extensible, and framework-agnostic interface for analyzing machine learning models across text, image, and tabular data. It empowers researchers and practitioners to understand model behavior, identify failure modes, and debug predictions through a browser-based UI.

How It Works

LIT offers a suite of debugging workflows, including local explanations (salience maps), aggregate analysis (custom metrics, embedding visualization), counterfactual generation, and side-by-side model comparison. Its framework-agnostic design works with TensorFlow, PyTorch, and other frameworks, and supports many model types (classification, regression, seq2seq, and more), enabling deep dives into model decision-making.
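
Concretely, plugging a model into LIT means implementing its Python API: a Dataset that declares its fields, a Model that declares input/output specs and a predict function, and a dev_server.Server that ties them together. The sketch below is a minimal illustration, not LIT's own example code: ToyDataset, ToyModel, LABELS, and the rule-based scoring are all hypothetical placeholders, and method names (predict vs. the older predict_minibatch) vary across LIT releases.

    # Minimal sketch: wrap a toy classifier for LIT. All names here
    # (ToyDataset, ToyModel, LABELS) are illustrative, not part of LIT.
    from lit_nlp import dev_server
    from lit_nlp import server_flags
    from lit_nlp.api import dataset as lit_dataset
    from lit_nlp.api import model as lit_model
    from lit_nlp.api import types as lit_types

    LABELS = ["negative", "positive"]

    class ToyDataset(lit_dataset.Dataset):
        """A few labeled examples; LIT reads them from self._examples."""

        def __init__(self):
            self._examples = [
                {"text": "a great movie", "label": "positive"},
                {"text": "a dull movie", "label": "negative"},
            ]

        def spec(self):
            return {
                "text": lit_types.TextSegment(),
                "label": lit_types.CategoryLabel(vocab=LABELS),
            }

    class ToyModel(lit_model.Model):
        """Hides the underlying framework behind LIT's Model interface."""

        def input_spec(self):
            return {"text": lit_types.TextSegment()}

        def output_spec(self):
            return {"probas": lit_types.MulticlassPreds(vocab=LABELS, parent="label")}

        def predict(self, inputs):
            # Replace this toy rule with a real TensorFlow/PyTorch forward pass.
            for ex in inputs:
                score = 1.0 if "great" in ex["text"] else 0.0
                yield {"probas": [1.0 - score, score]}

    if __name__ == "__main__":
        server = dev_server.Server(
            models={"toy": ToyModel()},
            datasets={"toy_data": ToyDataset()},
            **server_flags.get_flags(),
        )
        server.serve()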

Quick Start & Requirements

  • Install: pip install lit-nlp
  • Optional Dependencies: pip install 'lit-nlp[examples-discriminative-ai]' or pip install 'lit-nlp[examples-generative-ai]' for demo-specific packages.
  • Python Version: 3.9+ required for building from source.
  • Quickstart Demo: python -m lit_nlp.examples.glue.demo --port=5432 (for in-notebook use, see the sketch after this list)
  • Documentation: https://github.com/PAIR-code/lit
  • Demos: https://pair-code.github.io/lit/
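
LIT can also be embedded in a Jupyter or Colab cell via lit_nlp.notebook rather than run as a standalone server. The snippet below is a hedged sketch reusing the hypothetical ToyModel and ToyDataset from the earlier example; the exact placement of the height argument may differ between releases.

    # Hedged sketch: embed LIT in a notebook output cell instead of
    # running a standalone server. ToyModel/ToyDataset are the
    # illustrative classes from the sketch above, standing in for
    # your own wrapped model and data.
    from lit_nlp import notebook

    widget = notebook.LitWidget(
        models={"toy": ToyModel()},
        datasets={"toy_data": ToyDataset()},
    )
    widget.render(height=600)  # renders the LIT UI inline in the cell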

Highlighted Details

  • Supports text, image, and tabular data.
  • Framework-agnostic (TensorFlow, PyTorch).
  • Extensible with custom components (interpretability methods, generators); see the sketch after this list.
  • Includes features for aggregate analysis, counterfactuals, and model comparison.
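
As an illustration of that extensibility, a custom counterfactual generator is a small class with a generate hook. The sketch below is hypothetical (WordSwapGenerator and its swap table are made up); only the Generator base class in lit_nlp.api.components comes from LIT, and its signature can vary by release.

    # Hedged sketch of a custom counterfactual generator. WordSwapGenerator
    # and SWAPS are hypothetical; only the Generator base class and its
    # generate() hook come from LIT.
    from lit_nlp.api import components as lit_components

    class WordSwapGenerator(lit_components.Generator):
        """Creates counterfactuals by swapping sentiment-bearing words."""

        SWAPS = {"great": "terrible", "terrible": "great"}

        def generate(self, example, model, dataset, config=None):
            counterfactuals = []
            text = example.get("text", "")
            for old, new in self.SWAPS.items():
                if old in text:
                    edited = dict(example)
                    edited["text"] = text.replace(old, new)
                    counterfactuals.append(edited)
            return counterfactuals

Assuming the server accepts a generators argument, as in current LIT releases, passing generators={"word_swap": WordSwapGenerator()} to dev_server.Server would surface this component in the UI's counterfactual controls.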

Maintenance & Community

LIT is an active research project developed by Google's PAIR team. Community engagement is encouraged via the repository's Discussions page.

Licensing & Compatibility

The project is not explicitly licensed in the README. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

LIT is a research project under active development, so breaking changes and incomplete features are possible. The absence of explicit licensing information in the README may pose a barrier to commercial adoption.

Health Check

  • Last commit: 5 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 1
  • Issues (30d): 1
  • Star History: 45 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Elie Bursztein (Cybersecurity Lead at Google DeepMind), and 1 more.

alibi by SeldonIO

Python library for ML model inspection and interpretation

created 6 years ago
updated 1 month ago
3k stars

Top 0.1% on sourcepulse