lit by PAIR-code

Interactive ML model analysis tool for understanding model behavior

Created 5 years ago
3,634 stars

Top 13.1% on SourcePulse

View on GitHub
Project Summary

The Learning Interpretability Tool (LIT) provides an interactive, extensible, and framework-agnostic interface for analyzing machine learning models across text, image, and tabular data. It empowers researchers and practitioners to understand model behavior, identify failure modes, and debug predictions through a browser-based UI.

How It Works

LIT offers a suite of debugging workflows, including local explanations (salience maps), aggregate analysis (custom metrics, embedding visualization), counterfactual generation, and side-by-side model comparison. Its framework-agnostic design supports TensorFlow, PyTorch, and various model types (classification, regression, seq2seq, etc.), facilitating deep dives into model decision-making.

Quick Start & Requirements

  • Install: pip install lit-nlp
  • Optional Dependencies: pip install 'lit-nlp[examples-discriminative-ai]' or pip install 'lit-nlp[examples-generative-ai]' for demo-specific packages.
  • Python Version: 3.9+ required for building from source.
  • Quickstart Demo: python -m lit_nlp.examples.glue.demo --port=5432
  • Documentation: https://github.com/PAIR-code/lit
  • Demos: https://pair-code.github.io/lit/

Highlighted Details

  • Supports text, image, and tabular data.
  • Framework-agnostic (TensorFlow, PyTorch).
  • Extensible with custom components (interpretability methods, generators).
  • Includes features for aggregate analysis, counterfactuals, and model comparison.

Maintenance & Community

LIT is an active research project with contributions from Google. Community engagement is encouraged via the Discussions page.

Licensing & Compatibility

The project is not explicitly licensed in the README. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

LIT is a research project under active development, so breaking changes and incomplete features are possible. The absence of explicit licensing information may be a barrier to commercial adoption.

Health Check

  • Last Commit: 5 days ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 3
  • Issues (30d): 0
  • Star History: 8 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Chaoyu Yang (Founder of Bento), and 1 more.

OmniXAI by salesforce

Top 0.1% on SourcePulse · 963 stars
Python library for explainable AI (XAI)
Created 4 years ago · Updated 1 year ago
Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Travis Addair (Cofounder of Predibase), and 4 more.

alibi by SeldonIO

Top 0.1% on SourcePulse · 3k stars
Python library for ML model inspection and interpretation
Created 7 years ago · Updated 4 months ago