Interactive toolkit for analyzing Transformer-based language models
The LLM Transparency Tool (LLM-TT) provides an interactive, web-based interface for dissecting the internal mechanisms of Transformer-based language models. It is designed for researchers and practitioners seeking to understand model behavior, attention patterns, and neuron activations.
How It Works
LLM-TT leverages TransformerLens to create hooks into model layers, enabling detailed analysis of token contributions and representations. Users can visualize attention head contributions, explore neuron activations within Feed-Forward Networks (FFNs), and trace information flow through the model's layers. This approach allows for granular inspection of how specific tokens influence model outputs.
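The hook mechanism can be sketched in miniature. The following is a conceptual stand-in, not TransformerLens's actual API: each layer exposes a named hook point that records its intermediate output, so activations can be inspected after a forward pass.

```python
# Conceptual sketch of layer hooks (illustrative only; TransformerLens's
# real HookPoint/HookedTransformer API differs).
class HookPoint:
    """A named pass-through that notifies registered hooks of its value."""
    def __init__(self, name):
        self.name = name
        self.hooks = []

    def __call__(self, value):
        for hook in self.hooks:
            hook(self.name, value)
        return value

class TinyLayer:
    """A stand-in 'layer' whose output is exposed via a hook point."""
    def __init__(self, idx, weight):
        self.weight = weight
        self.hook_out = HookPoint(f"layer{idx}.out")

    def forward(self, x):
        return self.hook_out(x * self.weight)

# Register a caching hook on every layer, then run a forward pass.
cache = {}
layers = [TinyLayer(i, w) for i, w in enumerate([2.0, 0.5, 3.0])]
for layer in layers:
    layer.hook_out.hooks.append(lambda name, v: cache.__setitem__(name, v))

x = 1.0
for layer in layers:
    x = layer.forward(x)
# cache now maps each hook name to that layer's intermediate activation.
```

LLM-TT's visualizations are built on activations captured in this spirit: once every layer's intermediate values are cached, per-token contributions and information flow can be read off after a single forward pass.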
Quick Start & Requirements
Run with Docker:

git clone git@github.com:facebookresearch/llm-transparency-tool.git
cd llm-transparency-tool
docker build -t llm_transparency_tool .
docker run --rm -p 7860:7860 llm_transparency_tool

Or install locally (clone as above, then create and activate the environment):

conda env create --name llmtt -f env.yaml
conda activate llmtt
pip install -e .

Build the frontend:

cd llm_transparency_tool/components/frontend
yarn install
yarn build

Launch the app:

streamlit run llm_transparency_tool/server/app.py -- config/local.json
Maintenance & Community
Developed by facebookresearch. Links to relevant research papers are provided for citation.
Licensing & Compatibility
Licensed under CC BY-NC 4.0 (Creative Commons Attribution-NonCommercial 4.0 International). This license permits sharing and adaptation with attribution but prohibits commercial use.
Limitations & Caveats
Adding support for models not already integrated with TransformerLens requires a custom implementation of the TransparentLlm class and modifications to the Streamlit application. The CC BY-NC 4.0 license rules out commercial use.
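A custom backend might take roughly the following shape. The class and method names below are illustrative assumptions, not the actual TransparentLlm interface; consult the repository's abstract base class for the real contract.

```python
# Hypothetical sketch of a custom model backend (method names are
# assumptions for illustration, NOT the tool's real TransparentLlm API).
class MyTransparentLlm:
    """Wraps a model and exposes per-layer internals to the UI."""
    def __init__(self, model_name):
        # In a real backend, load the model and tokenizer here.
        self.model_name = model_name

    def tokenize(self, text):
        # Placeholder tokenizer: whitespace split stands in for a real one.
        return text.split()

    def residual_out(self, layer, tokens):
        # Placeholder: a real backend would return the residual-stream
        # activations for `tokens` at the given layer.
        return [0.0] * len(tokens)
```

The key design point is that the tool talks to models only through this interface, so any architecture can be plugged in as long as it can surface tokenization and per-layer activations.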
The repository last saw activity roughly nine months ago and appears inactive.