Research paper code for improving LLM reasoning via layer-selective rank reduction
This repository provides code for LASER (Layer-Selective Rank Reduction), a method to improve Large Language Model (LLM) reasoning capabilities by replacing specific weight matrices with their low-rank approximations. It targets researchers and practitioners seeking to enhance LLM performance on tasks like question answering without extensive retraining.
How It Works
LASER intervenes in transformer layers by applying Singular Value Decomposition (SVD) to selected weight matrices, then reconstructing them using a specified fraction of the largest singular values. This process is controlled by three hyperparameters: the target layer (ℓ), the parameter type (τ, e.g., MLP or attention weights), and the rank retention fraction (ρ). This approach is advantageous as it can significantly boost performance with minimal computational overhead and no additional training.
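The rank-reduction step described above can be sketched as follows. This is a minimal NumPy illustration of replacing a weight matrix with its low-rank SVD approximation, assuming a retention fraction ρ; the function name and shapes are illustrative and not the repository's actual API.

```python
import numpy as np

def low_rank_approx(W, rho):
    """Replace W with its rank-k approximation, keeping a fraction
    rho of the largest singular values (the LASER-style reduction)."""
    # SVD: W = U @ diag(S) @ Vt, with S sorted in descending order
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep at least one singular value
    k = max(1, int(rho * len(S)))
    # Reconstruct from the top-k singular triples only
    return (U[:, :k] * S[:k]) @ Vt[:k, :]

# Example: reduce a random 64x64 "weight matrix" to ~10% of its rank
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_reduced = low_rank_approx(W, rho=0.1)
```

In the actual method, this replacement is applied in place to the weight matrix selected by the layer index ℓ and parameter type τ, with no gradient updates afterward.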
Quick Start & Requirements
Install the dependencies (including the datasets and transformers packages):

pip3 install -r requirements.txt

Then run an intervention experiment, for example:

python3 intervention_gptj_fever.py --lname fc_in --rate 9.9 --lnum 26
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats