Library for differentiable nonlinear optimization layers in PyTorch
Top 23.2% on sourcepulse
Theseus is a PyTorch library for building custom, end-to-end differentiable nonlinear optimization layers, targeting robotics and computer vision researchers and engineers. It lets neural networks incorporate domain-specific priors by solving optimization problems in the forward pass and backpropagating gradients through the optimizer, enabling end-to-end training.
How It Works
Theseus provides an application-agnostic interface for defining optimization problems as Objectives composed of CostFunctions. It supports second-order nonlinear optimizers such as Gauss-Newton and Levenberg-Marquardt, among others, coupled with efficient dense and sparse linear solvers, including CHOLMOD and GPU-accelerated options such as BaSpaCho. The library leverages Lie groups (via torchlie) for efficient representation of rotations and poses, which is crucial for many robotics and vision tasks. Gradients can be computed by unrolling the optimizer or, more efficiently, with implicit, truncated, or direct loss minimization (DLM) backward modes.
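To make the workflow concrete, here is a minimal sketch assembled from the pieces above: a scalar v is fit to data by minimizing ||y - v*x||^2 with Gauss-Newton inside a TheseusLayer, and gradients flow back through the solve via the implicit backward mode. The variable names, data, and residual function are illustrative, not taken from the Theseus documentation.

```python
# Minimal sketch (illustrative names/data): a differentiable least-squares fit
# of a scalar v to data (x, y), solved inside a TheseusLayer.
import torch
import theseus as th

N = 100
x_data = torch.rand(1, N)                          # auxiliary (non-optimized) data
y_data = 3.0 * x_data + 0.01 * torch.randn(1, N)   # noisy observations of y = 3x

v = th.Vector(1, name="v")                         # optimization variable
x = th.Variable(x_data, name="x")                  # auxiliary variables
y = th.Variable(y_data, name="y")

def residual_fn(optim_vars, aux_vars):
    # Residual y - v * x, shape (batch, N)
    (v_,), (x_, y_) = optim_vars, aux_vars
    return y_.tensor - v_.tensor * x_.tensor

objective = th.Objective()
objective.add(
    th.AutoDiffCostFunction(
        [v], residual_fn, N,
        cost_weight=th.ScaleCostWeight(1.0),
        aux_vars=[x, y],
        name="fit_cost",
    )
)
layer = th.TheseusLayer(th.GaussNewton(objective, max_iterations=15))

# Solve, letting gradients flow back to y through the optimizer.
y_learnable = y_data.clone().requires_grad_()
solution, info = layer.forward(
    {"v": torch.zeros(1, 1), "x": x_data, "y": y_learnable},
    optimizer_kwargs={"backward_mode": th.BackwardMode.IMPLICIT},
)
solution["v"].sum().backward()                     # d v*/d y via implicit differentiation
print(solution["v"], y_learnable.grad.shape)
```

In an end-to-end pipeline, the inputs would typically come from an upstream network, and an outer loss on the returned solution would drive its training.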
Quick Start & Requirements
- Install from PyPI: pip install theseus-ai
- Requires suitesparse (install via apt or conda).
- Optional BaSpaCho support requires building from source with BASPACHO_ROOT_DIR pointing to the compiled BaSpaCho location.
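A quick way to confirm the installation is to import the package and probe which sparse solver backends were built in; the attribute checks below are an illustrative sanity check, not an official API.

```python
# Illustrative install check: import theseus and probe optional sparse solver
# backends with hasattr, so missing backends do not raise.
import theseus as th

print("theseus", getattr(th, "__version__", "unknown"))
for solver in ("CholmodSparseSolver", "BaspachoSparseSolver", "LUCudaSparseSolver"):
    print(f"{solver}: {'available' if hasattr(th, solver) else 'not built'}")
```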
Highlighted Details
Maintenance & Community
Developed by Meta AI (facebookresearch). Community discussions and issue tracking are managed via GitHub.
Licensing & Compatibility
MIT licensed, permitting commercial use and integration into closed-source projects.
Limitations & Caveats
The PyPI installation does not include experimental "Theseus Labs" features; these require installation from source. Compilation from source may be necessary for specific CUDA or dependency versions not covered by pre-built wheels.