JAX library for using and fine-tuning Gemma LLMs
Gemma is a family of open-weights Large Language Models (LLMs) from Google DeepMind, built on Gemini research. This library provides a JAX-based implementation for using and fine-tuning Gemma models, aimed at researchers and developers working with advanced AI models.
How It Works
Gemma is implemented as a JAX library, enabling efficient computation on CPUs, GPUs, and TPUs. It provides access to pre-trained model checkpoints and sampling tools, including support for multi-turn and multi-modal conversations, and leverages JAX's automatic differentiation and hardware acceleration for both inference and fine-tuning.
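For illustration, the sketch below shows a generic JAX fine-tuning step, not the library's own training API: a toy linear model stands in for the Gemma transformer, showing how jax.value_and_grad and a parameter-tree update drive fine-tuning on any JAX-traceable model.

import jax
import jax.numpy as jnp

LEARNING_RATE = 1e-3

def apply_model(params, x):
    # Toy forward pass; a real run would call the Gemma transformer's
    # apply function with its loaded checkpoint params.
    return x @ params["w"] + params["b"]

def loss_fn(params, x, y):
    preds = apply_model(params, x)
    return jnp.mean((preds - y) ** 2)

@jax.jit
def train_step(params, x, y):
    # Autodiff returns the loss and its gradient with respect to every
    # leaf of the parameter tree in a single call.
    loss, grads = jax.value_and_grad(loss_fn)(params, x, y)
    # Plain SGD update; a real fine-tune would typically use an optax
    # optimizer and the library's training utilities.
    new_params = jax.tree_util.tree_map(
        lambda p, g: p - LEARNING_RATE * g, params, grads
    )
    return new_params, loss

params = {"w": jnp.zeros((8, 1)), "b": jnp.zeros((1,))}
x, y = jnp.ones((4, 8)), jnp.ones((4, 1))
params, loss = train_step(params, x, y)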
Quick Start & Requirements
pip install gemma
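A minimal sampling sketch, assuming the high-level gm API (gm.nn, gm.ckpts, gm.text.ChatSampler) described in the repository's documentation; the specific model class, checkpoint enum, and sampler options shown here may differ between library versions.

from gemma import gm

# Instantiate the model and load its pretrained checkpoint
# (checkpoint names and enum values may differ between releases).
model = gm.nn.Gemma3_4B()
params = gm.ckpts.load_params(gm.ckpts.CheckpointPath.GEMMA3_4B_IT)

# Multi-turn chat sampling: the sampler keeps conversation state
# between calls when multi_turn=True.
sampler = gm.text.ChatSampler(model=model, params=params, multi_turn=True)
reply = sampler.chat("Explain the difference between a GPU and a TPU.")
follow_up = sampler.chat("Now summarize that in one sentence.")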
Highlighted Details
Maintenance & Community
This project is not an official Google product. Contributions are welcome; see the repository's contributing guidelines.
Licensing & Compatibility
The README does not explicitly state a license, so compatibility with commercial use or closed-source linking is unspecified.
Limitations & Caveats
The README does not specify a license, which may affect commercial use. Model weights must be downloaded separately, as described in the documentation.