Alpha-VLLM: Open-source toolkit for LLM development, pretraining, finetuning, and deployment
LLaMA2-Accessory is an open-source toolkit for developing, finetuning, and deploying large language models (LLMs) and multimodal LLMs (MLLMs). It extends the LLaMA-Adapter project with advanced features, including the SPHINX MLLM, which supports diverse training tasks, data domains, and visual embeddings, aiming to provide a comprehensive solution for LLM practitioners.
How It Works
The toolkit supports parameter-efficient finetuning methods like Zero-init Attention and Bias-norm Tuning, alongside distributed training strategies such as Fully Sharded Data Parallel (FSDP) and optimizations like Flash Attention 2 and QLoRA. It integrates various visual encoders (CLIP, Q-Former, ImageBind, DINOv2) and supports a wide range of LLMs including LLaMA, LLaMA2, CodeLlama, InternLM, Falcon, and Mixtral-8x7B. This modular design allows for flexible customization and efficient scaling of LLM development.
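The zero-init attention technique, inherited from LLaMA-Adapter, is central to stable parameter-efficient finetuning: the adapter's attention output is scaled by a learnable gate initialized to zero, so training begins from the frozen pretrained model's exact behavior. Below is a minimal PyTorch sketch of that idea; the module name, shapes, and use of `nn.MultiheadAttention` are illustrative assumptions, not the toolkit's actual API.

```python
import torch
import torch.nn as nn

class ZeroInitAdapterAttention(nn.Module):
    """Hypothetical sketch of zero-init attention gating (not the
    LLaMA2-Accessory API): a learnable gate, initialized to zero,
    scales the adapter's attention output so finetuning starts
    exactly from the frozen pretrained model's behavior."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # tanh(0) == 0 at init

    def forward(self, hidden: torch.Tensor, prompt: torch.Tensor) -> torch.Tensor:
        # Token states attend to the (learnable) prompt tokens; the gated
        # result is added residually and contributes nothing at step 0.
        adapter_out, _ = self.attn(hidden, prompt, prompt)
        return hidden + self.gate.tanh() * adapter_out

tokens = torch.randn(2, 16, 512)   # (batch, seq_len, dim) hidden states
prompt = torch.randn(2, 10, 512)   # stand-in for learned prompt tokens
out = ZeroInitAdapterAttention(512)(tokens, prompt)
print(out.shape)  # torch.Size([2, 16, 512])
```

Because the gate starts at zero, gradients flow into the adapter without its random initialization disturbing the pretrained outputs, which is what makes inserting new modules into a frozen backbone stable.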
Quick Start & Requirements
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The repository was last updated 9 months ago and is marked inactive.