LLM Scaler (intel): GenAI acceleration for Intel Arc Pro GPUs
Summary
LLM Scaler is an Intel GenAI solution optimized for text, image, and video generation on Intel® Arc™ Pro B60 GPUs. It integrates popular frameworks like vLLM, ComfyUI, SGLang Diffusion, and Xinference to deliver high performance for state-of-the-art models, targeting users and developers seeking efficient generative AI capabilities on Intel hardware.
How It Works
The project provides optimized builds for diverse GenAI models, leveraging vLLM for text generation with CCL support, INT4/FP8 quantization, and multi-modal capabilities. The "Omni" component extends support to image, video, and audio generation via ComfyUI (Omni Studio) and SGLang Diffusion/Xinference (Omni Serving), exposing OpenAI-API-compatible endpoints. This approach maximizes throughput and minimizes latency on Arc Pro B60 hardware.
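Because Omni Serving exposes OpenAI-API-compatible endpoints, any standard chat-completions client can target a deployment. A minimal sketch of building such a request payload (the base URL and model name below are illustrative assumptions, not taken from the README):

```python
import json

# Assumed local endpoint for an Omni Serving / vLLM deployment (hypothetical).
BASE_URL = "http://localhost:8000/v1"


def build_chat_request(prompt: str, model: str = "Qwen/Qwen2.5-7B-Instruct") -> dict:
    """Build an OpenAI-compatible /v1/chat/completions payload.

    The model name is a placeholder; substitute whatever model the
    server was launched with.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }


payload = build_chat_request("Hello from Arc Pro B60")
print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client (e.g. the `openai` Python package pointed at `BASE_URL`) can send this payload unchanged, which is the practical benefit of the compatibility layer.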
Quick Start & Requirements
Setup requires pinned pre-release dependencies (e.g., `transformers==5.0.0rc0`). Refer to the "Getting Started" documentation.
Highlighted Details
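Since the quick start depends on an exact pre-release pin, it can help to verify the installed version at runtime before launching anything. A small illustrative helper (the package and version strings come from the requirement above; the helper itself is not part of the project):

```python
from importlib.metadata import PackageNotFoundError, version


def check_pin(package: str, required: str) -> bool:
    """Return True only if `package` is installed at exactly `required`."""
    try:
        return version(package) == required
    except PackageNotFoundError:
        return False


# The quick start pins transformers==5.0.0rc0; warn early if the env drifts.
if not check_pin("transformers", "5.0.0rc0"):
    print("Warning: transformers is not pinned to 5.0.0rc0")
```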
Maintenance & Community The project demonstrates active maintenance with frequent updates. Support is primarily handled through GitHub Issues for bug reporting and feature requests. No specific community channels (Discord/Slack) were mentioned.
Licensing & Compatibility License information is not explicitly stated in the provided README content; verify licensing terms before commercial use.
Limitations & Caveats