LLM integration for Swift applications
Top 97.4% on SourcePulse
SpeziLLM provides Swift modules for integrating LLM functionality into applications, supporting local on-device execution, OpenAI's remote APIs, and LLMs running on local network "Fog" nodes. It targets developers building LLM-powered applications within the Spezi ecosystem, offering a unified interface for diverse LLM backends.
How It Works
SpeziLLM acts as a central orchestrator, abstracting the complexities of different LLM platforms. It leverages an LLMRunner that can be configured with specific platform implementations (LLMLocalPlatform, LLMOpenAIPlatform, LLMFogPlatform). This allows developers to switch between or combine LLM sources seamlessly using a consistent Swift API, promoting code reuse and simplifying integration.
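A minimal sketch of this setup, based on the pattern described above (type and initializer names follow the SpeziLLM DocC documentation, but exact initializer parameters should be treated as assumptions):

```swift
import Spezi
import SpeziLLM
import SpeziLLMLocal
import SpeziLLMOpenAI

// Register the LLMRunner with one or more platform implementations.
// Initializer arguments are omitted here; consult the DocC docs for
// the configuration each platform accepts.
class LLMAppDelegate: SpeziAppDelegate {
    override var configuration: Configuration {
        Configuration {
            LLMRunner {
                LLMLocalPlatform()   // on-device execution
                LLMOpenAIPlatform()  // OpenAI's remote API
            }
        }
    }
}
```

Views then request a session from the runner for whichever schema (local, OpenAI, or fog) they need, so call sites stay identical across backends.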
Quick Start & Requirements
Local execution (SpeziLLMLocal) requires a modern Metal GPU (MTLGPUFamily; not compatible with simulators) and may need an "Increase Memory Limit" entitlement. Fog execution (SpeziLLMFog) requires a SpeziLLMFogNode running in the local network and specific Info.plist entries for local network discovery. Configure the LLMRunner in your SpeziAppDelegate
with the desired platform. Refer to the DocC documentation for detailed target setup.

Highlighted Details
Local model execution is built on mlx-swift, with an optional download manager and onboarding view.

Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
SpeziLLMLocal is not compatible with simulators due to Metal GPU requirements. SpeziLLMFog requires a separate SpeziLLMFogNode setup and user authorization for local network access.
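For reference, the local network authorization mentioned above generally corresponds to standard iOS Info.plist entries along these lines (NSLocalNetworkUsageDescription and NSBonjourServices are standard iOS keys; the Bonjour service type shown is a placeholder, not confirmed by this summary):

```xml
<!-- Info.plist additions for local network ("Fog") discovery.
     The service type below is an illustrative placeholder. -->
<key>NSLocalNetworkUsageDescription</key>
<string>Discovers LLM fog nodes on your local network.</string>
<key>NSBonjourServices</key>
<array>
    <string>_example-fog._tcp</string>
</array>
```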