Swift client library for interacting with the Ollama API
This Swift client library provides a robust interface for interacting with the Ollama API, enabling developers to integrate large language models into their macOS applications. It covers the core LLM operations of text generation, chat, embeddings, and model management, with advanced features such as streaming responses, structured output generation via JSON schema, and tool use for complex task execution.
How It Works
The library leverages Swift's `async`/`await` for asynchronous operations, ensuring non-blocking API calls. It communicates with the Ollama server via HTTP requests, parsing JSON responses into strongly typed Swift objects. Key features include streaming capabilities for real-time feedback and a flexible `format` parameter for structured data output, allowing users to specify JSON or custom JSON schemas for model responses. Tool use is implemented through a `Tool` struct that defines the tool's name, description, parameters, and an asynchronous execution closure.
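To make those mechanics concrete, here is a minimal sketch of the kind of HTTP round trip the library wraps: a POST to Ollama's documented `/api/generate` endpoint, decoded into a typed Swift value with `Codable`. The endpoint, port, and JSON fields are Ollama's; the request/response structs and the `generate` helper are ad hoc illustrations, not the library's own types.

```swift
import Foundation

// Illustration of what the library abstracts away: one non-streaming
// call to Ollama's /api/generate endpoint, decoded into a typed value.
// These structs are ad hoc sketches, not ollama-swift's API.

struct GenerateRequest: Encodable {
    let model: String
    let prompt: String
    let stream: Bool
}

struct GenerateResponse: Decodable {
    let response: String  // the generated text
    let done: Bool
}

func generate(prompt: String) async throws -> String {
    var request = URLRequest(url: URL(string: "http://localhost:11434/api/generate")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(
        GenerateRequest(model: "llama3.2", prompt: prompt, stream: false)
    )
    // async/await keeps the call non-blocking, as described above.
    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(GenerateResponse.self, from: data).response
}
```

With `stream` set to `true`, the same endpoint returns newline-delimited JSON chunks instead of a single object, which is what the library surfaces as real-time streaming responses.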
Quick Start & Requirements
- Package: https://github.com/loopwork-ai/ollama-swift.git
- Model: `llama3.2` (download via `ollama pull llama3.2`)
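To pull the package into a project with Swift Package Manager, declare it as a dependency in `Package.swift`. The sketch below uses the v1.3.0 release mentioned later in this summary as its version floor, and assumes the product is named `Ollama`; confirm both against the repository's releases and manifest.

```swift
// swift-tools-version:5.9
// Package.swift sketch. The version requirement and product name are
// assumptions; verify them against the repository before use.
import PackageDescription

let package = Package(
    name: "MyApp",
    dependencies: [
        .package(url: "https://github.com/loopwork-ai/ollama-swift.git", from: "1.3.0"),
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            dependencies: [.product(name: "Ollama", package: "ollama-swift")]
        ),
    ]
)
```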
Highlighted Details
Maintenance & Community
The project is maintained by loopwork-ai. Further community engagement details are not specified in the README.
Licensing & Compatibility
The library is released under the MIT License, permitting commercial use and integration into closed-source projects.
Limitations & Caveats
The README notes that tool support requires a compatible model, such as `llama3.2`. The parameter format for tool definitions was updated in v1.3.0, with the older format now deprecated but still supported.
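As a rough illustration of the shape described earlier, a tool bundles a name, a description, a parameter specification, and an asynchronous execution closure. The struct below is a self-contained stand-in that mirrors only those described fields; it is not the library's actual `Tool` type, whose initializer and parameter format changed in v1.3.0, so examples in older tutorials may not match current releases.

```swift
import Foundation

// Hypothetical stand-in for a tool definition with the fields the
// README describes (name, description, parameters, async execution
// closure). NOT ollama-swift's actual Tool type; its initializer and
// parameter format changed in v1.3.0.
struct ToolSketch {
    let name: String
    let description: String
    let parameters: [String: String]  // placeholder for a JSON schema
    let execute: ([String: String]) async throws -> String
}

let currentTime = ToolSketch(
    name: "current_time",
    description: "Returns the current time in a given IANA time zone",
    parameters: ["timezone": "string"],
    execute: { args in
        // The async closure runs when the model invokes the tool.
        let tz = TimeZone(identifier: args["timezone"] ?? "UTC") ?? .current
        let formatter = DateFormatter()
        formatter.timeZone = tz
        formatter.dateStyle = .medium
        formatter.timeStyle = .medium
        return formatter.string(from: Date())
    }
)
```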