raycast_ollama by MassimilianoPasquini97

Local LLM inference and AI interaction for Raycast

Created 2 years ago
250 stars

Top 100.0% on SourcePulse

Project Summary

This Raycast extension enables users to leverage local large language models (LLMs) through Ollama directly within the Raycast productivity application. It targets Raycast power users who wish to integrate local AI capabilities for tasks like chat, content summarization, and custom command execution, offering a privacy-focused alternative to cloud-based AI services.

How It Works

The extension acts as an interface between Raycast and a locally running Ollama instance. It supports chatting with a selected LLM, managing installed models, and creating custom commands in Raycast's Prompt Explorer format. Prompts can embed placeholders for dynamic content injection, such as {selection}, {browser-tab}, and {image}, enabling context-aware AI interactions. Advanced functionality includes integration with external tools via MCP servers, for capabilities like web searching.
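To illustrate the flow described above, here is a minimal sketch of how placeholder substitution and a request to a local Ollama server might fit together. The placeholder names ({selection}, {browser-tab}, {image}) come from the extension's documentation; the helper functions and the payload assembly here are illustrative assumptions, not the extension's actual code. The endpoint shown is Ollama's documented default chat API.

```python
import json

# Ollama listens on port 11434 by default; /api/chat is its chat endpoint.
OLLAMA_CHAT_ENDPOINT = "http://localhost:11434/api/chat"

def resolve_placeholders(prompt: str, context: dict) -> str:
    """Replace {selection}-style placeholders with values gathered from Raycast.

    `context` maps placeholder names (without braces) to their runtime values,
    e.g. {"selection": "...text the user highlighted..."}.
    """
    for name, value in context.items():
        prompt = prompt.replace("{" + name + "}", value)
    return prompt

def build_chat_payload(model: str, prompt: str, context: dict) -> dict:
    """Assemble a JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": resolve_placeholders(prompt, context)}
        ],
        "stream": False,  # request a single complete response
    }

# Example: inject the current text selection into a summarization prompt.
payload = build_chat_payload(
    "llama3",
    "Summarize the following text:\n{selection}",
    {"selection": "Ollama runs LLMs locally."},
)
print(json.dumps(payload, indent=2))
```

Sending this payload via an HTTP POST to `OLLAMA_CHAT_ENDPOINT` would return the model's reply; the extension performs an equivalent exchange for each command invocation.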

Quick Start & Requirements

  • Prerequisites:
      • Ollama must be installed and running on macOS.
      • At least one LLM model must be installed via the Ollama CLI or the extension's "Manage Models" command.
      • The Raycast Browser Extension is required for {browser-tab} features.
      • An Ollama API Key is necessary for the "Ollama Search API" feature.
      • Models with vision capabilities are required for image-related commands; models with tool capabilities are needed for MCP server integration.
  • Setup: Installation is typically handled through Raycast's extension management system.

Highlighted Details

  • Enables local LLM inference directly within the Raycast environment.
  • "Chat With Ollama" supports dynamic model selection, context injection from clipboard, selection, browser tabs, and images.
  • Customizable commands can be defined using the Raycast Prompt Explorer format.
  • Supports integration with external tools via MCP servers, demonstrated with a DuckDuckGo search example.
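As a concrete illustration of the custom-command feature, a Prompt Explorer-style command definition might look like the fragment below. All field names here are illustrative assumptions based on how such a command could be described, not the extension's actual schema.

```json
{
  "title": "Summarize Selection",
  "model": "llama3",
  "prompt": "Summarize the following text in three bullet points:\n\n{selection}"
}
```

At invocation time, the extension would replace {selection} with the text currently highlighted in the frontmost application before sending the prompt to the chosen model.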

Maintenance & Community

This project is not officially affiliated with Ollama.ai. The provided README does not contain specific details regarding community channels (like Discord/Slack), active contributors, or a public roadmap.

Licensing & Compatibility

The license type is not specified in the provided README. This omission requires further investigation before commercial use or integration into closed-source projects.

Limitations & Caveats

The Windows (Beta) version has limitations: image inputs are restricted to file paths, and the "Selected Text" input source is unsupported, requiring use of the clipboard instead. Certain features depend on the specific capabilities (vision, tools) of the installed Ollama models.

Health Check

  • Last Commit: 3 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 3 stars in the last 30 days
