AgentHandover  by sandroandric

Agent skill generation from user observation

Created 1 week ago · 518 stars · Top 60.4% on SourcePulse

Project Summary

AgentHandover tackles the problem of getting AI agents to perform complex work autonomously: it observes user actions, learns workflows, and generates "self-improving Skills." It targets users and developers who want to delegate tasks to AI agents without constant explicit instruction, offering agents that learn, adapt, and improve over time from real-world execution.

How It Works

AgentHandover runs an 11-stage local pipeline on macOS that transforms raw screen activity into structured, agent-executable Skills. The pipeline covers screen capture, annotation with a local VLM and text/image embeddings, activity classification, semantic clustering, and behavioral synthesis, extracting strategy, decision logic, guardrails, and the user's writing voice. Unlike static playbooks, these Skills are self-improving: they learn from agent execution feedback, gaining confidence on success and adapting to deviations or failures, so their accuracy and utility refine over repeated use.
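The feedback loop described above can be sketched as a simple confidence update. This is an illustrative model only: the `Skill` fields, the `record_execution` method, and the exponential-moving-average update rule are assumptions, not AgentHandover's actual data structures.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """Illustrative stand-in for a generated Skill; field names are assumptions."""
    name: str
    confidence: float = 0.5
    history: list = field(default_factory=list)

    def record_execution(self, success: bool, rate: float = 0.1) -> None:
        # Nudge confidence toward 1.0 on success and toward 0.0 on failure,
        # mirroring the "gain confidence on success, adapt on failure" idea.
        target = 1.0 if success else 0.0
        self.confidence += rate * (target - self.confidence)
        self.history.append(success)

skill = Skill(name="triage-inbox")
for outcome in (True, True, False, True):
    skill.record_execution(outcome)
print(round(skill.confidence, 3))
```

Each execution moves the score a fraction of the way toward its outcome, so repeated successes compound while a single failure only partially discounts prior evidence.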

Quick Start & Requirements

  • Primary install: Download the .pkg installer from Releases or use Homebrew (brew tap sandroandric/agenthandover; brew install --HEAD agenthandover). A source build is also available.
  • Prerequisites: macOS is required. Local AI models are managed via Ollama (e.g., Gemma 4, Qwen 3.5), with model selection recommended based on available RAM (8 GB to 48 GB+). A Chrome extension is also part of the setup.
  • Resource Footprint: AI model downloads range from approximately 6 GB to 20 GB. All core processing, including VLM inference, occurs locally.
  • Links: YouTube demo (the README does not provide the actual link), GitHub Releases.
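The Homebrew path from the Quick Start above boils down to two commands (taken verbatim from the project's instructions):

```shell
# Add the project's tap, then install the latest development build
brew tap sandroandric/agenthandover
brew install --HEAD agenthandover
```

Alternatively, download the .pkg installer from the GitHub Releases page.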

Highlighted Details

  • Self-Improving Skills: Skills dynamically learn from agent execution feedback, automatically adjusting confidence and logic based on successes, deviations, and failures.
  • 11-Stage Local Pipeline: A comprehensive, multi-stage process runs entirely on the user's machine, ensuring privacy and control from observation to Skill generation.
  • Privacy-First Architecture: No cloud APIs are required by default; all data, including screenshots, annotations, and the knowledge base, remains local, encrypted, and subject to configurable retention policies.
  • Seamless Agent Integration: Supports one-click pairing with MCP-compatible agents (Claude Code, Cursor, Windsurf, Codex, OpenClaw) via a local MCP server or REST API.
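For the REST integration path mentioned above, a client would fetch Skills from the local server. The port and the `/skills/<name>` endpoint below are hypothetical illustrations, not AgentHandover's documented API; consult the project's docs for the real surface.

```python
import urllib.request

def build_skill_request(base_url: str, skill_name: str) -> urllib.request.Request:
    # Construct a GET request against a hypothetical local REST endpoint.
    # The /skills/<name> path is an assumption for illustration only.
    return urllib.request.Request(f"{base_url}/skills/{skill_name}", method="GET")

req = build_skill_request("http://localhost:8765", "triage-inbox")
print(req.full_url)
```

MCP-compatible agents would instead pair through the local MCP server, so no hand-rolled HTTP client is needed in that mode.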

Maintenance & Community

Contact is available via sandro@sandric.co. The project includes a changelog detailing recent updates and improvements. No explicit community channels (e.g., Discord, Slack) or details on notable contributors or sponsorships are provided in the README.

Licensing & Compatibility

  • License: Apache 2.0.
  • Compatibility: The Apache 2.0 license permits commercial use and linking. The local-first, macOS-only architecture means compatibility is limited to that platform and its software ecosystem.

Limitations & Caveats

The system is strictly limited to macOS. While designed for privacy, the setup involves managing local AI models and their dependencies (like Ollama), which may require technical expertise. The project appears to be under active development, with recent updates focusing on core pipeline enhancements and model support.

Health Check

  • Last Commit: 1 day ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 1
  • Star History: 523 stars in the last 12 days
