functionary by MeetKai

Chat language model for tool use and result interpretation

created 2 years ago
1,574 stars

Top 27.1% on sourcepulse

View on GitHub
Project Summary

Functionary is a suite of open-source language models designed for robust function calling and tool interpretation. It enables LLMs to intelligently determine when to execute functions, manage their execution (serially or in parallel), and understand their outputs, making them suitable for complex agentic workflows and automated task execution.

How It Works

Functionary models are fine-tuned to interpret function definitions provided as JSON Schema objects, similar to OpenAI's function calling mechanism. This approach allows the model to directly understand and generate function calls without requiring complex prompt engineering or external parsing logic. The models are trained to output function calls in a structured format that can be easily parsed and executed by an external system.
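The flow described above can be sketched in plain Python: a tool is declared as a JSON Schema object, the model returns a structured tool-call message, and an external executor parses and dispatches it. The function name, fields, and message shape below are illustrative assumptions in the OpenAI function-calling style, not Functionary's exact wire format.

```python
import json

# A tool definition as a JSON Schema object, in the OpenAI function-calling
# style that Functionary consumes. Name and fields are illustrative.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# A structured tool-call message of the kind an OpenAI-compatible server
# returns: arguments arrive as a JSON string, ready for external execution.
assistant_message = {
    "role": "assistant",
    "tool_calls": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": json.dumps({"city": "Hanoi"}),
            },
        }
    ],
}

def dispatch(message, registry):
    """Parse each tool call and execute it via a registry of Python callables."""
    results = []
    for call in message.get("tool_calls", []):
        fn = registry[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        results.append(fn(**args))
    return results

registry = {"get_current_weather": lambda city: f"28C and humid in {city}"}
print(dispatch(assistant_message, registry))
```

Because the output is structured rather than free text, the dispatch step needs no prompt-specific parsing; the same registry pattern extends to serial or parallel calls.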

Quick Start & Requirements

  • Installation: pip install -e .[vllm] or pip install -e .[sglang]
  • Prerequisites: Python, CUDA (for GPU acceleration), vLLM or SGLang. Medium models require significant VRAM (e.g., 2x A100 80GB or 4x A6000).
  • Deployment: Servers can be launched using server_vllm.py or server_sglang.py.
  • Documentation: functionary.meetkai.com
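The steps above can be sketched as a shell session. The repository URL matches the project, but the model name and server flags are illustrative assumptions; check the documentation for the current invocation.

```shell
# Clone and install with the vLLM backend (assumes Python and CUDA are set up).
git clone https://github.com/MeetKai/functionary.git
cd functionary
pip install -e .[vllm]

# Launch the OpenAI-compatible server; model name and flags are illustrative.
python server_vllm.py --model "meetkai/functionary-small-v3.2" --host 0.0.0.0
```

For the SGLang backend, swap the extra to .[sglang] and launch server_sglang.py instead.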

Highlighted Details

  • Supports parallel function calls and generating responses grounded in tool execution results.
  • Offers code interpreter capabilities for executing Python code snippets.
  • Achieves high rankings on the Berkeley Function-Calling Leaderboard and ToolSandbox benchmark, comparable to proprietary models.
  • Provides extensive integration examples, including OpenAI-compatible APIs, Llama.cpp, and Modal serverless deployment.

Maintenance & Community

The project is actively developed by MeetKai, with frequent updates and new model releases. Community channels are not explicitly listed in the README.

Licensing & Compatibility

The code is typically released under a permissive license (e.g., Apache 2.0), while model weights may carry additional terms inherited from the base model. Compatibility with OpenAI's API structure is a key design goal.

Limitations & Caveats

Older model versions (v1.x) are deprecated. While v2 models are designed for OpenAI-python v1 compatibility, specific integrations might require careful testing. The README notes that Llama.cpp integration might not always be up-to-date, recommending the provided llama_cpp_inference.py script for current GGUF models.

Health Check

  • Last commit: 5 days ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 30 stars in the last 90 days
