Chat language model for tool use and result interpretation
Top 27.1% on sourcepulse
Functionary is a suite of open-source language models designed for robust function calling and tool interpretation. It enables LLMs to intelligently determine when to execute functions, manage their execution (serially or in parallel), and understand their outputs, making them suitable for complex agentic workflows and automated task execution.
How It Works
Functionary models are fine-tuned to interpret function definitions provided as JSON Schema objects, similar to OpenAI's function calling mechanism. This approach allows the model to directly understand and generate function calls without requiring complex prompt engineering or external parsing logic. The models are trained to output function calls in a structured format that can be easily parsed and executed by an external system.
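As a sketch of what this looks like in practice, here is an OpenAI-style tool definition and the kind of structured call the model emits. The function name and fields are illustrative examples, not part of the Functionary API.

```python
import json

# Hypothetical tool definition in the OpenAI-style JSON Schema format
# that Functionary models are trained to interpret.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# The model outputs function calls in a structured, easily parsed form;
# a typical assistant message carrying a call looks like this:
assistant_message = {
    "role": "assistant",
    "tool_calls": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": json.dumps({"city": "Hanoi", "unit": "celsius"}),
            },
        }
    ],
}

# An external system parses the arguments back out and executes the call:
args = json.loads(assistant_message["tool_calls"][0]["function"]["arguments"])
print(args["city"])  # → Hanoi
```

Because the call arrives as structured JSON rather than free text, dispatching it is a dictionary lookup plus a keyword-argument unpack, with no prompt-specific parsing logic.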
Quick Start & Requirements
Install with pip install -e .[vllm] (or pip install -e .[sglang])
Start the inference server with server_vllm.py (or server_sglang.py)
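Once a server is running, requests follow the OpenAI chat-completions shape. A minimal sketch of a request body is below; the model name, port, and endpoint path are assumptions (vLLM's defaults), so check the repo README for the exact values.

```python
import json

# Hypothetical request body for a locally running Functionary server
# (model name and endpoint are assumptions; see the repo README).
payload = {
    "model": "meetkai/functionary-small-v2.5",
    "messages": [{"role": "user", "content": "What is the weather in Hanoi?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

body = json.dumps(payload).encode()

# To actually send it (server assumed at localhost:8000):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body, headers={"Content-Type": "application/json"})
# resp = json.loads(urllib.request.urlopen(req).read())
```

The same payload also works with the official OpenAI Python client by pointing its base_url at the local server, which is the compatibility path the project targets.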
Highlighted Details
Maintenance & Community
The project is actively developed by MeetKai, with frequent updates and new model releases. Community channels are not explicitly listed in the README.
Licensing & Compatibility
The code is typically released under a permissive license (e.g., Apache 2.0), while model weights may carry additional terms inherited from the base model. Compatibility with OpenAI's API structure is a key design goal.
Limitations & Caveats
Older model versions (v1.x) are deprecated. While v2 models are designed for OpenAI-python v1 compatibility, specific integrations may require careful testing. The README notes that Llama.cpp integration may not always be up to date and recommends the provided llama_cpp_inference.py script for current GGUF models.