azure-ai-travel-agents by Azure-Samples

AI Travel Agents application for enhanced travel operations

created 4 months ago
290 stars

Top 91.7% on sourcepulse

1 Expert Loves This Project
Project Summary

This project provides a robust enterprise application demonstrating how to orchestrate multiple AI agents for enhanced travel agency operations, targeting developers and researchers interested in multi-agent systems and enterprise AI solutions. It leverages LlamaIndex.TS and the Model Context Protocol (MCP) to manage customer queries, destination recommendations, and itinerary planning, offering a scalable and modular architecture.

How It Works

The application utilizes LlamaIndex.TS to coordinate specialized AI agents, each performing distinct tasks like understanding customer queries, recommending destinations, and planning itineraries. These agents interact via MCP servers, implemented in various languages (Python, Node.js, Java, .NET), allowing for flexible tool integration. A key component is a serverless GPU-hosted LLM for high-performance inference, complemented by web search capabilities for real-time data.
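The coordination pattern described above can be sketched in plain TypeScript. This is a minimal illustration of routing a customer query to specialized agents, not the repository's actual LlamaIndex.TS or MCP wiring; the agent names and the keyword-based routing are illustrative assumptions.

```typescript
// Each specialized agent declares what it can handle and how it responds.
// In the real application these would wrap MCP tool servers and an LLM.
interface Agent {
  name: string;
  canHandle(query: string): boolean;
  handle(query: string): string;
}

const destinationAgent: Agent = {
  name: "destination-recommendation",
  canHandle: (q) => q.toLowerCase().includes("where"),
  handle: (q) => `Recommended destinations for: "${q}"`,
};

const itineraryAgent: Agent = {
  name: "itinerary-planning",
  canHandle: (q) => q.toLowerCase().includes("plan"),
  handle: (q) => `Draft itinerary for: "${q}"`,
};

// The coordinator plays the triage role: it inspects the customer query
// and dispatches it to the first agent that claims it.
function coordinate(agents: Agent[], query: string): string {
  const agent = agents.find((a) => a.canHandle(query));
  return agent ? agent.handle(query) : "No agent available for this query.";
}

const reply = coordinate(
  [destinationAgent, itineraryAgent],
  "Where should I go in May?"
);
console.log(reply);
```

In the actual project, the `canHandle` decision is made by an LLM rather than keyword matching, and each agent's `handle` step calls out to its MCP server for tools such as web search or code execution.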

Quick Start & Requirements

  • Local Preview: Requires Git, Node.js, Docker (with Docker Model Runner enabled), and PowerShell 7+ (Windows). A setup script automates dependency checks, cloning, installation, model download (Phi4 14B, ~7.8GB), and Docker image builds.
  • Azure Deployment: Uses Azure Developer CLI (azd). Commands: azd auth login followed by azd up.
  • Resources: Local LLM execution requires significant resources (16GB RAM, modern CPU/GPU). GPU acceleration is supported on macOS Apple Silicon and NVIDIA GPUs (Windows).
  • Documentation: Advanced Setup
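The Azure deployment path above reduces to a short command sequence. The clone URL is inferred from the repository name and owner; `azd up` provisions and deploys resources as defined by the repo's infrastructure templates.

```shell
# Clone the sample (URL inferred from the Azure-Samples org and repo name).
git clone https://github.com/Azure-Samples/azure-ai-travel-agents.git
cd azure-ai-travel-agents

# Authenticate with Azure, then provision infrastructure and deploy
# the containerized services in a single step.
azd auth login
azd up
```

Note that `azd up` creates billable resources (Azure Container Apps, Container Registry, Azure OpenAI, Azure Monitor); run `azd down` to tear them down when finished.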

Highlighted Details

  • Demonstrates multi-agent orchestration with LlamaIndex.TS and MCP.
  • Features serverless GPU inference via Azure Container Apps.
  • Includes agents for customer understanding, recommendations, itinerary planning, code execution, web search, and model inference.
  • Containerized architecture deployable via Azure Container Apps.

Licensing & Compatibility

  • License: MIT License.
  • Compatibility: Permissive license suitable for commercial use and integration with closed-source applications.

Limitations & Caveats

Running the Phi4 14B model locally has substantial hardware requirements (16GB RAM, a modern CPU/GPU), and GPU acceleration is platform-specific (macOS Apple Silicon, or NVIDIA GPUs on Windows). While the Azure services offer scalability, costs accrue with usage of Azure Container Apps, Azure Container Registry, Azure OpenAI, and Azure Monitor.

Health Check

  • Last commit: 1 week ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 8
  • Issues (30d): 0
  • Star History: 284 stars in the last 90 days

Starred by Jeff Hammerbacher (Cofounder of Cloudera), Stas Bekman (Author of Machine Learning Engineering Open Book; Research Engineer at Snowflake), and 2 more.

Explore Similar Projects

gpustack by gpustack

GPU cluster manager for AI model deployment

  • 3k stars (Top 1.6%)
  • created 1 year ago, updated 3 days ago