Infosys-Responsible-AI-Toolkit by Infosys

AI toolkit for trustworthy and transparent AI solutions

Created 1 year ago
252 stars

Top 99.6% on SourcePulse

Project Summary

The Infosys Responsible AI Toolkit offers a suite of APIs for integrating safety, security, privacy, explainability, fairness, bias detection, and hallucination detection into AI solutions. It aims to enhance the trustworthiness and transparency of AI systems, benefiting developers and researchers building robust AI applications.

How It Works

The toolkit takes a modular, API-driven approach to the key tenets of responsible AI. It includes ModerationLayer APIs for content regulation, Explainability APIs for LLMs and traditional models, Fairness & Bias APIs for bias assessment, Hallucination detection APIs for RAG pipelines, Privacy APIs for PII handling, Safety APIs for toxic-content screening, and Security APIs for attack detection. A micro-frontend UI supports interactive experimentation across these modules.
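As a rough illustration of the API-driven pattern described above, the sketch below assembles a request for a ModerationLayer-style endpoint. The endpoint URL, field names, and check names here are illustrative assumptions, not the toolkit's actual contract; consult each module's README for the real schema.

```python
import json

# Assumed local deployment URL -- replace with your module's actual endpoint.
MODERATION_ENDPOINT = "http://localhost:8000/moderation/completions"

def build_moderation_request(prompt: str, checks: list) -> dict:
    """Assemble a request body asking the moderation layer to screen a prompt.

    All field names below are hypothetical placeholders for illustration.
    """
    return {
        "Prompt": prompt,
        "PortfolioName": "demo",      # illustrative tenant/account fields
        "AccountName": "demo",
        "ModerationChecks": checks,   # which responsible-AI checks to run
    }

payload = build_moderation_request(
    "Summarise this document.",
    ["PromptInjection", "JailbreakCheck", "Toxicity"],
)

# In a live deployment one would POST the payload, e.g.:
#   requests.post(MODERATION_ENDPOINT, json=payload, timeout=30)
print(json.dumps(payload, indent=2))
```

The separation between request assembly and transport mirrors how a modular API layer lets each check (toxicity, prompt injection, etc.) be toggled per call rather than baked into the client.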

Quick Start & Requirements

Installation details are found in the individual module READMEs. The toolkit is optimized for Azure OpenAI and requires an Azure OpenAI API subscription; users of alternative LLMs must make client-side configuration adjustments. Links to official documentation and a development roadmap are available.
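Since the toolkit expects Azure OpenAI credentials, a minimal configuration sketch like the one below shows the kind of settings involved. The environment-variable names and config keys are illustrative assumptions; each module's README lists the exact keys it reads, and users of other LLM providers would swap these values and adjust the client code accordingly.

```python
import os

# Hypothetical Azure OpenAI settings block; names are assumptions for
# illustration, not the toolkit's actual configuration schema.
azure_config = {
    "api_type": "azure",
    "api_base": os.environ.get("AZURE_OPENAI_ENDPOINT", ""),
    "api_version": os.environ.get("AZURE_OPENAI_API_VERSION", "2024-02-01"),
    "api_key": os.environ.get("AZURE_OPENAI_API_KEY", ""),
    "deployment_name": os.environ.get("AZURE_OPENAI_DEPLOYMENT", ""),
}

# Flag anything unset so misconfiguration fails loudly before any API call.
missing = [key for key, value in azure_config.items() if not value]
if missing:
    print(f"Missing Azure OpenAI settings: {missing}")
```

Validating the configuration up front is useful here because several independent modules share the same credentials, so a single missing value would otherwise surface as scattered runtime failures.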

Highlighted Details

  • LLM & Generative AI: Features prompt injection/jailbreak checks, privacy validation, toxicity detection, hallucination scoring, and explainability methods (e.g., CoT, GoT).
  • ML Model Support: Provides explainability (SHAP, LIME), fairness metrics (e.g., Statistical Parity Difference), and bias mitigation techniques (e.g., Equalized Odds).
  • Integrated UI: A micro-frontend architecture (MFE, SHELL) with a Python backend supports user management, admin configuration, telemetry, file storage (Azure Blob Storage), LLM benchmarking, and document processing (PII Anonymization, Safety/Nudity Masking).
  • Security & Red Teaming: Includes APIs for tabular/image data attacks/defenses, prompt injection, jailbreak checks, and advanced red teaming techniques (PAIR, TAP).
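To make the fairness metrics above concrete, here is a small self-contained sketch of Statistical Parity Difference (SPD): the gap in positive-prediction rates between an unprivileged and a privileged group. The data is made up for illustration, and this is not the toolkit's implementation.

```python
def statistical_parity_difference(predictions, groups, unprivileged, privileged):
    """SPD = P(y_hat=1 | unprivileged) - P(y_hat=1 | privileged).

    A value of 0 indicates parity; a negative value means the
    unprivileged group receives positive predictions less often.
    """
    def positive_rate(group):
        selected = [p for p, g in zip(predictions, groups) if g == group]
        return sum(selected) / len(selected)

    return positive_rate(unprivileged) - positive_rate(privileged)

# Toy loan-approval predictions (1 = approved) for two demographic groups.
preds  = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

spd = statistical_parity_difference(preds, groups, unprivileged="A", privileged="B")
print(round(spd, 2))  # -0.25: group A is approved 25 points less often
```

Metrics like this pair naturally with the mitigation techniques listed above (e.g. Equalized Odds), which adjust predictions until such gaps shrink.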

Maintenance & Community

Contributions and feedback are welcomed via a dedicated contribution page. An email contact (Infosysraitoolkit@infosys.com) and a development roadmap are provided for community engagement and tracking progress.

Licensing & Compatibility

The specific open-source license is not explicitly stated in the README. The toolkit is optimized for Azure OpenAI; compatibility with other LLMs requires client-side adjustments.

Limitations & Caveats

Primary optimization is for Azure OpenAI, necessitating configuration changes for other LLM providers. Features like multi-lingual support and advanced red teaming are marked as upcoming. The absence of a stated license requires clarification for commercial use or integration into closed-source projects.

Health Check

  • Last Commit: 4 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 3
  • Issues (30d): 2
  • Star History: 7 stars in the last 30 days
