awesome-fairness-in-ai by datamllab

Fairness in AI resource list

created 5 years ago
326 stars

Top 83.5% on SourcePulse

Project Summary

This repository is a curated list of resources on Fairness in Artificial Intelligence (AI), aimed at researchers and practitioners interested in understanding, detecting, and mitigating algorithmic bias. It provides a comprehensive overview of theoretical concepts, measurement techniques, demonstrations of bias, and mitigation strategies, serving as a valuable starting point for those working to build equitable AI systems.

How It Works

The list categorizes resources into key areas such as theoretical understanding, fairness metrics, bias detection across various applications and models, and mitigation techniques including adversarial learning, calibration, and data collection strategies. It also highlights relevant fairness packages, conferences, and interpretability resources, offering a structured approach to navigating the complex field of AI fairness.

Quick Start & Requirements

This is a curated list, not a software package. No installation or specific requirements are needed beyond a web browser to access the resources.

Highlighted Details

  • Comprehensive coverage of fairness definitions, including Equality of Opportunity and Beyond Parity.
  • Detailed examples of bias demonstration in facial recognition, sentiment analysis, and natural language processing.
  • Extensive list of mitigation techniques and associated research papers.
  • Inclusion of popular fairness toolkits like AI Fairness 360 and Fairlearn.
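To make the fairness definitions above concrete, here is a minimal sketch of one metric covered by the list, Equality of Opportunity, which asks that a classifier's true positive rate be equal across demographic groups. The data and function names below are illustrative assumptions, not code from the repository or from the toolkits it lists.

```python
def true_positive_rate(y_true, y_pred):
    """TPR: fraction of actual positives that were predicted positive."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(p for _, p in positives) / len(positives)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest TPR difference between any two groups (0.0 = perfectly fair)."""
    tprs = []
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        tprs.append(true_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx]))
    return max(tprs) - min(tprs)

# Synthetic example: group "a" has TPR 1.0, group "b" has TPR 0.5.
y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 1, 0, 1]
group  = ["a", "a", "a", "b", "b", "b"]
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5
```

Toolkits listed in the repository, such as Fairlearn and AI Fairness 360, provide production-grade versions of this kind of group-difference metric.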

Maintenance & Community

The list is maintained by Mengnan Du from Texas A&M University. Contributions are welcomed via pull requests. Contact information for the maintainer is provided.

Licensing & Compatibility

The repository itself carries no license, as it is a curated list rather than a software package. The resources it links to may be under various licenses of their own.

Limitations & Caveats

The maintainer notes that the list is "probably biased and incomplete," indicating that it may not cover all existing research or perspectives in the field of AI fairness.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 2 stars in the last 30 days

Starred by Shawn Wang (Editor of Latent Space), Evan Hubinger (Head of Alignment Stress-Testing at Anthropic), and 3 more.

Explore Similar Projects

  • awful-ai by daviddao — curated list of scary AI usages to raise awareness (7k stars, created 7 years ago, updated 5 months ago).