
Mitigating Racial Bias in Facial Recognition

Overview

This project addresses bias in facial recognition systems, with a specific focus on racial bias in age classification. Using a Seldonian machine learning framework, we aim to build fairer facial recognition models that maintain high accuracy while ensuring equal performance across different demographic groups.

Problem Statement

Facial recognition technologies often exhibit biases against racial minorities. This project specifically addresses:

  • Age classification bias across different racial groups
  • Development of constrained models that maintain utility while reducing bias
  • Application of Seldonian techniques to ensure fairness with statistical guarantees

Dataset

The project uses the FairFace dataset, which contains:

  • Facial images balanced across different races, genders, and age groups
  • Annotations for race, gender, and age classifications
  • Images preprocessed to 48x48 RGB format

Approach

Our methodology includes:

  1. Data Preparation: Processing FairFace images and partitioning ages into groups
  2. Bias Detection: Measuring baseline performance disparities across racial groups
  3. Seldonian Constraints: Implementing statistical fairness constraints
  4. Model Development: Creating models that satisfy these constraints while maintaining accuracy
  5. Evaluation: Comparing constrained and unconstrained models on bias metrics
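Step 2 above (measuring baseline disparities) can be sketched as follows. The toy labels, the group names, and the choice of worst-vs-best accuracy gap as the disparity metric are illustrative assumptions, not necessarily the metric used in the notebooks:

```python
import numpy as np
import pandas as pd

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy of age predictions within each racial group."""
    df = pd.DataFrame({"true": y_true, "pred": y_pred, "group": groups})
    correct = df["true"] == df["pred"]
    return correct.groupby(df["group"]).mean()

# Toy predictions; real labels come from the FairFace annotations.
y_true = np.array([0, 1, 2, 1, 0, 2])
y_pred = np.array([0, 1, 1, 1, 0, 2])
groups = np.array(["A", "A", "A", "B", "B", "B"])

acc = per_group_accuracy(y_true, y_pred, groups)
disparity = acc.max() - acc.min()  # gap between best- and worst-served group
print(acc.to_dict(), disparity)
```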

Implementation Details

  • Environment: Python, PyTorch, and the Seldonian Engine
  • Models: CNN-based facial recognition architecture
  • Fairness Techniques: Demographic parity and equal opportunity constraints
  • Visualization: Tools for exploring bias metrics and model performance across groups
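A minimal version of the CNN described above might look like the following. The layer sizes, and the default of nine output classes (FairFace's age buckets), are illustrative assumptions rather than the exact architecture in the notebooks:

```python
import torch
import torch.nn as nn

class AgeCNN(nn.Module):
    """Small CNN for 48x48 RGB inputs; layer sizes are illustrative."""
    def __init__(self, num_age_groups: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),          # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),          # 24 -> 12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128),
            nn.ReLU(),
            nn.Linear(128, num_age_groups),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = AgeCNN()
logits = model(torch.zeros(4, 3, 48, 48))  # batch of 4 dummy images
print(tuple(logits.shape))  # (4, 9)
```

The Seldonian Engine would then wrap a model like this with the fairness constraints, evaluating them on a held-out safety set before deployment.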

Files

  • Facial_Recognition.ipynb: Main model building and analysis
  • racial_constraints_for_Age.ipynb: Implementation of racial fairness constraints
  • CSE_682_Final_Project___Final_Write_Up.pdf: Detailed project report

Requirements

  • Python 3.x
  • PyTorch
  • seldonian-engine
  • pandas, numpy, matplotlib
  • datasets (HuggingFace)
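The dependencies above can be installed with pip; the package names below are taken from this list (assuming `seldonian-engine` is the Seldonian Toolkit's PyPI package and `torch` provides PyTorch):

```shell
pip install torch seldonian-engine pandas numpy matplotlib datasets
```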

Usage

  1. Run the Jupyter notebooks to explore the dataset and model training
  2. Follow the examples to implement your own fairness constraints
  3. Use the trained models to make predictions with fairness guarantees

Results

Our constrained models demonstrate significantly reduced bias in age classification across different racial groups, while maintaining overall accuracy comparable to unconstrained baselines. The Seldonian approach provides statistical guarantees regarding fairness constraints that traditional methods cannot offer.

Future Work

  • Extension to other demographic attributes and intersectional bias
  • Improvement of model architecture for better accuracy and fairness trade-offs
  • Exploration of other fairness constraint formulations
