
mggg/Quantitative_fairness

 
 


This repository accompanies the paper "Quantitative Relaxations for Arrow's Axioms".

Repository Structure

The repository contains the following core files:

setup.sh Sets up the environment for the project, including installing necessary dependencies.

voting_metrics_main_code/fairness_metric.py

Implements:

  • The Kendall Tau distance function
  • Our quantitative fairness metrics:
    • $\sigma_{IIA}$ (Independence of Irrelevant Alternatives)
    • $\sigma_{UM}$ (Unanimity)
    • $\sigma_{IIA}^{WS}$ (The "Winner Set" version of IIA)
    • $\sigma_{UM}^{WS}$ (The "Winner Set" version of Unanimity)
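For reference, the Kendall tau distance counts the candidate pairs that two rankings order differently. A minimal, self-contained sketch (names here are illustrative, not the repository's actual API in fairness_metric.py):

```python
from itertools import combinations

def kendall_tau_distance(ranking_a, ranking_b):
    """Count candidate pairs that the two rankings order differently.

    Both arguments are rankings (best first) over the same candidate set.
    """
    pos_a = {c: i for i, c in enumerate(ranking_a)}
    pos_b = {c: i for i, c in enumerate(ranking_b)}
    # A pair (x, y) is discordant when the two rankings disagree on its order.
    return sum(
        1
        for x, y in combinations(ranking_a, 2)
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
    )
```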

voting_metrics_main_code/voting_rules.py A factory that constructs the appropriate VoteKit voting rule from a string input.
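A string-to-rule factory of this kind can be sketched as follows. This is a hypothetical simplification: the real voting_rules.py dispatches to VoteKit election classes, whose API is not reproduced here, so the two toy rules below are stand-ins.

```python
def plurality_winner(profile):
    """Toy plurality rule: winner has the most first-place votes.

    profile is a list of rankings, each a list of candidates, best first.
    """
    tallies = {}
    for ranking in profile:
        tallies[ranking[0]] = tallies.get(ranking[0], 0) + 1
    return max(tallies, key=tallies.get)

def borda_winner(profile):
    """Toy Borda rule: a candidate in position i of an n-candidate
    ranking earns n - 1 - i points; highest total wins."""
    scores = {}
    n = len(profile[0])
    for ranking in profile:
        for i, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - i)
    return max(scores, key=scores.get)

# The factory maps a string name to a rule function.
RULES = {"plurality": plurality_winner, "borda": borda_winner}

def get_voting_rule(name):
    try:
        return RULES[name.lower()]
    except KeyError:
        raise ValueError(f"Unknown voting rule: {name!r}")
```

Usage: `get_voting_rule("plurality")(profile)` returns the winner for that profile.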

scripts/run_2_bloc_BT_pipeline.sh Runs the full 2-bloc Bradley-Terry pipeline (profile generation if needed, metric collection, proportionality analysis, and plots).

scripts/run_ny_pipeline.sh Runs the pipeline for the New York dataset.

scripts/run_portland_pipeline.sh Runs the pipeline for the Portland dataset.

scripts/run_scottish_pipeline.sh Runs the pipeline for the Scottish dataset.

data/ Contains the raw and cleaned data files for the experiments, including the pre-generated BT preference profiles (via git-lfs). Populated by setup.sh.

pipelines/ Contains the pipeline files for each dataset.

  • pipelines/bradley-terry/ Generates BT profiles (generate_BT_profiles.py), collects metric statistics (collect_stats_BT.py), computes proportionality diagnostics (compute_proportionality.py), and builds plots (make_plots/).
  • pipelines/NY/ Collects NY statistics (collect_stats_ny.py).
  • pipelines/portland/ Collects Portland statistics (collect_stats_portland.py) and includes optional clustering diagnostics (short_burst_clustering.py, brute_force_clustering.py).
  • pipelines/scottish/ Collects Scottish statistics and produces plots and CSV summaries.

stats/ Contains the statistics files generated by the pipelines.

plots/ Contains the plots generated by the pipelines.

other_files/ Contains miscellaneous files that were either used in earlier notebooks or are used to help clean the data during setup.

notebooks/ Contains Jupyter notebooks and earlier drafts of work that appears in the paper.

Setup

This repo uses uv to manage dependencies. Once uv is installed, you can simply run:

./setup.sh

from your terminal to set up the environment and extract all the necessary data for replication.

Running the Experiments

To reproduce the experiments in the paper, run the pipeline scripts:

NOTE: scripts/run_2_bloc_BT_pipeline.sh will take a while to run on a machine without many CPU cores. The commands below are ordered from shortest to longest runtime. If BT preference profiles are already present (pulled via git-lfs in setup.sh), the runner skips regeneration.

⚠️ Replication requires system tools git, git-lfs, uv, wget, and unzip, plus Python >=3.12. setup.sh downloads data from the internet and can require multiple GB of disk space (BT profiles are large). The BT pipeline is CPU-heavy and may take a long time without many cores.

./scripts/run_ny_pipeline.sh
./scripts/run_portland_pipeline.sh
./scripts/run_scottish_pipeline.sh
./scripts/run_2_bloc_BT_pipeline.sh

Data Sources

Data retrieval is handled by setup.sh. The data sources are:
