Seminars


Mitigating Bias via Algorithms with Fairness Guarantees

Series: Ph.D. Colloquium

Speaker: Sruthi Gorantla, Ph.D. (Engg.) student, Dept. of CSA

Date/Time: Feb 24 14:30:00

Location: CSA Conference Room (Room No. 101, Ground Floor)

Faculty Advisor: Prof. Anand Louis

Abstract:
The rapid integration of automated decision-making systems in critical domains such as resume screening, loan approval, content recommendation, and disaster containment has raised significant concerns regarding biases in these systems. This research investigates both theoretical and practical aspects of fairness in machine learning models, proposing novel fairness definitions and algorithmic frameworks that mitigate bias while maintaining performance. Our contributions are broadly categorized into three parts.

Part 1: Post-Processing and In-Processing for Ex-Post Group-Fair Rankings
Ranking systems shape decision-making across many fields but often introduce biases that disproportionately affect underrepresented groups. In applications such as resume screening, the system must output a fair ranking every time, rather than merely satisfying fairness in expectation over multiple rankings. We propose a post-processing technique to achieve ex-post group fairness in ranking systems; that is, every ranking the system outputs is guaranteed to give fair representation to all groups. Our key contribution is a novel randomized ranking algorithm that produces ex-post group-fair rankings by sampling from a probability distribution satisfying natural fair-ranking axioms that we define. We further show that our sampler also enables in-processing for ex-post fairness in learning-to-rank systems. Experiments on real-world datasets, such as credit risk assessments and student admissions, demonstrate that our approach effectively balances fairness and ranking utility.
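For intuition, the sketch below shows one simple way to sample a ranking that respects prefix-wise group representation bounds: at each position, a group whose lower bound would otherwise be violated must be picked, and a group at its upper bound cannot be. The specific lower/upper prefix constraints, the uniform randomization over feasible groups, and the name `sample_group_fair_ranking` are illustrative assumptions for this summary, not the axioms or the sampling distribution from the talk.

```python
import math
import random

def sample_group_fair_ranking(items, lower, upper, k, seed=None):
    """
    Illustrative sketch of sampling one ex-post group-fair ranking of length k.
    `items` maps each group to its candidates sorted by decreasing merit;
    `lower[g]` / `upper[g]` are fractional representation bounds that every
    prefix of the output ranking must respect.
    """
    rng = random.Random(seed)
    remaining = {g: list(cands) for g, cands in items.items()}
    counts = {g: 0 for g in items}
    ranking = []
    for t in range(1, k + 1):
        # groups that must be picked now so the prefix lower bound still holds
        must = [g for g in items
                if remaining[g] and counts[g] < math.floor(lower[g] * t)]
        # groups that can be picked without exceeding their prefix upper bound
        can = [g for g in items
               if remaining[g] and counts[g] + 1 <= math.ceil(upper[g] * t)]
        pool = must if must else can
        if not pool:
            raise ValueError(f"constraints infeasible at position {t}")
        g = rng.choice(pool)                  # randomize over feasible groups
        ranking.append(remaining[g].pop(0))   # best remaining candidate in g
        counts[g] += 1
    return ranking

# Example with two groups and symmetric 40-60% prefix bounds:
# sample_group_fair_ranking({"A": ["a1", "a2"], "B": ["b1", "b2"]},
#                           lower={"A": 0.4, "B": 0.4},
#                           upper={"A": 0.6, "B": 0.6}, k=4, seed=0)
```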

Part 2: Achieving Group and Individual Fairness Simultaneously
Ensuring fairness requires addressing both group fairness (equitable treatment across demographic groups) and individual fairness (similar treatment for similar individuals). Given group fairness constraints in ranking, we define underranking: a phenomenon where meritorious individuals are systematically pushed down in rankings to satisfy the group fairness constraints. We prove a theoretical lower bound on underranking and propose an efficient post-processing algorithm with simultaneous group fairness and underranking guarantees comparable to the lower bound. Empirical results show that our algorithm achieves both fairness notions while outperforming state-of-the-art baselines. Extending beyond ranking, we propose a novel algorithm for achieving both group and individual fairness in graph min-cut problems.
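As a concrete illustration of the quantity being bounded, one natural way to measure underranking is the worst factor by which an item is pushed below its merit-only position; the sketch below computes this for a given pair of rankings. The function name and this exact ratio form are assumptions for illustration and may differ from the definition used in the thesis.

```python
def underranking(merit_order, fair_order):
    """
    Illustrative underranking measure: the worst factor by which any item's
    position in the group-fair ranking exceeds its position in the purely
    merit-based ranking.  A value of 1.0 means no item was pushed down.
    """
    merit_pos = {item: i + 1 for i, item in enumerate(merit_order)}
    fair_pos = {item: i + 1 for i, item in enumerate(fair_order)}
    return max(fair_pos[item] / merit_pos[item] for item in merit_pos)

# Example: item "c" drops from merit position 3 to position 5 (factor 5/3).
# underranking(["a", "b", "c", "d", "e"], ["a", "b", "d", "e", "c"])  # ~1.67
```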

Part 3: Error-Based Group Fairness in Ranking and Clustering
Beyond traditional fairness constraints, we explore error-based fairness, which requires that the error incurred by the model is distributed fairly across groups. Inspired by socially-fair clustering (Ghadiri et al., FAccT '21; Abbasi et al., FAccT '21), we develop approximation algorithms for socially-fair (k, z)-clustering (which includes k-median and k-means) and socially-fair linear subspace clustering, using strong coresets. Experimental evaluations validate their effectiveness in mitigating bias. In ranking, we introduce a cascaded-norm objective function that measures error fairly across groups in settings where the ranker learns from noisy pairwise preferences. We establish theoretical lower bounds for both group-blind and group-aware fair ranking algorithms, showing that our group-blind algorithm nearly matches the sample-complexity lower bound, while our group-aware approach achieves better sample complexity in real-world experiments.
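To make the error-based objective concrete, the sketch below evaluates one common form of the socially-fair clustering cost: assign each point to its nearest center and take the maximum, over groups, of that group's average z-th-power distance to its centers. The per-group normalization and the name `socially_fair_cost` are assumptions for illustration; the algorithms in the talk approximately optimize this kind of objective rather than merely evaluating it.

```python
import numpy as np

def socially_fair_cost(points, groups, centers, z=2):
    """
    Illustrative socially-fair (k, z) clustering objective: each point is
    assigned to its nearest center, each group's cost is the average z-th
    power of its points' distances to their assigned centers, and the
    objective is the maximum of these per-group costs.  (z=2 resembles a
    socially-fair k-means cost, z=1 a k-median-style cost.)
    """
    points = np.asarray(points, dtype=float)
    centers = np.asarray(centers, dtype=float)
    groups = np.asarray(groups)
    # distance from every point to its nearest center, shape (n,)
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2).min(axis=1)
    per_group = [(d[groups == g] ** z).mean() for g in np.unique(groups)]
    return max(per_group)
```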