
Algorithms for Fair Decision Making: Provable Guarantees and Applications

Series: Ph.D (Engg.) Thesis Defence - ONLINE

Speaker: Ms. Arpita Biswas, Ph.D (Engg.) Student, Dept. of CSA

Date/Time: Oct 12, 15:00

Location: Microsoft Teams - ON-LINE

Faculty Advisor: Prof. Y Narahari & Dr. Siddharth Barman

The topic of fair allocation of indivisible items has received significant attention because of its applicability in several real-world settings. This has led to a vast body of work on defining appropriate fairness notions, providing existence guarantees, and handling the computational issues in obtaining fair allocations. This thesis addresses open questions in this space. We study constrained versions of the fair allocation problem, namely allocation under cardinality constraints and under matroid constraints. These constrained settings generalize the unconstrained formulation, and we are the first to study them. We establish the existence of well-studied fairness notions (such as EF1 and MMS) in the constrained settings, and we design algorithms that make these existence results constructive. Moreover, we define strictly stronger notions of fairness and provide algorithms that achieve these stronger guarantees. Finally, we investigate fairness in diverse application scenarios, such as recommendation systems and classification problems. The key novelty lies in solving these applications through the lens of fair allocation.
Fair Allocation under Cardinality Constraints
We investigate the problem of fairly allocating goods under cardinality constraints and additive valuations. In this setting, the set of goods is partitioned into categories, and an upper limit is imposed on the number of goods any agent may receive from each category. The objective is to find an allocation that satisfies the given cardinality constraints as well as a fairness criterion. We design an efficient algorithm that computes an envy-free up to one good (EF1) allocation. Additionally, this algorithm outputs an exact maximin share (MMS) allocation when the valuations are binary. We also show that the constrained fair allocation problem with additive valuations reduces to an unconstrained fair allocation problem with submodular valuations. This allows us to guarantee 1/3-approximate maximin share (1/3-MMS) allocations under cardinality constraints.
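To make the setting concrete, the following is a minimal Python sketch of a per-category round-robin allocation together with an EF1 checker. This is an illustration of the cardinality-constrained model, not the thesis algorithm: the function names (round_robin_by_category, is_ef1) and the greedy picking rule are our own simplifications, and the sketch respects the cardinality constraints by construction but does not guarantee EF1 in general; the checker verifies the property on a given output.

```python
from itertools import cycle

def round_robin_by_category(agents, categories, values, capacity):
    """Allocate goods category by category in round-robin order.

    agents: list of agent ids
    categories: dict mapping category -> list of goods in it
    values: dict mapping (agent, good) -> additive value
    capacity: dict mapping category -> per-agent upper limit
    Cardinality constraints hold by construction.
    """
    bundles = {a: [] for a in agents}
    for cat, goods in categories.items():
        # total capacity must cover all goods of this category
        assert len(goods) <= len(agents) * capacity[cat], "infeasible instance"
        remaining = list(goods)
        taken = {a: 0 for a in agents}
        turn = cycle(agents)
        while remaining:
            a = next(turn)
            if taken[a] < capacity[cat]:
                # the agent picks its most-valued remaining good in this category
                g = max(remaining, key=lambda g: values[a, g])
                remaining.remove(g)
                bundles[a].append(g)
                taken[a] += 1
    return bundles

def is_ef1(bundles, values):
    """Envy-freeness up to one good, for additive valuations:
    v_a(A_a) >= v_a(A_b) - max_{g in A_b} v_a(g) for all pairs a, b."""
    for a, A in bundles.items():
        va = sum(values[a, g] for g in A)
        for b, B in bundles.items():
            if a == b or not B:
                continue
            vb = sum(values[a, g] for g in B)
            if va < vb - max(values[a, g] for g in B):
                return False
    return True
```

The checker is useful on its own: EF1 can be verified in O(n^2 m) time for any allocation, constrained or not.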
Fair Allocation under Matroid Constraints
We consider the fair allocation problem under more general constraints. Here, each allocated bundle, in addition to the fairness criterion, must satisfy the independence criterion specified by a matroid. We establish that MMS allocations exist in this setting when the valuations are identical. We provide an algorithm that efficiently computes an EF1 allocation under laminar matroid constraints when the valuations are identical. The algorithm initializes an allocation by computing a matroid-feasible partition of the goods (using a method of Gabow and Westermann) and then iteratively reallocates goods between bundles until an EF1 allocation is obtained. Our reallocation strategy maintains matroid feasibility at each iteration (using an extension of the strong basis-exchange lemma) and ensures convergence in polynomial time.
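The iterate-and-reallocate idea can be sketched on a partition matroid (a special case of a laminar matroid) with identical valuations. This is a toy simplification: it only attempts direct feasibility-preserving transfers from the richest to the poorest bundle, whereas the thesis algorithm uses basis-exchange swaps and comes with a convergence proof that this sketch does not have. All function names here are our own illustrations.

```python
def feasible(bundle, part_of, cap):
    """Partition-matroid independence: at most cap[p] goods from each part p."""
    counts = {}
    for g in bundle:
        p = part_of[g]
        counts[p] = counts.get(p, 0) + 1
        if counts[p] > cap[p]:
            return False
    return True

def ef1_identical(bundles, value):
    """EF1 under identical additive valuations: the poorest bundle must be
    worth at least any other bundle minus that bundle's best good."""
    totals = {a: sum(value[g] for g in B) for a, B in bundles.items()}
    lo = min(totals.values())
    for a, B in bundles.items():
        if B and lo < totals[a] - max(value[g] for g in B):
            return False
    return True

def rebalance(bundles, value, part_of, cap, max_iters=1000):
    """Move goods from the richest to the poorest bundle while keeping
    every bundle matroid-feasible, until EF1 holds or no transfer works."""
    for _ in range(max_iters):
        if ef1_identical(bundles, value):
            return bundles
        totals = {a: sum(value[g] for g in B) for a, B in bundles.items()}
        rich = max(totals, key=totals.get)
        poor = min(totals, key=totals.get)
        moved = False
        for g in sorted(bundles[rich], key=lambda g: value[g]):
            if feasible(bundles[poor] + [g], part_of, cap):
                bundles[rich].remove(g)
                bundles[poor].append(g)
                moved = True
                break
        if not moved:
            break  # a direct transfer is blocked; an exchange swap would be needed
    return bundles
```

The `not moved` branch is exactly where the strong basis-exchange lemma enters in the full algorithm: when no single good can be transferred feasibly, a pairwise swap between bundles can still make progress.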
Stronger Notions of Fairness
We define two novel fairness notions: envy-freeness up to one less-preferred good (EFL) and groupwise maximin share (GMMS). We show that, in terms of social welfare, these notions improve upon EF1 and MMS, respectively. We provide a scale of fairness that establishes where these new notions sit in the hierarchy of existing ones. We give an algorithm that outputs allocations that are simultaneously EFL and 1/2-GMMS in the unconstrained setting. We also show that exact GMMS allocations are guaranteed to exist when the agents' valuations are either binary or identical, and we empirically observe that GMMS allocations exist when valuations are drawn from Gaussian and uniform distributions. These results highlight that, in the unconstrained setting, strengthening the existing fairness notions does not cost us generic existence guarantees.
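GMMS requires that every subgroup of agents, considering only the goods pooled within that subgroup, gives each member at least its maximin share. A brute-force checker makes the definition precise; it is exponential in the number of goods and agents and is meant only for small illustrative instances (both function names are our own).

```python
from itertools import combinations

def mms_value(agent, goods, values, k):
    """Brute-force maximin share of `agent`: the best achievable value of
    the worst bundle over all partitions of `goods` into k bundles."""
    goods = list(goods)
    best = 0
    def rec(i, parts):
        nonlocal best
        if i == len(goods):
            best = max(best, min(sum(values[agent, g] for g in p) for p in parts))
            return
        for p in parts:
            p.append(goods[i])
            rec(i + 1, parts)
            p.pop()
    rec(0, [[] for _ in range(k)])
    return best

def is_gmms(bundles, values):
    """Groupwise MMS: for every subgroup, each member's bundle is worth at
    least its MMS with respect to the goods pooled inside that subgroup."""
    agents = list(bundles)
    for r in range(1, len(agents) + 1):
        for group in combinations(agents, r):
            pooled = [g for a in group for g in bundles[a]]
            for a in group:
                own = sum(values[a, g] for g in bundles[a])
                if own < mms_value(a, pooled, values, len(group)):
                    return False
    return True
```

Taking the subgroup to be all agents recovers MMS, so any GMMS allocation is also MMS; the converse fails, which is why GMMS is the strictly stronger notion.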
Application to Two-Sided Fair Recommendation Systems
We investigate the problem of fair recommendation in two-sided online platforms, such as Amazon, Netflix, and Spotify, consisting of customers on one side and producers on the other. These services have typically focused only on maximizing customer satisfaction by tailoring recommendations to the preferences of individual customers, which may be detrimental to the producers. We consider fairness issues that span both customers and producers. Our approach maps the fair recommendation problem to a constrained version of the fair allocation problem. Our proposed algorithm guarantees at least a maximin share of exposure for most producers and EF1 fairness for every customer. We establish theoretical guarantees and provide empirical evidence through extensive evaluations on real-world datasets.
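The mapping can be pictured with a small sketch in which customers play the role of agents and recommendation slots for products are the goods, with producer exposure capped near its equal share. This is a hypothetical simplification written for illustration (the function name fairrec_sketch and the single-phase greedy rule are our own); the thesis algorithm has further phases and formal guarantees.

```python
def fairrec_sketch(customers, products, relevance, k):
    """Round-robin sketch: each customer, in turn, is recommended its most
    relevant product whose exposure is still below the equal-share cap.

    customers, products: lists of ids
    relevance: dict mapping (customer, product) -> relevance score
    k: number of recommendation slots per customer
    """
    # ceiling of the equal exposure share across products
    cap = -(-(len(customers) * k) // len(products))
    exposure = {p: 0 for p in products}
    recs = {c: [] for c in customers}
    for _ in range(k):
        for c in customers:
            feas = [p for p in products
                    if exposure[p] < cap and p not in recs[c]]
            if not feas:  # fall back if every unseen product is at the cap
                feas = [p for p in products if p not in recs[c]]
            best = max(feas, key=lambda p: relevance[c, p])
            recs[c].append(best)
            exposure[best] += 1
    return recs, exposure
```

Even in this toy form, the exposure cap spreads attention across producers instead of concentrating every slot on the single most relevant product.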
Application to Classification Problems under Prior Probability Shifts
We consider the problem of fair classification under prior probability shifts, a kind of distributional change occurring between the training and test datasets. Such shifts can be observed in the yearly records of several real-world datasets, such as COMPAS. If unaccounted for, they can cause the predictions to become unfair towards specific population sub-groups. We define a fairness notion, called proportional equality (PE), which is motivated by solution concepts from the fair allocation literature and accounts for prior probability shifts. We develop an algorithm, CAPE, that uses prevalence estimation techniques, sampling, and an ensemble of classifiers to ensure fair predictions. We evaluate CAPE on real-world datasets and compare it with state-of-the-art fair algorithms. Our findings indicate that CAPE ensures PE-fair predictions with little compromise on other performance metrics.
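For intuition on the shift itself, the standard posterior re-weighting for a known prior shift can be sketched as follows. This is not CAPE (which additionally performs prevalence estimation, sampling, and ensembling); it only shows the textbook correction that re-weights a classifier's posteriors when the class priors change between training and test, with the function name adjust_posteriors being our own.

```python
def adjust_posteriors(probs, train_prior, test_prior):
    """Correct classifier posteriors for a prior probability shift.

    probs: list of dicts, each mapping class -> p(y|x) under the training prior
    train_prior, test_prior: dicts mapping class -> p(y)
    Returns posteriors re-weighted by test_prior/train_prior and renormalized.
    """
    adjusted = []
    for p in probs:
        w = {y: p[y] * test_prior[y] / train_prior[y] for y in p}
        z = sum(w.values())
        adjusted.append({y: w[y] / z for y in w})
    return adjusted
```

If the test prior must itself be estimated from unlabeled data, as in the setting above, a prevalence estimation step supplies `test_prior` before the correction is applied.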
