Seminars

New Algorithmic and Hardness Results in Learning, Error Correcting Codes, and Constraint Satisfaction Problems

Series: Ph.D (Engg.) (Colloquium) - ONLINE

Speaker: Mr. Suprovat Ghoshal, Ph.D (Engg.) student, Dept. of C.S.A

Date/Time: Oct 05 15:30:00

Location: Microsoft Teams (meeting link below)

Faculty Advisors: Dr. Arnab Bhattacharyya and Dr. Siddharth Barman

Abstract:
Approximation is a natural way to deal with the intractability barrier inherent in many natural computational problems. However, it is often the case that solving the approximate version of a problem is as hard as solving the exact version itself; this phenomenon is the driving philosophy underlying the celebrated Probabilistically Checkable Proofs (PCP) theorem. In this thesis, we investigate the hardness of approximating several fundamental problems in Learning Theory, Coding Theory, and Constraint Satisfaction Problems.

-- Hardness of Learning Halfspaces Using Polynomial Threshold Functions.
In the first part, we study the complexity of improper learning of halfspaces. We consider the following question: given a distribution over point-label pairs with the guarantee that there exists a halfspace classifying almost all of the pairs (say, 99%) correctly, can we efficiently find a hypothesis from a (possibly different) hypothesis class that classifies a non-trivial fraction (say, 51%) of the point-label pairs correctly? When the hypothesis class is the set of halfspaces itself (i.e., proper learning), it is well known that weakly learning halfspaces is NP-Hard. A natural next step is to consider hypothesis classes that generalize halfspaces (i.e., improper learning), in the hope that this relaxation allows for efficient approximation algorithms. A popular candidate hypothesis class in this setting is the set of Polynomial Threshold Functions (PTFs). By inclusion, learning halfspaces using PTFs is at least as easy as proper learning of halfspaces, so the proper-learning hardness by itself does not rule out efficiently weakly learning halfspaces using constant degree PTFs. However, the added twist of dealing with two distinct hypothesis classes makes the task of ruling out weak learning in this setting much more challenging. Consequently, previous results in this direction were only able to rule out weak learning using degree-2 PTFs. In this work, we show that it is NP-Hard to weakly learn halfspaces using any constant degree PTF. In fact, we rule out efficient weak learning of halfspaces using any function of a constant number of constant degree PTFs.
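As a minimal illustration of the two hypothesis classes involved (this sketch is not from the thesis; the function names are hypothetical), a halfspace is the sign of a linear form while a degree-d PTF is the sign of a degree-d polynomial, so the PTF class strictly generalizes halfspaces:

```python
# Illustrative sketch (not from the thesis): a halfspace is the sign of a
# linear form, and a degree-2 PTF is the sign of a quadratic form, so the PTF
# class strictly contains halfspaces (the improper-learning setting).
import numpy as np

def halfspace(w, theta):
    # h(x) = sign(<w, x> - theta)
    return lambda x: 1 if np.dot(w, x) - theta >= 0 else -1

def degree2_ptf(Q, w, theta):
    # f(x) = sign(x^T Q x + <w, x> - theta)
    return lambda x: 1 if x @ Q @ x + np.dot(w, x) - theta >= 0 else -1

def agreement(hypothesis, samples):
    # fraction of (point, label) pairs the hypothesis classifies correctly
    return np.mean([hypothesis(x) == y for x, y in samples])

rng = np.random.default_rng(0)
samples = [(rng.standard_normal(5), 1) for _ in range(10)]
print(agreement(halfspace(np.ones(5), 0.0), samples))
print(agreement(degree2_ptf(np.eye(5), np.zeros(5), 1.0), samples))
```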

-- Hardness of Learning DNFs using Halfspaces.
In the second part, we investigate the analogous question of weakly learning Disjunctive Normal Forms (DNFs). DNFs are a fundamental concept class, simply because every Boolean function admits a DNF representation, and the size of this representation is often a useful measure of the function's complexity. This naturally motivates the question of agnostic learning of DNFs: given a distribution over point-label pairs that is perfectly (or almost perfectly) classifiable by a small-size DNF, one wishes to efficiently find a hypothesis that classifies a large fraction of these point-label pairs correctly. While, analogously to the setting of halfspaces, it is NP-Hard to weakly learn constant-term DNFs using constant-term DNFs, the similarities do not carry over to the setting of improper learning. In particular, it is folklore that constant-term DNFs can be efficiently learnt using constant degree PTFs, which motivates the question of whether DNFs can be weakly learnt using halfspaces. We show that it is NP-Hard to weakly learn constant-term DNFs (or even a noisy 1-term DNF) using any function of a constant number of halfspaces. This simultaneously generalizes and strengthens previously known results on learning 2-term DNFs using t-term DNFs, learning noisy monomials using halfspaces, and learning intersections of two halfspaces using intersections of a constant number of halfspaces.
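To make the folklore fact above concrete, here is a small Python sketch (illustrative, not from the thesis): a t-term DNF is computed by a degree-t PTF, because for each term the "deficiency" (the number of literals in the term minus the number that are satisfied) is zero exactly when the term is satisfied, so the product of the deficiencies is zero exactly when the DNF accepts.

```python
# Illustrative sketch (not from the thesis): a t-term DNF equals the degree-t
# PTF sign(1/2 - prod_i d_i(x)), where d_i(x) = |S_i| - (number of satisfied
# literals of term i); d_i(x) = 0 exactly when term i is satisfied.
import itertools

def literal(j, x):
    # literal j > 0 means x_j, literal j < 0 means NOT x_j (inputs are 0/1)
    return x[abs(j)] if j > 0 else 1 - x[abs(j)]

def dnf(terms, x):
    return any(all(literal(j, x) == 1 for j in term) for term in terms)

def ptf(terms, x):
    p = 1
    for term in terms:                       # each factor is linear in x
        p *= len(term) - sum(literal(j, x) for j in term)
    return p < 0.5                           # sign(1/2 - p) accepts iff p == 0

terms = [[1, -2], [2, 3, -4]]                # (x1 AND NOT x2) OR (x2 AND x3 AND NOT x4)
for bits in itertools.product([0, 1], repeat=4):
    x = {i + 1: bits[i] for i in range(4)}
    assert dnf(terms, x) == ptf(terms, x)    # agree on all 2^4 inputs
```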

-- Hardness of Solving Sparse Equations over Finite Fields.

Given a system of linear equations over a finite field that admits a k-sparse solution, can we find a Ck-sparse solution (for some constant C) that satisfies all the equations? Can we efficiently find an equally sparse solution that satisfies a large fraction of the equations? These are natural questions that arise in the context of Learning Sparse Parities over finite fields and error correcting codes. We study these problems in the setting of parameterized complexity, where the parameterization is in terms of the promised sparsity of the solution. Of particular importance here is the homogeneous variant of the former question, popularly termed the EVEN-SET problem. Here one asks whether there exists a Fixed Parameter Tractable (FPT) algorithm that can recover an approximately sparse non-trivial solution to a system of homogeneous linear equations. The fixed parameter tractability of even the exact version was a long-standing open problem in parameterized complexity. We show that, assuming randomized Gap-ETH (the Gap Exponential Time Hypothesis), there are no FPT algorithms for the EVEN-SET problem that achieve a constant factor approximation, for any constant. Our techniques also extend to rule out constant factor FPT approximation algorithms for the parameterized variant of the Shortest Vector Problem, a fundamental computational problem on lattices with widespread applications in cryptography.
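For context, the following Python sketch (illustrative, not from the thesis; the function name and the 0/1 numpy encoding of F_2 are assumptions for the example) shows the naive search for a k-sparse solution that the parameterized question asks to improve upon: it runs in roughly n^k time, whereas an FPT algorithm would run in time f(k)·poly(n).

```python
# Illustrative sketch (not from the thesis): brute-force search for a k-sparse
# solution of A x = b over F_2 by trying every support of size k and every 0/1
# pattern on it (roughly n^k * 2^k time, i.e. XP rather than FPT).  For the
# homogeneous EVEN-SET variant (b = 0) one would additionally require x != 0.
import itertools
import numpy as np

def k_sparse_solution_gf2(A, b, k):
    _, n = A.shape
    for support in itertools.combinations(range(n), k):
        for pattern in itertools.product([0, 1], repeat=k):
            x = np.zeros(n, dtype=int)
            x[list(support)] = pattern
            if np.array_equal(A.dot(x) % 2, b % 2):
                return x
    return None
```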

-- Vertex Deletion into Easy CSPs.
We consider the following question: given a Constraint Satisfaction Problem (CSP) with the promise that deleting a few vertices makes it easy, are there efficient algorithms that can delete a relatively small number of vertices so as to obtain a large vertex-induced easy sub-instance? This fairly general question models a wide range of combinatorial tasks, such as deleting the smallest number of vertices to make a graph bipartite, or deleting the smallest number of variables to make a system of 2-variable equations satisfiable, to name a few. In particular, we study the UNIQUE-GAMES variant of the above problem -- known as STRONG-UNIQUE-GAMES -- where, given a UNIQUE-GAMES instance with the promise that deleting a few vertices makes it fully satisfiable, the objective is to efficiently delete a relatively small subset of vertices so that the remaining instance is fully satisfiable. In addition to being an interesting problem in its own right, STRONG-UNIQUE-GAMES is a useful variant which is often easier to reduce from, and has been used to show tight Unique Games inapproximability for several problems such as VERTEX-COVER, MAX-BI-CLIQUE, and SCHEDULING-WITH-PRECEDENCE-CONSTRAINTS. However, the vertex-deletion nature of this problem pushes its complexity into the realm of CSPs with global constraints, which are in general far less well understood than general Max CSPs. By exploiting a novel connection between STRONG-UNIQUE-GAMES and SMALL-SET-VERTEX-EXPANSION in graphs, we give new algorithmic and hardness results for STRONG-UNIQUE-GAMES that are matching for constant alphabet sizes. In particular, this leads to tight bounds for the well-studied ODD-CYCLE-TRANSVERSAL problem, which is the vertex-deletion version of MAX-CUT. We also extend the algorithmic techniques used here to analogous variants of settings such as 3-COLORING and CSPs with low threshold rank.
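As a minimal illustration of the objects involved (illustrative only, not part of the thesis; the instance encoding and function name are assumptions for the example), checking whether a UNIQUE-GAMES instance is fully satisfiable after deleting a given vertex set is easy by label propagation; the hard part of STRONG-UNIQUE-GAMES is finding a small deletion set.

```python
# Illustrative sketch (not from the thesis): decide whether a UNIQUE-GAMES
# instance is fully satisfiable after deleting a set of vertices.  Each edge
# (u, v) carries a permutation pi of the alphabet {0, ..., q-1}, encoding the
# constraint label(v) == pi[label(u)].  Since fixing one vertex's label
# determines its whole connected component, this check runs in polynomial time.
from collections import defaultdict, deque

def satisfiable_after_deletion(n, edges, deleted, q):
    adj = defaultdict(list)
    for (u, v), pi in edges.items():
        if u in deleted or v in deleted:
            continue
        inv = tuple(pi.index(a) for a in range(q))    # inverse permutation
        adj[u].append((v, pi))
        adj[v].append((u, inv))
    label = {}
    for root in range(n):
        if root in deleted or root in label:
            continue
        for start in range(q):                        # try each root label
            trial, ok, queue = {root: start}, True, deque([root])
            while queue and ok:
                u = queue.popleft()
                for v, pi in adj[u]:
                    want = pi[trial[u]]
                    if v not in trial:
                        trial[v] = want
                        queue.append(v)
                    elif trial[v] != want:
                        ok = False                    # conflicting constraint
                        break
            if ok:
                label.update(trial)
                break
        else:
            return False                              # no root label works
    return True
```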

-- Testing Sparsity over Known and Unknown Bases.

In the final part, we take a detour from the framework of approximation algorithms and study the following question under the lens of property testing. Given a random sketch of a matrix, can we efficiently test whether it admits a sparse representation in terms of a known or unknown basis? This can be thought of as the property testing version of the well-studied Dictionary Learning problem, albeit in a non-distributional setting. As expected, dictionary learning is an NP-Hard problem, and efficient algorithms with provable guarantees are known to exist only under distributional settings. Hence, a priori it is not evident whether an efficient algorithm exists for the testing version of the problem in the non-distributional setting, let alone a query-efficient one. In this work, we give a simple testing algorithm for the above problem under some standard assumptions on the basis A. Our tester is noise tolerant, and surprisingly, the number of sketches required by the tester is independent of the ambient dimension. The tester is based on the Gaussian width of sets, an intrinsic measure of the complexity of geometric objects. The analysis of the tester combines tools from Gaussian processes and high-dimensional geometry.
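To give a sense of the quantity the tester is built on (illustrative only, not the thesis's algorithm; the choice of set and the function name are assumptions for the example), the Gaussian width of a set S is the expected supremum of <g, x> over x in S for a standard Gaussian g; for the set of k-sparse unit vectors this supremum is simply the norm of the k largest-magnitude coordinates of g, which is far smaller than the width of the full unit ball.

```python
# Illustrative sketch (not from the thesis): Monte Carlo estimate of the
# Gaussian width w(S) = E_g[ sup_{x in S} <g, x> ] for S = the set of k-sparse
# unit vectors in R^n.  The supremum is attained by putting all mass on the k
# largest-magnitude coordinates of g, so it equals the norm of those entries.
import numpy as np

def gaussian_width_k_sparse(n, k, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        g = rng.standard_normal(n)
        top_k = np.sort(np.abs(g))[-k:]      # k largest |g_i|
        total += np.linalg.norm(top_k)
    return total / trials

# Roughly sqrt(k * log(n / k)) up to constants -- much smaller than sqrt(n),
# the Gaussian width of the full unit ball.
print(gaussian_width_k_sparse(n=1000, k=10))
```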

Microsoft Teams Meeting Link:

https://teams.microsoft.com/l/meetup-join/19%3ameeting_NDdlY2UwZTMtZmM3OC00ODdjLWE2MzQtYmY0ZTA4MzJhMjg1%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%22dff3e365-caca-4573-a6d4-f5c40f3fd57b%22%7d
