
New Algorithmic and Hardness results in Learning, Error Correcting Codes, and Constraint Satisfaction Problems

Series: Ph.D. (Engg.) Thesis Defence - Online

Speaker: Mr. Suprovat Ghoshal, Ph.D. (Engg.) student, Dept. of CSA

Date/Time: May 11, 16:00

Location: Microsoft Teams - Online

Faculty Advisor: Prof. Arnab Bhattacharyya and Prof. Siddharth Barman

Abstract:
Approximation algorithms are a natural way to deal with the intractability barrier inherent in many naturally arising computational problems. However, it is often the case that approximating a problem is as hard as solving it exactly, an observation that is the underlying philosophy driving the study of hardness of approximation. While the last couple of decades have seen significant progress towards tight inapproximability results for many important problems, there remain several fundamental problems whose best known hardness-of-approximation lower bounds are still quite far from the best known upper bounds achievable by efficient algorithms. In this thesis, we investigate this phenomenon for several fundamental computational problems in Learning Theory, Error Correcting Codes, and Constraint Satisfaction Problems (CSPs), along the following themes:
1. Hardness of Learning using Thresholds: Improper learning -- i.e., learning a concept class with a different, and potentially easier to handle, hypothesis class -- is often used as a natural relaxation when the exact (proper) learning problem is intractable. In this thesis, we show the NP-hardness of (improperly) learning natural concept classes such as Halfspaces and Disjunctive Normal Forms using threshold functions. Our results are tight with respect to the approximation factors efficiently achievable when learning using arbitrary functions of thresholds.
2. Parameterized Complexity of Finding Sparse Solutions: Many natural problems such as k-VectorSum and k-EvenSet can be modeled as finding a sparse solution to a system of equations over finite fields (a toy formulation of this search is sketched after this list). We study these problems in the framework of parameterized complexity and give new (and often tight) lower bounds for exact and approximate versions of these problems under various hypotheses. We also use these results to establish new hardness results for learning sparse parities.
3. Vertex Deletion CSPs: We study vertex deletion CSPs, a variant of CSPs where the objective is to delete the fewest vertices needed to recover a fully satisfiable sub-instance. In particular, we give new algorithmic and hardness results for the StrongUniqueGames problem, the vertex deletion variant of UniqueGames, establishing a connection with small-set vertex expansion en route. We also give new algorithmic results for vertex deletion CSPs such as OddCycleTransversal (illustrated after this list), BalancedVertexSeparator, and Partial-3-Coloring in more tractable settings, such as when the underlying constraint graph has low threshold rank or is sampled semi-randomly.
4. Testing Sparsity: We also investigate the following question: can one design efficient algorithms for testing a property whose search/optimization version is intractable? We study this question in the form of testing sparsity, where one simply wishes to check whether a matrix admits a sparse factorization (see the sketch after this list). While the optimization version of this problem (Dictionary Learning) is intractable without distributional assumptions, we design an efficient algorithm for its property testing formulation. We also extend our results to the noise-tolerant setting and to settings where the basis of representation is known.
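
For concreteness, the toy Python sketch below spells out the search problem underlying theme 2 (k-VectorSum/k-EvenSet over GF(2)): given a matrix A and a target vector b, find a solution supported on at most k coordinates. The O(n^k) enumeration, the function name, and the example instance are illustrative assumptions, not the algorithms or reductions from the thesis.

from itertools import combinations

def sparse_solution_gf2(A, b, k):
    """Return column indices S, |S| <= k, whose columns of A sum to b
    over GF(2), or None if no such sparse solution exists.
    Illustrative brute force only: tries all supports of size <= k."""
    m = len(A)
    n = len(A[0]) if m else 0
    for size in range(k + 1):
        for S in combinations(range(n), size):
            # XOR (sum mod 2) the chosen columns and compare with the target.
            if all(sum(A[i][j] for j in S) % 2 == b[i] for i in range(m)):
                return S
    return None

if __name__ == "__main__":
    A = [[1, 0, 1],
         [0, 1, 1]]
    b = [1, 1]
    print(sparse_solution_gf2(A, b, k=2))  # prints (2,): column 2 equals b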
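
Similarly, for theme 3, here is a minimal illustration of OddCycleTransversal as a vertex deletion problem: delete the fewest vertices so that the remaining graph is bipartite, i.e., the residual 2-coloring CSP is fully satisfiable. The exhaustive enumeration is again purely illustrative and is not the thesis algorithm.

from itertools import combinations

def is_bipartite(n, edges, removed):
    """Greedy 2-coloring of the graph induced on the kept vertices."""
    adj = {v: [] for v in range(n) if v not in removed}
    for u, v in edges:
        if u not in removed and v not in removed:
            adj[u].append(v)
            adj[v].append(u)
    color = {}
    for s in adj:
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    stack.append(w)
                elif color[w] == color[u]:
                    return False  # odd cycle found
    return True

def odd_cycle_transversal(n, edges):
    """Smallest vertex set whose deletion leaves a bipartite graph."""
    for size in range(n + 1):
        for S in combinations(range(n), size):
            if is_bipartite(n, edges, set(S)):
                return set(S)

if __name__ == "__main__":
    # A triangle plus a pendant edge: deleting one triangle vertex suffices.
    print(odd_cycle_transversal(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))  # {0}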
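
Finally, for theme 4, the toy check below captures the property being tested in the known-basis setting: whether every column of a data matrix Y is exactly representable as a k-sparse combination of the columns of a known dictionary D. The exhaustive support search and all names here are hypothetical stand-ins; the thesis gives an efficient property tester for this problem.

from itertools import combinations
import numpy as np

def is_column_k_sparse(D, y, k, tol=1e-8):
    """True if y = D @ x for some x supported on at most k coordinates."""
    n = D.shape[1]
    for size in range(k + 1):
        for S in combinations(range(n), size):
            if size == 0:
                if np.linalg.norm(y) <= tol:
                    return True
                continue
            # Least-squares fit restricted to the candidate support S.
            x = np.linalg.lstsq(D[:, list(S)], y, rcond=None)[0]
            if np.linalg.norm(D[:, list(S)] @ x - y) <= tol:
                return True
    return False

def admits_k_sparse_factorization(D, Y, k):
    """Check every column of Y against the known dictionary D."""
    return all(is_column_k_sparse(D, Y[:, j], k) for j in range(Y.shape[1]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((6, 8))
    X = np.zeros((8, 3))
    X[1, 0] = 2.0; X[4, 1] = -1.0; X[6, 1] = 0.5; X[2, 2] = 3.0  # 2-sparse
    Y = D @ X
    print(admits_k_sparse_factorization(D, Y, k=2))  # True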

Microsoft Teams link:
https://teams.microsoft.com/l/meetup-join/19%3ameeting_OGQzZDlkODItZDBmNi00M2JmLWJjYmEtZmRjYWJkOTU4NWQy%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%22497b6b67-f9b1-41bc-a3fb-f22b0fd477ef%22%7d
