Seminars


On Learning and Lower Bound Problems Related to Iterated Matrix Multiplication Polynomial

Series: Ph.D. Thesis Defense

Speaker: Mr. Vineet Nair, Ph.D. student, Dept. of CSA, IISc

Date/Time: Mar 04 09:45:00

Location: CSA Seminar Hall (Room No. 254, First Floor)

Faculty Advisor: Prof. Chandan Saha

Abstract:
The iterated matrix multiplication polynomial (IMM) of width w and length d is the (1,1) entry of the product of d square matrices of size w x w. The w^2 d entries of the d matrices are distinct variables. In this thesis, we study certain learning and lower bound problems related to IMM.

Our first work gives a polynomial-time randomized algorithm for equivalence testing of IMM. At its core, the equivalence testing algorithm exploits a connection between the irreducible invariant subspaces of the Lie algebra of the group of symmetries of a polynomial f that is equivalent to IMM and the layer spaces of a full-rank algebraic branching program computing f. This connection also helps determine the group of symmetries of IMM and show that IMM is characterized by its group of symmetries.

Our second work is related to learning affine projections of IMM, which is believed to be a very hard problem, as it is equivalent to reconstructing algebraic branching programs (ABPs), a powerful model for computing polynomials. The equivalence test for IMM can be viewed as average-case reconstruction of ABPs when the width of the ABP is at most (n/d)^0.5, where n and d are the number of variables and the degree of the polynomial computed by the ABP, respectively. Our second work improves this by first considering a related problem called 'linear matrix factorization' (LMF), a natural generalization of the polynomial factorization problem. We give a polynomial-time randomized algorithm for average-case LMF for matrix products of width at most 0.5 n^0.5. In fact, we give a polynomial-time randomized algorithm that solves the (worst-case) LMF problem when the input matrix product is non-degenerate or a pure product -- a notion we define in this work. Using our algorithm for LMF, we give a non-trivial average-case reconstruction algorithm for ABPs of width at most 0.5 n^0.5, which is interesting in the context of the Ω(n^0.5) width lower bound known for homogeneous ABPs.

Our last work gives lower bounds for interesting restrictions of arithmetic formulas computing IMM. We prove a w^Ω(d) lower bound on the size of multilinear depth-three formulas computing IMM of width w and length d. The lower bound is proved by introducing a novel variant of the partial derivatives measure called skewed partial derivatives, which has found applications in subsequent works. Improving this result to a w^Ω(log d) size lower bound on multilinear formulas computing IMM would imply a super-polynomial separation between ABPs and arithmetic formulas. We also show an exponential separation between multilinear depth-three and multilinear depth-four formulas, improving on the quasi-polynomial separation previously known. We also consider a restriction of multilinear formulas, called interval set-multilinear formulas, computing IMM. Proving a super-polynomial size lower bound on interval set-multilinear formulas computing IMM would imply a super-polynomial separation between algebraic branching programs and homogeneous formulas in the non-commutative setting. We make progress in this direction by giving a super-polynomial size lower bound on an interesting restriction of interval set-multilinear formulas computing IMM.
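
To make the definition in the first paragraph concrete, the following short Python/SymPy sketch (not part of the thesis; the helper names make_layer and imm_polynomial are introduced only for illustration) builds IMM of width w and length d symbolically as the (1,1) entry of a product of d matrices with distinct variable entries.

import sympy as sp

def make_layer(k, w):
    # w x w matrix of fresh variables x_{k,i,j} for layer k
    return sp.Matrix(w, w, lambda i, j: sp.Symbol(f"x_{k}_{i}_{j}"))

def imm_polynomial(w, d):
    # IMM of width w and length d: the (1,1) entry of the product of d layers
    product = sp.eye(w)
    for k in range(d):
        product = product * make_layer(k, w)
    return sp.expand(product[0, 0])

if __name__ == "__main__":
    # Tiny instance: width 2, length 3 -- a multilinear polynomial in
    # w^2 d = 12 variables with w^(d-1) = 4 monomials, each of degree d = 3.
    p = imm_polynomial(2, 3)
    print(p)
    print("number of monomials:", len(p.as_ordered_terms()))

On this small instance, each monomial corresponds to a path through the d layers, which is the algebraic-branching-program view of IMM referred to in the abstract.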
