Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India

UPCOMING SEMINARS

Series: Department Seminar
Title: Approximating Geometric Knapsack via L-packings

  • Speaker: Dr. Arindam Khan
                   Researcher, IDSIA SUPSI
                   Switzerland
  • Date and Time: Monday, August 21, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
We study the two-dimensional geometric knapsack problem (2DK), a geometric variant of the classical knapsack problem. In this problem, we are given a set of axis-aligned rectangular items, each with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum-profit subset of items inside the knapsack without rotating items. This is a very well-studied optimization problem and finds applications in scheduling, memory allocation, advertisement placement, etc. The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) is $2+\epsilon$ [Jansen and Zhang, SODA 2004].
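For concreteness, 2DK can be stated as the following optimization problem (a standard formalization in our own notation, not quoted from the paper): item $i \in I$ has width $w_i$, height $h_i$ and profit $p_i$, and the knapsack has side length $N$.

```latex
% 2DK (notation ours): select and place a maximum-profit subset of items.
\max_{S \subseteq I,\ (x_i, y_i)_{i \in S}} \sum_{i \in S} p_i
\quad \text{s.t.} \quad
0 \le x_i,\ x_i + w_i \le N, \qquad 0 \le y_i,\ y_i + h_i \le N \quad \forall i \in S,
```

with the placed rectangles $(x_i, x_i+w_i) \times (y_i, y_i+h_i)$ pairwise interior-disjoint and no rotation allowed.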

After more than a decade, in this paper we break the 2-approximation barrier, achieving a polynomial-time $17/9+\epsilon < 1.89$ approximation, which improves to $558/325+\epsilon < 1.72$ in the cardinality case. We also consider the variant of the problem with rotations (2DKR), where the items can be rotated by $90$ degrees. In this case too, the best-known polynomial-time approximation factor (even for the cardinality case) is $2+\epsilon$ [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery developed for 2DK plus a few additional ideas, we obtain a polynomial-time $(3/2+\epsilon)$-approximation for 2DKR, which improves to $4/3+\epsilon$ in the cardinality case.

This is joint work with Waldo Galvez, Fabrizio Grandoni, Sandy Heydrich, Salvatore Ingala and Andreas Wiese. The result will appear in FOCS 2017.

Speaker Bio:
Arindam Khan is a researcher at IDSIA, SUPSI in Lugano, Switzerland. His research areas include approximation algorithms, online algorithms and computational geometry. He obtained his PhD in Algorithms, Combinatorics and Optimization (ACO) from the Georgia Institute of Technology, Atlanta, USA, under Prof. Prasad Tetali. Previously, he has been a research intern in the Theory group at Microsoft Research Redmond and at Microsoft Research Silicon Valley, USA, a visiting researcher at the Simons Institute, Berkeley, USA, and a Blue Scholar at IBM Research India.

Host Faculty: Dr. Anand Louis


 

PAST SEMINARS

Series: Ph.D. Thesis Defense
Title: Number Theoretic, Computational and Cryptographic Aspects of a Certain Sequence of Arithmetic Progressions

  • Speaker: Mr. Cherukapally Srikanth
                   Ph.D. student
                   Dept. of CSA
  • Faculty Advisor: Prof. C.E. Veni Madhavan & Dr. Sanjit Chatterjee
  • Date and Time: Wednesday, August 09, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
This thesis introduces a new mathematical object: a collection of arithmetic progressions whose elements satisfy the inverse property, namely that the j-th terms of the i-th and (i+1)-th progressions are multiplicative inverses of each other modulo the (j+1)-th term of the i-th progression. Such a collection is uniquely defined for any pair $(a,d)$ of co-prime integers. The progressions of the collection are ordered; thus we call it a sequence rather than a collection. The results of the thesis concern the following number theoretic, computational and cryptographic aspects of the defined sequence and its generalizations.

The sequence is closely connected to the classical Euclidean algorithm. Precisely, certain consecutive progressions of the sequence form "groupings". The difference between the common differences of any two consecutive progressions of a grouping is the same. The number of progressions in a grouping is connected to the quotient sequence of the Euclidean algorithm on co-prime input pairs. The research community has studied the behavior of the Euclidean algorithm extensively; the connection proven in the thesis shows, for the first time in the literature, what the quotients of the algorithm signify. Further, the leading terms of progressions within groupings satisfy a mirror-image symmetry property, called "symmetricity". The property is subject to the quotient sequence of the Euclidean algorithm and divisors of integers of the form $x^2-y^2$ falling in specific intervals. The integers $a$ and $d$ are the primary quantities of the defined sequence in a computational sense. Given the two, the leading term and common difference of any progression of the sequence can be computed in time quadratic in the binary length of $d$. On the other hand, the inverse computational question of finding $(a,d)$, given information on some terms of the sequence, is interesting. This problem turns out to be hard as it requires finding solutions to a nearly-determined system of multivariate polynomial equations. Two sub-problems arising in this context are shown to be equivalent to the problem of factoring integers. The reduction to the factoring problem, in both cases, is probabilistic.
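For reference, the quotient sequence mentioned above is simply the sequence of partial quotients produced by the Euclidean algorithm; a minimal sketch (function and variable names are ours):

```python
def quotient_sequence(a, d):
    """Quotients produced by the Euclidean algorithm on the pair (a, d)."""
    quotients = []
    while d != 0:
        q, r = divmod(a, d)
        quotients.append(q)
        a, d = d, r
    return quotients

# Consecutive Fibonacci numbers give the longest quotient sequences:
print(quotient_sequence(89, 55))  # [1, 1, 1, 1, 1, 1, 1, 1, 2]
```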

Utilizing the computational difficulty of solving the inverse problem and the sub-problems mentioned above, we propose a symmetric-key cryptographic scheme (SKCS) and a public key cryptographic scheme (PKCS). The PKCS is also based on the hardness of the problem of finding square roots modulo composite integers. Our proposal uses the same algorithmic and computational primitives for effecting both the PKCS and SKCS. In addition, we use the notion of the sequence of arithmetic progressions to design an entity authentication scheme. The proof of equivalence between one of the inverse computational problems mentioned above and integer factoring led us to formulate and investigate an independent problem concerning the largest divisor of an integer $N$ bounded by the square root of $N$, for which we present some algorithmic and combinatorial results. In the course of the above investigations, we are led to certain open questions of a number theoretic, combinatorial and algorithmic nature. These pertain to the quotient sequence of the Euclidean algorithm, divisors of integers of the form $x^2-y^2$ in specific intervals, and the largest divisor of an integer $N$ bounded by its square root.


 

Series: Ph.D. Colloquium
Title: Approximation Algorithms for Geometric Packing and Covering Problems

  • Speaker: Mr. Aniket Basu Roy
                   PhD student, Dept. of CSA
  • Faculty Advisor: Prof. Sathish Govindarajan
  • Date and Time: Monday, August 07, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
We study a host of geometric optimization problems that are NP-hard and design polynomial-time approximation algorithms for them. More precisely, we are given a family of geometric objects and a point set, mostly in the plane, and study different variants and generalizations of packing and covering problems. Our objects of study are mostly families of non-piercing regions in the plane. We call a set of simple and connected regions non-piercing if for any pair of intersecting regions A and B, their set difference A \ B is a connected region; disks and half-planes are examples, whereas lines and rectangles are piercing objects. For most of the problems we have studied, a simple local search algorithm is enough to yield a PTAS, whose approximation analysis requires a suitable bipartite graph on the local search solution and the optimal solution to have balanced sub-linear separators.

We study a generalization of the standard packing problem called the capacitated packing problem, and a slight variant of it, the shallow packing problem. We devise a PTAS for both problems for constant capacities. For the former problem the objects are non-piercing, whereas for the latter the objects can be even more general, requiring only sub-quadratic monotonic union complexity. The non-triviality here is to show that the intersection graph of arrangements with shallow depth, which is not planar, has balanced sub-linear separators. Our results complement the Maximum Independent Set of Rectangles problem, as rectangles are both piercing and have quadratic union complexity. We also study the capacitated packing problem for the dual set system: we show that local search works here as well for unit capacity, and devise a constant factor approximation algorithm using the Bronnimann-Goodrich technique, which is inspired by the method of multiplicative weight updates.

The Runaway Rectangle Escape problem is closely related to the above packing problems and is motivated by routing in printed circuit boards. Here we are given a set of axis-parallel rectangles inside a rectangular boundary and a maximum allowed depth d. The objective is to extend the maximum number of input rectangles in one of the four directions such that the maximum depth of a point is at most d after extension. We devise several approximation algorithms and a hardness result.

We study different variants of the covering problem, namely Unique Coverage, Multi-Covering, Prize Collecting Set Cover, Exact Cover, and Multi-Hitting Set. For the first three problems, we show that local search yields a PTAS for non-piercing regions under some assumptions. For the Exact Cover problem, we prove that even finding a feasible solution is NP-hard for objects as special as unit squares. For the Multi-Hitting Set problem, we propose a constant factor approximation algorithm for non-piercing regions using the fact that they have shallow dual-cell complexity.

Lastly, we consider variants of the Art Gallery problem called the Minimum (Horizontal) Sliding Cameras problems, M(H)SC. We are given an orthogonal polygon, and we need to deploy mobile guards who can walk along an orthogonal (horizontal) line segment; a guard can see a point inside the polygon if the perpendicular drawn from the point onto the line segment lies inside the polygon. Our local search algorithm yields a PTAS for the MHSC problem, and also for the MSC problem when the polygon has no holes. Here again, the non-triviality is to prove that an appropriate graph on orthogonal line segments is planar. We also observe that the MSC problem for polygons with holes is APX-hard.


 

Series: CSA Distinguished Lecture
Title: Multi-Objective Navigation in Uncertainty

  • Speaker: Professor Krishna R. Pattipati
                   University of Connecticut
                   Storrs, Connecticut
                   USA
  • Date and Time: Thursday, August 03, 2017, 3:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Routing in uncertain environments involves many contextual elements, such as different environmental conditions (ensemble forecasts with varying spatial and temporal uncertainty), multiple objectives, changes in mission goals while en route (e.g., training requirements, pop-up threats, humanitarian aid), and asset status. In this talk, we focus on robust planning under uncertainty by exploiting weather forecast ensembles and realizations using TMPLAR, a Tool for Multi-objective PLanning and Asset Routing, in the context of 3-D and 4-D path navigation. Our approach includes solving for the m best shortest paths for each weather forecast realization via Murty's search space partitioning strategy, and evaluating the mean, variance, and signal-to-noise ratio (SNR) of the paths over all weather possibilities. The robust path is one that, for example, minimizes the root mean square (RMS) value or maximizes the SNR, given the possible forecast realizations. In a complementary effort to compact the feature-rich multi-dimensional navigational problem space, we develop a self-organizing map (SOM)-based data reduction scheme that filters complex contexts and highlights only the key impacting features within a given information space. We demonstrate the utility of our approaches via application to a real-world shipping tragedy, using the weather forecast realizations available prior to the event.
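To make the robustness criterion concrete, here is a minimal sketch with made-up path costs (TMPLAR itself is not shown; all names and numbers here are ours):

```python
import numpy as np

# costs[p][w]: cost of candidate path p under weather realization w (toy numbers).
costs = np.array([
    [10.0, 12.0, 11.0],   # path 0: moderate mean, small spread
    [ 8.0, 18.0,  7.0],   # path 1: lower minimum but volatile
])

mean = costs.mean(axis=1)
std = costs.std(axis=1)
rms = np.sqrt((costs ** 2).mean(axis=1))   # root mean square over the ensemble
snr = mean / std                           # signal-to-noise ratio per path

print("robust path by min RMS:", int(np.argmin(rms)))   # path 0
print("robust path by max SNR:", int(np.argmax(snr)))   # path 0
```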

Speaker Bio:
Krishna R. Pattipati received the B.Tech. degree in electrical engineering with highest honours from the Indian Institute of Technology, Kharagpur, in 1975, and the M.S. and Ph.D. degrees in control and communication systems from UConn, Storrs, in 1977 and 1980, respectively. He was with ALPHATECH, Inc., Burlington, MA, from 1980 to 1986. He has been with the Department of Electrical and Computer Engineering at UConn since 1986, where he is currently the Board of Trustees Distinguished Professor and the UTC Chair Professor in Systems Engineering. Dr. Pattipati's research activities are in the areas of proactive decision support, uncertainty quantification, smart manufacturing, autonomy, knowledge representation, and optimization-based learning and inference. A common theme among these applications is that they are characterized by a great deal of uncertainty, complexity, and computational intractability. He is a cofounder of Qualtech Systems, Inc., a firm specializing in advanced integrated diagnostics software tools (TEAMS, TEAMS-RT, TEAMS-RDS, TEAMATE, PackNGo), and serves on the board of Aptima, Inc. Dr. Pattipati was selected by the IEEE Systems, Man, and Cybernetics (SMC) Society as the Outstanding Young Engineer of 1984, and received the Centennial Key to the Future award. He served as the Editor-in-Chief of the IEEE Transactions on Systems, Man, and Cybernetics, Part B from 1998 to 2001. He was co-recipient of the Andrew P. Sage Award for the Best SMC Transactions Paper for 1999, the Barry Carlton Award for the Best AES Transactions Paper for 2000, the 2002 and 2008 NASA Space Act Awards for "A Comprehensive Toolset for Model-based Health Monitoring and Diagnosis" and "Real-time Update of Fault-Test Dependencies of Dynamic Systems: A Comprehensive Toolset for Model-Based Health Monitoring and Diagnostics", the 2003 AAUP Research Excellence Award, and the 2005 School of Engineering Teaching Excellence Award at the University of Connecticut. He is an elected Fellow of IEEE for his contributions to discrete-optimization algorithms for large-scale systems and team decision-making, and is a member of the Connecticut Academy of Science and Engineering.

Host Faculty: Prof. Y Narahari


 

Series: Department Seminar
Title: Data Driven Precondition Inference for Compiler Optimizations

  • Speaker: Dr. Santosh Nagarakatte
                   Assistant Professor
                   Department of Computer Science, Rutgers University
  • Date and Time: Thursday, August 03, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Peephole optimizations are a common source of compiler bugs. Compiler developers typically transform an incorrect peephole optimization into a valid one by strengthening the precondition. This process is challenging and tedious. In this talk, I will describe our new data-driven approach for inferring preconditions for peephole optimizations expressed in Alive. Our main idea is to generate positive and negative examples for an optimization in a context without tests, enumerate predicates on demand, and learn a set of predicates that separate the positive and negative examples. We repeat this process until we are able to find a precondition that ensures the validity of the optimization. Our prototype, ALIVE-INFER, reports both a weakest precondition and a set of succinct partial preconditions to the developer. Our prototype generates preconditions that are weaker than LLVM's preconditions for the majority of the optimizations in the Alive suite. The talk will also demonstrate the applicability of this technique to generalizing optimization patterns generated by Souper, an LLVM-IR-based super-optimizer.
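As a rough sketch of the separating-set idea (our simplification, not ALIVE-INFER's actual enumeration or verification machinery), treat each example as a concrete instantiation of the optimization's inputs and each predicate as a Boolean function over it:

```python
def infer_precondition(predicates, positives, negatives):
    """Greedily pick predicates that hold on every positive example and
    together rule out all negative examples; their conjunction is a
    candidate precondition (a sketch of the separating-set idea)."""
    candidates = [p for p in predicates if all(p(e) for e in positives)]
    chosen, uncovered = [], list(negatives)
    while uncovered:
        # Greedy set cover: pick the predicate rejecting the most negatives.
        best = max(candidates, key=lambda p: sum(not p(e) for e in uncovered), default=None)
        if best is None or all(best(e) for e in uncovered):
            raise ValueError("examples not separable with the given predicates")
        chosen.append(best)
        uncovered = [e for e in uncovered if best(e)]
    return chosen

# Toy run: learns that "b != 0" separates valid from invalid instances.
preds = [lambda e: e["b"] != 0, lambda e: e["a"] > 0]
result = infer_precondition(preds, positives=[{"a": 1, "b": 2}, {"a": -3, "b": 1}],
                            negatives=[{"a": 5, "b": 0}])
print(len(result))  # 1
```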

Speaker Bio:
Santosh Nagarakatte is an Assistant Professor of Computer Science at Rutgers University. He obtained his PhD from the University of Pennsylvania in 2012. His research interests are in Hardware-Software Interfaces spanning Programming Languages, Compilers, Software Engineering, and Computer Architecture. His papers have been selected as IEEE Micro Top Picks papers of computer architecture conferences in 2010 and 2013. He has received the NSF CAREER Award in 2015, the ACM SIGPLAN PLDI 2015 Distinguished Paper Award, and the ACM SIGSOFT ICSE 2016 Distinguished Paper Award for his research on LLVM compiler verification. His papers have been selected as a 2016 SIGPLAN Research Highlights Paper and a 2017 Communications of the ACM Research Highlights Paper.

Host Faculty: Prof. Vinod Ganapathy


 

Series: Department Seminar
Title: The program induction route to explainable AI

  • Speaker: Prof. Subramanian Ramamoorthy
  • Date and Time: Wednesday, August 02, 2017, 11:30 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
The confluence of advances in diverse areas including machine learning, large scale computing and reliable commoditized hardware have brought autonomous robots to the point where they are poised to be genuinely a part of our daily lives. Some of the application areas where this seems most imminent, e.g., autonomous vehicles, also bring with them stringent requirements regarding safety, explainability and trustworthiness. These needs seem to be at odds with the ways in which recent successes have been achieved, e.g., with end-to-end learning. In this talk, I will try to make a case for an approach to bridging this gap, through the use of programmatic representations that intermediate between opaque but efficient learning methods and other techniques for reasoning that benefit from ‘symbolic’ representations.

I will start by attempting to frame the overall problem, drawing on some of the motivations of the DARPA Explainable AI program (under the auspices of which we are starting a new project, COGLE) and on extant ideas regarding safety and dynamical properties in the control theorists’ toolbox.

Our first result will be based on a framework for Grounding and Learning Instances through Demonstration and Eye tracking (GLIDE). The problem here is to learn the mapping between abstract plan symbols and their physical instances in the environment, i.e., physical symbol grounding, starting from cross-modal input that provides a combination of high-level task descriptions (e.g., from a natural language instruction) and a detailed video or joint-angle signal. This problem is formulated in terms of a probabilistic generative model and addressed using an algorithm for computationally feasible inference that associates traces of task demonstration with a sequence of fixations, which we call fixation programs.

The second, related result more directly addresses the issue of inferring the processes that underlie observed black-box phenomena, in the form of causal mechanisms. We propose to learn high-level programs in order to represent abstract models, which capture the invariant structure in the observed data. We introduce the pi-machine (program-induction machine), an architecture able to induce interpretable LISP-like programs from observed data traces. We propose an optimization procedure for program learning based on backpropagation, gradient descent and A* search. We apply the proposed method to problems including system identification, explaining the behavior of a DQN agent, and explaining human demonstrations in planned activities. Our results show that the pi-machine can efficiently induce interpretable programs from individual data traces.

Speaker Bio:
Prof. Subramanian Ramamoorthy is a Reader (Associate Professor) in the School of Informatics, University of Edinburgh, where he has been on the faculty since 2007. He is a Coordinator of the EPSRC Robotarium Research Facility, and an Executive Committee Member of the Edinburgh Centre for Robotics and of the Bayes Centre. He received his PhD in Electrical and Computer Engineering from The University of Texas at Austin in 2007. He is an elected Member of the Young Academy of Scotland at the Royal Society of Edinburgh, and has been a Visiting Professor at Stanford University and the University of Rome "La Sapienza". His research focus is on robot learning and decision-making under uncertainty, addressed through a combination of machine learning techniques, with emphasis on issues of transfer, online and reinforcement learning, as well as new representations and analysis techniques based on geometric/topological abstractions.

Host Faculty: Aditya Kanade


 

Series: Ph.D. Colloquium
Title: Geometric and Topological Methods for Biomolecular Visualization

  • Speaker: Mr. Talha Bin Masood
                   PhD student, Dept. of CSA
  • Faculty Advisor: Prof. Vijay Natarajan
  • Date and Time: Monday, July 31, 2017, 11:00 AM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
In recent years, techniques from the fields of scientific visualization and computational geometry have increasingly found application in the study of biomolecular structure. In this thesis, we focus on problems related to the identification and visualization of cavities and channels in proteins. In the first part of the thesis, we describe two methods: one for the extraction and visualization of biomolecular channels, and the other for the extraction of cavities in uncertain data. We also describe two software tools based on the proposed methods, called ChExVis and RobustCavities. In the second part of the thesis, we describe efficient GPU based parallel algorithms for two geometric structures widely used in the study of biomolecules. One is the discrete Voronoi diagram, which finds applications in channel visualization; the other is the alpha complex, which is extremely useful for studying geometric and topological properties of biomolecules.

In this talk, we will focus on the software tool called ChExVis, designed for the extraction and visualization of channels in biomolecules. We will describe the alpha complex based framework for extraction of geometrically feasible channels. In addition, we will present novel ways of visualizing the amino acids lining the channels together with their physico-chemical properties. We will discuss a few case studies that demonstrate the utility of the proposed method. Secondly, we will describe a GPU based algorithm for the computation of the alpha complex, a subcomplex of the Delaunay triangulation. The algorithm exploits the knowledge of the typical distribution and sizes of atoms in biomolecules to achieve a speed-up of up to 13x over the state-of-the-art algorithm.

Speaker Bio:
Talha is a Ph.D. candidate in Computer Science at the Dept. of Computer Science and Automation, Indian Institute of Science, Bangalore. He received his B.Tech. degree from Aligarh Muslim University and his M.E. degree from the Indian Institute of Science, both in Computer Science and Engineering. His research interests include scientific visualization, computational geometry, computational topology, and their applications to structural biology.


 

Series: Ph.D. Colloquium
Title: Program analyses to support memory-saving refactorings in Java programs

  • Speaker: Girish Maskeri Rama
  • Faculty Advisor: K. V. Raghavan
  • Date and Time: Friday, July 28, 2017, 11:30 AM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
Software commonly consumes unexpectedly high amounts of memory, frequently due to programming idioms that are used to make software more reliable, maintainable and understandable. In the case of modern object-oriented systems, this problem is partly due to the creation of large numbers of co-existing *isomorphic* objects. Intuitively, two objects are isomorphic if they are of the same type, have identical values in corresponding primitive fields, and are such that corresponding reference fields themselves point to isomorphic objects. In other words, the portions of memory rooted at the two objects are isomorphic shape-wise as well as value-wise. A significant reduction in heap usage can therefore be achieved if the code is refactored to *de-duplicate* or *share* objects whenever possible instead of always creating distinct but possibly isomorphic objects. Such a refactoring, which employs a cache to keep track of objects created so far and to share them, is termed object-sharing refactoring. In practice, object-sharing refactoring is commonly used, as it directly impacts the attributes of software that end users care about: memory utilization and performance. However, manual refactoring is tedious and error-prone.
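The cache-based idiom described above is essentially hash-consing; here is a minimal sketch of the idea (the thesis targets Java programs, so this Python fragment is purely illustrative):

```python
class Point:
    __slots__ = ("x", "y")
    def __init__(self, x, y):
        self.x, self.y = x, y

_cache = {}

def shared_point(x, y):
    """Return a cached Point for (x, y), creating it only on first request."""
    obj = _cache.get((x, y))
    if obj is None:
        obj = _cache[(x, y)] = Point(x, y)
    return obj

a = shared_point(1, 2)
b = shared_point(1, 2)
assert a is b  # one object is shared instead of two isomorphic ones
```

Note that the cache lookup is sound only once an object's state is fully built and no longer mutated, which is precisely the concern addressed by the full-initialization points in contribution (2) below.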

To support object-sharing refactoring we have (1) designed and implemented an approach to estimate memory-savings potential due to this refactoring, (2) espoused the idea of *full initialization points* of objects, and a static analysis to identify these points, and (3) proposed a scalable refinement-based points-to analysis for verifying the safety of object-sharing refactoring, but which has general applicability to several other verification tasks as well.

(1) We present a dynamic analysis technique for *estimating*, for all the allocation sites in a program and a given input, the reduction in heap memory usage (in bytes, or as a percentage) to be obtained by employing object sharing at each site. The quantitative estimates produced by our technique of a user-observable benefit (i.e., actual memory savings) make it easier for developers to select sites to refactor. Experimentation with our estimation tool on real Java programs indicates that nearly all applications have potential for reduction of memory usage by object sharing, with up to 37% estimated reduction in heap footprint in the best case.

(2) We define a novel concept termed full-initialization points (FIPs) to characterize the points in the program where objects allocated at any chosen allocation site become fully initialized. This notion supports object-sharing refactoring as follows: the FIPs are good candidate program points where the cache lookup can be performed to find isomorphic objects and share them. Additionally, we argue that FIPs are useful in the context of other allocation-site refactorings, such as immutability refactoring and refactoring to the builder pattern. We present a novel and conservative static analysis to detect FIPs for a given allocation site. By introducing code to cache and share objects at the FIPs suggested by our analysis, object-sharing refactoring was able to obtain a mean memory savings of 11.4% on a set of real Java benchmarks. Immutability refactoring guided by our analysis achieved a mean runtime speedup of 1.6x compared to performing the same refactoring using a baseline approach that did not make use of FIPs.

(3) A standard points-to analysis approach, namely the object-sensitivity approach, uses an equal level of precision to represent symbolic objects allocated at all allocation sites in a program. This approach does not scale to large programs unless low levels of precision are used. We show how to use a user's query of interest to identify a limited set of allocation sites that need a higher level of precision, and use higher precision only at these sites. We have implemented our approach. Experiments using our implementation show that for the task of checking the safety of applying object-sharing refactoring at each site, our approach gives a mean improvement in precision of 4.4% over the previous approach on a set of real benchmarks. For another well-known verification task, namely downcast safety analysis, our approach results in a mean precision improvement of 11.9% over the previous approach.

Speaker Bio:
Girish Rama is employed at Infosys Ltd., and is a PhD student at CSA, IISc. His research interests are in software refactoring, software modularity and metrics, and program analysis techniques.


 

Series: Ph.D. Thesis Defense
Title: Power Issues in SoCs: Power Aware DFT Architecture and Power Estimation

  • Speaker: Mr. Jaynarayan Thakurdas Tudu
                   Ph.D. student
                   Dept. of CSA, IISc
  • Faculty Advisor: Prof. Matthew Jacob Thazhuthaveetil
  • Date and Time: Friday, July 28, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Test power, data volume, and test time have been long-standing problems for sequential scan based testing of system-on-chip (SoC) designs. Modern SoCs fabricated at lower technology nodes are complex in nature; the transistor count is as large as billions of gates for some microprocessors. The design complexity is projected to increase further in the coming years in accordance with Moore's law. The larger gate count and the integration of multiple functionalities are the causes of higher test power dissipation, test time and data volume. The dynamic power dissipation during scan testing, i.e., during scan shift, launch and response capture, is a major concern for reliable as well as cost-effective testing. Excessive average power dissipation leads to a thermal problem which causes burn-out of the chip during testing. Peak power, on the other hand, causes test failure due to power-induced additional delay. Test failure has a direct impact on yield. The test power problem in modern 3D stacked ICs is an even more serious issue. Estimating the worst-case functional power dissipation is yet another great challenge; this estimation is necessary because it gives an upper bound on the functional power dissipation, which can further be used to determine the safe power zone for the test.

Several solutions have been proposed in the past to address these issues. In this thesis we make three major contributions: 1) sequential scan chain reordering, 2) JScan, an alternative Joint-scan DFT architecture that primarily addresses the test power issues along with test time and data volume, and 3) an integer linear programming methodology to address the power estimation problem. In order to reduce test power during shift, we propose a graph theoretic formulation for scan chain reordering and for optimum scan shift operation. For each formulation a set of algorithms is proposed. The experimental results on ISCAS-89 benchmark circuits show reductions of around 25% in peak power and 15% in scan shift time.

In order to have a holistic DFT architecture which can solve the test power, test time, and data volume problems, a new DFT architecture called Joint-scan (JScan) has been developed. In JScan we integrate the serial and random access scan architectures in a systematic way, by which JScan harnesses the respective advantages of each architecture. The serial scan architecture suffers from test power, test time, and data volume problems; however, it is simple in terms of functionality and cost-effective in terms of DFT circuitry. The random access scan architecture is the opposite: it is power efficient and requires less time and data volume than serial scan, but it occupies a larger DFT area and introduces routing congestion. Therefore, we propose a methodology to realize the JScan architecture as an efficient alternative to standard serial and random access scan. Further, the JScan architecture is optimized, resulting in a 2-mode 2M-JScan Joint-scan architecture. The proposed architectures are experimentally verified on larger benchmark circuits and compared with existing state-of-the-art DFT architectures. The results show a reduction of 50% to 80% in test power and 30% to 50% in test time and data volume. The proposed architectures are also evaluated for routing area minimization, and we obtain a saving of around 7% to 15% of chip area.

Since estimating the worst-case functional power is a challenging problem, we propose a binary integer linear programming (BILP) based methodology. Two different formulations are proposed, corresponding to two delay models, namely zero-delay and unit-delay. The proposed methodology generates a pair of input vectors which toggle the circuit to dissipate worst-case power. The BILP problems are solved using the CPLEX solver for ISCAS-85 combinational benchmark circuits. For some of the circuits, the proposed methodology achieves the worst possible power dissipation, i.e., 80 to 100% toggling of nets.
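Schematically, under the zero-delay model the underlying optimization can be pictured as follows (a restatement in our notation, not the thesis's exact BILP encoding): if $g_j$ denotes the Boolean function computed at net $j$, the goal is to choose a two-vector input sequence $(u, v)$ maximizing the number of toggling nets,

```latex
\max_{u,\,v \,\in\, \{0,1\}^n} \; \sum_{j \in \mathrm{nets}} \big[\, g_j(u) \neq g_j(v) \,\big],
```

which the BILP encodes with binary variables and solves using CPLEX, as described above.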


 

Series: M.Sc. (Engg) Thesis Defense
Title: Efficient Whole Program Path Tracing

  • Speaker: Mr. Sridhar G
                   M.Sc. (Engg) Student
                   Dept. of CSA
                   IISc
  • Faculty Advisor: Prof. Matthew Jacob Thazhuthaveetil
  • Date and Time: Thursday, July 27, 2017, 10:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Obtaining an accurate whole program path (WPP) that captures a program's runtime behavior in terms of a control-flow trace has a number of well-known benefits, including opportunities for code optimization, bug detection, program analysis refinement, etc. Existing techniques to compute WPPs perform sub-optimal instrumentation, resulting in significant space and time overheads. Our goal in this work is to minimize these overheads without losing precision.

To do so, we design a novel and scalable whole program analysis to determine instrumentation points used to obtain WPPs. Our approach is divided into three components: (a) an efficient summarization technique for inter-procedural path reconstruction, (b) specialized data structures called conflict sets that serve to effectively distinguish between pairs of paths, and (c) an instrumentation algorithm that computes the minimum number of edges to describe a path based on these conflict sets. We show that the overall problem is a variant of the minimum hitting set problem, which is NP-hard, and employ various sound approximation strategies to yield a practical solution.
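For intuition, the classic greedy approximation for hitting set, which sound strategies of this kind typically build on, looks roughly like this (a generic sketch in our notation, not the paper's exact algorithm):

```python
def greedy_hitting_set(conflict_sets):
    """Pick edges until every conflict set (a set of CFG edges that
    distinguishes some pair of paths) contains an instrumented edge."""
    remaining = [set(s) for s in conflict_sets]
    instrumented = set()
    while remaining:
        counts = {}
        for s in remaining:          # count how many unresolved sets each edge hits
            for e in s:
                counts[e] = counts.get(e, 0) + 1
        best = max(counts, key=counts.get)
        instrumented.add(best)
        remaining = [s for s in remaining if best not in s]
    return instrumented

# Toy conflict sets over CFG edges e1..e4:
print(greedy_hitting_set([{"e1", "e2"}, {"e2", "e3"}, {"e4"}]))  # e.g. {'e2', 'e4'}
```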

We have implemented our approach and performed extensive experiments on Java programs from the DaCapo benchmark suite to demonstrate its efficacy across multiple dimensions. On average, our approach necessitates instrumenting only 9% of the total number of CFG edges in the program. The average runtime overhead incurred by our approach to collect WPPs is 1.97x, which is only 26% greater than the overhead induced by only instrumenting edges guaranteed to exist in an optimal solution. Furthermore, compared to the state-of-the-art, we observe a reduction in runtime overhead by an average and maximum factor of 2.6 and 5.4, respectively.

Speaker Bio:
Sridhar Gopinath has been an M.Sc. (Engg.) student in the Department of Computer Science and Automation since Aug 2015. His research interests are in software engineering and programming languages. He is a member of the Scalable Software Systems lab in CSA. He received a B.E. in Computer Science from SJCE, Mysore in 2015.


 

Series: Ph.D. Colloquium
Title: Falcon: A Graph Manipulation Language for Distributed Heterogeneous Systems

  • Speaker: Unnikrishnan C.
  • Faculty Advisor: Y.N. Srikant
  • Date and Time: Tuesday, July 25, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Graphs model relationships across real-world entities in web graphs, social network graphs, and road network graphs. Graph algorithms analyze and transform a graph to discover graph properties or to apply a computation. For instance, a PageRank algorithm computes a rank for each page in a web graph, a community detection algorithm discovers likely communities in a social network, and a shortest path algorithm computes the quickest way to reach one place from another in a road network. In domains such as social information systems, the number of edges can be in billions or trillions. Such large graphs are processed on distributed computer systems or clusters.

Graph algorithms can be executed on multicore CPUs, GPUs with thousands of cores, multi-GPU devices, and CPU+GPU clusters, depending on the size of the graph object. While programming such algorithms for heterogeneous targets, a programmer is required to deal with parallelism and also manage explicit data communication between distributed devices. This implies that a programmer is required to learn CUDA, OpenMP, MPI, etc., as well as the details of the hardware architecture. Such code is error-prone and difficult to debug. A Domain Specific Language (DSL) which hides all the hardware details and lets the programmer concentrate only on the algorithmic logic would be very useful.

With this as the research goal, Falcon, a graph DSL, and its compiler have been developed. Falcon programs are explicitly parallel, and Falcon hides all the hardware details from the programmer. Large graphs that do not fit into the memory of a single device are automatically partitioned by the Falcon compiler. Another feature of Falcon is that it supports mutation of graph objects and thus enables programming of dynamic graph algorithms. The Falcon compiler converts a single DSL code to heterogeneous targets such as multi-core CPU, GPU, multi-GPU devices, and CPU+GPU clusters. Falcon-compiled codes match or outperform state-of-the-art graph frameworks for different target platforms and benchmarks.

Speaker Bio:
Unnikrishnan is a Ph.D. student in the CSA department. His areas of interest are compilation for heterogeneous architectures and compiler optimizations.

Host Faculty: Y.N. Srikant


 

Series: Department Seminar
Title: Learning with Limited Rounds of Adaptivity: Coin Tossing, Multi-Armed Bandits, and Ranking from Pairwise Comparisons

  • Speaker: Mr. Arpit Agarwal
                   PhD student
                   Department of Computer & Information Science
                   University of Pennsylvania
  • Date and Time: Monday, July 24, 2017, 2:30 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
In many learning settings, active/adaptive querying is possible, but the number of rounds of adaptivity is limited. We study the relationship between query complexity and adaptivity in identifying the k-most biased coins among a set of n coins with unknown biases. This problem is a common abstraction of many well-studied problems, including the problem of identifying the k-best arms in a stochastic multi-armed bandit, and the problem of top-k ranking from pairwise comparisons.

An r-round adaptive algorithm for the k most biased coins problem specifies in each round the set of coin tosses to be performed based on the observed outcomes in earlier rounds, and outputs the set of k-most biased coins at the end of r rounds. When r = 1, the algorithm is known as non-adaptive; when r is unbounded, the algorithm is known as fully adaptive. While the power of adaptivity in reducing query complexity is well known, full adaptivity requires repeated interaction with the coin tossing (feedback generation) mechanism, and is highly sequential, since the set of coins to be tossed in each round can only be determined after we have observed the outcomes of the coin tosses from the previous round. In contrast, algorithms with only few rounds of adaptivity require fewer rounds of interaction with the feedback generation mechanism, and offer the benefits of parallelism in algorithmic decision-making. Motivated by these considerations, we consider the question of how much adaptivity is needed to realize the optimal worst case query complexity for identifying the k most biased coins. Given any positive integer r, we derive essentially matching upper and lower bounds on the query complexity of r-round algorithms. We then show that Θ(log* n) rounds are both necessary and sufficient for achieving the optimal worst case query complexity for identifying the k-most biased coins. In particular, our algorithm achieves the optimal query complexity in at most log* n rounds, which implies that on any realistic input, 5 parallel rounds of exploration suffice to achieve the optimal worst-case sample complexity. The best known algorithm prior to our work required Θ(log n) rounds to achieve the optimal worst case query complexity even for the special case of k = 1.
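To illustrate what a round-limited algorithm looks like, here is a toy elimination sketch; the toss counts and the halving schedule are arbitrary placeholders, not the optimal schedule from the paper:

```python
import random

def topk_biased(coins, k, rounds, budget_per_round):
    """All tosses within a round are fixed up front from earlier outcomes;
    survivors are re-examined in the next round (toy schedule)."""
    alive = list(range(len(coins)))
    for _ in range(rounds):
        tosses = budget_per_round // len(alive)
        means = {i: sum(random.random() < coins[i] for _ in range(tosses)) / tosses
                 for i in alive}
        alive.sort(key=lambda i: means[i], reverse=True)
        alive = alive[:max(k, len(alive) // 2)]   # halve the candidate pool
    return alive[:k]

coins = [0.9, 0.8, 0.5, 0.5, 0.4, 0.3]
print(topk_biased(coins, k=2, rounds=3, budget_per_round=6000))  # likely [0, 1]
```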

Speaker Bio:
Arpit is a PhD student at the Department of Computer & Information Science, University of Pennsylvania. Previously, he completed his M.E. at CSA, IISc, for which he received the Computer Society of India (Bangalore Chapter) medal.

Host Faculty: Dr. Siddharth Barman


 

Series: Ph.D. Colloquium
Title: Data Structures and Algorithms to Analyze Concurrency in Android Applications

  • Speaker: Ms. Pallavi Maiya H P
                   PhD Student
  • Faculty Advisor: Prof. Aditya Kanade
  • Date and Time: Tuesday, July 18, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Android is a popular mobile operating system, providing a rich ecosystem for the development of applications which run on the Android platform. Entities such as the device user, network and sensors interact continuously with the mobile device causing an Android application to face a continuous stream of input (events). In spite of having to perpetually handle a huge volume of events, Android applications are expected to be responsive to new events from the environment. The Android framework achieves this by exposing applications to a concurrency model which combines multi-threading with asynchronous event-based dispatch. While this enables development of efficient applications, unforeseen thread interleavings combined with non-deterministic ordering of events can lead to subtle concurrency bugs in Android applications.

In an Android application, threads communicate through shared variables and by sending events to each other's event queues. A thread typically processes events in its event queue and invokes the corresponding event handlers in FIFO order; however, this ordering can be altered by attributes such as delays and priorities associated with the events. Existing literature extensively describes techniques to detect concurrency bugs such as data races, deadlocks and atomicity violations in multi-threaded programs. Recent works also present techniques to analyze bugs manifested due to single-threaded event-driven concurrency. However, the complex event-driven semantics of Android applications, combined with the thread-based semantics, renders a naive extension of such concurrency analysis techniques either less effective or unsound for Android applications. This thesis takes the initial steps towards developing data structures and algorithms to facilitate effective dynamic concurrency analysis of Android applications.

We first formalize the concurrency behaviour of Android applications by giving a concurrency semantics. Using this semantics, we derive a set of rules to reason about the happens-before ordering between operations in an Android execution trace. These rules induce a partial order called the happens-before (HB) relation on the operations of an execution trace. Our HB relation generalizes the so far independently studied HB relations for multi-threaded programs and single-threaded event-driven programs. We have developed a happens-before based dynamic race detector for Android applications, called DroidRacer, which detects data races across threads as well as race conditions between memory accesses from different event handlers on the same thread. DroidRacer has detected several races in various Android applications.

We identify a major bottleneck in the computation of the HB relation for Android applications: that of discovering the HB order between event handlers executed on the same thread. The HB order between events in Android is characterized by complex HB rules which order a pair of event handlers by inspecting, for example, the order between the operations which enqueued the corresponding events. Android applications usually receive a large number of events even within a short span of time, which increases the cost of evaluating such HB rules. As a result, HB computation using standard data structures such as vector clocks alone becomes inefficient in the case of Android applications. We propose a novel data structure, called the "event graph", that maintains a subset of the HB relation to efficiently infer order between any pair of events. We present an algorithm, called EventTrack, which improves the efficiency of vector clock based HB computation for event-driven programs using event graphs. Evaluation of EventTrack against a state-of-the-art HB computation technique for Android applications yielded significant speedup.
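For background, the vector clock machinery that EventTrack accelerates works roughly as follows (a generic sketch; EventTrack's event-graph bookkeeping is not shown):

```python
def vc_join(a, b):
    """Pointwise maximum: the clock of an operation that has observed both a and b."""
    return tuple(max(x, y) for x, y in zip(a, b))

def happens_before(a, b):
    """a happens-before b iff a's clock is componentwise <= b's and they differ."""
    return all(x <= y for x, y in zip(a, b)) and a != b

post = (1, 0)                           # clock at the operation that posted an event
handler = vc_join(post, (0, 3))         # its handler inherits the poster's knowledge
print(happens_before(post, handler))    # True
```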

The scope of a happens-before based dynamic race detector is limited to the thread interleaving of the execution trace it inspects. However, a systematic state space exploration technique such as a stateless model checker can explore all the thread interleavings, and also identify harmful manifestations of data races which may happen only when multiple racing memory accesses are performed in a particular order. Partial order reduction (POR) is a technique used by stateless model checkers to reduce the state space to be explored while preserving certain interesting program properties. Existing POR techniques selectively explore different thread interleavings only to reorder pairs of racing operations from different threads. However, they fail to follow this strategy for events, and hence end up eagerly exploring all possible orderings between event handlers executed on the same thread, even if reordering them does not lead to different states. We propose a new POR technique based on a novel backtracking set called the "dependence-covering set". Events handled by the same thread are reordered by our POR technique only if necessary. We prove that exploring dependence-covering sets suffices to detect all deadlock cycles and assertion violations. To evaluate the effectiveness of our POR scheme, we have implemented a dynamic algorithm to compute dependence-covering sets. On execution traces obtained from a few Android applications, we demonstrate that our technique explores many fewer transitions (often orders of magnitude fewer) compared to exploration based on persistent sets, a popular POR technique which explores all possible orderings between events. The tools developed in this thesis, namely (i) DroidRacer, a race detector for Android applications, (ii) a standalone implementation of EventTrack to compute the HB relation over Android execution traces, and (iii) EM-Explorer, a proof-of-concept model checking framework which simulates the non-deterministic behaviour exhibited by Android applications, have been made public.


 

Series: M.Sc. (Engg) Colloquium
Title: Towards a characterization of the symmetries of Nisan-Wigderson polynomial family

  • Speaker: Mr. Nikhil Gupta
                   M.Sc. (Engg) student
                   Dept. of CSA
                   IISc
  • Faculty Advisor: Dr. Chandan Saha
  • Date and Time: Friday, July 14, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Understanding the structure and complexity of a polynomial family is a fundamental problem in arithmetic circuit complexity. There are various approaches to this, like studying lower bounds, i.e., finding the smallest circuit required to compute a polynomial, or studying the orbit and stabilizer of a polynomial with respect to invertible transformations. We have a rich understanding of some well-known polynomial families like the determinant, the permanent, IMM, etc. In this thesis, we study some structural properties of the polynomial family called the Nisan-Wigderson polynomial family. This polynomial family is inspired by a well-known combinatorial design called the Nisan-Wigderson design and has recently been used to prove strong lower bounds on some restricted classes of arithmetic circuits ([Kayal-Limaye-Saha-Srinivasan(14)], [Kayal-Saha-Saptharishi(14)], [Kayal-Saha-Tavenas(16)]). But unlike the determinant, permanent, IMM, etc., our understanding of the Nisan-Wigderson polynomial family is inadequate. For example, we do not know whether this polynomial family is in VP, is VNP-complete, or is VNP-intermediate (assuming VP is not equal to VNP), nor do we understand the complexity of its equivalence test. We hope that knowledge of some of the inherent properties of the Nisan-Wigderson polynomial, like its group of symmetries and Lie algebra, will provide us some insights in this regard.

An invertible n × n matrix A is called a symmetry of an n-variate polynomial f if f(A·X) = f(X), where X is the vector of n variables. The set of symmetries of f forms a subgroup of the general linear group, known as the group of symmetries of f and denoted G_f. A vector space is attached to G_f to get a complete understanding of the symmetries of f. This vector space is known as the Lie algebra of the group of symmetries of f (or the Lie algebra of f), denoted L_f. The Lie algebra of f contributes some elements of G_f, known as continuous symmetries of f. Lie algebras have also been instrumental in designing efficient randomized equivalence tests for polynomial families like the determinant, permanent and IMM ([Kayal(11)], [Kayal-Nair-Saha-Tavenas(17)]).
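Concretely, the Lie algebra admits the following standard first-order description, stated here for reference (F denotes the underlying field, and the congruence is taken over F[ε]/(ε²)):

```latex
\mathcal{L}_f \;=\; \big\{\, B \in \mathbb{F}^{\,n \times n} \;:\; f\big((I + \epsilon B)\cdot X\big) \equiv f(X) \pmod{\epsilon^2} \,\big\}
```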

In this work we completely characterize the Lie algebra of the Nisan-Wigderson polynomial family. We show that L_NW contains diagonal matrices of a specific type. The Lie algebra L_NW turns out to be a proper linear subspace of L_Perm, the Lie algebra of the symbolic permanent polynomial. The knowledge of L_NW not only helps us completely determine the continuous symmetries of the Nisan-Wigderson polynomial family, but also gives crucial insights into its other (discrete) symmetries. Thereafter, using the Hessian matrix of the Nisan-Wigderson polynomial and the concept of evaluation dimension, we are able to almost completely identify the structure of G_NW. In particular, we prove that any A in G_NW is a product of diagonal and permutation matrices of a certain kind that we call block-permuted permutation matrices. Finally, we give explicit examples of nontrivial block-permuted permutation matrices using Frobenius automorphisms, which establishes the richness of the discrete symmetries of the Nisan-Wigderson polynomial family.


 

Series: Ph.D. Thesis Defense
Title: Gröbner Basis Algorithms for Polynomial Ideal Theory over Noetherian Commutative Rings

  • Speaker: Ms. Maria Francis
  • Faculty Advisor: Prof. Ambedkar Dukkipati
  • Date and Time: Thursday, July 13, 2017, 2:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
One of the fundamental problems in commutative algebra and algebraic geometry is to understand the nature of the solution space of a system of multivariate polynomial equations over a field $\Bbbk$, such as the real or complex numbers. An important algorithmic tool in this study is the notion of Gröbner bases [Buchberger, 1965]. Given a system of polynomial equations $f_1 = 0, \ldots, f_m = 0$, a Gröbner basis is a "canonical" generating set of the ideal generated by $f_1, \ldots, f_m$ that can answer, constructively, many questions in computational ideal theory. It generalizes several concepts of univariate polynomials, like resultants, to the multivariate case, and answers the ideal membership problem decisively. The dimension of the solution set of an ideal $\mathfrak{a}$, called the affine variety, an important concept in algebraic geometry, is equal to the Krull dimension of the corresponding coordinate ring $\Bbbk[x_1, \ldots, x_n]/\mathfrak{a}$. Gröbner bases were first introduced to compute $\Bbbk$-vector space bases of $\Bbbk[x_1, \ldots, x_n]/\mathfrak{a}$ and to use them to characterize zero-dimensional solution sets. Since then, Gröbner basis techniques have provided a generic algorithmic framework for computations in control theory, cryptography, formal verification, robotics, etc., that involve multivariate polynomials over fields.

The main aim of this thesis is to study problems related to computational ideal theory over Noetherian commutative rings (e.g., the ring of integers $\mathbb{Z}$, or the polynomial ring over a field $\Bbbk[y_1, \ldots, y_m]$) using the theory of Gröbner bases. These problems surface in many domains, including lattice based cryptography, control systems, system-on-chip design, etc. Although formal and standard techniques are available for polynomial rings over fields, the presence of zero divisors and non-units makes developing similar techniques for polynomial rings over rings challenging.

Given a polynomial ring over a Noetherian commutative ring $A$ and an ideal $\mathfrak{a}$ in $A[x_1, \ldots, x_n]$, the first fundamental problem we study is whether the residue class polynomial ring $A[x_1, \ldots, x_n]/\mathfrak{a}$ is a free $A$-module or not. Note that when $A = \Bbbk$, the answer is always "yes", and the $\Bbbk$-vector space basis of $\Bbbk[x_1, \ldots, x_n]/\mathfrak{a}$ plays an important role in computational ideal theory over fields. In our work, we give a Gröbner basis characterization for $A[x_1, \ldots, x_n]/\mathfrak{a}$ to have a free $A$-module representation w.r.t. a monomial ordering. For such $A$-algebras, we give an algorithm to compute an $A$-module basis. This extends the Macaulay-Buchberger basis theorem to polynomial rings over Noetherian commutative rings. These results help us develop a theory of border bases in $A[x_1, \ldots, x_n]$ when the residue class polynomial ring is finitely generated. The theory of border bases is handled as two separate cases: (i) $A[x_1, \ldots, x_n]/\mathfrak{a}$ is free and (ii) $A[x_1, \ldots, x_n]/\mathfrak{a}$ has torsion submodules.

For the special case of $A = \mathbb{Z}$, we show how short reduced Gröbner bases and the characterization for a free $A$-module representation help identify the cases when $\mathbb{Z}[x_1, \ldots, x_n]/\mathfrak{a}$ is isomorphic to $\mathbb{Z}^N$ for some $N \in \mathbb{N}$. Ideals in such $\mathbb{Z}$-algebras are called ideal lattices. These structures are interesting since we can use the algebraic structure $\mathbb{Z}[x_1, \ldots, x_n]/\mathfrak{a}$ as a representation for point lattices and extend all the computationally hard problems in point lattice theory to $\mathbb{Z}[x_1, \ldots, x_n]/\mathfrak{a}$. Univariate ideal lattices are widely used in lattice based cryptography, for they are a more compact representation of lattices than matrices. In this thesis, we give a characterization of multivariate ideal lattices and construct collision resistant hash functions based on them using Gröbner basis techniques. For the construction of hash functions, we define a worst case problem, the shortest substitution problem w.r.t. an ideal in $\mathbb{Z}[x_1, \ldots, x_n]$, and establish hardness results for this problem.

Finally, we develop an approach to compute the Krull dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$ using Gröbner bases, when $A$ is a Noetherian integral domain. When $A$ is a field, the Krull dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$ has several equivalent algorithmic definitions by which it can be computed, but this is not true for arbitrary Noetherian rings. We introduce the notion of the combinatorial dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$ and give a Gröbner basis method to compute it for residue class polynomial rings that have a free $A$-module representation w.r.t. a lexicographic ordering. For such $A$-algebras, we derive a relation between the Krull dimension and the combinatorial dimension of $A[x_1, \ldots, x_n]/\mathfrak{a}$. For $A$-algebras that have a free $A$-module representation w.r.t. degree compatible monomial orderings, we introduce the concepts of Hilbert function, Hilbert series and Hilbert polynomials, and show that Gröbner basis methods can be used to compute these quantities. We then show that the combinatorial dimension of such $A$-algebras equals the degree of the Hilbert polynomial. This enables us to extend the relation between Krull dimension and combinatorial dimension to $A$-algebras with a free $A$-module representation w.r.t. a degree compatible ordering as well.


 

Series: M.Sc. (Engg) Colloquium
Title: Fully Resilient Non-Interactive Id-based Hierarchical Key Agreement

  • Speaker: Mayank Tiwari (MSc Engg.)
  • Faculty Advisor: Dr. Sanjit Chatterjee
  • Date and Time: Tuesday, July 11, 2017, 12:00 PM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
Non-Interactive Key Agreement (NIKA) is a cryptographic primitive which allows two parties to agree on a shared key without any interaction. Identity-based Non-Interactive Key Agreement (ID-NIKA) allows each party to compute the shared secret key using its own secret key and the peer's identity. ID-NIKA can be used to establish shared secret keys in ad-hoc networks using minimal battery power and communication; a toy sketch of the primitive follows.
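
To make the primitive concrete, here is a hedged toy in Python (a Blundo-style symmetric-polynomial scheme, not one of the constructions discussed in this talk): a trusted authority embeds a symmetric bivariate polynomial $f(x,y)=f(y,x)$ mod $p$, node $i$ receives the univariate share $f(i,\cdot)$, and any two nodes derive the same key $f(i,j)=f(j,i)$ without exchanging messages.

    p = 2**31 - 1          # a public prime modulus (toy-sized parameters)

    # symmetric coefficient matrix C (C[a][b] == C[b][a]) defines
    # f(x, y) = sum of C[a][b] * x^a * y^b  (mod p)
    C = [[5, 7], [7, 11]]

    def share(i):
        # secret share of node i: coefficients of the univariate f(i, y)
        return [sum(C[a][b] * pow(i, a, p) for a in range(2)) % p
                for b in range(2)]

    def derive(my_share, peer_id):
        # evaluate own share at the peer's *public* identity
        return sum(c * pow(peer_id, b, p) for b, c in enumerate(my_share)) % p

    alice, bob = share(1234), share(5678)
    assert derive(alice, 5678) == derive(bob, 1234)  # same key, no messages

Note that such polynomial schemes are only resilient up to a threshold of compromised nodes; full resilience, as in the title, is exactly what distinguishes the schemes discussed below.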

A Mobile Ad-hoc NETwork (MANET) is a network of mobile, moderately resource-constrained devices communicating through a wireless medium; standard MANET devices include laptops, cellphones, etc. Due to inherent characteristics like mobility, dynamic topology and lack of centralized infrastructure, MANETs face some serious security issues. We are particularly interested in ID-NIKA in MANETs, since it is of crucial interest for secure communication between two nodes.

In 2008, Gennaro et al. introduced a scheme called the Hybrid Hierarchical Key Agreement Scheme (HH-KAS). HH-KAS uses a linear hierarchical key agreement scheme at the non-leaf levels and a key agreement scheme due to Sakai et al. (referred to as SOK-KAS) at the leaf level. HH-KAS is (i) non-interactive, (ii) identity based, (iii) hierarchical, and (iv) fully resilient against node compromises at the leaf level and resilient against node compromises up to certain threshold values at non-leaf levels. Thus one can say that HH-KAS is only partially resilient against node compromises. In their paper, the authors claim that no key agreement scheme in the literature achieves all four of the above properties for MANETs, and pose this as an interesting open problem in the area.

In 2012, Guo et al. proposed a scheme known as the Strong Key Agreement Scheme (SKAS) and claimed it to be a potential solution to the open problem posed by Gennaro et al. However, in 2014, Zhu et al. demonstrated a concrete attack on SKAS, rendering it practically useless for real-life applications.

Our main contribution is a hybrid scheme built from two existing schemes. It uses a deterministic key pre-distribution scheme by Lee and Stinson, termed the Basic ID One-way function Scheme (BIOS), at level 1 (where the root is at level 0); beyond level 1, we use SOK-KAS for key agreement. We refer to our scheme as BIOS-SOK key agreement. BIOS and SOK satisfy properties (i), (ii) and (iv), but neither is hierarchical in nature. Our work amalgamates the two schemes into a construction that is hierarchical; thus, the BIOS-SOK scheme satisfies (i), (ii) and (iii), and is also fully resilient against an arbitrary number of node compromises at any level.

The BIOS-SOK scheme also offers low space requirements, low shared-key computation time and better scalability for many real-life applications when compared with the scheme of Gennaro et al. In HH-KAS, key agreement is carried out only at the leaf level, whereas in the BIOS-SOK scheme any two nodes in the hierarchy (at the same or different levels) can compute a shared secret key. We also provide a rigorous security analysis for our scheme in a security model stronger than the one used for HH-KAS.

Video Archive Go Top

 

Series: M.Sc. (Engg) Colloquium
Title: Improved lower bounds for depth four arithmetic circuits

  • Speaker: Mr. Abhijat Sharma
                   M.Sc. (Engg)
  • Faculty Advisor: Dr. Chandan Saha
  • Date and Time: Tuesday, July 11, 2017, 11:00 AM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
We study the problem of proving lower bounds for depth four arithmetic circuits. Depth four circuits have received much attention in recent circuit lower bound results, owing to a series of works culminating in the proof that strong enough lower bounds for depth four circuits would imply super-polynomial lower bounds for general arithmetic circuits, thereby resolving one of the most central open problems in algebraic complexity, i.e., the separation of the complexity classes VP and VNP.

However, despite several efforts, the best known lower bound for general circuits remains $\Omega(N \log N)$, where $N$ is the number of input variables.

In the case of arithmetic formulas, an $\Omega(N^2)$ lower bound is known, and it has seen no improvement since.

The situation for depth three arithmetic circuits was similar for many years, until a recent result achieved an almost cubic lower bound, improving upon the previous best (quadratic) bound.

Unlike for depth three circuits, however, proving a super-quadratic lower bound for depth four circuits has remained open for many years.

Previous results implied a super-linear lower bound of $\Omega(N^{1.33})$ for depth four circuits. As the main contribution of this thesis, we improve upon this bound and prove an $\Omega(N^{1.5})$ lower bound on the size of depth four circuits computing an explicit $N$-variate polynomial in VNP of degree $d = O(\sqrt{N})$.

Our approach offers a potential route to proving a super-quadratic lower bound for depth four circuits. Taking a cue from numerous recent successes, we use the shifted partial derivatives measure to achieve the stated lower bound.

In particular, we use the Dimension of Projected Shifted Partials (DPSP) measure. For the choice of the hard polynomial, we again follow the status quo and use a variant of the Nisan-Wigderson (NW) polynomial family, which has proved very useful in arithmetic circuit complexity over the past few years.
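
For context, a hedged reminder of the measure's standard form in the literature (the thesis works with a projected variant whose exact definition is given in the talk): for a polynomial $f$ and parameters $k, \ell$, the shifted partials measure is

$SP_{k,\ell}(f) = \dim \, \mathrm{span}\{\, x^{\alpha} \cdot \partial^{\beta} f \;:\; |\alpha| \le \ell, \ |\beta| = k \,\}$,

i.e., the dimension of the space spanned by all $k$-th order partial derivatives of $f$, each shifted (multiplied) by all monomials of degree at most $\ell$. In projected variants, the spanning polynomials are first projected (typically onto their multilinear monomials) before the dimension is taken; a lower bound then follows by showing the measure is small for every small depth four circuit but large for the chosen hard polynomial.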

Finally, we carefully analyze two previous works on general depth four circuits and compare their techniques to ours.

We conclude that our result can potentially serve as a starting point: techniques similar to those behind the almost cubic lower bound for depth three circuits can likely be used to strengthen our bound to $\Omega(N^{2.5})$ for general depth four arithmetic circuits.

Video Archive Go Top

 

Series: Department Seminar
Title: Approximating the Sparsest Cut on Expander-Like Graphs

  • Speaker: Dr. Rakesh Venkat
                   Postdoctoral Fellow
                   Hebrew University of Jerusalem, Israel
  • Date and Time: Monday, July 10, 2017, 3:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
The Sparsest Cut problem is a fundamental NP-hard problem in graph optimization with many practical applications. On the theoretical side, designing approximation algorithms for it has revealed intriguing connections to questions in geometry involving metric spaces and embeddings. The best known algorithm for the Sparsest Cut problem, due to Arora, Rao and Vazirani (2004), gives an $O(\sqrt{\log n})$ approximation on $n$-vertex graphs, and this is believed to be the best possible in general. However, one can ask whether better approximation algorithms exist for specific graph classes. In this talk, I will survey the relevant background and describe better approximation algorithms for the Sparsest Cut problem on an important class of graphs called low threshold-rank graphs, which generalize the well-known class of expander graphs. (Based on joint works with Amit Deshpande, Prahladh Harsha and Yuval Rabani.)
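
For reference, the standard uniform formulation, stated here as a hedged reminder rather than taken from the talk: given a graph $G=(V,E)$ on $n$ vertices, the sparsity of a cut $(S, \bar{S})$ is

$\phi(S) = \frac{|E(S, \bar{S})|}{|S| \cdot |\bar{S}|}$,

and the Sparsest Cut problem asks for a nonempty $S \subsetneq V$ minimizing $\phi(S)$. Expander graphs are precisely those for which this quantity (suitably normalized) is lower-bounded by a constant over all cuts, which is what makes expander-like graphs a natural class for better approximations.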

Speaker Bio:
Rakesh Venkat is currently a Postdoctoral Fellow at the Hebrew University of Jerusalem, Israel with Prof. Yuval Rabani. He received his PhD from TIFR, Mumbai under the supervision of Prof. Prahladh Harsha. His research interests include topics in approximation algorithms, complexity theory and communication complexity.

Host Faculty: Dr. Anand Louis

Video Archive Go Top

 

Series: M.Sc. (Engg) Colloquium
Title: A Fine Grained Dynamic Information Flow Analysis for Android Apps

  • Speaker: Shyam Sankaran
  • Faculty Advisor: Prof. Aditya Kanade
  • Date and Time: Friday, July 07, 2017, 11:30 AM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
Android has been steadily gaining popularity ever since its launch in 2008. One of the major factors for this is the easy availability of a large variety of apps, ranging from simple calculator apps to apps that help people maintain their schedules and thus manage many aspects of their lives. In addition, many free apps are available to the user thanks to in-app purchases and advertisements. However, these also raise serious security concerns. Apps are privy to a lot of private information about the user, such as contacts, location, etc., and it is essential to ascertain that apps do not leak such information to untrustworthy entities. Analyses that track private data accessed or generated by an app to its destination are commonly known as information flow analyses, and both static and dynamic variants exist. Dynamic techniques such as TaintDroid track private information and alert the user when it is accessed by specific API calls; however, they do not track the path taken by the information, which can be useful in debugging and validation scenarios.

We propose a model, inspired by FlowDroid and TaintDroid, that performs dynamic information flow analysis at a fine granularity so that the path taken by each information flow can be recovered. The model uses path-edges to track information flows during a dynamic run, from which the flow paths are then obtained. We have implemented this model and used the resulting path-edges as seeds for FlowDroid, yielding sound and quicker results compared to stand-alone runs of FlowDroid. On 10 real-world apps, the dynamic runs discover on average about 20% of the total path-edges found by FlowDroid, and seeding them gives an average 1.8x speed-up of the analysis. A toy rendering of the path-edge idea appears below.
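
As a hedged, toy illustration of path-edge style tracking (my sketch in Python, not the authors' implementation; the source/sink names are hypothetical): each taint fact records the program point that produced it and its predecessor fact, so a source-to-sink path can be replayed when a leak is detected.

    SOURCE, SINK = "getDeviceId", "sendSms"  # hypothetical source/sink names

    trace = [("a", SOURCE, []),        # a = getDeviceId()
             ("b", "concat", ["a"]),   # b = concat(a)
             (None, SINK, ["b"])]      # sendSms(b)

    tainted, parent = {}, {}           # var -> defining pc; var -> source var
    for pc, (dst, op, args) in enumerate(trace):
        if op == SOURCE:
            tainted[dst] = pc                          # new taint fact
        elif any(a in tainted for a in args):
            src = next(a for a in args if a in tainted)
            if op == SINK:
                # walk predecessor edges backwards to rebuild the path
                path, v = [pc], src
                while v is not None:
                    path.append(tainted[v]); v = parent.get(v)
                print("leak path (pcs):", path[::-1])  # [0, 1, 2]
            else:
                tainted[dst] = pc                      # propagate taint
                parent[dst] = src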

Video Archive Go Top

 

Series: Department Seminar
Title: Testing and Analysis of Web Applications using Page Models

  • Speaker: Snigdha Athaiya
                   PhD student
  • Faculty Advisor: Prof. K V Raghavan
  • Date and Time: Wednesday, July 05, 2017, 4:00 PM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
Web applications are difficult to analyze using code-based tools because data-flow and control-flow through the application occurs via both server-side code and client-side pages. Client-side pages are typically specified in a scripting language that is different from the main server-side language; moreover, the pages are generated dynamically from the scripts. To address these issues we propose a static-analysis approach that automatically constructs a ``model'' of each page in a given application. A page model is a code fragment in the same language as the server-side code, which faithfully over-approximates the possible elements of the page as well as the control-flows and data-flows due to these elements. The server-side code in conjunction with the page models then becomes a standard (non-web) program, thus amenable to analysis using standard code-based tools.
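
A hedged sketch of the idea (the paper targets J2EE; this Python rendering with invented handler names is only illustrative): the dynamically generated page is replaced by an ordinary function in the server-side language that over-approximates every element the page may contain and every control-flow and data-flow those elements can trigger.

    import random

    def nondet() -> bool:
        # stand-in for "any possible user/page behavior" (over-approximation)
        return random.choice([True, False])

    def results_page_model(session):
        # model of a hypothetical results page: same language as the server
        # code, over-approximating the page's elements and their flows
        if nondet():                        # the page may show an 'edit' link
            return edit_handler(session)    # control-flow back into the server
        if nondet():                        # or a search form with a user value
            query = "<any user input>"      # data-flow: form field -> handler
            return search_handler(session, query)
        return None                         # or the user may leave the page

    def search_handler(session, query):
        session["last_query"] = query       # now visible to standard analyzers
        return results_page_model(session)  # the handler renders the page again

    def edit_handler(session):
        return None

    results_page_model({})

Composed with the real server-side code, such a model turns the application into a standard (non-web) program amenable to off-the-shelf analyzers.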

We have implemented our approach in the context of J2EE applications. We demonstrate the versatility and usefulness of our approach by applying three standard analysis tools on the resultant programs from our approach: a concolic-execution based model checker (JPF), a dynamic fault localization tool (Zoltar), and a static slicer (Wala).

*This is a practice talk for ISSTA 2017*

Video Archive Go Top

 
