Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India

UPCOMING SEMINARS

Series: M.Sc. (Engg) Colloquium
Title: Design of Trusted Market Platforms using Permissioned Blockchains and Game Theory

  • Speaker: Ms. Shivika Narang
                   M.Tech. (Research) Student
                   Dept. of CSA
                   IISc
  • Faculty Advisor: Prof. Y. Narahari
  • Date and Time: Wednesday, May 02, 2018, 4:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
The blockchain concept forms the backbone of a new wave of technology that promises to be deployed extensively in a wide variety of industrial and societal applications. Governments, financial institutions, banks, industrial supply chains, service companies, and even educational institutions and hospitals are investing substantially in the hope of improving business efficiency and operational robustness through deployment of blockchain technology. This thesis work is concerned with designing trustworthy business-to-business (B2B) market platforms drawing upon blockchain technology and game theory. The proposed platform is built upon three key ideas. First, we use permissioned blockchains with smart contracts as a technically sound approach for building the B2B platform. The blockchain deploys smart contracts that govern the interactions of enterprise buyers and sellers. Second, the smart contracts are designed using a rigorous analysis of a repeated game model of the strategic interactions between buyers and sellers. We show that such smart contracts induce honest behavior from buyers and sellers. Third, we embed cryptographic regulation protocols into the permissioned blockchain to ensure that business-sensitive information is not revealed to competitors. We believe our work is an important step in the direction of building a powerful B2B platform that maximizes social welfare and enables trusted collaboration between strategic enterprise agents.
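
To make the game-theoretic intuition concrete, here is a standard folk-theorem-style condition from the theory of infinitely repeated games; it is a generic illustration of why repeated interaction can induce honesty, not the specific model analyzed in the thesis. If each trading round yields a buyer or seller a per-round payoff $\pi_h$ under honest play, a one-shot gain $g_d$ from deviating (e.g., reneging on a delivery), and a per-round payoff $\pi_p < \pi_h$ once the smart contract starts punishing, then under a discount factor $\delta$ a trigger strategy sustains honest behavior whenever

$$ g_d \;\le\; \frac{\delta}{1-\delta}\,\bigl(\pi_h - \pi_p\bigr), $$

i.e., the immediate gain from cheating is outweighed by the discounted stream of forgone future surplus.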


Series: Ph.D. Colloquium
Title: On representations and spectral inequalities for non-uniform hypergraphs

  • Speaker: Ashwin Guha
  • Faculty Advisor: Prof. Ambedkar Dukkipati
  • Date and Time: Thursday, April 26, 2018, 3:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Spectral graph theory involves the study of the eigenvalues of graph connectivity matrices such as the adjacency matrix, the Laplacian, or the signless Laplacian. Spectral methods are often efficient and are widely used in solving graph problems. While graph theory has a variety of applications, it has often been observed in many real-world instances that the pairwise relationships captured by graphs do not describe the data in its entirety. To overcome this limitation, the notion of hypergraphs was introduced. Hypergraphs are a generalization of graphs in which an edge may span more than two nodes. They have been studied extensively from a combinatorial perspective. Recently there has been renewed interest in applying spectral methods to problems on hypergraphs, especially hypergraph partitioning and community detection, which have applications in machine learning. In order to employ spectral techniques, it is necessary to have an appropriate representation of the hypergraph.

In this work, we review some of the representations proposed in the literature and analyze their suitability for spectral algorithms. These representations include the straightforward clique expansion, star expansion, simplicial complexes and tensor representations. We discuss the relative merits and disadvantages of each representation and present some of the ways to extend existing results for graphs to hypergraphs, particularly to non-uniform hypergraphs where the edges are of varying sizes.

Several properties of a graph, such as its chromatic number, diameter, and randomness, can be determined from its spectrum. One of the most famous results is the Cheeger inequality, which relates the isoperimetric number of a graph to the second smallest eigenvalue of the graph Laplacian. Some of these results have been extended to hypergraphs, often in multiple ways under different representations. For example, a Cheeger-type inequality has been proved with respect to simplicial complexes. We provide a generalized version of the Cheeger inequality for non-uniform hypergraphs using a tensor approach, which is the major contribution of this work. We also prove a few other results, namely, bounds on the analytic connectivity of general hypergraphs in terms of the diameter and degree sequence of the hypergraph, using tensors.
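
For reference, one common form of the graph Cheeger inequality that the talk generalizes: if $h(G)$ denotes the conductance (isoperimetric number) of a graph $G$ and $\lambda_2$ the second smallest eigenvalue of its normalized Laplacian, then

$$ \frac{\lambda_2}{2} \;\le\; h(G) \;\le\; \sqrt{2\lambda_2}. $$

The hypergraph versions discussed in the talk replace the Laplacian spectrum with spectral quantities of the chosen representation, such as simplicial complexes or adjacency tensors.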


Series: M.Sc. (Engg) Thesis Defense
Title: Deep Learning Models for Few-shot and Metric Learning

  • Speaker: Mr. Akshay Mehrotra
  • Faculty Advisor: Prof. Ambedkar Dukkipati
  • Date and Time: Wednesday, April 25, 2018, 3:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Deep neural network based models have achieved unprecedented performance levels over many tasks in the traditional supervised setting and scale well with large quantities of data. On the other hand, improving performance in the low-data regime is understudied and the sheer number of parameters versus small amounts of data makes it a challenging task. The problem of learning from a few instances of data is known as few-shot learning. A related problem is that of metric learning over disjoint training and testing classes, where the task is to learn a function that maps a data point to a low-dimensional manifold such that similar points are closer in the low-dimensional space. In this thesis, we propose a couple of deep learning approaches for few-shot learning and then extend them to the related problem of metric learning over disjoint training and testing classes.

We first argue that a more expressive pairwise similarity matching component is crucial for solving the few-shot learning problem. Towards that end, a network design with a learnable and more expressive similarity objective is proposed by extending the deep residual network. This residual pairwise network approximates a learned metric in the representation space and outperforms the previous state-of-the-art on the challenging mini-Imagenet dataset for few-shot learning, achieving over 54% accuracy on the 5-way classification task over unseen classes. We also evaluate the generalization behaviour of deep residual networks with a varying number of parameters over classes not observed during training.

Next, since regularization plays a key role in learning with small amounts of data, an additional generator network is proposed by extending the Generative Adversarial Network (GAN) framework for disjoint training and testing classes. This provides a strong regularizer by leveraging the generated data samples. The proposed model can generate plausible variations of exemplars over unseen classes and outperforms strong discriminative baselines with L2 regularization for few-shot classification tasks.

Finally, the idea of regularizing by adversarial generation is extended to the metric learning setting over disjoint training and testing classes. The proposed model is an efficient alternative to the hard negative mining scheme necessary when training with triplets in the large margin nearest neighbours setting. The proposed model shows performance and efficiency improvements over triplet-loss models trained without negative mining.
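
For context, the triplet loss referred to above, in its standard form (the thesis's exact objective may differ), is

$$ \mathcal{L} \;=\; \sum_{(a,p,n)} \max\Bigl(0,\; \|f(a)-f(p)\|_2^2 \;-\; \|f(a)-f(n)\|_2^2 \;+\; \alpha\Bigr), $$

where $f$ is the learned embedding, $(a,p,n)$ ranges over anchor-positive-negative triplets, and $\alpha$ is the margin; hard negative mining picks the negatives $n$ that most violate this margin, which is the step the adversarially generated samples are proposed to replace.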


PAST SEMINARS

Series: Department Seminar
Title: Approximation Algorithms for Multidimensional Packing.

  • Speaker: Dr. Arindam Khan
                   Postdoctoral Researcher, TU Munich
  • Date and Time: Monday, April 23, 2018, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Multidimensional packing problems find numerous applications in robotics, cloud computing, smart-grids and many other scheduling and resource allocation problems. This talk will mainly focus on the two-dimensional geometric knapsack problem (2DK), a geometric variant of the classical knapsack problem. In this problem, we are given a set of axis-aligned rectangular items, each with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum-profit subset of items inside the knapsack without rotating items. This is a very well studied NP-hard optimization problem and finds applications in scheduling, memory allocation, advertisement placement, etc. The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) is (2 + epsilon), due to [Jansen and Zhang, SODA 2004]. After more than a decade, by introducing new algorithmic techniques, we break the 2-approximation barrier and achieve a polynomial-time (17/9 + epsilon) < 1.89 approximation.

This result appeared in FOCS 2017. This is a joint work with Waldo Galvez, Fabrizio Grandoni, Sandy Heydrich, Salvatore Ingala and Andreas Wiese. The talk will also briefly mention some related problems (such as vector packing and geometric bin packing) and other general research areas of the speaker.
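
For concreteness, the 2DK problem described above can be written as the following optimization problem, where item $i$ has width $w_i$, height $h_i$ and profit $p_i$, and $N$ is the side length of the square knapsack:

$$ \max_{S \subseteq [n],\; (x_i, y_i)_{i \in S}} \;\sum_{i \in S} p_i \quad \text{s.t.} \quad [x_i, x_i + w_i] \times [y_i, y_i + h_i] \subseteq [0, N]^2 \text{ for all } i \in S, $$

with the placed rectangles required to be interior-disjoint and rotations not allowed.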

Speaker Bio:
Arindam Khan is a postdoctoral researcher at TU Munich, Germany. His research areas include approximation algorithms, online algorithms, and computational geometry. He obtained his Ph.D. in Algorithms, Combinatorics and Optimization (ACO) from Georgia Institute of Technology, Atlanta, USA under Prof. Prasad Tetali. Previously he has been a research intern in the Theory group at Microsoft Research Redmond and Microsoft Research Silicon Valley, USA; a visiting researcher at the Simons Institute, University of California, Berkeley, USA; a Blue Scholar at IBM Research India; and a researcher at IDSIA, Lugano, Switzerland.

Host Faculty: Prof. Matthew Jacob


Series: CSA Faculty Colloquium
Title: Challenges and opportunities in the era of heterogeneous computing

  • Speaker: Dr. Arkaprava Basu
                   Assistant Professor
                   Dept. of CSA
  • Date and Time: Friday, April 20, 2018, 4:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Sustaining the growth in computing capability is a challenge like never before, as one of the primary drivers of computing over the last five decades, namely the scaling of transistor technology, has stalled. This has necessitated a re-thinking of how future computers should be designed and how software could program them. In this talk, we will discuss how this upheaval in transistor technology is turning computers heterogeneous. In a heterogeneous computer, domain-specific accelerators like graphics processing units (GPUs) are coupled with a general purpose CPU for achieving better performance and energy efficiency. However, programming such heterogeneous processors, comprising multiple disparate computing and memory elements, is itself a challenge. We will discuss a possible way to ease the programmability of this emerging class of processors, and also analyze the performance overheads of doing so. We will then elaborate on one of our recent works that takes a step towards making heterogeneous processors both easily programmable and performant at the same time. Finally, we will end the talk with a brief discussion of some of the emerging technological trends, their implications for future computers, and potential avenues for research to address impending challenges in computing.

Speaker Bio:
Arkaprava Basu joined the Department of Computer Science and Automation in February 2018 as an assistant professor. He is broadly interested in exploring hardware and software interactions in the context of emerging technologies. Prior to joining IISc, he was a researcher at AMD Research in Austin, USA for four years. Before that, he obtained his PhD in computer science from the University of Wisconsin-Madison, USA.

Host Faculty: Prof. Sunil L Chandran


Series: Ph.D. Thesis Defense
Title: Model Checking Temporal Properties of Counter Systems

  • Speaker: Ms. K Vasanta Lakshmi
                   Ph.D. student
                   Dept. of CSA
  • Faculty Advisor: Prof. K V Raghavan
  • Date and Time: Friday, April 20, 2018, 2:30 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Counter systems are a well-known and powerful modeling notation for specifying infinite-state systems. In this work we target the problem of checking temporal properties of counter systems. We address both predominant families of temporal property specifications, namely, Computation Tree Logic (CTL) and Linear Temporal Logic (LTL).

We provide two algorithms. The first algorithm performs model checking of the "liveness" fragment of CTL properties. The second algorithm performs model checking of (the entire class of) LTL properties. Each of our algorithms returns a Presburger formula that encodes the set of reachable states of the given system that satisfy the given temporal property.
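
As a small illustration of the output format (a toy example, not one from the thesis): consider a counter system with a single counter $x$, initialized to $0$, and one self-loop transition $x := x + 2$. The set of reachable states in which $x > 5$ holds is encoded by the Presburger formula

$$ \exists k \ge 0.\; (x = 2k) \;\wedge\; (x > 5), $$

i.e., the even values greater than $5$.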

A novel aspect of our algorithms is that they use reachability analysis techniques for counter systems, which are well studied in the literature, as black boxes. This brings two advantages to the table. The first is that these algorithms are able to compute precise answers on a much wider class of systems than previous approaches. Secondly, they compute results by iterative expansion or contraction, and hence permit an over-approximated or under-approximated solution to be obtained at any point even in cases where termination may not occur. We prove the formal properties of our algorithms, and also provide experimental results on standard benchmarks (such as communication protocols and control mechanisms) to show the precision as well as efficiency of our algorithms.


Series: Department Seminar
Title: Representation and Representative Families

  • Speaker: Dr Fahad Panolan
                   Postdoctoral Researcher, University of Bergen, Norway
  • Date and Time: Monday, April 16, 2018, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
A subfamily F' of a family F, where each set in F is of size p, is a q-representative family for F if the following condition holds: for every set A in F and every set B of size q such that A is disjoint from B, there exists a set A' in F' such that A' is disjoint from B. By the classic result of Bollobás, every family of sets of size p has a q-representative family with at most $\binom{p+q}{p}$ sets (which is independent of the size of F). The notion of representative families can be extended to matroids. A subfamily F' of a family F, where each set in F is of size p and is an independent set of a matroid M, is a q-representative family for F if the following condition holds: for every set A in F and every set B of size q such that A is disjoint from B and A union B is an independent set in M, there exists a set A' in F' such that A' is disjoint from B and A' union B is an independent set in M. By the two families theorem of Lovász, every family of independent sets of size p in a linear matroid has a q-representative family with at most $\binom{p+q}{p}$ sets. We designed fast algorithms to compute q-representative families on both set systems (uniform matroids) and linear matroids. In this talk, I will demonstrate how the efficient construction of representative families on set systems as well as linear matroids can be used to design fast parameterized algorithms.
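
In symbols, F' is q-representative for F (all sets of size p) when

$$ \forall B,\ |B| = q:\quad \bigl(\exists A \in F:\ A \cap B = \emptyset\bigr) \;\Longrightarrow\; \bigl(\exists A' \in F':\ A' \cap B = \emptyset\bigr), $$

and the bound quoted above follows from Bollobás's set-pair inequality: if $A_1,\dots,A_m$ are $p$-sets and $B_1,\dots,B_m$ are $q$-sets with $A_i \cap B_j = \emptyset$ if and only if $i = j$, then $m \le \binom{p+q}{p}$.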

Our algorithm for the computation of a representative family on a linear matroid crucially uses the given linear representation of the matroid. Unfortunately, for some well known matroids like transversal matroids and gammoids, so far we only know how to compute their linear representations in randomized polynomial time, and those algorithms use Polynomial Identity Testing (PIT). Derandomization of PIT and deterministic polynomial time computation of linear representations of transversal matroids and gammoids are long standing open problems. To overcome this, we define a notion of union representation of matroids and give a deterministic computation of the union representation of a transversal matroid in time quasipolynomial in the rank of the matroid. The notion of union representation yields a fast deterministic computation of representative families on transversal matroids.

Host Faculty: Prof. Matthew Jacob


Series: Department Seminar
Title: The Billion Buildings Challenge

  • Speaker: Dr Nipun Batra
                   Postdoctoral Researcher, University of Virginia, Charlottesville
  • Date and Time: Wednesday, April 11, 2018, 11:00 AM
  • Venue: CDS Seminar Hall

Abstract
Buildings contribute roughly one-third of the total energy consumption. Energy breakdown, i.e., providing energy consumption at a per-appliance level, can help occupants save up to 15% energy. In this talk, I will propose the billion buildings challenge: providing an energy breakdown to a billion buildings. Previous approaches required sensor hardware to be installed in each home and thus do not scale well. Towards the challenge, I will present my approach, called collaborative sensing, which provides an energy breakdown without installing any additional sensors. The key intuition is that the variation of energy consumption across buildings can be captured via a low dimensional representation. Our approach leverages transfer learning techniques to scale energy breakdown to regions with little or no instrumentation. Finally, I will talk about providing actionable insights based on the energy breakdown data, and about making energy breakdown research comparable.
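
One generic way to make the low-dimensional intuition concrete (a sketch of the general idea, not necessarily the speaker's exact model): arrange the per-home, per-appliance, per-month energy readings into a tensor $E \in \mathbb{R}^{H \times A \times M}$ and posit an approximately low-rank structure

$$ E_{h,a,m} \;\approx\; \sum_{k=1}^{K} U_{h,k}\, V_{a,m,k}, \qquad K \ll \min(H, A \cdot M), $$

so that home factors $U_{h,\cdot}$ learned from instrumented homes, combined with a new home's aggregate smart-meter readings, can be used to predict its unobserved appliance-level entries.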

Speaker Bio:
Nipun Batra is a CS postdoc at the University of Virginia. Previously, he completed his PhD at IIIT Delhi and his bachelor's at Delhi College of Engineering. His work has received various awards, such as the best doctoral consortium presentation at SenSys 2015 and the best demo award at BuildSys 2014, and he was a finalist in the UChicago Urban Indian challenge, among others. He recently served as the TPC chair of the NILM 2018 workshop.

Host Faculty: Dr. Yogesh Simmhan


Series: M.Sc. (Engg) Colloquium
Title: Adaptively Secure Primitives in the Random Oracle Model

  • Speaker: Mr. Pratik Sarkar
                   M.Sc. (Engg) student
                   Dept. of CSA
  • Faculty Advisor: Dr. Arpita Patra
  • Date and Time: Monday, April 09, 2018, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Adaptive security embodies one of the strongest notions of security, allowing an adversary to corrupt parties at any point during protocol execution and gain access to their internal state. Since it models real-life situations such as "hacking", efficient adaptively-secure multiparty computation (MPC) protocols are desirable. Such protocols demand primitives such as zero knowledge (ZK), oblivious transfer (OT) and commitment schemes that are adaptively-secure as building blocks. Efficient realizations of these primitives have been found to be challenging, especially in the no-erasure model. We make progress in this direction and provide efficient constructions that are Universally Composable in the random oracle model.

Zero Knowledge: We construct an efficient UC-secure constant-round ZK protocol from garbled circuits that is secure against adaptive corruptions, with communication linear in the size of the statement. Our work builds upon the efficient 5-round ZK protocol of Jawurek et al. (CCS 2013). We show that the ZK protocol can be made adaptively secure when the underlying oblivious transfer (OT) satisfies a mild adaptive security guarantee and a conditional verification technique is employed. The conditional verification technique gives us a three-round adaptively secure zero-knowledge argument in the plain random oracle model.

Oblivious Transfer: We present the first round-optimal framework for building adaptively-secure OT in the programmable random oracle (PRO) model, relying upon the framework of Peikert et al. (Crypto 2008) that is only statically secure. When instantiated with the Decisional Diffie-Hellman assumption, it incurs a nominal communication overhead over its static counterpart. We complete the picture of efficient OT constructions by presenting the first adaptively secure OT extension using PRO.

Commitment Scheme: We present an adaptively secure, efficient commitment scheme relying solely on an observable random oracle (ORO). Our commitment scheme has a one-time offline setup phase, where a common reference string (crs) is generated between the parties using an ORO. In the online phase, the parties use the crs and ORO to generate commitments in a non-interactive fashion.


Series: Research Student Colloquium
Title: 1) BE NOT ASHAMED OF THE PIMPLE - FLAUNT IT! 2) Modelling Dynamic Networks

  • Speaker: 1) Ullas Aparanji
                   2) Shubham Gupta
  • Date and Time: Friday, April 06, 2018, 4:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
1) "Stopping by Code on a Summer Evening" Whose fields these are I think I know. His implementation is in a structure though He will not see me stopping here To watch his structure's size just grow. My little compiler must think it queer To parse a struct without definition near With the header files, it shall make The darkest translation unit of the year. I give my numbing brain a shake To ask if there is some mistake. The implementation all the struct did sweep Only a handle is all we have at stake. The structure is scary, dark and deep. I have the speaker to make me weep, And pointers are here to ruin my sleep, And pointers are here to ruin my sleep. ---------------------------------------- We'd like to have a design pattern to hide the implementation details of an interface from ordinary clients, so that the implementation may be modified without the need to recompile the modules dependent upon it. This benefits the programmer as well since a simple interface can be created, and most details can be hidden in another file. This is important for providing binary code compatibility through different versions of a shared library, for example. In case you didn't get that previous paragraph, fret not. It's not relevant for following the talk (but I guarantee that it will be comprehensible ex post facto). I will be discussing the very basic nuances of what the Pimple is to demystify its effectiveness and usage, and will convince you rigorously why you shouldn't despise the "measly" Pimple (oops, inadvertent pun :P). This talk will be comprehensible to anybody with a very basic understanding of the C programming language (in particular, structures in C, and a rudimentary understanding of pointers (though some of you might argue that there is nothing rudimentary about pointers (by the way, the ability to parse nested parentheses is not a strict requirement to understand the talk (although the ability to bear my puns is an absolute necessity) :P))), and nothing deeper than this is required to follow along :)

2) Networks, like everything else in the real world, change with time. For example, over time, new nodes take birth while old nodes are lost into oblivion. Local disturbances in the network ripple through it and show their effect on the overall community structure. Since the behaviour, stability and dynamics of the network are a function of the community structure, in order to understand these networks one needs to explicitly model their temporal aspect. Most of the earlier work in the domain of network modelling was restricted to static networks (which are frozen in time). But as more computational power became available and practical applications like large-scale social network analysis emerged, researchers started searching for good dynamic network models. In this talk, I will give a brief overview of the literature on network modelling (both static and dynamic). I will then describe our recently proposed Evolving Latent Space Model (ELSM) for dynamic networks and demonstrate its performance on secondary tasks like community detection and link prediction. Can we see the future using ELSM? Stay tuned!
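
As background for the second talk, here is a generic dynamic latent space network model of the kind ELSM builds on, stated only as an illustration and not as the specific ELSM: each node $i$ has a latent position $z_i^{(t)}$ at time $t$, and edges form independently with

$$ \Pr\bigl[A_{ij}^{(t)} = 1\bigr] \;=\; \sigma\bigl(\alpha - \|z_i^{(t)} - z_j^{(t)}\|\bigr), \qquad z_i^{(t+1)} \;=\; z_i^{(t)} + \epsilon_i^{(t)}, $$

where $\sigma$ is the logistic function and $\epsilon_i^{(t)}$ is a small random drift. Communities then correspond to clusters of latent positions, and link prediction amounts to extrapolating the positions forward in time.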

Speaker Bio:
1) Ullas Aparanji is a second year PhD student, advised by Prof. Y N Srikant. His technical areas of interest lie in Programming Languages, Compilers and Program Analysis. His non-technical areas of interest lie in poetry, puns and parentheses-nesting (hey, that's alliteration (almost)! :P) 2) Shubham Gupta is a PhD candidate at the Statistics and Machine Learning Group, Department of Computer Science and Automation, IISc. He is working under the guidance of Dr. Ambedkar Dukkipati. His broad research interests lie in the domain of clustering relational data, with the focus, at present, on statistical modelling of networks. In general, he is a big fan of probability theory and likes to apply it to all sorts of unconventional real-life situations, like finding the optimal time for going to the mess, etc.

Host Faculty: Prof. Sunil Chandran & Prof. Shalabh Bhatnagar


Series: Department Seminar
Title: Learning and Testing Causal Models with Interventions

  • Speaker: Saravanan Kandasamy (IISc)
  • Date and Time: Tuesday, April 03, 2018, 5:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Causality has long been a subject of study in philosophy, logic, computer science and beyond. We consider the framework of causal Bayesian networks as defined by Pearl (2009). Almost all prior work in this area assumes the existence of a conditional independence oracle, which implicitly requires an unbounded number of samples in general. We obtain efficient algorithms for goodness-of-fit testing, two-sample testing, and learning causal Bayesian networks on given graphs of bounded degree and bounded "scope of hidden variables", in the setting where the algorithm can only sample from interventions of the input causal models.
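
For reference, Pearl's definition that the talk builds on: in a causal Bayesian network over variables $V_1,\dots,V_n$ with parent sets $pa_i$, an intervention $do(X = x)$ yields the truncated factorization

$$ P\bigl(v \mid do(X = x)\bigr) \;=\; \prod_{i:\,V_i \notin X} P\bigl(v_i \mid pa_i\bigr) $$

for assignments $v$ consistent with $X = x$ (and $0$ otherwise). The algorithms above only assume the ability to draw samples from such interventional distributions, rather than access to a conditional independence oracle.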

Joint work with Jayadev Acharya, Arnab Bhattacharyya, and Constantinos Daskalakis.

Host Faculty: Dr. Arnab Bhattacharyya


Series: CSA Colloquium
Title: Synthesizing Heap Manipulations via Integer Linear Programming

  • Speaker: Prof. Subhajit Roy
  • Date and Time: Tuesday, April 03, 2018, 12:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Writing heap-manipulating programs is hard. Even though the high-level algorithms may be simple, it is often tedious to express them using low-level operations. We present a tool, Synlip, that uses expressions of intent in the form of concrete examples, drawn using box-and-arrow diagrams, to synthesize heap manipulations automatically. Instead of modeling the concrete examples in a monolithic manner, Synlip attempts to extract a set of patterns of manipulation that can be applied repeatedly to construct such programs. It then attempts to infer these patterns as linear transformations, leveraging the power of ILP solvers for program synthesis.

Speaker Bio:
Subhajit Roy is an Assistant Professor with the Department of Computer Science and Engineering, IIT Kanpur. His research interests are in formal techniques for automated debugging, program verification and synthesis, program profiling, compiler optimizations, and GPU algorithms. His recent publications appear in top conferences like ICSE, FSE, and OOPSLA. He obtained his PhD from the Dept. of CSA, IISc, guided by Prof. Y N Srikant.

Host Faculty: K. V. Raghavan


Series: Ph.D. Thesis Defense
Title: Static analysis and automated testing of file-processing programs

  • Speaker: Mr. Raveendra Kumar Medicherla
                   Ph.D. student (ERP)
                   Dept. of CSA
  • Faculty Advisor: Prof. K V Raghavan & Dr. M Girish Chandra (Organizational guide, TCS Limited)
  • Date and Time: Monday, April 02, 2018, 10:30 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Programs that process data residing in files are widely used in domains such as banking, healthcare, networking, and log-file analysis. For these programs, there is a strong need for automated tool support for analysis and transformation tasks such as bug detection, program understanding, and program re-modularization. However, due to the complex idioms used in file-processing programs, precise analysis of these programs as well as automated testing of these programs is a challenging task. These are the problems that we address in this thesis.

Our first contribution is about dataflow analysis of file-processing programs. Our key insight here is that the analysis of a file-processing program can be made more precise and useful if knowledge of the program's input file format is made available to the analysis and incorporated into the transfer functions. We show empirical results for this using standard dataflow problems. We also develop two novel applications of this technique -- the first one to specialize a program to a given "restricted" input file format, and the second one to verify whether a program "conforms" to a given input file format. Experiments on a set of benchmark programs showed that our approach provides more precise output than naive approaches.

For precise dataflow analysis of programs with multiple procedures, *context sensitive* inter-procedural data flow analysis is required. It is very expensive in its full form. As our second contribution, we propose a *prefix* approximation for context-sensitive analysis, wherein a prefix of the full context stack is used to tag dataflow facts. Our technique, which is in contrast with the standard *suffix* approximation technique, is designed to be more scalable on programs that have distinct *application* and *library* layers, while offering comparable or even better precision within the application layer. We analyzed several large enterprise file-processing programs using an implementation of our technique and compared it with the suffix approximation technique. Performance of our approach was several times better than that of the suffix approximation, while generally not suffering any precision penalty in the application layer of the programs. It is notable that this approach could be equally applicable in the context of non file-processing programs as well.

Our final contribution is regarding automated testing of file-processing programs. *Fuzz testing* is an automated software-testing technique that aims to uncover run-time errors by executing the target program with a large number of inputs generated automatically and systematically. *Grey box* fuzzing is a kind of fuzzing that employs lightweight instrumentation of the program, and observes the behaviors exhibited on a set of test runs. It then uses this information to generate new test inputs that might exercise other behaviors. This process is iterated until as many behaviors as possible get exhibited. AFL is a state-of-the-art fuzzer for file-processing programs. With AFL, if a test input causes a new region of code in the program to be reached, it is considered a new behavior. In other words, code coverage is the objective. However, a vulnerability at a program point may not get exploited in every visit to that program point; only certain program paths that lead to the point may expose the vulnerability. We extend grey-box fuzzing to take into account a given vulnerability of interest (e.g., buffer overruns), and to try explicitly to traverse paths that are likely to expose such vulnerabilities. We have implemented our approach by enhancing AFL, in the context of the buffer overrun vulnerability. We evaluated our approach on a suite of benchmark programs, and compared it with AFL in its standard form. Our approach uncovered many more buffer overflow errors in these benchmarks than standard AFL for the same given amount of fuzzing time.
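
To illustrate the coverage-guided loop described above, here is a self-contained toy sketch in C. It is illustrative only: real grey-box fuzzers such as AFL use compile-time instrumentation, far smarter mutation and scheduling, and persist crashing inputs; the target function and its magic bytes are invented for this example.

    /* Toy grey-box fuzzing loop: coverage-guided mutation of inputs. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_LEN   8
    #define QUEUE_MAX 1024

    static unsigned char coverage[256];    /* "edges" hit in the current run    */
    static unsigned char global_cov[256];  /* edges ever hit across all runs    */

    /* Invented target: records toy coverage, "crashes" on the prefix FUZ. */
    static int target(const unsigned char *buf, size_t len)
    {
        coverage[0] = 1;
        if (len > 0 && buf[0] == 'F') { coverage[1] = 1;
            if (len > 1 && buf[1] == 'U') { coverage[2] = 1;
                if (len > 2 && buf[2] == 'Z') return 1;   /* simulated overrun */
            }
        }
        return 0;
    }

    static int new_coverage(void)          /* did this run show new behavior?   */
    {
        int fresh = 0;
        for (int i = 0; i < 256; i++)
            if (coverage[i] && !global_cov[i]) { global_cov[i] = 1; fresh = 1; }
        return fresh;
    }

    int main(void)
    {
        static unsigned char queue[QUEUE_MAX][MAX_LEN];
        static size_t qlen[QUEUE_MAX];
        int qsize = 1;

        memcpy(queue[0], "AAAA", 4);       /* seed input                        */
        qlen[0] = 4;
        srand(1);

        for (long iter = 0; iter < 1000000 && qsize < QUEUE_MAX; iter++) {
            unsigned char buf[MAX_LEN];
            int parent = rand() % qsize;
            size_t len = qlen[parent];

            memcpy(buf, queue[parent], len);
            buf[rand() % len] = (unsigned char)rand();    /* mutate one byte   */

            memset(coverage, 0, sizeof coverage);
            if (target(buf, len)) {
                printf("crash found after %ld runs\n", iter + 1);
                return 0;
            }
            if (new_coverage()) {          /* keep inputs that explore new code */
                memcpy(queue[qsize], buf, len);
                qlen[qsize++] = len;
            }
        }
        printf("no crash found\n");
        return 0;
    }

The vulnerability-directed extension described in the thesis would, roughly speaking, reward inputs not just for new coverage but for making progress along paths likely to expose the chosen vulnerability.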


Series: Department Seminar
Title: Stochastic gradient algorithms on Riemannian manifolds.

  • Speaker: Prof. Hiroyuki Kasai,
                   The University of Electro-Communications, Tokyo
  • Date and Time: Friday, March 23, 2018, 1:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
There has been increasing interest recently in optimization algorithms on Riemannian manifolds in the machine learning, statistical learning, and signal processing fields. This talk addresses Riemannian stochastic gradient algorithms and introduces some recent state-of-the-art results for finite-sum problems. These include the variance-reduced R-SVRG and R-SQN-VR algorithms, as well as a new variant, R-SRG. In this talk, we will particularly address how they differ from, and what challenges arise in comparison to, the corresponding algorithms in the Euclidean case.
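
As background, the basic Riemannian stochastic gradient update for a finite-sum problem $\min_{x \in \mathcal{M}} \frac{1}{n}\sum_{i=1}^{n} f_i(x)$ on a manifold $\mathcal{M}$ is

$$ x_{t+1} \;=\; R_{x_t}\!\bigl(-\eta_t\, \mathrm{grad} f_{i_t}(x_t)\bigr), $$

where $i_t$ is a randomly sampled index, $\mathrm{grad} f_{i_t}(x_t)$ is the Riemannian gradient in the tangent space at $x_t$, and $R_{x_t}$ is a retraction (for example, the exponential map) that maps the update back onto $\mathcal{M}$. Variance-reduced variants such as R-SVRG additionally require a vector transport to combine gradients computed at different points.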

Speaker Bio:
Hiroyuki Kasai received his B.Eng., M.Eng., and Dr.Eng. degrees in Electronics, Information, and Communication Engineering from Waseda University, Tokyo, Japan in 1996, 1998, and 2000, respectively. He was a visiting researcher at British Telecommunication BTexacT Technologies, U.K. during 2000 ‒ 2001. He joined Network Laboratories, NTT DoCoMo, Japan, in 2002, and since 2007 has been an Associate Professor at The University of Electro-Communications, Tokyo. He was a senior policy researcher at Council for Science, Technology and Innovation Policy (CSTP), Cabinet Office of Japan, during 2011-2013. He was a visiting researcher at the Technical University of Munich, Germany during 2014-2015. His research interests include machine learning, optimization, and multimedia signal processing.

Host Faculty: Dr. Partha Pratim Talukdar


Series: Ph.D. Thesis Defense
Title: Geometric and Topological Methods for Biomolecular Visualization

  • Speaker: Talha Bin Masood
  • Faculty Advisor: Prof. Vijay Natarajan
  • Date and Time: Thursday, March 22, 2018, 3:00 PM
  • Venue: CSA Faculty Room, (Room No. 101, Ground floor)

Abstract
In recent years, techniques from the field of scientific visualization and computational geometry have increasingly found application in the study of biomolecular structure. In this thesis, we focus on the problems related to identification and visualization of cavities and channels in proteins. In the first part of the thesis, we describe two methods: one for extraction and visualization of biomolecular channels, and the other for extraction of cavities in uncertain data. We also describe the two software tools based on the proposed methods, called ChExVis and RobustCavities. In the second part of the thesis, we describe efficient GPU based parallel algorithms for two geometric structures widely used in the study of biomolecules. One of the structures we discuss is discrete Voronoi diagram which finds applications in channel visualization, while the other structure is alpha complex which is extremely useful in studying geometric and topological properties of biomolecules.

In this talk, we will focus on the software tool called ChExVis designed for extraction and visualization of channels in biomolecules. We will describe the alpha complex based framework for extraction of geometrically feasible channels. In addition, we will present novel ways of visualizing the amino-acids lining the channels together with their physico-chemical properties. We will discuss a few case studies that demonstrate the utility of the proposed method. Secondly, we will describe our integrated topological and geometric approach towards identifying biomolecular cavities in uncertain data. We propose an approach that connects user-specified cavities by computing an optimal conduit within the region occupied by the molecule. Later, we will also briefly touch upon GPU based algorithms for faster computation of discrete Voronoi diagram and alpha complex.

Speaker Bio:
Talha is a Ph.D. candidate in Computer Science at the Dept. of Computer Science and Automation, Indian Institute of Science, Bangalore. He received B.Tech. degree from Aligarh Muslim University, and M.E. degree from Indian Institute of Science, both in Computer Science and Engineering. His research interests include Scientific visualization, Computational geometry, Computational topology, and their applications to structural biology.


Series: Ph.D. Colloquium
Title: Interactions Between the Procrastination-Based Synchronization and Memory Allocator

  • Speaker: Mr. Aravinda Prasad
                   Ph.D. student (ERP)
                   Dept. of CSA
  • Faculty Advisor: Prof. K Gopinath
  • Date and Time: Tuesday, March 20, 2018, 12:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
The evolution of multicore systems with thousands of cores has led to the exploration of non-traditional procrastination-based synchronization techniques such as Read-Copy-Update (RCU). Deferred destruction is the fundamental technique used in such procrastination-based synchronization: writers, in order to synchronize with readers, defer the freeing of objects until all pre-existing readers have completed. This writer-wait time period is referred to as a grace period (GP). The readers, as a consequence, need not explicitly synchronize with the writers, resulting in low-overhead, wait-free read-side synchronization primitives.
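
To make the deferred-destruction idea concrete, below is a minimal userspace sketch of the RCU pattern written against the liburcu library (its default flavour is assumed; the config structure and function names are illustrative, and in-kernel RCU differs in detail).

    #include <stdio.h>
    #include <stdlib.h>
    #include <urcu.h>                  /* userspace RCU; link with -lurcu      */

    struct config { int value; };

    static struct config *shared_cfg;  /* pointer protected by RCU             */

    /* Reader: wait-free read-side critical section, no locks taken. */
    static int read_value(void)
    {
        int v;
        rcu_read_lock();
        v = rcu_dereference(shared_cfg)->value;
        rcu_read_unlock();
        return v;
    }

    /* Writer: publish a new version, then defer freeing the old one
     * until a grace period has elapsed (all pre-existing readers done). */
    static void update_value(int v)
    {
        struct config *fresh = malloc(sizeof *fresh);
        struct config *old   = shared_cfg;

        fresh->value = v;
        rcu_assign_pointer(shared_cfg, fresh);
        synchronize_rcu();             /* wait for the grace period            */
        free(old);                     /* deferred destruction: now safe       */
    }

    int main(void)
    {
        rcu_register_thread();         /* each thread using RCU must register  */
        shared_cfg = malloc(sizeof *shared_cfg);
        shared_cfg->value = 1;

        update_value(2);
        printf("%d\n", read_value());  /* prints 2 */

        free(shared_cfg);
        rcu_unregister_thread();
        return 0;
    }

The call to synchronize_rcu() is exactly the grace-period wait discussed in the thesis; the interactions studied arise because the free() that follows it is invisible to, and uncoordinated with, the memory allocator.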

We observe that the deferred destruction of objects leads to newer and more complex forms of interaction between the synchronization technique and the memory allocator. We study and analyze the impact of such interactions in operating system kernels for enterprise workloads, high-performance computing environments, idle systems and virtualized environments. We explore different solutions to efficiently handle deferred destruction, where our general solution integrates the synchronization technique with the memory allocator. Our general solution further exploits the interaction between the synchronization technique and the memory allocator to optimize both of them.

In the first part we analyze the implications of deferred destruction in enterprise environments. We observe that RCU determines when a deferred object is safe to reclaim and when it is actually reclaimed. As a result, the memory reclamation of deferred objects is completely oblivious of the memory allocator state, leading to poor memory allocator performance. Furthermore, we observe that the deferred objects provide hints about the future, namely, which memory regions are about to be freed. Although useful, these hints are not exploited, as the deferred objects are not "visible" to memory allocators. We design Prudence, a new dynamic memory allocator, that is tightly integrated with RCU to ensure visibility of deferred objects to the memory allocator. Prudence exploits optimizations based on the hints about the future during important state transitions. Our evaluation in the Linux kernel shows that Prudence performs 3.9x to 28x better in micro-benchmarks compared to the SLUB allocator. It also improves the overall performance perceptibly (4%-18%) for a mix of widely used synthetic and application benchmarks.

In the second part we analyze the implications of deferred destruction in idle and high-performance computing (HPC) environments, where the amount of memory waiting for reclamation in a grace period is negligible due to limited OS kernel activity. The default grace period computation is not only futile but also detrimental, as the CPU cycles consumed to compute a grace period lead to jitter in HPC environments and frequent CPU wake-ups in idle environments. We design a frugal approach to reduce RCU grace period overhead that reduces the number of grace periods by 68% to 99% and the CPU time consumed by grace periods by 39% to 99% for NAS parallel benchmarks and idle systems.

Finally, we analyze the implications of deferred destruction in a virtualized environment. Preemption of RCU readers can cause multi-second latency spikes and can increase the peak memory footprint inside VMs, which in turn can negate the server-consolidation benefits of virtualization. Although preemption of lock holders in VMs has been well studied, the corresponding solutions do not apply to RCU due to its exceedingly lightweight read-side primitives. We present the first evaluation of RCU-reader preemption in a virtualized environment. Our evaluation shows a 50% increase in the peak memory footprint and a 155% increase in fragmentation for a microbenchmark. We propose Conflux, a confluence of three mutually independent solutions that enhance the RCU synchronization technique, the memory allocator and the hypervisor to efficiently handle the RCU-reader preemption problem.

Speaker Bio:
Aravinda Prasad is a PhD (ERP) student in the Department of Computer Science and Automation at Indian Institute of Science, Bangalore. He is advised by Prof. K Gopinath and Dr. Vinayaka D Pandit (IBM).


Series: CSA Distinguished Lecture
Title: Software Challenges for Extreme Heterogeneity

  • Speaker: Prof. Vivek Sarkar, Georgia Institute of Technology
  • Date and Time: Tuesday, March 20, 2018, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
It is widely recognized that a major disruption is under way in computer hardware as processors strive to extend, and go beyond, the end-game of Moore's Law. This disruption will include new forms of heterogeneous processors, heterogeneous memories, near-memory computation structures, and, in some cases, Non-von Neumann computing elements. In this talk, we summarize the software challenges for these levels of "extreme heterogeneity", with a focus on the role of programming systems, which encompass programming models, compilers and related tools, and runtime systems. These challenges anticipate a new vision for programming systems that goes beyond their traditional role of mapping a specific subcomputation to a specific hardware platform, to an expanded world view in which programming systems manage the global selection of computation and data mappings of subcomputations on heterogeneous subsystems. We will discuss recent trends in programming models, compilers, and runtime systems that point the way towards addressing the challenges of extreme heterogeneity.

Speaker Bio:
Vivek Sarkar has been a Professor in the School of Computer Science, and the Stephen Fleming Chair for Telecommunications in the College of Computing, at Georgia Institute of Technology since August 2017. Prior to joining Georgia Tech, Sarkar was a Professor of Computer Science at Rice University, and the E.D. Butcher Chair in Engineering. During 2007 - 2017, Sarkar led Rice's Habanero Extreme Scale Software Research Laboratory, which focused on unifying parallelism and concurrency elements of high-end computing, multicore, and embedded software stacks (http://habanero.rice.edu). He also served as Chair of the Department of Computer Science at Rice during 2013 - 2016. Prior to joining Rice in 2007, Sarkar was Senior Manager of Programming Technologies at IBM Research. His research projects at IBM included the X10 programming language, the Jikes Research Virtual Machine for the Java language, the ASTI optimizer used in IBM's XL Fortran product compilers, and the PTRAN automatic parallelization system. Sarkar became a member of the IBM Academy of Technology in 1995, and was inducted as an ACM Fellow in 2008. He has been serving as a member of the US Department of Energy's Advanced Scientific Computing Advisory Committee (ASCAC) since 2009, and on CRA's Board of Directors since 2015.

Host Faculty: R. Govindarajan


Series: Department Seminar
Title: Speeding up MWU based approximation schemes for positive LPs and some applications

  • Speaker: Prof. Chandra Chekuri
                   Department of Computer Science
                   University of Illinois, UIUC
  • Date and Time: Friday, March 16, 2018, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
One of the recent exciting directions in algorithms is developing faster algorithms for various problems in discrete and combinatorial optimization via sophisticated continuous optimization methods. The multiplicative weight update (MWU) method is an older technique and has been used to approximately solve positive linear programs (packing, covering and mixed packing and covering LPs) starting from the early 90s. MWU based methods have some advantages when dealing with implicit linear programs that arise frequently in combinatorial optimization. We build upon several existing ideas and propose some new ones to obtain significantly faster algorithms for several problems. The talk will illustrate some examples including Metric-TSP, spanning tree packing, and geometric packing and covering problems.
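
For orientation, one standard form of the multiplicative weights framework (stated generically, not as this work's exact variant): keep a weight $w_i^{(1)} = 1$ for each of the $m$ constraints; in round $t$, an oracle optimizes against the single constraint obtained by averaging with the distribution $p^{(t)} = w^{(t)} / \sum_i w_i^{(t)}$, each constraint reports a gain or violation $m_i^{(t)} \in [-1, 1]$, and the weights are updated as

$$ w_i^{(t+1)} \;=\; w_i^{(t)}\bigl(1 - \epsilon\, m_i^{(t)}\bigr). $$

After roughly $O(\epsilon^{-2}\log m)$ rounds (up to width-dependent factors), the average of the oracle's answers approximately satisfies all constraints; for the implicit LPs mentioned above, the oracle is typically a simple combinatorial subproblem (for instance, computing a minimum-weight spanning tree when packing spanning trees), which is what makes the approach attractive.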

Based on joint work with Kent Quanrud.

Host Faculty: Dr. Anand Louis


Series: CSA Distinguished Lecture
Title: Predictive Control Methods for Networked Cyber-Physical Systems

  • Speaker: Prof. Daniel Quevedo,
                   Paderborn University, Germany
  • Date and Time: Wednesday, March 14, 2018, 3:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
The opportunities provided by feedback control of networked dynamical systems are enormous. Yet it is by no means clear how to harness modern communication, network and computation technologies to obtain high-quality designs. The main stumbling blocks stem from the significant gaps which exist between understanding of constituent parts and the challenges faced when bringing them together. The vast realm of applications of networked cyber-physical systems brings a variety of issues. A common thread is that many of the standard paradigms that allow the separation of computation, communications and systems control are no longer valid. Thus, the need for more holistic approaches arises. This talk illustrates how various communication and computation aspects can be integrated into suitable predictive control formulations.
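
To fix ideas, here is a generic model predictive control formulation of the kind such designs start from (stated generically; the talk's formulations add communication and computation aspects on top of it): at each sampling instant, given the current state $x_t$, solve

$$ \min_{u_0,\dots,u_{N-1}} \;\sum_{k=0}^{N-1} \ell(x_k, u_k) + V_f(x_N) \quad \text{s.t.} \quad x_{k+1} = f(x_k, u_k),\; x_0 = x_t,\; x_k \in \mathcal{X},\; u_k \in \mathcal{U}, $$

apply only the first input $u_0$, and re-solve at the next instant with the newly measured state. In a networked setting, the measurements and the computed inputs must traverse communication channels, which is where the integration challenges discussed in the talk arise.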

Speaker Bio:
Daniel Quevedo is Head of the Chair for Automatic Control (Regelungs- und Automatisierungstechnik) at Paderborn University, Germany. He received Ingeniero Civil Electrónico and M.Sc. degrees from the Universidad Técnica Federico Santa María, Chile, in 2000. In 2005, he was awarded the Ph.D. degree from the University of Newcastle in Australia. Dr. Quevedo was supported by a full scholarship from the alumni association during his time at the Universidad Técnica Federico Santa María and received several university-wide prizes upon graduating. He received the IEEE Conference on Decision and Control (CDC) Best Student Paper Award in 2003 and was also a finalist in 2002. In 2009 he was awarded a five-year Research Fellowship from the Australian Research Council. Prof. Quevedo is Associate Editor of the IEEE Control Systems Magazine, Editor of the International Journal of Robust and Nonlinear Control, and serves as Chair of the IEEE Control Systems Society Technical Committee on Networks & Communication Systems. His research interests are in control of networked systems and of power converters.

Host Faculty: Prof. Shalabh Bhatnagar


Series: Faculty Colloquium
Title: Provable Fairness

  • Speaker: Dr. Siddharth Barman
                   Assistant Professor
                   Dept. of CSA
  • Date and Time: Friday, March 09, 2018, 4:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Fairness is a fundamental consideration in many real-world allocation problems. This talk will provide a formal framework to address this consideration in the context of indivisible resources/goods. Specifically, we will deem an allocation of indivisible goods to be fair if it is envy-free up to one good (EF1), which means that each participant (in the allocation problem) prefers its own bundle over the bundle of any other participant, up to the removal of one good.
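
Formally, an allocation $A = (A_1, \dots, A_n)$ of the goods is EF1 if for every pair of participants $i, j$ with $A_j \neq \emptyset$ there exists a good $g \in A_j$ such that

$$ v_i(A_i) \;\ge\; v_i\bigl(A_j \setminus \{g\}\bigr), $$

where $v_i$ is participant $i$'s valuation function.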

In the first part of the talk, I will present recent work which leads to a somewhat surprising result: economic efficiency is not sacrificed by imposing fairness. Here, an allocation is said to be efficient if it satisfies Pareto efficiency. In particular, I will detail a (pseudo) polynomial-time algorithm for finding allocations that are both EF1 and Pareto efficient.

The second part of the talk will address fairness in sponsored search auctions, which are a critical revenue-generating mechanism employed by online search engines. Sponsored search auctions entail assigning advertisement slots among bidders/advertisers, and in these settings fairness is an important concern to ensure quality of service. I will describe our work on truthful auctions which are not only fair (i.e., lead to EF1 allocations) but also approximately maximize welfare.

This talk includes joint work with Sanath Krishnamurthy and Rohit Vaish (preprint https://arxiv.org/abs/1707.04731), along with Ganesh Ghalme, Shweta Jain, Pooja Kulkarni, and Shivika Narang.

Speaker Bio:
Siddharth Barman is an Assistant Professor of Computer Science and Ramanujan Fellow at the Indian Institute of Science. Siddharth's research interests lie in the design, analysis, and applications of algorithms. His current research focuses on algorithmic game theory and approximation algorithms. Siddharth received his Ph.D. from the University of Wisconsin - Madison and held a postdoctoral position at the California Institute of Technology.

Host Faculty: Prof. Sunil L Chandran
