UPCOMING SEMINARS 


PAST SEMINARS 

Series: Department Seminar
Title: Generalized Points-to Graphs: A New Abstraction for Memory in the Presence of Pointers
Speaker: Prof. Uday Khedker, IITB, Mumbai
Date and Time: Wednesday, May 23, 2018, 12:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract 
Scaling flow- and context-sensitive exhaustive points-to analysis is
a challenge that has not yet been met satisfactorily. For top-down
approaches, the problem centers on repeated analysis of the same
procedure; for bottom-up approaches, the abstractions used to represent
procedure summaries have not scaled while preserving precision.
Bottom-up approaches for points-to analysis require modelling unknown
pointees accessed indirectly through pointers that may be defined in the
callers.
We propose a novel abstraction called the Generalized Points-to Graph
(GPG), which views points-to relations as memory updates and generalizes
them using counts of indirection levels, leaving the unknown pointees
implicit. This allows us to construct GPGs as compact representations
of bottom-up procedure summaries in terms of memory updates and control
flow between them. Their compactness is ensured by the following
optimizations: strength reduction reduces the indirection levels,
redundancy elimination removes redundant memory updates and minimizes
control flow (without over-approximating data dependence between
memory updates), and call inlining enhances the opportunities for these
optimizations.
The effectiveness of GPGs lies in the fact that they discard as much
control flow as possible without losing precision (i.e., by preserving
data dependence without over-approximation). This is why the
GPGs are very small even for the main procedures, which contain the effect
of entire programs. This allows our implementation to scale to 158 kLoC
for C programs.
At a more general level, GPGs provide a convenient abstraction of
memory and memory transformers in the presence of pointers. Future
investigations can try to combine them with other abstractions for static
analyses that can benefit from points-to information.
Host Faculty: R. Govindarajan
Series: CSA Faculty Colloquium
Title: Vignettes in Model Discovery
Speaker: Prof. Chiranjib Bhattacharyya, Dept. of CSA
Date and Time: Friday, May 18, 2018, 4:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract
As data-driven models get increasingly adopted in different disciplines, the need to provide guarantees on the quality of learnt models is becoming increasingly important. Though these problems are intractable in general, we show that, through a judicious mix of domain-driven understanding and algorithms, it is possible in several cases to characterise the trade-off between the quality of models and the amount of data. Specifically, we report recent results in model discovery arising in different contexts, namely topic models, non-negative matrix factorization, Lasso, and deep networks.
Speaker Bio: Chiranjib Bhattacharyya is currently Professor in the Department of
Computer Science and Automation, Indian Institute of Science. His
research interests are in Machine Learning, Optimisation
and their applications to industrial problems. Recently he was inducted as a
Fellow of the Indian Academy of Engineering.
He joined the Department of CSA, IISc, in 2002 as an Assistant
Professor. Prior to joining the Department he was a postdoctoral fellow at
UC Berkeley. He holds BE and ME degrees, both in Electrical Engineering,
from Jadavpur University and the Indian Institute of Science,
respectively, and completed his PhD from the Department of Computer
Science and Automation, Indian Institute of Science. For more information
about his work please see http://mllab.csa.iisc.ernet.in.
Host Faculty: Prof. Sunil L Chandran & Prof. Shalabh Bhatnagar
Series: Ph.D. Thesis Defense
Title: Approximation Algorithms for Geometric Packing and Covering Problems
Speaker: Mr. Aniket Basu Roy, Ph.D. student, Dept. of CSA
Faculty Advisor: Prof. Sathish Govindarajan
Date and Time: Friday, May 11, 2018, 10:30 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract
We study a host of geometric optimization problems that are NP-hard and design polynomial-time approximation algorithms for them. More precisely, we are given a family of geometric objects and a point set, and study different variants and generalizations of Packing and Covering problems. Our objects of study are mostly families of non-piercing regions in the plane. We call a set of simple and connected regions non-piercing if for any pair of intersecting regions A and B, both A \ B and B \ A are connected regions. Disks, squares, and half-planes are examples of non-piercing regions, whereas lines and rectangles are examples of piercing objects. For most of the problems we have studied, a simple local search algorithm is enough to yield a PTAS, whose analysis of approximation requires a suitable bipartite graph on the local search solution and the optimal solution to have a balanced sublinear separator.
We study a generalization of the standard packing problem, called the Capacitated Region Packing problem, and its slight variant, the Shallow Packing problem. We devise a PTAS for both these problems with restrictions on the capacities. For the former problem the objects are non-piercing, whereas for the latter the objects can be even more general and need only have subquadratic union complexity, with the capacities at most some constant in both cases. The non-triviality here is to show that the intersection graph of arrangements with shallow depth, which is not planar, has balanced sublinear separators. To this end, we consider a graph on a point set and use the separator of this graph to obtain a separator of the intersection graph of the arrangement. We also study the Shallow Point Packing problem and are able to show that local search works here as well for unit capacity, and devise a constant-factor approximation algorithm using an adaptation of the Brönnimann-Goodrich technique for packing problems.
The Runaway Rectangle Escape problem is closely related to the above packing problems and is motivated by routing in printed circuit boards. Here we are given a set of axis-parallel rectangles inside a rectangular boundary R and a maximum allowed depth d. The objective is to extend the maximum number of input rectangles to one of the four sides of R such that the maximum depth of a point is at most d after extension. We show that local search gives a (2 + epsilon)-approximation for d = O(1). When the input rectangles are all disjoint, we devise a simple 4(1 + 1/(d - 1))-approximation algorithm. We also propose a randomized (1 + epsilon)-approximation algorithm based on randomized rounding, under some density assumptions. Lastly, we show the problem to be NP-hard even when the rectangles are unit squares aligned in a grid.
We study the Multi-Cover problem, a generalization of the Set Cover problem. We give a PTAS for non-piercing regions when the depth of every point is at most a constant. We also study different variants of the covering problem, such as the Unique Coverage and Prize-Collecting Set Cover problems. For Unique Coverage, we show that local search yields a PTAS for non-piercing regions of bounded depth and degree. For Prize-Collecting Set Cover, a PTAS works for non-piercing regions if the weight of every region lies in a range [1, a], where a is some constant.
Lastly, we consider variants of the Art Gallery problem called the Minimum (Horizontal) Sliding Cameras problems, M(H)SC. We are given an orthogonal polygon and need to deploy mobile guards, each of which walks along an orthogonal (horizontal) line segment and can guard a point inside the polygon if the perpendicular drawn from the point onto the line segment lies inside the polygon. Our local search algorithm yields a PTAS for the MHSC problem, and also for the MSC problem when the polygon has no holes. In order to do so, we prove an appropriate graph on orthogonal line segments to be planar by proposing a graph-drawing scheme.
Series: Ph.D. Thesis Defense
Title: Program Analyses to Support Memory-saving Refactorings in Java Programs
Speaker: Mr. Girish M. R., Ph.D. student (ERP), Dept. of CSA
Faculty Advisors: Prof. K V Raghavan & Dr. Anjaneyulu Pasala (organizational guide, Infosys Ltd)
Date and Time: Tuesday, May 08, 2018, 10:00 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract
Software commonly consumes unexpectedly high amounts of memory, frequently due
to programming idioms that are used to make software more reliable, maintainable
and understandable. In the case of modern object-oriented systems, this problem
is partly due to the creation of large numbers of coexisting isomorphic objects. A
significant reduction in heap usage can therefore be achieved if the code is
refactored to deduplicate or share objects whenever possible, instead of always
creating distinct but possibly isomorphic objects. Such a refactoring, which
employs a cache to keep track of objects created so far and to share them, is
termed object-sharing refactoring. In practice, object-sharing refactoring is
commonly used, as it has the potential to reduce memory utilization
significantly. However, manual refactoring is tedious and error-prone. To
support object-sharing refactoring, we make the following three distinct
contributions:
(1) We present a dynamic analysis technique for estimating, for all the
allocation sites in a program, for a given input, the reduction in heap memory
usage to be obtained by employing object sharing at each site. Experimentation
with our estimation tool on real Java programs indicates that nearly all
applications have potential for reduction of memory usage by object sharing,
with a mean savings of 12.62% per benchmark.
(2) We define a novel concept termed full initialization points (FIPs) to
characterize the points in the program where objects allocated at any chosen
allocation site become fully initialized. We also present a novel, conservative
static analysis to detect them. By introducing code to cache and share objects
at the FIPs suggested by our analysis, object-sharing refactoring was able to
obtain a mean memory savings of 11.4% on a set of real Java benchmarks.
(3) A standard points-to analysis approach, namely the object-sensitivity
approach, uses an equal level of precision to represent symbolic objects
allocated at all allocation sites in a program. This approach does not scale to
large programs unless low levels of precision are used. We propose a novel,
program-slicing-based approach to improve the precision of object-sensitivity
points-to analysis for verifying the safety of object-sharing refactoring, but
which has general applicability to several other verification tasks as well.
Our evaluation reveals that for a given time budget, our approach gives more
precise results than the most precise results possible under the baseline
object-sensitivity analysis on nine of the 10 DaCapo benchmarks. Our approach
exhibits 28% greater precision over the baseline object-sensitivity approach in
identifying allocation sites where object-sharing refactoring can be performed.
Series: M.Sc. (Engg) Colloquium
Title: Design of Trusted Market Platforms using Permissioned Blockchains and Game Theory
Speaker: Ms. Shivika Narang, M.Tech. (Research) student, Dept. of CSA, IISc
Faculty Advisor: Prof. Y. Narahari
Date and Time: Wednesday, May 02, 2018, 4:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract
The blockchain concept forms the backbone of a new wave of technology that promises to be deployed extensively in a wide variety of industrial and societal applications. Governments, financial institutions, banks, industrial supply chains, service companies, and even educational institutions and hospitals are investing in a substantial manner in the hope of improving business efficiency and operational robustness through the deployment of blockchain technology. This thesis work is concerned with designing trustworthy business-to-business (B2B) market platforms drawing upon blockchain technology and game theory.
The proposed platform is built upon three key ideas. First, we use permissioned blockchains with smart contracts as a technically sound approach for building the B2B platform. The blockchain deploys smart contracts that govern the interactions of enterprise buyers and sellers. Second, the smart contracts are designed using a rigorous analysis of a repeated game model of the strategic interactions between buyers and sellers. We show that such smart contracts induce honest behavior from buyers and sellers. Third, we embed cryptographic regulation protocols into the permissioned blockchain to ensure that business sensitive information is not revealed to the competitors. We believe our work is an important step in the direction of building a powerful B2B platform that maximizes social welfare and enables trusted collaboration between strategic enterprise agents.
Series: Ph.D. Colloquium
Title: On representations and spectral inequalities for non-uniform hypergraphs
Speaker: Ashwin Guha
Faculty Advisor: Prof. Ambedkar Dukkipati
Date and Time: Thursday, April 26, 2018, 3:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract
Spectral graph theory involves the study of the eigenvalues of graph connectivity matrices such as the adjacency matrix, the Laplacian, or the signless Laplacian. Spectral methods are often efficient and are widely used in solving graph problems. While graph theory has a variety of applications, it has often been observed in many real-world instances that the pairwise relationships captured in graphs do not describe the data in its entirety. To overcome this limitation, the notion of hypergraphs was introduced. Hypergraphs are a generalization of graphs in which an edge may span more than two nodes. They have been studied extensively from a combinatorial perspective. Recently there has been renewed interest in applying spectral methods to problems on hypergraphs, especially hypergraph partitioning and community detection, which have applications in machine learning. In order to employ spectral techniques, it is necessary to have an appropriate representation of the hypergraph.
In this work, we review some of the representations proposed in the literature and analyze their suitability for spectral algorithms. These representations include the straightforward clique expansion, star expansion, simplicial complexes and tensor representations. We discuss the relative merits and disadvantages of each representation and present some of the ways to extend existing results for graphs to hypergraphs, particularly to non-uniform hypergraphs where the edges are of varying sizes.
Several properties of a graph, such as its chromatic number, diameter, and randomness, can be determined from its spectrum. One of the most famous results is the Cheeger inequality, which relates the isoperimetric number of a graph and the second smallest eigenvalue of the graph Laplacian. Some of these results have been extended to hypergraphs, often in multiple ways under different representations. For example, a Cheeger-type inequality has been proved with respect to simplicial complexes. We provide a generalized version of the Cheeger inequality for non-uniform hypergraphs using a tensor approach, which is the major contribution of this work. We also prove a few other results, namely bounds on the analytic connectivity of general hypergraphs with respect to the diameter and degree sequence of the hypergraph, using tensors.
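For background, the classical Cheeger inequality for graphs, which the thesis generalizes to non-uniform hypergraphs, can be stated as follows (in the normalized-Laplacian convention; notation here is the standard one, not taken from the abstract):

```latex
\[
  \frac{\lambda_2}{2} \;\le\; h(G) \;\le\; \sqrt{2\,\lambda_2},
\]
where $\lambda_2$ is the second smallest eigenvalue of the normalized
Laplacian of $G$, and $h(G)$ is the isoperimetric (Cheeger) constant
\[
  h(G) \;=\; \min_{S \,:\, 0 < \mathrm{vol}(S) \le \mathrm{vol}(V)/2}
             \frac{|\partial S|}{\mathrm{vol}(S)}.
\]
```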
Series: M.Sc. (Engg) Thesis Defense
Title: Deep Learning Models for Few-shot and Metric Learning
Speaker: Mr. Akshay Mehrotra
Faculty Advisor: Prof. Ambedkar Dukkipati
Date and Time: Wednesday, April 25, 2018, 3:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract
Deep neural network based models have achieved unprecedented performance levels over many tasks in the traditional supervised setting and scale well with large quantities of data. On the other hand, improving performance in the low-data regime is understudied, and the sheer number of parameters versus the small amount of data makes it a challenging task. The problem of learning from a few instances of data is known as few-shot learning. A related problem is that of metric learning over disjoint training and testing classes, where the task is to learn a function that maps a data point to a low-dimensional manifold such that similar points are closer in the low-dimensional space. In this thesis, we propose a couple of deep learning approaches for few-shot learning and then extend them to the related problem of metric learning over disjoint training and testing classes.
We first argue that a more expressive pairwise similarity matching component is crucial for solving the few-shot learning problem. Towards that end, a network design with a learnable and more expressive similarity objective is proposed by extending the deep residual network. This residual pairwise network approximates a learned metric in the representation space and outperforms the previous state-of-the-art on the challenging miniImageNet dataset for few-shot learning, achieving over 54% accuracy on the 5-way classification task over unseen classes. We also evaluate the generalization behaviour of deep residual networks with a varying number of parameters over classes not observed during training.
Next, since regularization plays a key role in learning with small amounts of data, an additional generator network is proposed by extending the Generative Adversarial Network (GAN) framework for disjoint training and testing classes. This provides a strong regularizer by leveraging the generated data samples. The proposed model can generate plausible variations of exemplars over unseen classes and outperforms strong discriminative baselines with L2 regularization for few-shot classification tasks.
Finally, the idea of regularizing by adversarial generation is extended to the metric learning setting over disjoint training and testing classes. The proposed model is an efficient alternative to the hard negative mining scheme necessary when training with triplets in the large-margin nearest neighbours setting. The proposed model shows performance and efficiency improvements over models trained with the triplet loss without negative mining.
Series: Department Seminar
Title: Approximation Algorithms for Multidimensional Packing
Speaker: Dr. Arindam Khan, Postdoctoral Researcher, TU München
Date and Time: Monday, April 23, 2018, 11:00 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract
Multidimensional packing problems find numerous applications in robotics, cloud computing, smart grids, and many other scheduling and resource allocation problems. This talk will mainly focus on the two-dimensional geometric knapsack problem (2DK), a geometric variant of the classical knapsack problem. In this problem, we are given a set of axis-aligned rectangular items, each with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum-profit subset of items inside the knapsack, without rotating items. This is a very well studied NP-hard optimization problem and finds applications in scheduling, memory allocation, advertisement placement, etc. The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) was (2 + epsilon), by [Jansen and Zhang, SODA 2004]. After more than a decade, by introducing new algorithmic techniques, we break the 2-approximation barrier, achieving a polynomial-time (17/9 + epsilon) < 1.89 approximation.
This result appeared in FOCS 2017. This is a joint work with Waldo Galvez, Fabrizio Grandoni, Sandy Heydrich, Salvatore Ingala and Andreas Wiese. The talk will also briefly mention some related problems (such as vector packing and geometric bin packing) and other general research areas of the speaker.
Speaker Bio: Arindam Khan is a postdoctoral researcher at TU Munich, Germany. His research areas include approximation algorithms, online algorithms, and computational geometry. He obtained his Ph.D. in Algorithms, Combinatorics and Optimization (ACO) from the Georgia Institute of Technology, Atlanta, USA under Prof. Prasad Tetali. Previously he has been a research intern in the Theory group at Microsoft Research Redmond and at Microsoft Research Silicon Valley, USA; a visiting researcher at the Simons Institute, University of California, Berkeley, USA; a Blue Scholar at IBM Research India; and a researcher at IDSIA, Lugano, Switzerland.
Host Faculty: Prof. Matthew Jacob
Series: CSA Faculty Colloquium
Title: Challenges and opportunities in the era of heterogeneous computing
Speaker: Dr. Arkaprava Basu, Assistant Professor, Dept. of CSA
Date and Time: Friday, April 20, 2018, 4:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract
Sustaining the growth in computing capability is a challenge like never before, as one of the primary drivers of computing over the last five decades, namely the scaling of transistor technology, has stalled. This has necessitated a rethinking of how future computers should be designed and how software could program them.
In this talk, we will discuss how this upheaval in transistor technology is turning computers heterogeneous. In a heterogeneous computer, domain-specific accelerators like graphics processing units (GPUs) are coupled with a general-purpose CPU to achieve better performance and energy efficiency. However, programming such heterogeneous processors, comprising multiple disparate computing and memory elements, is itself a challenge. We will discuss a possible way to ease the programmability of this emerging class of processors, and also analyze the performance overheads of doing so. We will then elaborate on one of our recent works that takes a step towards making heterogeneous processors both easily programmable and performant at the same time. Finally, we will end the talk with a brief discussion of some emerging technological trends, their implications for future computers, and potential avenues for research to address impending challenges in computing.
Speaker Bio: Arkaprava Basu joined the Department of Computer Science and Automation in February 2018 as an assistant professor. He is broadly interested in exploring hardware and software interactions in the context of emerging technologies. Prior to joining IISc, he was a researcher at AMD Research in Austin, USA for four years. Before that, he obtained his PhD in computer science from the University of Wisconsin-Madison, USA.
Host Faculty: Prof. Sunil L Chandran
Series: Ph.D. Thesis Defense
Title: Model Checking Temporal Properties of Counter Systems
Speaker: Ms. K Vasanta Lakshmi, Ph.D. student, Dept. of CSA
Faculty Advisor: Prof. K V Raghavan
Date and Time: Friday, April 20, 2018, 2:30 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract
Counter systems are a well-known and powerful modeling notation for
specifying infinite-state systems. In this work we target the problem of
checking temporal properties of counter systems. We address both
predominant families of temporal property specifications, namely,
Computation Tree Logic (CTL) and Linear Temporal Logic (LTL).
We provide two algorithms. The first algorithm performs model checking of
the "liveness" fragment of CTL properties. The second algorithm performs
model checking of (the entire class of) LTL properties. Each of our
algorithms returns a Presburger formula that encodes the set of reachable
states of the given system that satisfy the given temporal property.
A novel aspect of our algorithms is that they use reachability analysis
techniques for counter systems, which are well studied in the literature,
as black boxes. This brings two advantages to the table. The first is that
these algorithms are able to compute precise answers on a much wider class
of systems than previous approaches. Secondly, they compute results by
iterative expansion or contraction, and hence permit an over-approximated
or under-approximated solution to be obtained at any point, even in cases
where termination may not occur. We prove the formal properties of our
algorithms, and also provide experimental results on standard benchmarks
(such as communication protocols and control mechanisms) to show the
precision as well as efficiency of our algorithms.
Series: Department Seminar
Title: Representation and Representative Families
Speaker: Dr. Fahad Panolan, Postdoctoral Researcher, University of Bergen, Norway
Date and Time: Monday, April 16, 2018, 11:00 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract
A subfamily F' of a family F, where each set in F is of size p, is a q-representative family for F if the following condition holds: for every set A in F and every set B of size q such that A is disjoint from B, there exists a set A' in F' such that A' is disjoint from B. By the classic result of Bollobás, every family of sets of size p has a q-representative family with at most $\binom{p+q}{p}$ sets (a bound independent of the size of F). The notion of representative families extends to matroids. A subfamily F' of a family F, where each set in F is of size p and is an independent set of a matroid M, is a q-representative family for F if the following condition holds: for every set A in F and every set B of size q such that A is disjoint from B and A union B is an independent set in M, there exists a set A' in F' such that A' is disjoint from B and A' union B is an independent set in M. By the two-families theorem of Lovász, every family of independent sets of size p in a linear matroid has a q-representative family with at most $\binom{p+q}{p}$ sets. We designed fast algorithms to compute q-representative families for both set systems (uniform matroids) and linear matroids. In this talk, I demonstrate how the efficient construction of representative families on set systems as well as linear matroids can be used to design fast parameterized algorithms.
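In symbols (standard LaTeX notation, restating the set-system definition given above), F' is a q-representative family for F when:

```latex
\[
  \forall A \in \mathcal{F} \;\; \forall B \text{ with } |B| = q,\
  A \cap B = \emptyset
  \;\Longrightarrow\;
  \exists A' \in \mathcal{F}' \text{ with } A' \cap B = \emptyset,
\]
and Bollob\'as's theorem guarantees that such an $\mathcal{F}'$ always
exists with $|\mathcal{F}'| \le \binom{p+q}{p}$.
```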
Our algorithm for the computation of a representative family on a linear matroid crucially uses the given linear representation of the matroid. Unfortunately, for some well-known matroids like transversal matroids and gammoids, so far we only know how to compute their linear representations in randomized polynomial time, and those algorithms use Polynomial Identity Testing (PIT). Derandomization of PIT and deterministic polynomial-time computation of linear representations of transversal matroids and gammoids are long-standing open problems. To overcome this, we define a notion of union representation of matroids and give a deterministic computation of a union representation of a transversal matroid in time quasi-polynomial in the rank of the matroid. The notion of union representation yields a fast deterministic computation of representative families on transversal matroids.
Host Faculty: Prof. Matthew Jacob
Series: Department Seminar
Title: The Billion Buildings Challenge
Speaker: Dr. Nipun Batra, Postdoctoral Researcher, University of Virginia, Charlottesville
Date and Time: Wednesday, April 11, 2018, 11:00 AM
Venue: CDS Seminar Hall
Abstract
Buildings contribute roughly one-third of the total energy consumption. Energy breakdown, i.e., providing energy consumption at a per-appliance level, can help occupants save up to 15% energy. In this talk, I will propose the billion buildings challenge: providing an energy breakdown to a billion buildings. Previous approaches required sensor hardware to be installed in each home and thus do not scale well. Towards the challenge, I will present my approach, called collaborative sensing, which provides an energy breakdown without installing any additional sensors. The key intuition is that the variation in energy consumption across buildings can be represented via a low-dimensional representation. Our approach leverages transfer-learning techniques to scale energy breakdown to regions with little or no instrumentation. Finally, I will talk about providing actionable insights based on the energy breakdown data, and about making energy breakdown research comparable.
Speaker Bio: Nipun Batra is a CS postdoc at the University of Virginia. Previously, he completed his PhD at IIIT Delhi and his bachelors at Delhi College of Engineering. His work has received various awards, such as the best doctoral consortium presentation at SenSys 2015 and the best demo award at BuildSys 2014, and he was a finalist in the UChicago Urban Indian challenge, among others. He recently served as the TPC chair of the NILM 2018 workshop.
Host Faculty: Dr. Yogesh Simmhan
Series: M.Sc. (Engg) Colloquium
Title: Adaptively Secure Primitives in the Random Oracle Model
Speaker: Mr. Pratik Sarkar, M.Sc. (Engg) student, Dept. of CSA
Faculty Advisor: Dr. Arpita Patra
Date and Time: Monday, April 09, 2018, 11:00 AM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract
Adaptive security embodies one of the strongest notions of security, allowing an adversary to corrupt parties at any point during protocol execution and gain access to their internal state. Since it models real-life situations such as "hacking", efficient adaptively-secure multiparty computation (MPC) protocols are desirable. Such protocols demand primitives such as zero knowledge (ZK), oblivious transfer (OT) and commitment schemes that are adaptively secure as building blocks. Efficient realizations of these primitives have been found to be challenging, especially in the no-erasure model. We make progress in this direction and provide efficient constructions that are Universally Composable in the random oracle model.
Zero Knowledge: We construct an efficient UC-secure constant-round ZK protocol from garbled circuits that is secure against adaptive corruptions, with communication linear in the size of the statement. Our work builds upon the efficient 5-round ZK protocol of Jawurek et al. (CCS 2013). We show that the ZK protocol can be made adaptively secure when the underlying oblivious transfer (OT) satisfies a mild adaptive security guarantee and a conditional verification technique is employed. The conditional verification technique gives us a three-round adaptively secure zero-knowledge argument in the plain random oracle model.
Oblivious Transfer: We present the first round-optimal framework for building adaptively-secure OT in the programmable random oracle (PRO) model, relying upon the framework of Peikert et al. (Crypto 2008), which is only statically secure. When instantiated with the Decisional Diffie-Hellman assumption, it incurs a nominal communication overhead over its static counterpart. We complete the picture of efficient OT constructions by presenting the first adaptively secure OT extension using a PRO.
Commitment Scheme: We present an adaptively secure, efficient commitment scheme relying solely on an observable random oracle (ORO). Our commitment scheme has a one-time offline setup phase, where a common reference string (crs) is generated between the parties using the ORO. In the online phase, the parties use the crs and the ORO to generate commitments in a non-interactive fashion.
Series: Research Student Colloquium
Title: 1) BE NOT ASHAMED OF THE PIMPLE - FLAUNT IT!
2) Modelling Dynamic Networks
Speakers: 1) Ullas Aparanji
2) Shubham Gupta
Date and Time: Friday, April 06, 2018, 4:00 PM
Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract 1) "Stopping by Code on a Summer Evening"
Whose fields these are I think I know.
His implementation is in a structure though
He will not see me stopping here
To watch his structure's size just grow.
My little compiler must think it queer
To parse a struct without definition near
With the header files, it shall make
The darkest translation unit of the year.
I give my numbing brain a shake
To ask if there is some mistake.
The implementation all the struct did sweep
Only a handle is all we have at stake.
The structure is scary, dark and deep.
I have the speaker to make me weep,
And pointers are here to ruin my sleep,
And pointers are here to ruin my sleep.

We'd like to have a design pattern to hide the implementation details of an interface from
ordinary clients, so that the implementation may be modified without the need to recompile the
modules dependent upon it. This benefits the programmer as well since a simple interface can
be created, and most details can be hidden in another file. This is important for providing binary
code compatibility through different versions of a shared library, for example.
In case you didn't get that previous paragraph, fret not. It's not relevant for following the talk
(but I guarantee that it will be comprehensible ex post facto). I will be discussing the very basic
nuances of what the Pimple is to demystify its effectiveness and usage, and will convince you
rigorously why you shouldn't despise the "measly" Pimple (oops, inadvertent pun :P). This talk
will be comprehensible to anybody with a very basic understanding of the C programming
language (in particular, structures in C, and a rudimentary understanding of pointers (though
some of you might argue that there is nothing rudimentary about pointers (by the way, the ability
to parse nested parentheses is not a strict requirement to understand the talk (although the
ability to bear my puns is an absolute necessity) :P))), and nothing deeper than this is required
to follow along :)
2) Networks, like everything else in the real world, change with time. For example, over time, new
nodes take birth while old nodes are lost into oblivion. Local disturbances in the network ripple
through it, to show their effect on the overall community structure. Since the behaviour,
stability and dynamics of the network are a function of the community structure, in order to
understand the networks, one needs to explicitly model the temporal aspect of these networks.
Most of the earlier work done in the domain of network modelling was restricted to static
networks (which are frozen in time). But as more computational power became available and
practical applications like large scale social network analysis emerged, researchers started
searching for good dynamic network models. In this talk, I will give a brief overview of the
literature on network modelling (both static and dynamic). I will then describe our recently
proposed Evolving Latent Space Model (ELSM) for dynamic networks and demonstrate its
performance on secondary tasks like community detection and link prediction. Can we see the
future using ELSM? Stay tuned!
Speaker Bio: 1) Ullas Aparanji is a second-year PhD student, advised by Prof. Y N Srikant. His technical area of
interest lies in Programming Languages, Compilers and Program Analysis. His non-technical
area of interest lies in poetry, puns and parentheses-nesting (hey, that's alliteration (almost)! :P)
2) Shubham Gupta is a PhD candidate at Statistics and Machine Learning Group, Department of
Computer Science and Automation, IISc. He is working under the guidance of Dr. Ambedkar
Dukkipati. His broad research interests lie in the domain of clustering relational data, with the
focus, at present, on statistical modelling of networks. In general, he is a big fan of probability
theory and likes to apply it to all sorts of unconventional real-life situations, like finding the
optimal time for going to the mess, etc.
Host Faculty: Prof. Sunil Chandran & Prof. Shalabh Bhatnagar
 Series: Department Seminar Title: Learning and Testing Causal Models with Interventions  Speaker: Saravanan Kandasamy (IISc)
 Date and Time: Tuesday, April 03, 2018, 5:00 PM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract Causality has long been a subject of study in philosophy, logic, computer science and beyond. We consider the framework of causal Bayesian networks as defined by Pearl (2009). Almost all prior work in this area assumes the existence of a conditional independence oracle, which implicitly requires an unbounded number of samples in general. We obtain efficient algorithms for goodness-of-fit testing, two-sample testing, and learning causal Bayesian networks on given graphs of bounded degree and bounded "scope of hidden variables", in the setting where the algorithm can only sample from interventions of the input causal models.
Joint work with Jayadev Acharya, Arnab Bhattacharyya, and Constantinos Daskalakis.
Host Faculty: Dr. Arnab Bhattacharyya
 Series: CSA Colloquium Title: Synthesizing Heap Manipulations via Integer Linear Programming  Speaker: Prof. Subhajit Roy
 Date and Time: Tuesday, April 03, 2018, 12:00 PM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract Writing heap-manipulating programs is hard. Even though the high-level
algorithms may be simple, it is often tedious to express them using low-level
operations. We present a tool — Synlip — that uses expression of intent in the
form of concrete examples drawn using box-and-arrow diagrams to synthesize
heap manipulations automatically. Instead of modeling the concrete examples in
a monolithic manner, Synlip attempts to extract a set of "patterns of
manipulation" that can be applied repeatedly to construct such programs. It
then attempts to infer these patterns as linear transformations, leveraging
the power of ILP solvers for program synthesis.
Speaker Bio: Subhajit Roy is an Assistant Professor with the Department of Computer Science
and Engineering, IIT Kanpur. His research interests are in formal techniques for
automated debugging, program verification and synthesis, program profiling,
compiler optimizations, and GPU algorithms. His recent publications appear in
top conferences like ICSE, FSE, and OOPSLA. He obtained his PhD from
the Dept. of CSA, IISc, guided by Prof. Y N Srikant.
Host Faculty: K. V. Raghavan
 Series: Ph.D. Thesis Defense Title: Static analysis and automated testing of file-processing programs  Speaker: Mr. Raveendra Kumar Medicherla
Ph.D. student (ERP)
Dept of CSA  Faculty Advisor: Prof. K V Raghavan & Dr. M Girish Chandra (Organizational guide, TCS Limited)
 Date and Time: Monday, April 02, 2018, 10:30 AM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract Programs that process data residing in files are widely used in domains
such as banking, healthcare, networking, and log-file analysis. For these
programs, there is a strong need for automated tool support for analysis
and transformation tasks such as bug detection, program understanding, and
program remodularization. However, due to the complex idioms used in
file-processing programs, precise analysis of these programs as well as
automated testing of these programs is a challenging task. These are the
problems that we address in this thesis.
Our first contribution is about dataflow analysis of file-processing
programs. Our key insight here is that the analysis of a file-processing
program can be made more precise and useful if knowledge of the input file
format of the program is made available to the analysis and incorporated
into the transfer functions. We show empirical results for this using
standard dataflow problems. We also develop two novel applications of this
technique: the first one to specialize a program to a given "restricted"
input file format, and the second one to verify if a program "conforms" to
a given input file format. Experiments on a set of benchmark programs
showed that our approach provides more precise output than naive
approaches.
For precise dataflow analysis of programs with multiple procedures,
*context-sensitive* interprocedural dataflow analysis is required. It is
very expensive in its full form. As our second contribution, we propose a
*prefix* approximation for context-sensitive analysis, wherein a prefix of
the full context stack is used to tag dataflow facts. Our technique, which
is in contrast with the standard *suffix* approximation technique, is
designed to be more scalable on programs that have distinct *application*
and *library* layers, while offering comparable or even better precision
within the application layer. We analyzed several large enterprise
file-processing programs using an implementation of our technique and
compared it with the suffix approximation technique. The performance of our
approach was several times better than that of the suffix approximation,
while generally not suffering any precision penalty in the application
layer of the programs. Notably, this approach is equally applicable to
programs that do not process files.
Our final contribution is regarding automated testing of file-processing
programs. *Fuzz testing* is an automated software-testing technique that
aims to uncover runtime errors by executing the target program with a
large number of inputs generated automatically and systematically. *Grey-box*
fuzzing is a kind of fuzzing that employs lightweight instrumentation
of the program, and observes the behaviors exhibited on a set of test
runs. It then uses this information to generate new test inputs that might
exercise other behaviors. This process is iterated until as many behaviors
as possible get exhibited. AFL is a state-of-the-art fuzzer for
file-processing programs. With AFL, if a test input causes a new region of
code in the program to be reached, it is considered as a new behavior. In
other words, code coverage is the objective. However, a vulnerability at a
program point may not get exploited in every visit to the program point;
only certain program paths that lead to that point may expose the
vulnerability. We extend grey-box fuzzing to take into account a given
vulnerability of interest (e.g., buffer overruns), and to try explicitly to
traverse paths that are likely to expose such vulnerabilities. We have
implemented our approach by enhancing AFL, in the context of the buffer
overrun vulnerability. We evaluated our approach on a suite of benchmark
programs, and compared it with AFL in its standard form. Our approach
uncovered many more buffer-overflow errors in these
benchmarks than standard AFL did for the same amount of fuzzing
time.
 Series: Department Seminar Title: Stochastic gradient algorithms on Riemannian manifolds.  Speaker: Prof. Hiroyuki Kasai,
The University of Electro-Communications, Tokyo  Date and Time: Friday, March 23, 2018, 1:00 PM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract There has been recently increasing interest in optimization algorithms on Riemannian manifolds in the machine learning, statistical learning, and signal processing fields. This talk addresses Riemannian stochastic gradient algorithms, and introduces some state-of-the-art results for finite-sum
problems obtained recently. These include the variance-reduced R-SVRG and R-SQN-VR algorithms, and also a new variant, R-SRG. In this talk, we will particularly address how they differ from, and what challenges exist in comparison to, the algorithms in the Euclidean case.
Speaker Bio: Hiroyuki Kasai received his B.Eng., M.Eng., and Dr.Eng. degrees in Electronics, Information, and Communication Engineering from Waseda University, Tokyo, Japan in 1996, 1998, and 2000, respectively. He was a visiting researcher at British Telecommunication BTexacT Technologies, U.K. during 2000-2001. He joined Network Laboratories, NTT DoCoMo, Japan, in 2002, and since 2007 has been an Associate Professor at The University of Electro-Communications, Tokyo. He was a senior policy researcher at the Council for Science, Technology and Innovation Policy (CSTP), Cabinet Office of Japan, during 2011-2013. He was a visiting researcher at the Technical University of Munich, Germany during 2014-2015. His research interests include machine learning, optimization, and multimedia signal processing.
Host Faculty: Dr. Partha Pratim Talukdar
 Series: Ph.D. Thesis Defense Title: Geometric and Topological Methods for Biomolecular Visualization  Speaker: Talha Bin Masood
 Faculty Advisor: Prof. Vijay Natarajan
 Date and Time: Thursday, March 22, 2018, 3:00 PM
 Venue: CSA Faculty Room, (Room No. 101, Ground floor)
Abstract In recent years, techniques from the fields of scientific visualization and computational geometry have increasingly found application in the study of biomolecular structure. In this thesis, we focus on problems related to the identification and visualization of cavities and channels in proteins. In the first part of the thesis, we describe two methods: one for extraction and visualization of biomolecular channels, and the other for extraction of cavities in uncertain data. We also describe two software tools based on the proposed methods, called ChExVis and RobustCavities. In the second part of the thesis, we describe efficient GPU-based parallel algorithms for two geometric structures widely used in the study of biomolecules. One of the structures we discuss is the discrete Voronoi diagram, which finds applications in channel visualization, while the other is the alpha complex, which is extremely useful in studying geometric and topological properties of biomolecules.
In this talk, we will focus on the software tool called ChExVis, designed for extraction and visualization of channels in biomolecules. We will describe the alpha complex based framework for extraction of geometrically feasible channels. In addition, we will present novel ways of visualizing the amino acids lining the channels together with their physicochemical properties. We will discuss a few case studies that demonstrate the utility of the proposed method. Secondly, we will describe our integrated topological and geometric approach towards identifying biomolecular cavities in uncertain data. We propose an approach that connects user-specified cavities by computing an optimal conduit within the region occupied by the molecule. Later, we will also briefly touch upon GPU-based algorithms for faster computation of the discrete Voronoi diagram and the alpha complex.
Speaker Bio: Talha is a Ph.D. candidate in Computer Science at the Dept. of Computer Science and Automation, Indian Institute of Science, Bangalore. He received B.Tech. degree from Aligarh Muslim University, and M.E. degree from Indian Institute of Science, both in Computer Science and Engineering. His research interests include Scientific visualization, Computational geometry, Computational topology, and their applications to structural biology.
 Series: Ph.D. Colloquium Title: Interactions Between the Procrastination-Based
Synchronization and Memory Allocator  Speaker: Mr. Aravinda Prasad
Ph.D. student (ERP)
Dept. of CSA  Faculty Advisor: Prof. K Gopinath
 Date and Time: Tuesday, March 20, 2018, 12:00 PM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract The evolution of multicore systems with thousands of cores has led to
the exploration of non-traditional procrastination-based synchronization
techniques such as Read-Copy-Update (RCU). Deferred destruction is the
fundamental technique used in such procrastination-based synchronization,
where writers, in order to synchronize with the readers, defer the freeing
of objects until the completion of all pre-existing readers. This
writer-wait time period is referred to as a grace period (GP). The
readers, as a consequence, need not explicitly synchronize with the
writers, resulting in low-overhead, wait-free read-side synchronization
primitives.
We observe that the deferred destruction of objects leads to new and
complex forms of interaction between the synchronization technique and
the memory allocator. We study and analyze the impact of such
interactions in operating system kernels for enterprise workloads,
high-performance computing environments, idle systems and virtualized
environments. We explore different solutions to efficiently handle
deferred destruction, where our general solution integrates the
synchronization technique with the memory allocator. Our general
solution further exploits the interaction between the synchronization
technique and the memory allocator to optimize both of them.
In the first part we analyze the implications of deferred destruction in
enterprise environments. We observe that RCU determines both when a
deferred object is safe to reclaim and when it is actually reclaimed. As
a result, the memory reclamation of deferred objects is completely
oblivious of the memory allocator's state, leading to poor memory allocator
performance. Furthermore, we observe that the deferred objects provide
hints about the future, indicating memory regions that are about to be
freed. Although useful, these hints are not exploited, as the deferred objects
are not "visible" to memory allocators. We design Prudence, a new
dynamic memory allocator that is tightly integrated with RCU to ensure
visibility of deferred objects to the memory allocator. Prudence
exploits optimizations based on these hints about the future during
important state transitions. Our evaluation in the Linux kernel shows
that Prudence performs 3.9x to 28x better in micro-benchmarks compared
to the SLUB allocator. It also improves the overall performance perceptibly
(4%-18%) for a mix of widely used synthetic and application benchmarks.
In the second part we analyze the implications of deferred destruction in
idle and high-performance computing (HPC) environments, where the amount
of memory awaiting reclamation in a grace period is negligible due to
limited OS kernel activity. The default grace-period computation is not
only futile but also detrimental, as the CPU cycles consumed to compute a
grace period lead to jitter in HPC environments and frequent CPU wakeups in idle
environments. We design a frugal approach to reducing RCU grace-period
overhead that reduces the number of grace periods by 68% to 99% and the
CPU time consumed by grace periods by 39% to 99% for the NAS parallel
benchmarks and idle systems.
Finally, we analyze the implications of deferred destruction in a
virtualized environment. Preemption of RCU readers can cause
multi-second latency spikes and can increase the peak memory footprint
inside VMs, which in turn can negate the server-consolidation benefits of
virtualization. Although preemption of lock holders in VMs has been
well studied, the corresponding solutions do not apply to RCU due to its
exceedingly lightweight read-side primitives. We present the first
evaluation of RCU-reader preemption in a virtualized environment. Our
evaluation shows a 50% increase in the peak memory footprint and a 155%
increase in fragmentation for a micro-benchmark. We propose Conflux, a
confluence of three mutually independent solutions that enhances the RCU
synchronization technique, the memory allocator and the hypervisor to
efficiently handle the RCU-reader preemption problem.
Speaker Bio: Aravinda Prasad is a PhD (ERP) student in the Department of Computer
Science and Automation at Indian Institute of Science, Bangalore. He is
advised by Prof. K Gopinath and Dr. Vinayaka D Pandit (IBM).

