Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India

UPCOMING SEMINARS

Series: CSA Distinguished Lecture
Title: Beating Floating Point at its Own Game: Posit Arithmetic

  • Speaker: Prof. John L. Gustafson
                   National University of Singapore
  • Date and Time: Thursday, October 26, 2017, 4:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
A new data type called "posit" is designed as a direct drop-in replacement for IEEE Standard 754 floating-point numbers (floats). Unlike earlier forms of universal number (unum) arithmetic, posits do not require interval arithmetic or variable size operands; like floats, they round if an answer is inexact. However, they provide compelling advantages over floats, including larger dynamic range, higher accuracy, better closure, bitwise identical results across systems, simpler hardware, and simpler exception handling. Posits never overflow to infinity or underflow to zero, and Not-a-Number (NaN) indicates an action instead of a bit pattern. A posit processing unit takes less circuitry than an IEEE float FPU. With lower power use and smaller silicon footprint, the posit operations per second (POPS) supported by a chip can be significantly higher than the FLOPS using similar hardware resources. GPU accelerators and Deep Learning processors, in particular, can do more per watt and per dollar with posits, yet deliver superior answer quality.
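
For readers who want to see the number format concretely, here is a minimal Python sketch of posit decoding (sign, regime, exponent, fraction), written against the published Type III posit definition; details such as the treatment of truncated exponent bits follow the common interpretation, and the function name is ours.

    def decode_posit(bits, n, es):
        """Illustrative decoder for an n-bit posit with es exponent bits."""
        mask = (1 << n) - 1
        if bits == 0:
            return 0.0
        if bits == 1 << (n - 1):
            return float("nan")                  # the single Not-a-Real pattern
        sign = -1 if (bits >> (n - 1)) & 1 else 1
        if sign < 0:
            bits = (-bits) & mask                # negatives: two's complement first
        body = bits & ((1 << (n - 1)) - 1)       # bits below the sign bit
        i = n - 2                                # index of the first regime bit
        first = (body >> i) & 1
        run = 0
        while i >= 0 and (body >> i) & 1 == first:
            run += 1
            i -= 1
        k = run - 1 if first else -run           # regime value
        i -= 1                                   # skip the regime terminator
        exp = 0
        for _ in range(es):                      # up to es exponent bits (missing bits = 0)
            exp <<= 1
            if i >= 0:
                exp |= (body >> i) & 1
                i -= 1
        fbits = i + 1                            # remaining bits form the fraction
        frac = body & ((1 << fbits) - 1) if fbits > 0 else 0
        useed = 2 ** (2 ** es)
        return sign * (useed ** k) * (2.0 ** exp) * (1 + frac / (1 << fbits if fbits > 0 else 1))

    # e.g. decode_posit(0b01000000, 8, 0) == 1.0; decode_posit(0b01111111, 8, 0) == 64.0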

A comprehensive series of benchmarks compares floats and posits for decimals of accuracy produced for a set precision. Low precision posits provide a better solution than approximate computing methods that try to tolerate decreased answer quality. High precision posits provide more correct decimals than floats of the same size; in some cases, a 32-bit posit may safely replace a 64-bit float. In other words, posits beat floats at their own game.

Speaker Bio:
Dr. John L. Gustafson is an applied physicist and mathematician who is a Visiting Scientist at A*STAR and a Professor at NUS. He is a former Director at Intel Labs and a former Chief Product Architect at AMD. A pioneer in high-performance computing, he introduced cluster computing in 1985 and first demonstrated scalable massively parallel performance on real applications in 1988. This work led to what is now known as Gustafson's Law and won him the inaugural ACM Gordon Bell Prize. He is also a recipient of the IEEE Computer Society's Golden Core Award.

Host Faculty: Prof. R. Govindarajan


 

PAST SEMINARS

Series: Department Seminar
Title: Evidence of non-Gaussianity in the Cosmic Microwave Background & Visualizing Cosmic Structures

  • Speaker: Dr. Pratyush Pranav
                   Technion, Israel
  • Date and Time: Monday, October 09, 2017, 11:30 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
The standard cosmological paradigm asserts that the Universe underwent a phase of rapid inflation moments after its birth. In their simplest form, inflationary theories predict the fluctuations of the matter distribution to be Gaussian distributed. Consequently, deviations from Gaussianity, if found, will point to novel physics driving the early phase of the Universe. In this talk, I will present evidence of non-Gaussianity in the CMB, obtained through new topological methods.

In a related aspect, as the Universe evolves, the initial small perturbations in the matter distribution give rise to structures like galaxies and groups of galaxies, woven into a three-dimensional web-like network. Understanding the properties of this Cosmic Web is crucial to understanding the nature of the matter distribution, and visualizing these cosmic structures plays a crucial role in this effort. In an ongoing collaboration with Prof. Natarajan's visualization lab, we have developed visualization software based on topological methods. I will briefly describe the methodology and some results, with a view to developing this further.

Speaker Bio:
Pratyush Pranav obtained his B.Sc. at St. Stephen's College, Delhi University, and his M.S. degree at IISc. Thereafter he moved to the Kapteyn Astronomical Institute, Groningen, the Netherlands, to obtain a PhD. He is currently in transition, finishing his postdoctoral position at Technion, Israel, and starting a second one at the École Normale Supérieure in Lyon, France.

Host Faculty: Prof. Vijay Natarajan


 

Series: Ph.D. Thesis Defense
Title: Optimization Algorithms for Deterministic, Stochastic and Reinforcement Learning Settings

  • Speaker: Mr. Ajin George Joseph
  • Faculty Advisor: Prof. Shalabh Bhatnagar
  • Date and Time: Thursday, October 05, 2017, 2:30 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Optimization is a very important field with diverse applications in the physical, social and biological sciences and in various areas of engineering. It appears widely in machine learning, information retrieval, regression, estimation, operations research and a wide variety of computing domains. The subject is studied deeply, both theoretically and experimentally, and several algorithms are available in the literature. These algorithms, which can be executed (sequentially or concurrently) on a computing machine, explore the space of input parameters to seek high-quality solutions to the optimization problem, with the search mostly guided by certain structural properties of the objective function. In certain situations, the setting might additionally demand the ``absolute optimum'' or solutions close to it, which makes the task even more challenging.

In this thesis, we propose an optimization algorithm which is ``gradient-free'', i.e., it does not employ any knowledge of the gradient or higher-order derivatives of the objective function, but rather uses objective function values themselves to steer the search. The proposed algorithm is particularly effective in a black-box setting, where a closed-form expression of the objective function is unavailable and the gradient or higher-order derivatives are hard to compute or estimate. Our algorithm is inspired by the well-known cross entropy (CE) method. The CE method is a model-based search method for solving continuous/discrete multi-extremal optimization problems in which the objective function has minimal structure. The proposed method searches the statistical manifold of the parameters that identify the probability distribution/model defined over the input space, seeking the degenerate distribution concentrated on the global optima (assumed to be finite in number). In the early part of the thesis, we propose a novel stochastic approximation version of the CE method for the unconstrained optimization problem, where the objective function is real-valued and deterministic. The basis of the algorithm is a stochastic process of model parameters which depends probabilistically on the past history, where we reuse all the previous samples obtained in the process up to the current instant through discounted averaging. This approach saves on the overall computational and storage cost. Our algorithm is incremental in nature and possesses attractive features such as stability, computational and storage efficiency and better accuracy. We further investigate, both theoretically and empirically, the asymptotic behaviour of the algorithm and find that it exhibits global optimum convergence for a particular class of objective functions.
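
As a point of reference for the CE machinery the abstract builds on, the following is a minimal generic cross-entropy search over a Gaussian model, with a smoothing factor playing the role of the discounted averaging mentioned above; it is an illustrative sketch, not the stochastic-approximation variant proposed in the thesis, and all names and constants are ours.

    import numpy as np

    def cross_entropy_minimize(f, mu, sigma, n_iter=100, n_samples=50,
                               elite_frac=0.1, alpha=0.7):
        """Generic CE search over a Gaussian model N(mu, sigma^2); alpha is the
        smoothing/discount factor applied to the model update."""
        n_elite = max(1, int(elite_frac * n_samples))
        for _ in range(n_iter):
            x = np.random.normal(mu, sigma, size=(n_samples, len(mu)))
            scores = np.apply_along_axis(f, 1, x)
            elite = x[np.argsort(scores)[:n_elite]]          # best samples (minimization)
            mu = alpha * elite.mean(axis=0) + (1 - alpha) * mu
            sigma = alpha * elite.std(axis=0) + (1 - alpha) * sigma
        return mu

    # Example: the model should concentrate near the optimum (3, 3) of a quadratic.
    opt = cross_entropy_minimize(lambda v: ((v - 3.0) ** 2).sum(),
                                 mu=np.zeros(2), sigma=5.0 * np.ones(2))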

Further, we extend the algorithm to solve the simulation/stochastic optimization problem. In stochastic optimization, the objective function possesses a stochastic character, and the underlying probability distribution is in most cases hard to comprehend and quantify. This begets a more challenging optimization problem, where the difficulty arises primarily from the hardness of computing the objective function values for various input parameters with absolute certainty; one can only hope to obtain noise-corrupted objective function values. Settings of this kind arise when the objective function is evaluated using a continuously evolving dynamical system or through a simulation. We propose a multi-timescale stochastic approximation algorithm, where we integrate an additional timescale to accommodate the noisy measurements and attenuate the effects of the noise asymptotically. We find that if the objective function and the measurement noise are well behaved and the timescales are compatible, then our algorithm can generate high-quality solutions.

In the later part of the thesis, we propose algorithms for reinforcement learning/Markov decision processes (MDPs) using the optimization techniques developed earlier. An MDP can be considered a generalized framework for modeling planning under uncertainty. We provide a novel algorithm for the problem of prediction in reinforcement learning, i.e., estimating the value function of a given stationary policy of a model-free MDP (with large state and action spaces) using the linear function approximation architecture. Here, the value function is defined as the long-run average of the discounted transition costs. The resource requirement of the proposed method, in terms of computational and storage cost, scales quadratically in the size of the feature set. The algorithm is an adaptation of the multi-timescale variant of the CE method proposed in the earlier part of the thesis for simulation optimization. We also provide both theoretical and empirical evidence to corroborate the credibility and effectiveness of the approach.
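
For context, the prediction problem described here is value estimation under linear function approximation; the sketch below shows the textbook TD(0) baseline for that problem (whose per-step cost is linear in the number of features), not the CE-based estimator of the thesis, and the environment/feature interfaces are placeholders we introduce.

    import numpy as np

    def td0_linear(env_step, phi, w_dim, policy, gamma=0.95, episodes=200, alpha=0.05):
        """Textbook TD(0) with linear function approximation: V(s) ~ w . phi(s).
        Placeholder interfaces: env_step(s, a) -> (cost, next_s, done); phi(s) -> feature vector."""
        w = np.zeros(w_dim)
        for _ in range(episodes):
            s, done = 0, False                   # assume episodes start in state 0
            while not done:
                cost, next_s, done = env_step(s, policy(s))
                target = cost + (0.0 if done else gamma * (w @ phi(next_s)))
                w += alpha * (target - w @ phi(s)) * phi(s)
                s = next_s
        return w                                 # estimated value: V(s) ~ w @ phi(s)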

In the final part of the thesis, we consider a modified version of the control problem in a model-free MDP with large state and action spaces. The control problem most commonly addressed in the literature is to find an optimal policy which maximizes the value function, i.e., the long-run average of the discounted transition payoffs. Contemporary methods also presume access to a generative model/simulator of the MDP, with the hidden premise that observations of the system behaviour in the form of sample trajectories can be obtained with ease from the model. We consider a modified version, where the cost function to be optimized is a real-valued performance function (possibly non-convex) of the value function, and where the optimal policy has to be found without presuming access to the generative model. We propose a stochastic approximation algorithm for this control problem. The only information we presuppose to be available to the algorithm is a sample trajectory generated using an a priori chosen behaviour policy. The algorithm is data (sample trajectory) efficient, stable, robust, as well as computationally and storage efficient. We provide a proof of convergence of our algorithm to a high-performing policy relative to the behaviour policy.


 

Series: Department Seminar
Title: Reachability for dynamic parametric processes

  • Speaker: Prof. Helmut Seidl,
                   Technical University of Munich
  • Date and Time: Wednesday, September 27, 2017, 3:30 PM
  • Venue: CSA Lecture Hall (Room No. 117, Ground Floor)

Abstract
In a dynamic parametric process, every subprocess may spawn arbitrarily many identical child processes that may communicate either over global variables or over local variables shared with their parent. We show that reachability for dynamic parametric processes is decidable under mild assumptions. These assumptions are met if individual processes are realized by pushdown systems, or even higher-order pushdown systems. We also provide algorithms for subclasses of pushdown dynamic parametric processes, with complexity ranging between NP and DEXPTIME.

These results are based on joint work with Anca Muscholl and Igor Walukiewicz.

Speaker Bio:
Helmut Seidl is a Professor at the Institute for Informatics, at the Technical University of Munich, Germany. His research interests include Program Analysis, Semi-Structured Data, and Tree Automata.

Host Faculty: Prof. Deepak D Souza


 

Series: Ph.D. Colloquium
Title: Deep Learning with Minimal Supervision

  • Speaker: Mr. Gaurav Pandey
                   Ph.D. student
                   Dept. of CSA
  • Faculty Advisor: Prof. Ambedkar Dukkipati
  • Date and Time: Tuesday, September 26, 2017, 12:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
In recent years, deep neural networks have achieved extraordinary performance on supervised learning tasks. Convolutional neural networks (CNNs) have vastly improved the state of the art for most computer vision tasks, including object recognition and segmentation. However, their success relies on the presence of a large amount of labeled data. In contrast, relatively little work has been done in deep learning to handle scenarios where access to ground truth is limited, partial or completely absent. In this thesis, we propose deep neural network models to handle challenging problems with limited labeled information.

Our first contribution is a neural architecture that allows for the extraction of infinitely many features from an object while allowing for tractable inference. This is achieved by using the `kernel trick', that is, we express the inner product in the infinite dimensional feature space as a kernel. The kernel can either be computed exactly for single layer feedforward networks, or approximated by an iterative algorithm for deep convolutional networks. The corresponding models are referred to as stretched deep networks (SDN). We show that when the amount of training data is limited, SDNs with random weights drastically outperform fully supervised CNNs with similar architectures.
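
A generic illustration of the kernel trick referred to above, using kernel ridge regression with an RBF kernel: inner products in the (infinite-dimensional) feature space are computed through the kernel function alone. This is not the SDN kernel itself, and the function names are ours.

    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        """k(x, y) = exp(-gamma * ||x - y||^2): an inner product in an infinite-
        dimensional feature space, evaluated without ever forming the features."""
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def kernel_ridge_fit(X, y, lam=1e-2, gamma=1.0):
        K = rbf_kernel(X, X, gamma)
        return np.linalg.solve(K + lam * np.eye(len(X)), y)      # dual coefficients

    def kernel_ridge_predict(X_train, coef, X_test, gamma=1.0):
        return rbf_kernel(X_test, X_train, gamma) @ coef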

While SDNs perform reasonably well for classification with limited labeled data, they can not utilize unlabeled data which is often much easier to obtain. A common approach to utilize unlabeled data is to couple the classifier with an autoencoder (or its variants) thereby minimizing reconstruction error in addition to the classification error. We discuss the limitations of decoder-based architectures and propose a model that allows for the utilization of unlabeled data without the need of a decoder. This is achieved by jointly modeling the distribution of data and latent features in a manner that explicitly assigns zero probability to unobserved data. The joint probability of the data and the latent features is maximized using a two step EM-like procedure. Depending on the task, we allow the latent features to be one-hot or real-valued vectors and define a suitable prior on the features. For instance, one-hot features correspond to class labels and are directly used for the unsupervised and semi-supervised classification task, whereas real-valued feature vectors are fed as input to simple classifiers for auxiliary supervised discrimination tasks. The proposed model, which we refer to as discriminative encoder (or DisCoder), is flexible in the type of latent features that it can capture. The proposed model achieves state-of-the-art performance on several challenging datasets.

Having addressed the problem of utilizing unlabeled data for classification, we move to a domain where obtaining labels is far more expensive: semantic image segmentation. Explicitly labeling each pixel of an image with the object that the pixel belongs to is an expensive operation, in terms of time as well as effort. Currently, only a few classes of images have been densely (pixel-level) labeled, and even among these classes, only a few images per class have pixel-level supervision. Models that rely on densely labeled images cannot utilize the much larger set of weakly annotated images available on the web. Moreover, these models cannot learn segmentation masks for new classes for which there is no densely labeled data.

Hence, we propose a model for utilizing weakly-labeled data for semantic segmentation of images. This is achieved by generating fake labels for each image, while simultaneously forcing the output of the CNN to satisfy the mean-field constraints imposed by a conditional random field. We show that one can enforce the CRF constraints by forcing the distribution at each pixel to be close to the distribution of its neighbors. The proposed model is very fast to train and achieves state-of-the-art performance on the popular VOC-2012 dataset for the task of weakly supervised semantic image segmentation.


 

Series: Department Seminar
Title: SOPER: Discovering the Influence of Fashion and the Many Faces of User from Session Logs using Stick Breaking Process

  • Speaker: Ms. Lucky Dhakad
                   Flipkart
  • Date and Time: Monday, September 25, 2017, 2:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Recommending lifestyle articles is of immediate interest to the e-commerce industry and is beginning to attract research attention. Commonly followed strategies, such as recommending popular items, are inadequate for this vertical for two reasons. Firstly, users have their own personal preferences over items, referred to as personal styles, which lead to the long-tail phenomenon. Secondly, each user displays multiple personas, and each persona has a preference over items which could be dictated by a particular occasion; e.g., dressing for a party is different from dressing to go to office. Recommendation in this vertical crucially depends on discovering styles for each of the multiple personas. There is no literature which addresses this problem.

We posit a generative model which describes each user by a Simplex Over PERsona, SOPER, where a persona is described as the individual's preferences over prevailing styles, modeled as topics over items. The choice of a simplex and the long-tail nature necessitate the use of a stick-breaking process. The main technical contribution is an efficient collapsed Gibbs sampling based algorithm for solving the attendant inference problem.
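
A minimal sketch of the stick-breaking construction the model relies on, which yields simplex weights with the heavy-head/long-tail behaviour described above; this shows only the generative step, not the collapsed Gibbs sampler, and the parameter names are illustrative.

    import numpy as np

    def stick_breaking_weights(alpha, k_max, seed=0):
        """Stick-breaking construction: v_k ~ Beta(1, alpha),
        w_k = v_k * prod_{j<k} (1 - v_j); weights lie on the simplex (up to truncation)."""
        rng = np.random.default_rng(seed)
        v = rng.beta(1.0, alpha, size=k_max)
        remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
        return v * remaining          # a few dominant styles plus a long tail

    weights = stick_breaking_weights(alpha=2.0, k_max=50)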

Trained on large-scale interaction logs spanning more than half a million sessions collected from an e-commerce portal, SOPER outperforms previous baselines by a large margin of 35% in identifying personas. Consequently, it comprehensively outperforms several competitive baselines on the task of recommending from a catalogue of roughly 150 thousand lifestyle articles, improving recommendation quality as measured by AUC by a staggering 12.23%, in addition to aiding the interpretability of the uncovered personal and fashionable styles, thus advancing our precise understanding of the underlying phenomena.

Speaker Bio:
Lucky Dhakad completed her ME from CSA, IISc in 2016 and is presently working with Flipkart. *This is a practice talk for CIKM 2017*

Host Faculty: Prof. Chiranjib Bhattacharyya


 

Series: M.Sc. (Engg) Thesis Defense
Title: A Study of Thompson Sampling Approach for Sleeping Multi-Armed Bandit Problems

  • Speaker: Mr. Aritra Chatterjee
                   M.Sc. (Engg) Student
                   Dept of CSA
                   IISc
  • Faculty Advisor: Prof. Y. Narahari
  • Date and Time: Monday, September 25, 2017, 11:30 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
The multi-armed bandit (MAB) problem provides a convenient abstraction for many online learning problems arising in modern applications, including Internet advertising, crowdsourcing, online procurement, smart grids, etc. Several variants of the MAB problem have been proposed to extend the basic model to a variety of practical and general settings. The sleeping multi-armed bandit (S-MAB) problem is one such variant, where the subset of available arms varies with time. This study is focused on analyzing the efficacy of the Thompson Sampling algorithm for solving the S-MAB problem.

Any algorithm for the classical MAB problem is expected to choose one of K available arms (actions) in each of T consecutive rounds. Each choice of an arm generates a stochastic reward from an unknown but fixed distribution. The goal of the algorithm is to maximize the expected sum of rewards over the T rounds (or, equivalently, minimize the expected total regret) relative to the best fixed action in hindsight. In many of these settings, however, not all arms may be available in any given round. For example, in Internet advertising, some advertisers might choose to stay away from the auction due to budget constraints; in crowdsourcing, some workers may not be available at a given time due to timezone differences; and so on. This implies that the algorithm needs to work only with the set of available arms. Further, unlike in the classical setting, the performance of an algorithm can no longer be evaluated relative to the best fixed arm in hindsight; the regret is now measured relative to the best "available" arm in hindsight. One possible way is to compute the overall regret as the worst-case regret over all possible sample paths of availabilities (that is, under adversarial selection of available arms).

In the literature, upper confidence bound (UCB)-based approaches have been proposed and investigated for the S-MAB problem. Our contribution is to investigate a Thompson Sampling based approach. Our key result is a logarithmic regret bound, which non-trivially generalizes a similar bound known for this approach in the classical MAB setting. Our bound also matches (up to constants) the best-known lower bound for the S-MAB problem. We also show, via detailed simulations, that the Thompson Sampling approach in fact outperforms the known algorithms for the S-MAB problem.
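
A minimal Beta-Bernoulli sketch of the Thompson Sampling rule in the sleeping setting, sampling only over the arms available in each round; the environment interface (availability and reward oracles) is a placeholder we introduce, and the sketch omits the regret analysis that is the subject of the thesis.

    import numpy as np

    def sleeping_thompson(T, K, available_arms, pull, seed=0):
        """Thompson Sampling with Bernoulli rewards when only a subset of arms is awake.
        Placeholders: available_arms(t) -> list of awake arms; pull(arm) -> reward in {0, 1}."""
        rng = np.random.default_rng(seed)
        succ, fail = np.ones(K), np.ones(K)                  # Beta(1, 1) priors
        for t in range(T):
            awake = list(available_arms(t))
            theta = rng.beta(succ[awake], fail[awake])       # posterior samples, awake arms only
            arm = awake[int(np.argmax(theta))]
            r = pull(arm)
            succ[arm] += r
            fail[arm] += 1 - r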


 

Series: Department Seminar
Title: Maliciously Secure Oblivious Linear Function Evaluation with Constant Overhead

  • Speaker: Mr. Satrajit Ghosh
                   PhD Student
                   Cryptography and Security Group
                   Aarhus University
                   Denmark
  • Date and Time: Thursday, September 21, 2017, 3:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
In this work we consider the problem of oblivious linear function evaluation (OLE). OLE is a special case of oblivious polynomial evaluation (OPE) and deals with the oblivious evaluation of a linear function $f(x)=ax+b$. This problem is non-trivial in the sense that the sender chooses $a,b$ and the receiver $x$, but the receiver may only learn $f(x)$. We present a highly efficient and UC-secure construction of OLE in the OT-hybrid model that requires only $O(1)$ OTs per OLE. The construction is based on noisy encodings introduced by Naor and Pinkas (STOC'99) and used for passive secure OLE's by Ishai, Prabhakaran and Sahai (TCC'09). A result asymptotically similar to ours is known by applying the IPS compiler to the mentioned passive secure OLE protocol, but our protocol provides better constants and would be considerably simpler to implement. Concretely we use only $16$ OTs to generate one active secure OLE, and our protocol achieves active security by adding fairly simple checks to the passive secure protocol. We therefore believe our protocol takes an important step towards basing practical active-secure arithmetic computations on OLEs. Our result requires novel techniques that might be of independent interest. As an application we present the currently most efficient OPE construction.
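
To fix intuition, the sketch below spells out only the ideal OLE functionality over a prime field, i.e., what the parties are supposed to learn; it says nothing about how the protocol achieves obliviousness or active security, and the modulus chosen is illustrative.

    P = 2 ** 61 - 1          # an illustrative prime modulus

    def ideal_ole(a, b, x, p=P):
        """Ideal OLE functionality: the receiver obtains a*x + b (mod p) and nothing
        else about (a, b); the sender learns nothing about x."""
        return (a * x + b) % p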

Speaker Bio:
Satrajit Ghosh is pursuing his PhD in the Cryptography and Security Group at Aarhus University, Denmark, under the supervision of Jesper Buus Nielsen and Claudio Orlandi. He did his Masters in the Computer Science and Engineering Department at IIT Kharagpur under the guidance of Dr. Abhijit Das. Prior to that, he did his B.Tech in Computer Science and Engineering and B.Sc (Honours) in Physics at the University of Calcutta. Before joining Aarhus University, he also worked as a researcher at the R. C. Bose Centre for Cryptology and Security, Indian Statistical Institute, Kolkata, and at Saarland University, Germany.

Host Faculty: Dr. Arpita Patra


 

Series: M.Sc. (Engg) Thesis Defense
Title: Improved lower bounds for depth four arithmetic circuits.

  • Speaker: Mr. Abhijat Sharma, M.Sc. (Engg)
  • Faculty Advisor: Dr. Chandan Saha
  • Date and Time: Wednesday, September 20, 2017, 10:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
We study the problem of proving lower bounds for depth four arithmetic circuits. Depth four circuits have been receiving much attention in recent circuit lower bound results, owing to a series of results culminating in a proof that strong enough lower bounds for depth four circuits would imply super-polynomial lower bounds for general arithmetic circuits, and hence settle one of the most central open problems in algebraic complexity, i.e., a separation between the VP and VNP complexity classes.

However, despite several efforts, the best known lower bound for general circuits remains Omega(N log N), where N is the number of input variables. In the case of arithmetic formulas, an Omega(N^2) lower bound is known, which has not been improved upon since. The situation for depth three arithmetic circuits was similar for many years, until a recent result achieved an almost cubic lower bound, improving upon the previous best quadratic lower bound. However, unlike for depth three circuits, proving a super-quadratic lower bound for depth four circuits has remained an open problem for many years.

Previous results implied a super-linear lower bound of Omega(N^{1.33}) for depth four circuits. As the main contribution of this thesis, we improve upon this bound and prove an Omega(N^{1.5}) lower bound on the size of depth four circuits computing an explicit N-variate polynomial in VNP of degree d = O(sqrt{N}). Our approach offers a potential route to proving a super-quadratic lower bound for depth four circuits. Taking a cue from numerous recent successful results, we use the technique of shifted partial derivative measures to achieve the said lower bound; in particular, we use the Dimension of Projected Shifted Partials (DPSP) measure. For the choice of the hard polynomial, we again follow the status quo and use a variant of the Nisan-Wigderson (NW) polynomial family that has proved very useful over the past few years in arithmetic circuit complexity.

Finally, we carefully analyse two previous works on general depth four circuits and compare their techniques to ours. We conclude that our result can potentially serve as a starting point, and that techniques similar to those of the almost cubic lower bound result for depth three circuits can likely be used to strengthen our lower bound to Omega(N^{2.5}) for general depth four arithmetic circuits.


 

Series: Department Seminar
Title: SymSum: Symmetric-Sum Distinguishers Against Round Reduced SHA3

  • Speaker: Dr. Dhiman Saha
  • Date and Time: Friday, September 15, 2017, 12:00 PM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
Cryptographic hash functions are ubiquitous in our day-to-day digital lives. Primarily they ensure data integrity, one of the fundamental cryptographic goals, which makes the analysis of these hash functions imperative. After briefly introducing the research area, this talk will focus on devising “Distinguishers” for the latest cryptographic hash function, SHA3. Distinguishers constitute a specific paradigm of analyzing hash functions that deals with exhibiting non-random behavior. The talk will then zoom in on the domain of higher-order derivatives of Boolean functions and their application to the analysis of SHA3 in the form of the Zero-Sum property. Finally, our latest research result, which proposes a new class of distinguishers of SHA3, will be presented. The technique relies on a variant of classical higher-order Boolean derivatives over special subspaces. It exploits internal symmetry of the hash function to obtain the best distinguisher on SHA3, penetrating up to 9 rounds, which succeeds with probability one and outperforms the closest competitor, in terms of effort, by a factor of 4.
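
For background, the classical higher-order derivative underlying zero-sum style distinguishers is the XOR of the function over an affine subspace; the sketch below implements that textbook definition (it is not the SymSum construction itself), with an illustrative degree check.

    from functools import reduce
    from itertools import combinations

    def higher_order_derivative(f, x, dirs):
        """Order-k derivative of a GF(2)-valued function f at x: the XOR of f over
        the affine subspace x + span(dirs), for linearly independent bit-masks dirs."""
        total = 0
        for r in range(len(dirs) + 1):
            for subset in combinations(dirs, r):
                shift = reduce(lambda u, v: u ^ v, subset, 0)
                total ^= f(x ^ shift)
        return total          # identically 0 iff f "zero-sums" over that subspace

    # Any function of algebraic degree < k has a vanishing order-k derivative:
    deg2 = lambda z: (z & 1) & ((z >> 1) & 1)                # the degree-2 monomial x0*x1
    assert higher_order_derivative(deg2, 0b0000, [0b001, 0b010, 0b100]) == 0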

Speaker Bio:
Dhiman Saha is a Research Associate in the Crypto Research Lab in the Department of Computer Science and Engineering at IIT Kharagpur. He recently received his PhD from the department, with his broad area of research encompassing both theoretical and physical aspects of cryptography. As part of his thesis he worked on the cryptanalysis of hash functions and authenticated ciphers. He has broken three authenticated encryption schemes submitted to the ongoing CAESAR competition and has been involved in the analysis of the latest cryptographic hash algorithm, SHA3. Before starting his PhD he worked in the Electronic Design Automation industry for over two years. He completed his MS degree from IIT Kharagpur in 2009 with a thesis on hardware security. He received his BE from NIT Agartala in 2006 and was awarded the Gold Medal for Computer Science and Engineering; he was also a recipient of the NEC Scholarship during his BE. Along with hash functions and authenticated ciphers, his recent research interests include memory-hard functions and proofs-of-work.

Host Faculty: Dr. Sanjit Chatterjee


 

Series: Department Seminar
Title: Provably Efficient Scheduling of Cache-Oblivious Wavefront Algorithms

  • Speaker: Pramod Ganapathi
  • Date and Time: Tuesday, September 12, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Iterative wavefront algorithms for evaluating dynamic programming recurrences exploit optimal parallelism but show poor cache performance. Tiled-iterative wavefront algorithms achieve optimal cache complexity and high parallelism but are cache-aware and hence are not portable and not cache-adaptive. On the other hand, standard cache-oblivious recursive divide-and-conquer algorithms have optimal serial cache complexity but often have low parallelism due to artificial dependencies among subtasks. We present a generic algorithmic framework to systematically transform standard cache-oblivious recursive divide-and-conquer algorithms into recursive wavefront algorithms to achieve optimal parallel cache complexity and high parallelism under state-of-the-art schedulers for fork-join programs. The recursive wavefront algorithms are efficient both in theory and in practice.
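
To make the contrast concrete, the sketch below evaluates a simple 2-D dynamic program (edit distance, used here only as a stand-in recurrence) wave by wave: cells on one anti-diagonal are mutually independent, which is what iterative wavefront schedulers parallelize. The sketch walks the waves sequentially and is not the recursive framework of the talk.

    def edit_distance_wavefront(a, b):
        """Evaluate the edit-distance DP table anti-diagonal by anti-diagonal; cells on
        one wave are mutually independent, so a wavefront scheduler can run them in
        parallel (this sketch simply walks the waves in order)."""
        n, m = len(a), len(b)
        D = [[0] * (m + 1) for _ in range(n + 1)]
        for wave in range(n + m + 1):                        # anti-diagonal index i + j
            for i in range(max(0, wave - m), min(n, wave) + 1):
                j = wave - i
                if i == 0:
                    D[i][j] = j
                elif j == 0:
                    D[i][j] = i
                else:
                    D[i][j] = min(D[i - 1][j] + 1, D[i][j - 1] + 1,
                                  D[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
        return D[n][m]

    assert edit_distance_wavefront("kitten", "sitting") == 3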

Speaker Bio:
Pramod Ganapathi received his PhD in Computer Science, specializing in algorithms, from Stony Brook University, New York, USA in 2016. He was supervised by Prof. Rezaul Chowdhury. During his PhD, he designed algorithms (and frameworks) that can automatically (and semi-automatically) discover efficient recursive divide-and-conquer algorithms for a wide class of dynamic programming problems. Currently, he is the founder and CEO of a startup called Learning is Beautiful, based in Bengaluru. The company aims to teach hardcore problem-solving and deep thinking on higher education topics through simple and intuitive animated videos.

Host Faculty: Uday Kumar Reddy B


 

Series: CSA Distinguished Lecture
Title: Technology Disruptions and Innovation at Intersections

  • Speaker: Mr. K Ananth Krishnan
                   Chief Technology Officer
                   Tata Consultancy Services (TCS)
  • Date and Time: Friday, September 08, 2017, 11:30 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
What is the view from the CTO of a company at the intersection of technology and the businesses of the 1000 biggest corporations globally across 15 industries? Is the global industry hungry for research breakthroughs? What are the big disruptions across industries, and how are these triggered? How are industries responding, and which companies will succeed? Which areas of research are most in demand, and are moving at great speed from idea to impact?

How is TCS Research and Innovation responding to this market? Ananth directs TCS' Research and Innovation and will highlight the current scenario and provide real-world examples from around the world. He will also explain why the sciences are intersecting with computing to drive new ideas and create value.

Speaker Bio:
As CTO, Ananth directs research and innovation in Tata Consultancy Services. Ananth has architected a 4E model to make invention, innovation and co-innovation at TCS deliver value to TCS business and its customers. Under his leadership the TCS research community has created a significant portfolio of patents, papers and IP. Ananth has been a member of the TCS Corporate Think-Tank since 1999, and has led several strategic initiatives. He has been a Principal Architect and Lead Consultant in the Architecture and Technology Consulting Practice, and earlier the head of the TCS Systems Management and Systems Software Groups. Ananth was elected a Fellow of the Indian National Academy of Engineering (INAE) in 2013, in recognition of his contributions to engineering. He is a Senior Member of the IEEE and a member of the Computer Society of India, and is an invited faculty member in the Department of Management Studies at the Indian Institute of Technology, Madras. Named a Distinguished Alumnus of the Indian Institute of Technology, Delhi in 2009, Ananth has been listed in Computerworld's Premier 100 IT Leaders (2007) and was chosen as one of InfoWorld's Top 25 CTOs (2007). Ananth holds an M.Tech. in Computer Science from the Indian Institute of Technology, Delhi, a Master's degree in Physics from the same Institute, and a Bachelor's degree in Physics from Fergusson College, Pune.

Host Faculty: Prof. Y Narahari


 

Series: Department Seminar
Title: Intercepting Functions for Memoization

  • Speaker: Arjun Suresh
  • Date and Time: Friday, September 01, 2017, 3:00 PM
  • Venue: CSA Multimedia Class (Room No. 252, First Floor)

Abstract
In this talk, we present mechanisms to implement function memoization at the software level. We analyze the potential of function memoization in applications and its performance gain on current architectures. We propose three schemes: a simple load-time approach that works for any dynamically linked function, a compile-time approach using the LLVM framework which can enable memoization for any program function, and finally a hardware proposal for performing memoization in hardware, along with its potential benefits. Demonstration of the load-time approach with transcendental functions shows that memoization is applicable and gives good benefit even on modern architectures and compilers (with the restriction that it can be applied only to dynamically linked functions). Our compile-time approach extends the scope of memoization and also increases its benefit. It works for both user-defined functions and library functions, and can handle certain kinds of non-pure functions, such as functions with pointer arguments and global variable usage. Hardware memoization lowers the threshold for a function to be memoized and gives more performance gain on average.
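
The talk's mechanisms operate at load time, compile time and in hardware; as a language-level analogy only, the sketch below memoizes a pure transcendental function so that repeated calls with the same argument are served from a table, which is the effect the interception schemes achieve for dynamically linked functions.

    import math
    from functools import lru_cache

    @lru_cache(maxsize=65536)
    def memo_sin(x):
        """Memoized wrapper around a pure transcendental function: repeated calls with
        the same argument are answered from a table instead of being recomputed."""
        return math.sin(x)

    # Typical win: workloads that keep re-evaluating the same arguments.
    values = [memo_sin(x) for x in [0.1, 0.2, 0.1, 0.2] * 1000]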

Speaker Bio:
Arjun Suresh is currently working on extensions of GATE Overflow, a complete exam preparation package for GATE in the Computer Science stream. Before this, he did his post-doctoral research at The Ohio State University on high-performance GPU implementations of tensor algebra operations. His doctoral studies were completed under the guidance of Dr. Erven Rohou in the ALF team at INRIA, Rennes, and his Ph.D. thesis was titled "Intercepting Functions for Memoization". Prior to that, he obtained his Master's in Engineering from the Dept. of CSA at the Indian Institute of Science under the guidance of Prof. Uday Kumar Reddy B, and his Bachelor's degree from the College of Engineering, Trivandrum.

Host Faculty: Uday Kumar Reddy B


 

Series: CSA Distinguished Lecture
Title: Unleashing the Potential of Heterogeneous Computing for Mobile and IoT Devices

  • Speaker: Tulika Mitra
                   Professor of Computer Science
                   School of Computing
                   National University of Singapore
  • Date and Time: Friday, September 01, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Transistor count continues to increase for silicon devices following Moore's Law. But the failure of Dennard scaling has brought the computing community to a crossroads where power/thermal issues have become the major limiting factor, especially in mobile and IoT devices with constrained form factors. Thus future chips in these devices can have many cores, but only a fraction of them can be switched on at any point in time. This dark silicon era is driving the emergence of heterogeneous multi-core architectures consisting of cores with diverse computational capabilities and power-performance characteristics, enabling a better match between the application requirements and the compute engine and leading to substantially improved energy efficiency. In this talk, I will introduce the technology trends driving heterogeneous multi-cores, provide an overview of computationally divergent and performance heterogeneous multi-cores, and present my research group's effort in architecture, compiler, and runtime support to fully realize the potential of heterogeneity towards energy-efficient computing.

Speaker Bio:
Tulika Mitra is a Professor of Computer Science at the School of Computing, National University of Singapore (NUS). She received her PhD from the State University of New York at Stony Brook (2000) and M.E. from the Indian Institute of Science (1997). Her research interests span various aspects of the design automation of embedded real-time systems, cyber-physical systems (CPS), and Internet of Things (IoT), with particular emphasis on energy-efficient computing, heterogeneous computing, application-specific processors, and software timing analysis/optimizations. She has authored over a hundred scientific publications in leading international journals and conferences in this domain. Her research has been recognized by the best paper award at FPT 2012 and best paper nominations at DAC (2016, 2012, 2009), DATE, VLSI Design, CODES+ISSS, FPL, ECRTS, CASES. Prof. Mitra serves as Senior Associate Editor of the ACM Transactions on Embedded Computing Systems, Deputy Editor-in-Chief of IEEE Embedded Systems Letters, Associate Editor of IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, IEEE Design & Test Magazine, EURASIP Journal on Embedded Systems, and IET Computers & Digital Techniques. She has served in the organizing and program committees of several major conferences in embedded systems, real-time systems, and electronic design automation.

Host Faculty: R. Govindarajan


 

Series: M.Sc. (Engg) Thesis Defense
Title: Fast Actively Secure OT Extension for Short Secrets

  • Speaker: Mr. Ajith S
                   M.Sc. (Engg) Student
                   Dept. of CSA
                   IISc
  • Faculty Advisor: Dr. Arpita Patra
  • Date and Time: Thursday, August 31, 2017, 4:30 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Oblivious Transfer (OT) is one of the most fundamental cryptographic primitives with wide-spread application in general secure multi-party computation (MPC) as well as in a number of tailored and special-purpose problems of interest such as private set intersection (PSI), private information retrieval (PIR), contract signing to name a few. Often the instantiations of OT require prohibitive communication and computation complexity. OT extension protocols are introduced to compute a very large number of OTs referred as extended OTs at the cost of a small number of OTs referred as seed OTs.

We present a fast OT extension protocol for small secrets in the active setting. Our protocol, when used to produce 1-out-of-n OTs, outperforms all known actively secure OT extensions. Our protocol is built on the semi-honest secure extension protocol of Kolesnikov and Kumaresan from CRYPTO'13 (referred to as the KK13 protocol henceforth), which is the best-known OT extension for short secrets. At the heart of our protocol lies an efficient consistency checking mechanism that relies on the linearity of Walsh-Hadamard (WH) codes. Asymptotically, our protocol adds a communication overhead of O(µ log κ) bits over the KK13 protocol irrespective of the number of extended OTs, where κ and µ refer to the computational and statistical security parameters, respectively. Concretely, our protocol, when used to generate a large enough number of OTs, adds only 0.011-0.028% communication overhead and 4-6% runtime overhead over the KK13 extension, in both LAN and WAN settings. The runtime overhead drops below 2% when, in addition, the number of inputs of the sender in the extended OTs is large enough.
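
The consistency check mentioned above rests on the linearity of Walsh-Hadamard codes; the sketch below encodes a k-bit message as its GF(2) inner products with every k-bit vector and verifies that codewords add (XOR) homomorphically. It illustrates only the code property, not the OT-extension check itself.

    def wh_encode(m, k):
        """Walsh-Hadamard codeword of a k-bit message m: the GF(2) inner product
        <m, x> for every x in {0,1}^k, giving a codeword of length 2^k."""
        return [bin(m & x).count("1") & 1 for x in range(1 << k)]

    # Linearity, the property the consistency check relies on: WH(a) + WH(b) = WH(a XOR b).
    k, a, b = 4, 0b1011, 0b0110
    assert [u ^ v for u, v in zip(wh_encode(a, k), wh_encode(b, k))] == wh_encode(a ^ b, k)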

As an application of our proposed extension protocol, we show that it can be used to obtain the most efficient PSI protocol secure against a malicious receiver and a semi-honest sender.


 

Series: Department Seminar
Title: Approximating Geometric Knapsack via L-packings

  • Speaker: Dr. Arindam Khan
                   Researcher, IDSIA SUPSI
                   Switzerland
  • Date and Time: Monday, August 21, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
We study the two-dimensional geometric knapsack problem (2DK), a geometric variant of the classical knapsack problem. In this problem, we are given a set of axis-aligned rectangular items, each one with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum profit subset of items inside the knapsack without rotating items. This is a very well-studied optimization problem and finds applications in scheduling, memory allocation, advertisement placement etc. The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) is $2+\epsilon$ [Jansen and Zhang, SODA 2004].

After more than a decade, in this paper we break the 2-approximation barrier, achieving a polynomial-time $17/9+\epsilon < 1.89$ approximation, which improves to $558/325+\epsilon < 1.72$ in the cardinality case. We also consider the variant of the problem with rotations (2DKR), where the items can be rotated by $90$ degrees. Also in this case the best-known polynomial-time approximation factor (even for the cardinality case) is $2+\epsilon$ [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery developed for 2DK plus a few additional ideas, we obtain a polynomial-time $(3/2+\epsilon)$-approximation for 2DKR, which improves to $4/3+\epsilon$ in the cardinality case.

This is a joint work with Waldo Galvez, Fabrizio Grandoni, Sandy Heydrich, Salvatore Ingala and Andreas Wiese. This result will appear in FOCS 2017.

Speaker Bio:
Arindam Khan is a researcher at IDSIA, SUPSI, in Lugano, Switzerland. His research areas include approximation algorithms, online algorithms and computational geometry. He obtained his PhD in Algorithms, Combinatorics and Optimization (ACO) from the Georgia Institute of Technology, Atlanta, USA, under Prof. Prasad Tetali. Previously he has been a research intern in the Theory group at Microsoft Research Redmond and Microsoft Research Silicon Valley, USA, a visiting researcher at the Simons Institute, Berkeley, USA, and a Blue Scholar at IBM Research India.

Host Faculty: Dr. Anand Louis


 

Series: Ph.D. Thesis Defense
Title: Number Theoretic, Computational and Cryptographic Aspects of a Certain Sequence of Arithmetic Progressions

  • Speaker: Mr. Cherukapally Srikanth
                   Ph.D. student
                   Dept. of CSA
  • Faculty Advisor: Prof. C.E. Veni Madhavan & Dr. Sanjit Chatterjee
  • Date and Time: Wednesday, August 09, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
This thesis introduces a new mathematical object: a collection of arithmetic progressions whose elements satisfy the inverse property, ``the j-th terms of the i-th and (i+1)-th progressions are multiplicative inverses of each other modulo the (j+1)-th term of the i-th progression''. Such a collection is uniquely defined for any pair $(a,d)$ of co-prime integers. The progressions of the collection are ordered; thus we call it a sequence rather than a collection. The results of the thesis concern the following number theoretic, computational and cryptographic aspects of the defined sequence and its generalizations.

The sequence is closely connected to the classical Euclidean algorithm. Precisely, certain consecutive progressions of the sequence form ``groupings''. The difference between the common differences of any two consecutive progressions of a grouping is the same. The number of progressions in a grouping is connected to the quotient sequence of the Euclidean algorithm on co-prime input pairs. The research community has studied the behavior of the Euclidean algorithm extensively; for the first time in the literature, the connection (proven in the thesis) shows what the quotients of the algorithm signify. Further, the leading terms of progressions within groupings satisfy a mirror-image symmetry property, called ``symmetricity''. The property is subject to the quotient sequence of the Euclidean algorithm and to divisors of integers of the form $x^2-y^2$ falling in specific intervals. The integers $a$ and $d$ are the primary quantities of the defined sequence in a computational sense. Given the two, the leading term and common difference of any progression of the sequence can be computed in time quadratic in the binary length of $d$. On the other hand, the inverse computational question of finding $(a,d)$, given information on some terms of the sequence, is interesting. This problem turns out to be hard, as it requires finding solutions to a nearly-determined system of multivariate polynomial equations. Two sub-problems arising in this context are shown to be equivalent to the problem of factoring integers. The reduction to the factoring problem is, in both cases, probabilistic.
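
Since the groupings are governed by the quotient sequence of the Euclidean algorithm and the defining property involves modular inverses, the sketch below computes both for a co-prime pair using the standard (extended) Euclidean algorithm; it is included only to fix notation and is not the thesis's construction of the sequence.

    def quotient_sequence(a, d):
        """Quotient sequence produced by the Euclidean algorithm on the pair (a, d)."""
        qs = []
        while d:
            qs.append(a // d)
            a, d = d, a % d
        return qs

    def mod_inverse(a, m):
        """Inverse of a modulo m (gcd(a, m) = 1), via the extended Euclidean algorithm."""
        r0, r1, s0, s1 = m, a % m, 0, 1
        while r1:
            q = r0 // r1
            r0, r1 = r1, r0 - q * r1
            s0, s1 = s1, s0 - q * s1
        assert r0 == 1, "inputs must be co-prime"
        return s0 % m

    assert (7 * mod_inverse(7, 31)) % 31 == 1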

Utilizing the computational difficulty of solving the inverse problem, and the sub-problems (mentioned above), we propose a symmetric-key cryptographic scheme (SKCS), and a public key cryptographic scheme (PKCS). The PKCS is also based on the hardness of the problem of finding square-roots modulo composite integers. Our proposal uses the same algorithmic and computational primitives for effecting both the PKCS and SKCS. In addition, we use the notion of the sequence of arithmetic progressions to design an entity authentication scheme. The proof of equivalence between one of the inverse computational problems (mentioned above) and integer factoring led us to formulate and investigate an independent problem concerning the largest divisor of integer $N$ bounded by the square-root of $N$. We present some algorithmic and combinatorial results. In the course of the above investigations, we are led to certain open questions of number theoretic, combinatorial and algorithmic nature. These pertain to the quotient sequence of the Euclidean algorithm, divisors of integers of the form $x^2-y^2$ in specific intervals, and the largest divisor of integer $N$ bounded by its square-root.


 

Series: Ph.D. Colloquium
Title: Approximation Algorithms for Geometric Packing and Covering Problems

  • Speaker: Mr. Aniket Basu Roy
                   Ph.D. student, Dept. of CSA
  • Faculty Advisor: Prof. Sathish Govindarajan
  • Date and Time: Monday, August 07, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
We study a host of geometric optimization problems that are NP-hard and design polynomial-time approximation algorithms for them. More precisely, we are given a family of geometric objects and a point set, mostly in the plane, and study different variants and generalizations of Packing and Covering problems. Our objects of study are mostly families of non-piercing regions in the plane. We call a set of simple and connected regions non-piercing if for any pair of intersecting regions A and B their set difference A \ B is a connected region; disks and half-planes are examples, whereas lines and rectangles are piercing objects. For most of the problems we have studied, a simple local search algorithm is enough to yield a PTAS, whose approximation analysis requires a suitable bipartite graph on the local search solution and the optimal solution to have balanced sub-linear separators.
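
To indicate the flavour of the local search algorithms analysed here, the sketch below runs a schematic (1,2)-swap local search for a maximum non-overlapping set of disks; PTAS analyses use larger, size-t exchanges, so this is an illustration of the technique rather than any of the algorithms in the thesis.

    from itertools import combinations

    def overlap(d1, d2):
        (x1, y1, r1), (x2, y2, r2) = d1, d2
        return (x1 - x2) ** 2 + (y1 - y2) ** 2 < (r1 + r2) ** 2

    def independent(cand, chosen):
        return all(not overlap(cand, c) for c in chosen)

    def local_search_packing(disks):
        """Schematic (1,2)-swap local search for a maximum non-overlapping set of disks:
        start greedily, then repeatedly trade one chosen disk for two compatible ones."""
        sol = []
        for d in disks:                                      # greedy initial solution
            if independent(d, sol):
                sol.append(d)
        improved = True
        while improved:
            improved = False
            outside = [d for d in disks if d not in sol]
            for rem in list(sol):
                rest = [d for d in sol if d != rem]
                for d1, d2 in combinations(outside, 2):
                    if (not overlap(d1, d2) and independent(d1, rest)
                            and independent(d2, rest)):
                        sol = rest + [d1, d2]                # profitable swap found
                        improved = True
                        break
                if improved:
                    break
        return sol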

We study a generalization of the standard packing problem, called the capacitated packing problem and its slight variant, the shallow packing problem. We devise a PTAS for both these problems for constant capacities. For the former problem, the objects are non-piercing whereas the objects can be even more general and only have sub-quadratic monotonic union complexity for the latter problem. The non-triviality here is to show that the intersection graph of arrangements with shallow depth, which is not planar, has balanced sub-linear separators. Our results complement the Maximum Independent Set of Rectangles problem as rectangles are both piercing and have quadratic union complexity. We also study the capacitated packing problem for the dual set system and are able to show that local search works here as well for unit capacity and devise a constant factor approximation algorithm using Bronnimann-Goodrich technique which is inspired by the method of multiplicative weight updates.

The Runaway Rectangle Escape problem is closely related to the above packing problems and is motivated by routing in printed circuit boards. Here we are given a set of axis-parallel rectangles inside a rectangular boundary and a maximum allowed depth d. The objective is to extend the maximum number of input rectangles in one of the four directions such that the maximum depth of any point is at most d after extension. We devise several approximation algorithms and a hardness result.

We study different variants of the covering problem like the Unique Coverage, Multi-Covering, Prize Collecting Set Cover, Exact Cover, and Multi-Hitting Set. For the first three problems, we show that local search yields a PTAS for non-piercing regions under some assumptions. For the Exact Cover problem, we prove that even finding a feasible solution is NP-hard for objects as special as unit squares. For the Multi-Hitting Set problem, we propose constant factor approximation algorithm for non-piercing regions using the fact that they have shallow dual-cell complexity.

Lastly, we consider variants of the Art Gallery problems called the Minimum (Horizontal) Sliding Cameras problem, M(H)SC. We are given an orthogonal polygon and we need to deploy mobile guards who can walk along an orthogonal (horizontal) line segment and can guard a point inside the polygon if the perpendicular drawn from the point onto the line segment lies inside the polygon. Our local search algorithm yields a PTAS for the MHSC problem and also the MSC problem when the polygon has no holes. Here again, the nontriviality is to prove that an appropriate graph on orthogonal line segments is planar. We also observe that the MSC problem for polygons with holes is APX-hard.


 

Series: CSA Distinguished Lecture
Title: MULTI-OBJECTIVE NAVIGATION IN UNCERTAINTY

  • Speaker: Professor Krishna R. Pattipati
                   University of Connecticut
                   Storrs, Connecticut
                   USA
  • Date and Time: Thursday, August 03, 2017, 3:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Routing in uncertain environments involves many contextual elements, such as different environmental conditions (ensemble forecasts with varying spatial and temporal uncertainty), multiple objectives, changes in mission goals while en route (e.g., training requirements, pop-up threats, humanitarian aid), and asset status. In this talk, we focus on robust planning under uncertainty by exploiting weather forecast ensembles and realizations using TMPLAR, a Tool for Multi-objective PLanning and Asset Routing, in the context of 3-D and 4-D path navigation. Our approach includes solving for the m-best shortest paths for each weather forecast realization via Murty's search space partitioning strategy and evaluating the mean, variance, and signal-to-noise ratio (SNR) of the paths over all weather possibilities. The robust path is one that, for example, minimizes the root mean square (RMS) value or maximizes the SNR, given the possible forecast realizations. In a complementary effort to compact the feature-rich multi-dimensional navigational problem space, we develop a self-organizing map (SOM)-based data reduction scheme that filters complex contexts and highlights only the key impacting features within a given information space. We demonstrate the utility of our approaches via application to a real-world shipping tragedy, using the weather forecast realizations available prior to the event.
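
A sketch of the robustness criterion described above: each candidate path is scored across all forecast realizations, and the robust path minimizes RMS cost or maximizes SNR. Generating the candidates (the m-best shortest paths via Murty's partitioning) is outside the sketch, and the cost interface is a placeholder we introduce.

    import math

    def robust_path(candidate_paths, path_cost, realizations, criterion="rms"):
        """Score each candidate path over all forecast realizations and return the robust
        one: minimum RMS cost, or maximum SNR (mean cost / standard deviation).
        Placeholder interface: path_cost(path, realization) -> scalar cost."""
        best, best_score = None, None
        for path in candidate_paths:
            costs = [path_cost(path, w) for w in realizations]
            mean = sum(costs) / len(costs)
            var = sum((c - mean) ** 2 for c in costs) / len(costs)
            if criterion == "rms":
                score = -math.sqrt(sum(c * c for c in costs) / len(costs))  # lower RMS wins
            else:                                                           # "snr"
                score = mean / math.sqrt(var) if var > 0 else float("inf")
            if best_score is None or score > best_score:
                best, best_score = path, score
        return best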

Speaker Bio:
Krishna R. Pattipati received the B. Tech. degree in electrical engineering with highest honours from the Indian Institute of Technology, Kharagpur, in 1975, and the M.S. and Ph.D. degrees in control and communication systems from UConn, Storrs, in 1977 and 1980, respectively. He was with ALPHATECH, Inc., Burlington, MA from 1980 to 1986. He has been with the Department of Electrical and Computer Engineering at UConn since 1986, where he is currently the Board of Trustees Distinguished Professor and the UTC Chair Professor in Systems Engineering. Dr. Pattipati's research activities are in the areas of proactive decision support, uncertainty quantification, smart manufacturing, autonomy, knowledge representation, and optimization-based learning and inference. A common theme among these applications is that they are characterized by a great deal of uncertainty, complexity, and computational intractability. He is a cofounder of Qualtech Systems, Inc., a firm specializing in advanced integrated diagnostics software tools (TEAMS, TEAMS-RT, TEAMS-RDS, TEAMATE, PackNGo), and serves on the board of Aptima, Inc. Dr. Pattipati was selected by the IEEE Systems, Man, and Cybernetics (SMC) Society as the Outstanding Young Engineer of 1984, and received the Centennial Key to the Future award. He served as the Editor-in-Chief of the IEEE Transactions on Systems, Man, and Cybernetics--Part B from 1998 to 2001. He was co-recipient of the Andrew P. Sage Award for the Best SMC Transactions Paper for 1999, the Barry Carlton Award for the Best AES Transactions Paper for 2000, the 2002 and 2008 NASA Space Act Awards for "A Comprehensive Toolset for Model-based Health Monitoring and Diagnosis" and "Real-time Update of Fault-Test Dependencies of Dynamic Systems: A Comprehensive Toolset for Model-Based Health Monitoring and Diagnostics", the 2003 AAUP Research Excellence Award, and the 2005 School of Engineering Teaching Excellence Award at the University of Connecticut. He is an elected Fellow of IEEE for his contributions to discrete-optimization algorithms for large-scale systems and team decision-making, and is a member of the Connecticut Academy of Science and Engineering.

Host Faculty: Prof. Y Narahari


 

Series: Department Seminar
Title: Data Driven Precondition Inference for Compiler Optimizations

  • Speaker: Dr. Santosh Nagarakatte
                   Assistant Professor
                   Department of Computer Science, Rutgers University
  • Date and Time: Thursday, August 03, 2017, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Peephole optimizations are a common source of compiler bugs. Compiler developers typically transform an incorrect peephole optimization into a valid one by strengthening the precondition. This process is challenging and tedious. In this talk, I will describe our new data-driven approach for inferring preconditions for peephole optimizations expressed in Alive. Our main idea is to generate positive and negative examples for an optimization in a context without tests, enumerate predicates on demand, and learn a set of predicates that separate the positive and negative examples. We repeat this process until we are able to find a precondition that ensures the validity of the optimization. Our prototype, ALIVE-INFER, reports both a weakest precondition and a set of succinct partial preconditions to the developer. Our prototype generates preconditions that are weaker than LLVM's preconditions for a majority of the optimizations in the Alive suite. The talk will also demonstrate the applicability of this technique to generalizing optimization patterns generated by Souper, an LLVM-IR-based super-optimizer.
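
A toy rendering of the data-driven loop described above (not ALIVE-INFER itself): given positive and negative example valuations, greedily pick candidate predicates, each consistent with all positives, until their conjunction rejects every negative. All names, predicates and examples below are illustrative.

    def infer_precondition(positives, negatives, predicates):
        """Greedy toy precondition inference: keep adding predicates that hold on every
        positive example until their conjunction rejects every negative example.
        Examples are dicts of variable valuations; predicates are (name, fn) pairs."""
        candidates = [p for p in predicates if all(p[1](e) for e in positives)]
        chosen, remaining_neg = [], list(negatives)
        while remaining_neg:
            best = max(candidates, key=lambda p: sum(not p[1](e) for e in remaining_neg))
            if all(best[1](e) for e in remaining_neg):
                raise ValueError("candidate predicates cannot separate the examples")
            chosen.append(best)
            remaining_neg = [e for e in remaining_neg if best[1](e)]
        return " && ".join(name for name, _ in chosen)

    # Hypothetical examples for an optimization that is valid only when C2 divides C1.
    pos = [{"C1": 8, "C2": 2}, {"C1": 9, "C2": 3}]
    neg = [{"C1": 7, "C2": 2}]
    preds = [("C1 % C2 == 0", lambda e: e["C1"] % e["C2"] == 0),
             ("C1 > 0", lambda e: e["C1"] > 0)]
    print(infer_precondition(pos, neg, preds))               # -> C1 % C2 == 0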

Speaker Bio:
Santosh Nagarakatte is an Assistant Professor of Computer Science at Rutgers University. He obtained his PhD from the University of Pennsylvania in 2012. His research interests are in Hardware-Software Interfaces, spanning Programming Languages, Compilers, Software Engineering, and Computer Architecture. His papers have been selected as IEEE Micro Top Picks papers of computer architecture conferences in 2010 and 2013. He has received the NSF CAREER Award in 2015, the ACM SIGPLAN PLDI 2015 Distinguished Paper Award, and the ACM SIGSOFT ICSE 2016 Distinguished Paper Award for his research on LLVM compiler verification. His papers have been selected as a 2016 SIGPLAN Research Highlights Paper and a 2017 Communications of the ACM Research Highlights Paper.

Host Faculty: Prof. Vinod Ganapathy
