Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India

PAST SEMINARS

Series: Theory Seminar Series
Title: Fair Rent Division

  • Speaker: Ms. Nidhi Rathi
                   Int. Ph.D. Student
                   Dept. of Mathematics
  • Date and Time: Friday, March 13, 2020, 1:00 PM
  • Venue: CSA Lecture Hall (Room No. 117, Ground Floor)

Abstract
The theory of fair division addresses the fundamental problem of allocating resources among agents with equal entitlements but distinct preferences.

In this talk, I will focus on the classic problem of fair rent division that entails splitting the rent (money) and allocating the rooms (indivisible resources) of an apartment among roommates (agents) in a fair manner. Here, a distribution of the rent and an accompanying allocation is said to be fair if it is envy-free, i.e., under the imposed rents, no agent has a strictly stronger preference (utility) for any other agent’s room. While envy-free solutions are guaranteed to exist under reasonably general utility functions, efficient algorithms for finding them were known only for quasilinear utilities. Our work addresses this notable gap and develops approximation algorithms for fair rent division with minimal assumptions on the utility functions.
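For quasilinear utilities, the envy-freeness condition above is easy to state in code. The sketch below is our illustration of the definition, not the speaker's algorithm; all names and the example numbers are ours. It checks whether a room assignment and rent vector leave any agent preferring another agent's room at its imposed rent:

```python
def is_envy_free(values, assignment, rents, eps=1e-9):
    """Check envy-freeness under quasilinear utilities.

    values[i][j]: agent i's value for room j
    assignment[i]: room allocated to agent i
    rents[j]: rent imposed on room j
    Envy-free: no agent's utility (value minus rent) for another
    agent's room strictly exceeds that for their own room.
    """
    n = len(values)
    for i in range(n):
        own = values[i][assignment[i]] - rents[assignment[i]]
        for k in range(n):
            other = values[i][assignment[k]] - rents[assignment[k]]
            if other > own + eps:
                return False  # agent i envies agent k
    return True

# Two roommates splitting a total rent of 100 across two rooms.
values = [[70, 30],   # agent 0 values room 0 at 70, room 1 at 30
          [40, 60]]   # agent 1 values room 0 at 40, room 1 at 60
print(is_envy_free(values, [0, 1], [60, 40]))  # True: no one envies
print(is_envy_free(values, [0, 1], [90, 10]))  # False: agent 0 envies
```

Finding rents that make this check pass, under the general piecewise-linear utilities of the talk, is exactly the algorithmic problem the FPTAS addresses.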

Specifically, we show that if the agents have continuous, monotone decreasing, and piecewise-linear utilities, then the fair rent division problem admits a fully polynomial-time approximation scheme (FPTAS). We complement the algorithmic results by proving that the fair rent division problem (under continuous, monotone decreasing, and piecewise-linear utilities) lies in the intersection of the complexity classes PPAD and PLS. This talk is based on joint work with Eshwar Arunachaleswaran and Siddharth Barman that appeared in SODA 2019.

Host Faculty: Dr. Anand Louis


Series: M.Tech (Research) Thesis Defense
Title: Optimizing the Linear Fascicle Evaluation Algorithm for Multi-Core and Many-Core Systems

  • Speaker: Mr. Karan Aggarwal
                   M.Tech (Research) student
                   Dept. of CSA
  • Faculty Advisor: Prof. Uday Kumar Reddy B
  • Date and Time: Thursday, March 05, 2020, 2:30 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Sparse matrix-vector multiplication (SpMV) operations are commonly used in various scientific and engineering applications. The performance of the SpMV operation often depends on exploiting regularity patterns in the matrix. Various new representations and optimization techniques have been proposed to minimize the memory bandwidth bottleneck arising from the irregular memory access pattern. Among recent representation techniques, tensor decomposition is a popular one used for very large but sparse matrices. Post sparse-tensor decomposition, the new representation involves indirect accesses, making it challenging to optimize for multi-core architectures and even more so for massively parallel architectures such as GPUs.
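As a point of reference for the irregular accesses discussed above, here is a minimal sketch (ours, not from the thesis) of SpMV over the standard compressed sparse row (CSR) format; the indirect load `x[col_idx[k]]` is the data-dependent access pattern that makes the operation memory-bandwidth bound:

```python
def dense_to_csr(A):
    """Build CSR arrays (values, col_idx, row_ptr) from a dense matrix."""
    values, col_idx, row_ptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def spmv_csr(values, col_idx, row_ptr, x):
    """Compute y = A @ x in CSR form; note the indirect access into x."""
    y = []
    for i in range(len(row_ptr) - 1):
        acc = 0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]  # irregular, data-dependent load
        y.append(acc)
    return y

A = [[4, 0, 0], [0, 0, 2], [1, 3, 0]]
print(spmv_csr(*dense_to_csr(A), [1, 2, 3]))  # [4, 6, 7]
```

The optimizations in the thesis (data restructuring, tiling, parallelization) are, at heart, ways to reduce the cost of that indirect load.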

Computational neuroscience algorithms often involve sparse datasets while still performing compute-intensive operations. The Linear Fascicle Evaluation (LiFE) application is a popular neuroscience algorithm used for pruning brain connectivity graphs. The datasets employed herein involve the Sparse Tucker Decomposition (STD), a widely used tensor decomposition method. Using this decomposition leads to multiple irregular array references, making it very difficult to optimize for multi-cores and GPUs. Recent implementations of the LiFE algorithm show that its SpMV operations are the key bottleneck for performance and scaling. In this work, we first propose target-independent techniques such as (1) data restructuring techniques to minimize the effects of irregular accesses, and (2) simple compiler optimizations. Then we apply target-specific optimizations to exploit the resources provided by the architecture. The CPU-specific optimizations that we incorporate are loop tiling, loop parallelization, and BLAS calls, to exploit data reuse, coarse-grained parallelism, and fine-grained parallelism, respectively. We extend the PolyMage domain-specific language, embedded in Python, to automate the CPU-based optimizations developed for this algorithm. Next, we propose various GPU-specific optimizations to optimally map threads at the granularity of warps, thread blocks, and grid, and methods to split the computation among thread blocks to obtain fine-grained parallelism and data reuse. Our highly optimized and parallelized CPU implementation obtains a reduction in execution time from 225 minutes to 8.2 minutes over the original sequential approach on a 16-core Intel Xeon Silver (Skylake-based) system. Our optimized GPU implementation achieves a speedup of 5.2x over a reference optimized GPU code version on NVIDIA's GeForce RTX 2080 Ti GPU, and a speedup of 9.7x over our highly optimized and parallelized CPU implementation.


Series: Department Seminar
Title: Fairness in Algorithmic Decision Making

  • Speaker: Dr. Abhijnan Chakraborty, MPI-SWS, Saarbrücken
  • Date and Time: Thursday, March 05, 2020, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Algorithmic (data-driven) decision making is increasingly being used to assist or replace human decision making in domains with high societal impact, such as banking (estimating creditworthiness), recruiting (ranking job applicants), judiciary (offender profiling), healthcare (identifying high-risk patients who need additional care) and journalism (recommending news stories). Consequently, in recent times, multiple research works have uncovered the potential for bias (unfairness) in algorithmic decisions in different contexts, and proposed mechanisms to control (mitigate) such biases. However, the emphasis of existing works has largely been on fairness in supervised classification or regression tasks, and fairness issues in other scenarios remain relatively unexplored. In this talk, I will cover our recent works on incorporating fairness in recommendation and matching algorithms in multi-sided platforms, where the algorithms need to fairly consider the preferences of multiple stakeholders. I will discuss the notions of fairness in these contexts and propose techniques to achieve them. I will conclude the talk with a list of open questions and directions for future work.

Speaker Bio:
Abhijnan Chakraborty is a Post-doctoral Researcher at the Max Planck Institute for Software Systems (MPI-SWS), Germany. He obtained his PhD from the Indian Institute of Technology (IIT) Kharagpur under the supervision of Prof. Niloy Ganguly (IIT Kharagpur) and Prof. Krishna Gummadi (MPI-SWS). During his PhD, he was awarded the Google India PhD Fellowship and the Prime Minister's Fellowship for Doctoral Research. Prior to his PhD, he spent two years at Microsoft Research India, working in the area of mobile systems. His current research interests fall under the broad theme of Computing and Society, covering the research areas of Social Computing, Information Retrieval and Fairness in Machine Learning. He has authored several papers in top-tier computer science conferences including WWW, KDD, AAAI, CSCW, ICWSM and MobiCom. His research works have won the best paper award at ASONAM'16 and the best poster award at ECIR'19. He is a recipient of an internationally competitive research grant from the Data Transparency Lab to advance his research on fairness and transparency in algorithmic systems. More details about him can be found at https://people.mpi-sws.org/~achakrab

Host Faculty: Prof. Matthew Jacob


Series: Ph.D. Thesis Defense
Title: On Learning and Lower Bound Problems Related to Iterated Matrix Multiplication Polynomial

  • Speaker: Mr. Vineer Nair
                   Ph.D student
                   Dept. of CSA
                   IISc
  • Faculty Advisor: Prof. Chandan Saha
  • Date and Time: Wednesday, March 04, 2020, 9:45 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
The iterated matrix multiplication polynomial (IMM) of width w and length d is the (1, 1) entry in the product of d square matrices of order w. The w^2·d entries in the d matrices are distinct variables. In this thesis, we study certain learning and lower bound problems related to IMM.
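To make the definition concrete, the toy sketch below (our illustration, not from the thesis) multiplies d symbolic w×w matrices whose entries are distinct formal variables and inspects the (1, 1) entry of the product; each monomial corresponds to a path through the d matrices, giving w^(d−1) monomials of degree d:

```python
def symbolic_matrix(k, w):
    # w x w matrix of distinct formal variables; a polynomial is a dict
    # mapping a monomial (tuple of variable names) to its coefficient
    return [[{(f"x{k}_{i}{j}",): 1} for j in range(w)] for i in range(w)]

def poly_mul(p, q):
    """Multiply two polynomials given as monomial -> coefficient dicts."""
    r = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = m1 + m2
            r[m] = r.get(m, 0) + c1 * c2
    return r

def mat_mul(A, B):
    """Matrix product over polynomial entries."""
    w = len(A)
    C = [[{} for _ in range(w)] for _ in range(w)]
    for i in range(w):
        for j in range(w):
            for l in range(w):
                for m, c in poly_mul(A[i][l], B[l][j]).items():
                    C[i][j][m] = C[i][j].get(m, 0) + c
    return C

def imm(w, d):
    """The (1, 1) entry of a product of d symbolic w x w matrices."""
    P = symbolic_matrix(0, w)
    for k in range(1, d):
        P = mat_mul(P, symbolic_matrix(k, w))
    return P[0][0]

p = imm(2, 3)
print(len(p))  # 4 monomials = w^(d-1) = 2^2
```

Each monomial picks one variable from each matrix, i.e., one edge per layer of the corresponding algebraic branching program.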

Our first work gives a polynomial time randomized algorithm for equivalence testing of IMM. At its core, the equivalence testing algorithm exploits a connection between the irreducible invariant subspaces of the Lie algebra of the group of symmetries of a polynomial f that is equivalent to IMM and the layer spaces of a full-rank algebraic branching program computing f. This connection also helps determine the group of symmetries of IMM and show that IMM is characterized by its group of symmetries.

Our second work is related to learning affine projections of IMM, which is believed to be a very hard problem as it is equivalent to reconstructing a powerful model for computing polynomials called algebraic branching programs (ABPs). Equivalence testing for IMM can be viewed as reconstructing ABPs in the average case, when the width of the ABP is at most (n/d)^0.5, where n and d are the number of variables and the degree of the polynomial computed by the ABP, respectively. Our second work improves this by first considering a related problem called 'linear matrix factorization' (LMF), which is a natural generalization of the polynomial factorization problem. We give a polynomial-time randomized algorithm for average-case LMF for matrix products of width at most n^0.5/2. In fact, we give a polynomial-time randomized algorithm that solves the (worst-case) LMF problem when the input matrix product is non-degenerate or a pure product, a notion we define in this work. Using our algorithm for LMF, we give a non-trivial average-case reconstruction algorithm for ABPs of width at most n^0.5/2, which is interesting in the context of the Ω(n^0.5) width lower bound known for homogeneous ABPs.

Our last work gives lower bounds for interesting restrictions of arithmetic formulas computing IMM. We prove a w^Ω(d) lower bound on the size of multilinear depth-three formulas computing IMM of width w and length d. The lower bound is proved by introducing a novel variant of the partial derivatives measure called skewed partial derivatives, which has found applications in subsequent works. Improving this result to a w^Ω(log d) size lower bound on multilinear formulas computing IMM would imply a super-polynomial separation between ABPs and arithmetic formulas. We also show an exponential separation between multilinear depth-three and multilinear depth-four formulas, improving on the quasi-polynomial separation previously known. Finally, we consider a restriction of multilinear formulas, called interval set-multilinear formulas, computing IMM. Proving a super-polynomial size lower bound on interval set-multilinear formulas computing IMM would imply a super-polynomial separation between algebraic branching programs and homogeneous formulas in the non-commutative world. We make progress in this direction by giving a super-polynomial size lower bound for an interesting restriction of interval set-multilinear formulas computing IMM.


Series: CSA Faculty Colloquium
Title: Zero-cost synchronization: Fact or fiction? RCU and other stories

  • Speaker: Prof. K Gopinath
                   Professor
                   Dept. of CSA
                   IISc
  • Date and Time: Friday, February 28, 2020, 4:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
With the proliferation of commodity multicore systems, low-cost synchronization is critical for scaling. The read-copy-update (RCU) mechanism has been used for this purpose for the past four decades. We analyse this and a few other related mechanisms, and discuss our work in this context (such as memory management, virtualization and fault tolerance).

Speaker Bio:
K. Gopinath is a professor at Indian Institute of Science in the Computer Science and Automation Department. His research interests are primarily in the computer systems area (Operating Systems, Storage Systems, Systems Security).

Host Faculty: Prof. Sunil L Chandran & Prof. Shalabh Bhatnagar


Series: Department Seminar
Title: Towards automated debugging and optimization

  • Speaker: Dr. Abhilash Jindal, CTO, Mobile Enerlytics, San Francisco
  • Date and Time: Friday, February 28, 2020, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Debugging and optimization are largely ad-hoc manual processes, taking up 35-75 percent of programmers' time and costing more than $100B annually. This process becomes further exacerbated in the modern programming paradigm, where programmers stand on the "shoulders of giant" software frameworks to quickly program complex systems but have a limited understanding of the intricacies of the underlying system. In the first part of this talk, we will see how new power management APIs on smartphones seriously encumber programming. These APIs, when not used correctly, give rise to a whole new class of energy bugs called sleep disorder bugs. These bugs plague the complete smartphone software stack including apps, framework, kernel, and device drivers. I will present a taxonomy of sleep disorder bugs with precise bug definitions, which enabled us to create a suite of automated tools to identify different flavors of sleep disorder bugs. I will then present an automated static analysis tool, KLOCK, that identified 63 sleep-induced time bugs, a subclass of sleep disorder bugs, in the Android Linux kernel.

In the second part of the talk, we will study how traditional profilers fall short of giving an actionable diagnosis for optimizing programs: even after being presented with performance hotspots, developers still need significant manual inspection and reasoning over the source code to proceed with the remaining optimization tasks: 1. Is there a more efficient implementation? 2. How to come up with a more efficient implementation? I will present a new optimization methodology called differential profiling that automates answering these two questions. In particular, differential profiling automatically uncovers more efficient API usage by leveraging existing implementations of similar apps, which are bountiful in the app marketplace.
Our differential energy profiling tool, DIFFPROF, employs a novel tree-matching algorithm for comparing the energy profiler outputs of pairs of similar apps and has found 12 energy improvements in popular Android apps.
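The idea of diffing two profile trees can be sketched with a much simpler name-based matching; the toy below is entirely our illustration (not the DIFFPROF algorithm, whose tree matching is more sophisticated), and the node format and app names are made up. Matched nodes report the cost difference between the two apps:

```python
def diff_profiles(a, b):
    """Compare two profile trees; a node is (name, cost, children).

    Matches children by name (a toy stand-in for real tree matching)
    and reports the cost delta (b minus a) for every matched node.
    """
    name_a, cost_a, kids_a = a
    name_b, cost_b, kids_b = b
    deltas = []
    if name_a == name_b:
        deltas.append((name_a, cost_b - cost_a))
        by_name = {k[0]: k for k in kids_b}
        for child in kids_a:
            if child[0] in by_name:
                deltas.extend(diff_profiles(child, by_name[child[0]]))
    return deltas

# Hypothetical energy profiles (arbitrary units) of two similar apps.
app1 = ("main", 100, [("network", 60, []), ("ui", 30, [])])
app2 = ("main", 70, [("network", 25, []), ("ui", 35, [])])
print(diff_profiles(app1, app2))
# [('main', -30), ('network', -35), ('ui', 5)]
```

A large negative delta on a matched subtree (here, `network`) is the kind of signal that points at a more efficient API usage in the comparison app.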

Speaker Bio:
Abhilash Jindal received B.Tech. in Electrical Engineering from IIT Kanpur. He received his Ph.D. in Computer Science from Purdue University where he researched software-based approaches for extending mobile battery life. He has published papers in top system conferences including OSDI, ATC, Eurosys, Mobisys, Sigmetrics, and Mobicom. Currently, he is commercializing his research work serving as the CTO of Mobile Enerlytics, a silicon valley startup. His research interests include mobile systems, software engineering, and operating systems.

Host Faculty: Prof. Matthew Jacob


Series: Ph.D. Thesis Defense
Title: Deep Learning for Bug-Localization and Program Repair

  • Speaker: Mr. Rahul Gupta
                   Ph.D Student
                   Dept. of CSA
  • Faculty Advisor: Prof. Aditya S Kanade
  • Date and Time: Thursday, February 27, 2020, 9:30 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
In this thesis, we focus on the problem of program debugging and present novel deep learning techniques for bug-localization and program repair. Deep learning techniques have been successfully applied to a variety of tasks in natural language processing over the years. Although natural languages and programming languages are similar to some extent, the latter have procedural interpretation and richer structure. Applying deep learning techniques to programs presents many novel challenges which arise due to these differences. We address some of these challenges in this thesis.

Most of the existing program debugging research is dominated by formal and theory-first approaches. These approaches fail to take advantage of the existing codebases available online in the form of open source software repositories and student assignment submissions to massive open online courses on programming. Recognizing this, researchers have begun to replace expert-designed heuristics with models learned from codebases to improve the performance of the conventional debugging techniques. This thesis shows that it is possible to solve program debugging problems directly from raw programs using deep learning techniques in an end-to-end manner. More specifically, we present three approaches for bug-localization and program repair which are entirely data-driven and learn to perform their task instead of following the steps specified by a domain expert.

We first introduce the notion of common programming errors and present a deep neural network based end-to-end technique, called DeepFix, that can fix multiple such errors in a program without relying on any external tool to locate or fix them. At the heart of DeepFix is a multi-layered sequence-to-sequence neural network comprising an encoder recurrent neural network (RNN) that processes the input and a decoder RNN with an attention mechanism that generates the output. The network is trained on a labeled dataset to predict a faulty program location along with the correct program statement. Multiple errors in a program can be fixed by invoking DeepFix iteratively. Our experiments demonstrate that DeepFix is effective and fixes thousands of programs.
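The attention step at the heart of such encoder-decoder models can be written in a few lines. The sketch below is a generic dot-product attention (our illustration, not DeepFix's exact architecture): it scores each encoder state against the decoder query, normalizes the scores with softmax, and returns the weighted context vector:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Dot-product attention over encoder states.

    query: decoder state; keys/values: per-position encoder states.
    Returns the attention-weighted sum of values (the context vector).
    """
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# A query aligned with the first encoder position draws its context
# almost entirely from the first value vector.
ctx = attention([10.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

For program repair, this weighting is what lets the decoder focus on the program tokens around the likely fault location while emitting the corrected statement.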

While repositories containing erroneous programs are easily available, the labels of these programs (faulty program location and correct statement) required by DeepFix are not easily available. Labeling a large number of erroneous programs is a daunting task. To address this issue, we propose a novel deep reinforcement learning based technique, called RLAssist, that does not require labeled training data and still matches the performance of DeepFix. At the core of RLAssist is a novel programming language correction framework amenable to reinforcement learning. The framework allows an agent to mimic human actions for text navigation and editing. We demonstrate that the agent can be trained through self-exploration directly from the raw input, that is, the program text itself, without any prior knowledge of the formal syntax of the programming language. Reinforcement learning techniques are, however, usually slow to train. We also show that RLAssist can be trained much faster with the help of expert demonstrations for as little as one-tenth of its training data, which also helps it in achieving better performance than DeepFix.

Finally, we present a deep learning based technique for semantic bug-localization in programs with respect to failing tests. The proposed technique works in two phases. In the first phase, a novel tree convolutional neural network is used to predict whether a program passes or fails the given test. In the second phase, we query a state-of-the-art neural prediction attribution technique to find out which lines of the program lead the network to predict failure, thereby localizing the bugs. This is a static bug-localization technique and requires neither the program instrumentation nor the multiple executions necessary for existing dynamic bug-localization techniques. Our experiments show that the proposed technique is competitive with two state-of-the-art program-spectrum based and one syntactic difference based bug-localization baselines.

All the techniques proposed in this thesis are programming language agnostic. We believe that the ideas and tools developed in this work can potentially be a road-map for future attempts at applying deep learning techniques to more problems in software engineering research.


Series: Department Seminar
Title: Security and Privacy of Connected Autonomous Vehicles

  • Speaker: Dr Vireshwar Kumar, Purdue university
  • Date and Time: Wednesday, February 26, 2020, 11:30 AM
  • Venue: CSA Lecture Hall (Room No. 112, Ground Floor)

Abstract
The upcoming smart transportation systems, which consist of connected autonomous vehicles, are poised to transform our everyday life. The sustainability and growth of these systems to their full potential will significantly depend on their robustness against security and privacy threats. Unfortunately, the communication protocols employed in these systems lack mainstream network security capabilities due to energy constraints of the deployed platforms and bandwidth constraints of the communication medium. In this talk, I will present the results of my efforts in anatomizing the two vital communication protocols employed in smart transportation: (1) the vehicle-to-everything (V2X) communication protocol, which is utilized to facilitate wireless communication among connected vehicles, and (2) the controller area network (CAN) protocol, which is utilized within an autonomous vehicle to enable real-time control of critical automotive components including brakes. For each of these two protocols, I will first describe the inquisitive approach which led to the discovery of new security vulnerabilities. Then, through experiments on real-world systems, I will demonstrate how these vulnerabilities can be exploited to launch malicious attacks which evade the state-of-the-art defense mechanisms employed in these systems. I will conclude the talk by discussing novel countermeasures which are required to mitigate these fundamental vulnerabilities and prevent their exploitation.

Speaker Bio:
Dr. Vireshwar Kumar is a Postdoctoral Research Associate in the Department of Computer Science at Purdue University. Vireshwar earned his B.Tech. in Electrical Engineering at Indian Institute of Technology Delhi in 2009, and Ph.D. degree in Computer Engineering at Virginia Tech in 2016. He was the recipient of the outstanding Ph.D. student award by the Center for Embedded Systems for Critical Applications at Virginia Tech. He also had a short stint as a Project Assistant in the Department of Electrical Communication Engineering at Indian Institute of Science in 2010. His research interests include discovering and mitigating security vulnerabilities in the communication protocols employed in cyber-physical systems, e.g., smart home, smart transportation and smart city. Vireshwar’s research work has featured in top-tier security venues including ACM Conference on Computer and Communications Security (CCS) and IEEE Transactions on Information Forensics and Security (TIFS). He has also served on the TPC of flagship conferences including IEEE Conference on Communications and Network Security (CNS) and IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN).

Host Faculty: Prof. Matthew Jacob


Series: Ph.D. Colloquium
Title: On the Round Complexity Landscape of Secure Multi-party Computation

  • Speaker: Ms. Divya Ravi
                   Ph.D student
                   Dept. of CSA
  • Faculty Advisor: Dr. Arpita Patra
  • Date and Time: Wednesday, February 26, 2020, 10:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
In secure multi-party computation (MPC), n parties wish to jointly perform a computation on their private inputs in a secure way, so that no adversary A corrupting a coalition of t among the n parties can learn anything beyond their own outputs (privacy), nor affect the outputs of the computation other than by choosing their own inputs (correctness). The round complexity of MPC protocols is a fundamental question in the area of secure computation and its study constitutes a phenomenal body of work in the MPC literature. The research goal of this thesis is to advance the state of the art by expanding this study of round complexity to various adversarial settings and network models. The questions addressed in the thesis are of both theoretical and practical importance.

In this talk, we first present a high-level overview of our results which involves establishing new lower bounds on the round complexity of MPC under various settings and constructing upper bounds with optimal round complexity. We then elaborate on one of our recent results that focuses on the exact round complexity of fair and robust MPC protocols in a generalized adversarial setting.

Fairness (the adversary obtains the output if and only if the honest parties do) and robustness (the adversary cannot prevent honest parties from obtaining the output) are two of the most sought-after properties of MPC protocols. Achieving both, however, brings in the necessary requirement that at most a minority of the parties can be actively corrupt (where actively corrupt parties are allowed to deviate from the protocol in any arbitrary manner). In a generalized adversarial setting where the adversary is allowed to corrupt both actively and passively (a weaker adversarial model where corrupt parties follow the protocol specification but try to learn more information than they are supposed to know), the necessary bound for an n-party fair or robust protocol turns out to be t_a + t_p < n, where t_a and t_p denote the thresholds for active and passive corruption, with the latter subsuming the former. Subsuming active minority as a boundary special case, this setting, denoted as dynamic corruption, opens up a range of possible corruption scenarios for the adversary. While dynamic corruption includes the entire range of thresholds for (t_a, t_p) starting from (ceil(n/2) − 1, floor(n/2)) to (0, n − 1), boundary corruption restricts the adversary to only the boundary cases (ceil(n/2) − 1, floor(n/2)) and (0, n − 1). Notably, both corruption settings empower the adversary to control a majority of the parties, while ensuring that the number of actively corrupt parties never exceeds ceil(n/2) − 1. We target the round complexity of fair and robust MPC tolerating dynamic and boundary adversaries.
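To illustrate the corruption range, the snippet below (our illustration of the thresholds stated above, not from the thesis) enumerates the maximal dynamic-corruption pairs (t_a, t_p) with t_a + t_p = n − 1 and active corruption below n/2; the boundary setting keeps only the two endpoints of this list:

```python
import math

def dynamic_frontier(n):
    """Maximal (t_a, t_p) threshold pairs for dynamic corruption:
    t_a + t_p = n - 1, with the active count t_a kept below n/2."""
    return [(t_a, n - 1 - t_a) for t_a in range(math.ceil(n / 2))]

def boundary_cases(n):
    """Boundary corruption restricts the adversary to the two endpoints."""
    frontier = dynamic_frontier(n)
    return [frontier[0], frontier[-1]]

print(dynamic_frontier(5))  # [(0, 4), (1, 3), (2, 2)]
print(boundary_cases(5))    # [(0, 4), (2, 2)]
```

For n = 5, the endpoints (0, 4) and (2, 2) are exactly the (0, n − 1) and (ceil(n/2) − 1, floor(n/2)) cases of the text.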



References:

[1] Arpita Patra, Divya Ravi. On the Power of Hybrid Networks in Multi-Party Computation. IEEE Transactions on Information Theory 2018.

[2] Arpita Patra, Divya Ravi. On the Exact Round Complexity of Secure Three-Party Computation. CRYPTO 2018.

[3] Megha Byali, Arun Joseph, Arpita Patra, Divya Ravi. Fast Secure Computation for Small Population over the Internet. ACM CCS 2018.

[4] Arpita Patra, Divya Ravi, Swati Singla. On the Exact Round Complexity of Best-of-both-Worlds Multi-party Computation. Under Submission.

[5] Arpita Patra, Divya Ravi. Beyond Honest Majority: The Round Complexity of Fair and Robust Multi-party Computation. ASIACRYPT 2019.


Series: Ph.D. Colloquium
Title: Stochastic Approximation in Optimization, Estimation and Reinforcement Learning

  • Speaker: Chandramouli Kamanchi
  • Faculty Advisor: Prof. Shalabh Bhatnagar
  • Date and Time: Monday, February 24, 2020, 4:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Stochastic approximation deals with the problem of finding zeros of a function expressed as an expectation of a random variable. In this thesis we study and analyze the convergence of stochastic approximation algorithms in the context of optimization under uncertainty, statistical estimation, and learning, in particular reinforcement learning. Moreover, we also explore second-order methods in the context of reinforcement learning algorithms.

Stochastic optimization (SO) considers the problem of optimizing an objective function in the presence of noise. Most of the solution techniques in SO estimate gradients from the noise-corrupted observations of the objective and adjust parameters of the objective along the direction of the estimated gradients to obtain locally optimal solutions. Two prominent algorithms in SO, namely Random Direction Kiefer-Wolfowitz (RDKW) and Simultaneous Perturbation Stochastic Approximation (SPSA), obtain a noisy gradient estimate by randomly perturbing all the parameters simultaneously. This forces the search direction to be random in these algorithms and causes them to suffer additional noise on top of the noise incurred from the samples of the objective. Owing to this additional noise, the idea of using deterministic perturbations instead of random perturbations for gradient estimation has also been studied. Two specific constructions of the deterministic perturbation sequence, using lexicographical ordering and Hadamard matrices, have been explored and encouraging results have been reported in the literature. In this thesis, we characterize the class of deterministic perturbation sequences that can be utilized in the RDKW algorithm. This class expands the set of deterministic perturbation sequences known in the literature. Using our characterization, we propose a construction of a deterministic perturbation sequence that has the least possible cycle length among all deterministic perturbations. Through simulations we illustrate the performance gain of the proposed deterministic perturbation sequence in the RDKW algorithm over the Hadamard and random perturbation counterparts. We establish the convergence of the RDKW algorithm for the generalized class of deterministic perturbations using stochastic approximation techniques.
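The role of Hadamard-matrix perturbations can be illustrated with a small two-sided gradient estimator (our sketch of the general idea, not the thesis's construction): cycling through the ±1 rows of a Sylvester Hadamard matrix and averaging cancels the cross terms, so for a quadratic objective the full cycle recovers the exact gradient:

```python
def sylvester_hadamard(m):
    """2^m x 2^m Hadamard matrix via the Sylvester construction."""
    H = [[1]]
    for _ in range(m):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def gradient_estimate(f, x, delta, c=0.1):
    """Two-sided simultaneous-perturbation gradient estimate along delta."""
    xp = [xi + c * di for xi, di in zip(x, delta)]
    xm = [xi - c * di for xi, di in zip(x, delta)]
    diff = (f(xp) - f(xm)) / (2 * c)
    return [diff / di for di in delta]  # di in {+1, -1}, so 1/di == di

def averaged_estimate(f, x, H):
    """Average the estimate over one full cycle of Hadamard perturbations."""
    g = [0.0] * len(x)
    for delta in H:
        e = gradient_estimate(f, x, delta)
        g = [gi + ei / len(H) for gi, ei in zip(g, e)]
    return g

# Quadratic objective: the cross terms cancel exactly over a full cycle,
# by orthogonality of the Hadamard columns.
f = lambda x: sum((xi - 1.0) ** 2 for xi in x)
x = [3.0, -2.0, 0.5, 4.0]
g = averaged_estimate(f, x, sylvester_hadamard(2))
# g is (numerically) the true gradient [4.0, -6.0, -1.0, 6.0]
```

For noisy, non-quadratic objectives the cancellation is only approximate, which is where the cycle-length and convergence analysis of the thesis comes in.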

In statistical estimation, one of the popular measures of central tendency, which provides better representation and more interesting insights into the data than measures like the mean and the median, is the mode. If the analytical form of the density function is known, the mode is an argument of the maximum value of the density function and one can apply optimization techniques to find it. In many practical applications, however, the analytical form of the density is not known and only samples from the distribution are available. Most of the techniques proposed in the literature for estimating the mode from samples assume that all the samples are available beforehand. Moreover, some of the techniques employ computationally expensive operations like sorting. In this thesis we provide a computationally effective, online iterative algorithm that estimates the mode of a unimodal smooth density given only the samples generated from the density. Asymptotic convergence of the proposed algorithm is established using stochastic approximation techniques. We also prove the stability of the mode estimates by utilizing the concept of regularization. Experimental results further demonstrate the effectiveness of the proposed algorithm.
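A generic stochastic-approximation flavour of online mode tracking can be sketched as follows. This is our toy illustration under a Gaussian kernel weighting; the thesis algorithm, its step sizes, and its regularization are different and more careful. Each incoming sample nudges the estimate toward itself, down-weighted by how far it is from the current estimate:

```python
import math
import random

def online_mode(samples, h=1.0):
    """Robbins-Monro style mode tracker: each incoming sample pulls the
    estimate toward itself, with a Gaussian kernel down-weighting samples
    far from the current estimate so the iterate settles near the mode."""
    x = samples[0]
    for k, s in enumerate(samples[1:], start=1):
        a_k = 0.5 / k ** 0.7                          # decaying step size
        w = math.exp(-((s - x) ** 2) / (2 * h * h))   # kernel weight
        x += a_k * w * (s - x)
    return x

rng = random.Random(0)
data = [rng.gauss(3.0, 1.0) for _ in range(20000)]  # unimodal, mode at 3
estimate = online_mode(data)
```

Unlike batch mode estimators, the iterate processes one sample at a time and never stores or sorts the stream.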

In a discounted reward Markov Decision Process (MDP), the objective is to find the optimal value function, i.e., the value function corresponding to an optimal policy. This problem reduces to solving a functional equation known as the Bellman equation, and a fixed point iteration scheme known as value iteration is utilized to obtain the solution. In the literature, a successive over-relaxation based value iteration scheme has been proposed to speed up the computation of the optimal value function. The speed-up is achieved by constructing a modified Bellman equation that ensures faster convergence to the optimal value function. However, in many practical applications, the model information is not known and we resort to Reinforcement Learning (RL) algorithms to obtain the optimal policy and value function. One such popular algorithm is Q-learning. In this work, we propose Successive Over-Relaxation (SOR) Q-learning. We first derive a modified fixed point iteration for SOR Q-values and utilize stochastic approximation to derive a learning algorithm that computes the optimal value function and an optimal policy. We then prove the almost sure convergence of SOR Q-learning to the SOR Q-values. Finally, through numerical experiments, we show that SOR Q-learning is faster than the standard Q-learning algorithm.

Value iteration is a fixed-point iteration technique utilized to obtain the optimal value function and policy in a discounted reward Markov Decision Process (MDP). Here, a contraction operator is constructed and applied repeatedly to arrive at the optimal solution. Value iteration is a first-order method, and therefore it may take a large number of iterations to converge to the optimal solution. Successive relaxation is a popular technique for solving fixed-point equations, and it has been shown in the literature that, under a special structure of the MDP, the successive over-relaxation technique computes the optimal value function faster than standard value iteration. In this work, we propose a second-order value iteration procedure obtained by applying the Newton-Raphson method to the successive relaxation value iteration scheme. We prove the global convergence of our algorithm to the optimal solution asymptotically and show that the convergence is second order. Through experiments, we demonstrate the effectiveness of our proposed approach.
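For reference, the first-order baseline that the second-order scheme improves upon looks as follows. This is a generic value-iteration sketch, not the talk's algorithm; the (A, S, S) transition-tensor layout and the toy two-state chain are illustrative assumptions.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-10, max_iter=100_000):
    """Plain (first-order) value iteration: apply the Bellman optimality
    operator repeatedly until its fixed point is reached.
    P has shape (A, S, S) with P[a, s, t] = Pr(t | s, a); R has shape (A, S)."""
    n_states = P.shape[1]
    V = np.zeros(n_states)
    for _ in range(max_iter):
        Q = R + gamma * np.einsum('ast,t->as', P, V)  # one Bellman backup
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V_new

# Two-state chain: action 0 stays put (reward 0); action 1 jumps to
# state 1, earning reward 1 from state 0 and 0 from state 1.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [0.0, 1.0]]])
R = np.array([[0.0, 0.0],
              [1.0, 0.0]])
V = value_iteration(P, R, gamma=0.9)
```

Each sweep contracts the error by a factor of at most gamma, which is exactly the linear (first-order) convergence the abstract contrasts with its Newton-Raphson-based second-order scheme.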

Speaker Bio:
Chandramouli Kamanchi is a senior Ph.D. student in the Department of Computer Science and Automation, IISc.

Video Archive Go Top

 

Series: Department Seminar
Title: Efficient Distance Approximation for Structured High-Dimensional Distributions via Learning

  • Speaker: Dr. Arnab Bhattacharyya
                   (NUS Singapore, School of Computing)
  • Date and Time: Tuesday, February 18, 2020, 11:15 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
We design efficient distance approximation algorithms for several classes of structured high-dimensional distributions. Specifically, we show algorithms for the following problems:

– Given sample access to two Bayes networks P1 and P2 over known directed acyclic graphs G1 and G2 having n nodes and bounded in-degree, approximate the total variation distance dTV(P1, P2) to within additive error ε using poly(n, 1/ε) samples and time

– Given sample access to two ferromagnetic Ising models P1 and P2 on n variables with bounded width, approximate dTV(P1, P2) to within additive error ε using poly(n, 1/ε) samples and time

– Given sample access to two n-dimensional Gaussians P1 and P2, approximate dTV(P1, P2) to within additive error ε using poly(n, 1/ε) samples and time

– Given access to observations from two causal models P and Q on n variables that are defined over known causal graphs, approximate dTV(Pa, Qa) to within additive error ε using poly(n, 1/ε) samples, where Pa and Qa are the interventional distributions obtained by the intervention do(A = a) on P and Q respectively, for a particular variable A.

Our results are the first efficient distance approximation algorithms for these well-studied problems. They are derived using a simple and general connection to distribution learning algorithms. The distance approximation algorithms imply new efficient algorithms for tolerant testing of closeness of the above-mentioned structured high-dimensional distributions.
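The learn-then-estimate connection can be illustrated in one dimension (the actual results above are for n-dimensional Gaussians; the 1-D restriction, the estimators, and the Monte Carlo identity dTV(P, Q) = E_{x~P}[max(0, 1 - q(x)/p(x))] used below are a hedged sketch, not the paper's algorithm): first learn each distribution from its samples, then estimate the distance between the two learned densities.

```python
import numpy as np

def fit_gaussian(samples):
    """'Learning' step: estimate (mean, std) from samples."""
    return samples.mean(), samples.std()

def tv_between_learned_gaussians(x1, x2, n_mc=200_000, seed=0):
    """Learn 1-D Gaussians from the two sample sets, then Monte Carlo
    the identity d_TV(P, Q) = E_{x~P}[ max(0, 1 - q(x)/p(x)) ]."""
    m1, s1 = fit_gaussian(x1)
    m2, s2 = fit_gaussian(x2)
    rng = np.random.default_rng(seed)
    x = rng.normal(m1, s1, n_mc)                       # x ~ learned P
    p = np.exp(-((x - m1) ** 2) / (2 * s1 ** 2)) / s1  # densities up to the
    q = np.exp(-((x - m2) ** 2) / (2 * s2 ** 2)) / s2  # common 1/sqrt(2*pi)
    return np.mean(np.maximum(0.0, 1.0 - q / p))

rng = np.random.default_rng(1)
est = tv_between_learned_gaussians(rng.normal(0.0, 1.0, 50_000),
                                   rng.normal(1.0, 1.0, 50_000))
```

For N(0, 1) versus N(1, 1) the true total variation distance is 2*Phi(1/2) - 1, roughly 0.383, and the learned-then-sampled estimate lands close to it.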

(based on a joint work with Sutanu Gayen, Kuldeep S. Meel and N. V. Vinodchandran)

Host Faculty: Sunil Chandran L


 

Series: Department Seminar
Title: Towards Optimal Secure Computation Protocols

  • Speaker: Mr. Akshayaram Srinivasan
                   University of California
                   Berkeley
  • Date and Time: Wednesday, February 05, 2020, 4:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Secure computation allows a set of mutually distrusting parties to compute a joint function of their private inputs such that the parties only learn the output of the functionality and nothing else about the inputs of the other parties. Secure computation is one of the central primitives in cryptography that encompasses several cryptographic abstractions and has many practical applications. The seminal results from the 1980s showed that every efficiently computable functionality can also be computed securely. However, these protocols were prohibitively inefficient and could only be considered as feasibility results. One of the central problems in cryptography is to construct secure computation protocols that are optimal in all efficiency parameters. In this talk, I will give an overview of my recent works that make progress towards constructing such optimal secure computation protocols.

Speaker Bio:
Akshayaram Srinivasan is a final-year Ph.D. student in the theory group at UC Berkeley. He received his B.Tech in Computer Science and Engineering from IIT-Madras in 2015. He is broadly interested in theoretical computer science and, in particular, in the theory and applications of cryptography. He has published research papers in top conferences in theoretical computer science and cryptography, such as the Symposium on Foundations of Computer Science (FOCS), Crypto, Eurocrypt, and TCC. His research has been recognized with the best paper award at Eurocrypt 2018 and invitations to the Journal of Cryptology.

Host Faculty: Prof. Matthew Jacob T


 

Series: Ph.D. Thesis Defense
Title: Cryptographic Schemes for Predicate Evaluation and Search on Outsourced Data

  • Speaker: Mr. Sayantan Mukherjee
                   Ph.D student
                   Dept. of CSA
  • Faculty Advisor: Prof. Sanjit Chatterjee
  • Date and Time: Tuesday, February 04, 2020, 2:30 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Recent improvements in architecture and networking are fueling ever-growing interest in outsourcing data as well as computation over that data. However, this comes at the cost of trusting the service provider with user data and computation, a trust that is often violated without the user's knowledge. In this work, we propose a few cryptosystems that address various aspects of the cryptographic security of outsourced data. Evaluating a predicate function of interest in the encrypted domain is currently one of the challenging questions in this area. The first two works in this thesis focus on the evaluation of such predicate functions in the encrypted domain. We then move to the question of performing search on outsourced cloud data, which comes in both theoretical and practical flavors. We have pursued both directions and have come up with solutions for a few selected problems.

The first work deals with a simple access control mechanism where a message is encrypted with respect to a set S. A ciphertext can be decrypted with a secret key associated with a set T if and only if T is a subset of S. Such a scheme is called subset predicate encryption (SPE) and can be used as a building block for constructing wildcard IBE (WIBE), wicked IBE (WKD-IBE), and CP-ABE for DNF formula evaluation. We propose two constructions of SPE in the large-universe setting with constant-size keys. Our first SPE scheme achieves constant-size ciphertext, while the second scheme achieves a stronger security guarantee, namely adaptive security in the standard model.

Security against chosen-ciphertext attacks (CCA security) is the strongest notion of security and is often mandatory in practical scenarios. As the adversary in this model is active, devising a CCA-secure encryption scheme is often hard. A few techniques convert CPA-secure predicate encryption (PE) schemes to CCA-secure ones generically, but at a non-trivial cost that is typically proportional to the ciphertext size of the underlying CPA-secure scheme. We devise two generic constructions of CCA-secure PE from CPA-secure pair-encoding-based PE that incur only a small constant amount of additional cost.

Our first work on searching in the encrypted domain deals with two related problems of access control-based search. The first problem is to perform keyword search restricted to a privileged set. Available solutions to this problem achieve selective security under parameterized non-standard assumptions. We propose a solution to the so-called Broadcast Encryption with Keyword Search (BEKS) problem that is efficient and adaptively secure under a standard static assumption. The second problem takes motivation from encrypted email services, where a client might search whether a newly received mail contains any of the "specified" keywords. We propose a solution to this problem called Key-Aggregate Searchable Encryption (KASE) that allows searching without leaking any new information to the server. As the name suggests, our KASE construction enjoys constant-size secret keys and is proved adaptively secure under a standard static assumption.

On the more applied side, our next work presents a searchable encryption framework that can support a number of predicate functions simultaneously. To design the framework, we define a few encodings on top of the well-studied Hidden Vector Encryption (HVE). Our novelty lies in the optimization of these encodings, which results in better search complexity for several queries while supporting new query types beyond the state-of-the-art. This framework can be used to compute statistics on encrypted tabular data, such as relational database management systems.

A problem of recent interest is to perform wildcard search in the encrypted domain in the symmetric key setting. Precisely, the search should find the list of keywords w that match a wildcard query word w', where a match denotes wildcard pattern matching. The non-triviality of this problem follows from the unrestricted nature of the wildcards allowed in a query word. We reduce the problem of wildcard search to that of secure Boolean formula evaluation in the encrypted domain. This results in a protocol called HAWcS, a provably secure construction of wildcard search in the three-party setting. HAWcS performs orders of magnitude better than the state-of-the-art, as both our theoretical analysis and prototype implementation suggest.

The final work improves upon an existing technique for authenticated email search. In this problem, the cloud service provider that stores the outsourced data has to return an efficiently verifiable proof of the search result. The requirement on this protocol is that a thin client should be able to verify the correctness of the search it has asked the server to perform. We propose a more comprehensive solution for authenticated email search in the two-party setting. Our prototype implementation demonstrates the practicality of our proposal.


 

Series: Department Seminar
Title: Finding densest sub-graphs without flow computations

  • Speaker: Mr. Saurabh Sawlani
                   5th year Ph.D. Student
                   Algorithms, Combinatorics, and Optimization Program
                   Georgia Tech
  • Date and Time: Thursday, January 30, 2020, 4:00 PM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Detecting dense components is a major problem in graph mining, and many different notions of a dense subgraph are used in practice. The densest subgraph problem focuses on finding a subgraph with maximum "average degree". For the static densest subgraph problem, Charikar's greedy 2-approximation algorithm is simple and efficient, and hence is very popular in practice. On the other hand, finding the exact optimal solution involves a reduction to maximum flow, which incurs prohibitive computational costs for the massive graphs arising in modern data science applications. We devise an iterative algorithm that naturally generalizes the greedy algorithm, drawing insights from iterative approaches in convex optimization and exploiting the dual interpretation of the densest subgraph problem. Additionally, this dual interpretation also yields the first fully-dynamic data structure that maintains a (1+epsilon)-approximate densest subgraph, improving over the previous best factor of (4+epsilon). The talk focuses on these two results.
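Charikar's greedy 2-approximation mentioned above is simple enough to state in full: repeatedly delete a minimum-degree vertex and remember the intermediate subgraph of the largest density |E|/|V|. The sketch below uses a lazy-deletion heap; the data structure choices are our own, not the speaker's.

```python
import heapq
from collections import defaultdict

def greedy_densest_subgraph(edges):
    """Charikar's greedy 2-approximation: peel a minimum-degree vertex at
    each step and return the best density |E|/|V| seen, with its vertex set."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    vertices = set(adj)
    m = len(edges)
    heap = [(len(adj[v]), v) for v in vertices]
    heapq.heapify(heap)
    best_density, best_set = m / len(vertices), frozenset(vertices)
    while len(vertices) > 1:
        # pop the current minimum-degree vertex (skip stale heap entries)
        while True:
            deg, v = heapq.heappop(heap)
            if v in vertices and deg == len(adj[v]):
                break
        vertices.discard(v)
        m -= len(adj[v])
        for u in adj[v]:
            adj[u].discard(v)
            heapq.heappush(heap, (len(adj[u]), u))
        adj[v].clear()
        density = m / len(vertices)
        if density > best_density:
            best_density, best_set = density, frozenset(vertices)
    return best_density, best_set

# K4 with a pendant vertex: the densest subgraph is the K4 (density 6/4 = 1.5).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
density, subgraph = greedy_densest_subgraph(edges)
```

The peeling pass runs in near-linear time, which is exactly why this baseline is so popular compared to the exact maximum-flow reduction.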

Speaker Bio:
Saurabh Sawlani is a 5th year Ph.D. student in the Algorithms, Combinatorics, and Optimization program at Georgia Tech, advised by Dr. Richard Peng. He received his B.Tech and M.S. from IIT Madras. His research interests include randomized algorithms, graph theory, and scientific computing.

About the chapter: The aim of the IISc-SIAM student chapter is to encourage academic activities at IISc, such as technical talks, lecture series, and workshops, in areas at the intersection of mathematics, science, and technology.

Host Faculty: IISc-SIAM student chapter


 

Series: CSA Golden Jubilee Frontier Lecture Series
Title: 1. The Power of Encounters 2. Algorithmic fairness in online decision-making

  • Speaker: 1. Prof. Peter Druschel
                    Director
                    Max Planck Institute for Software Systems
                   
                   2. Prof. Avrim Blum
                    Professor and Chief Academic Officer.
                    Toyota Technological Institute at Chicago (TTIC)
                    Chicago, USA
  • Date and Time: Wednesday, January 29, 2020, 3:00 PM
  • Venue: Faculty Hall, Indian Institute of Science

Abstract
1. A secure encounter is an agreement by two anonymous devices to have met at a given time and place. An associated shared secret enables the devices to subsequently confirm their encounter and communicate securely. In this talk, I will sketch how this simple idea enables fascinating new forms of privacy-preserving, contextual, secure communication among personal and IoT devices, and enables users to produce selective evidence of their personhood and physical whereabouts. Encounters enable powerful forms of secure group communication among devices connected by chains of encounters, subject to spatial, temporal, and causality constraints. Applications range from connecting event attendees and virtual guest books to disseminating targeted health risk warnings, soliciting information and witnesses related to an incident, and tracing missing persons, all while maintaining users’ privacy. Encounters also enable selective proofs of device (co-)location at a given time and place. Finally, encounters can provide evidence of a unique physical trajectory, which suggests a human user and promises a new defense to Sybil attacks.

2. There is growing concern about fairness in algorithmic decision making: Are algorithmic decisions treating different groups fairly? How can we make them fairer? What do we even mean by fair? In this talk I will discuss some of our work on this topic, focusing on the setting of online decision making. For instance, a classic result states that given a collection of predictors, one can adaptively combine them to perform nearly as well as the best of those predictors in hindsight (achieve “no regret”) without any stochastic assumptions. Can one extend this guarantee so that if the predictors are themselves fair (according to a given definition), then the overall combination is fair as well (according to the same definition)? I will discuss this and other issues. This is joint work with Suriya Gunasekar, Thodoris Lykouris, and Nati Srebro.
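The "classic result" alluded to in the second abstract, combining predictors to compete with the best one in hindsight, is typically achieved by multiplicative weights (Hedge). Below is a minimal sketch of that baseline; the learning rate and the toy loss sequence are illustrative assumptions, and the fairness extension discussed in the talk is not reproduced here.

```python
import math

def hedge(losses_per_round, eta=0.1):
    """Multiplicative weights (Hedge): keep one weight per predictor,
    predict with the weighted average, and exponentially downweight
    predictors that incur loss.  With losses in [0, 1] this achieves
    regret O(sqrt(T log N)) against the best predictor in hindsight."""
    n = len(losses_per_round[0])
    w = [1.0] * n
    alg_loss = 0.0
    for losses in losses_per_round:
        total = sum(w)
        alg_loss += sum(wi * li for wi, li in zip(w, losses)) / total
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    best_loss = min(sum(col) for col in zip(*losses_per_round))
    return alg_loss, best_loss

# Two predictors: one always right (loss 0), one always wrong (loss 1).
alg_loss, best_loss = hedge([[0.0, 1.0]] * 200, eta=0.5)
```

Over 200 rounds the algorithm's cumulative loss stays within a small additive regret of the best predictor's, with no stochastic assumptions on how the losses are generated.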

Speaker Bio:
1. Peter Druschel is the founding director of the Max Planck Institute for Software Systems (MPI-SWS) and Associate Chair of the Chemistry, Physics, and Technology Section of the Max Planck Society in Germany. Previously, he was a Professor of Computer Science and Electrical and Computer Engineering at Rice University in Houston, Texas. His research interests include distributed systems, mobile systems, privacy, and compliance. He is the recipient of an NSF CAREER Award, an Alfred P. Sloan Fellowship, the ACM SIGOPS Mark Weiser Award, a Microsoft Research Outstanding Collaborator Award, and the EuroSys Lifetime Achievement Award. Peter is a member of Academia Europaea and the German Academy of Sciences Leopoldina.

2. Avrim Blum received his BS, MS, and PhD from MIT in 1987, 1989, and 1991 respectively. He then served on the faculty in the Computer Science Department at Carnegie Mellon University from 1992 to 2017. In 2017 he joined the Toyota Technological Institute at Chicago as Chief Academic Officer. Prof. Blum's main research interests are in Theoretical Computer Science and Machine Learning, including Machine Learning Theory, Approximation Algorithms, Algorithmic Game Theory, and Database Privacy, as well as connections among them. Some current specific interests include multi-agent learning, multi-task learning, semi-supervised learning, and the design of incentive systems. He is also known for his past work in AI Planning. Prof. Blum has served as Program Chair for the IEEE Symposium on Foundations of Computer Science (FOCS), the Innovations in Theoretical Computer Science Conference (ITCS), and the Conference on Learning Theory (COLT). He has served as Chair of the ACM SIGACT Committee for the Advancement of Theoretical Computer Science and on the SIGACT Executive Committee. Prof. Blum is a recipient of the AI Journal Classic Paper Award, the ICML/COLT 10-Year Best Paper Award, the Sloan Fellowship, the NSF National Young Investigator Award, and the Herbert Simon Teaching Award, and he is a Fellow of the ACM.

Host Faculty: Prof. Shalabh Bhatnagar


 

Series: M.Tech (Research) Colloquium
Title: Modern Combinatorial Optimization Problems: Balanced Clustering and Fair Knapsack

  • Speaker: Mr. Deval Patel
                   M.Tech (Research)
                   Dept. of CSA
  • Faculty Advisor: Dr. Anand Louis
  • Date and Time: Monday, January 27, 2020, 9:00 AM
  • Venue: CSA Class (Room No. 252, First Floor)

Abstract
In many classical optimization problems, it is desirable to have an output that is equitable in some sense; what equitability means can differ across optimization problems. We study this property in two classical optimization problems: clustering and knapsack. In the clustering problem, we want the cost of the clustering to be evenly distributed among the clusters; we study this problem under the name cost-balanced k-clustering. In the knapsack problem, we want a packing that is fair in some sense; we study this problem under the name fair knapsack.

In most clustering objectives, such as k-median or k-center, the cost of assigning a client to a cluster is considered to be borne by the client. Algorithms optimizing such objectives might output a solution in which a few clusters have very large cost and a few clusters have very small cost. The cost-balanced k-clustering problem aims to obtain a clustering that is cost balanced. We consider the objective of minimizing the maximum cost over clusters, where the cost of a cluster is the sum of the distances of all points in that cluster from its center. We define the notion of γ-stability, for γ > 1, for the problem and give a polynomial-time algorithm for 1.5-stable instances of the problem; we also give a hardness result. In addition, we define a more general version of the problem, which we name "lp" cost-balanced k-clustering. Given a black-box algorithm that gives a constant-factor approximation to the "lp" cost k-clustering problem, we show a polynomial-time procedure that gives a bi-criteria approximation to the "lp" cost-balanced k-clustering problem.
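The difference between the two objectives is easiest to see on a tiny instance. The sketch below (1-D points, fixed centers and assignment, all hypothetical) just evaluates both: the cost-balanced objective is the maximum per-cluster cost, while k-median sums them.

```python
def cluster_costs(points, centers, assignment):
    """Per-cluster cost as in the abstract: the cost of a cluster is the
    sum of distances of its points to that cluster's center (1-D here
    for brevity)."""
    costs = [0.0] * len(centers)
    for p, c in zip(points, assignment):
        costs[c] += abs(p - centers[c])
    return costs

points = [0.0, 1.0, 2.0, 10.0, 11.0, 30.0]
centers = [1.0, 10.5, 30.0]
assignment = [0, 0, 0, 1, 1, 2]

costs = cluster_costs(points, centers, assignment)
balanced_obj = max(costs)  # cost-balanced k-clustering objective
kmedian_obj = sum(costs)   # classical k-median objective
```

Two clusterings can tie on the k-median sum while differing sharply on the max, which is exactly the imbalance the cost-balanced objective penalizes.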

In this work, we also consider the notion of group fairness in knapsack problems. In this setting, each item belongs to a category, and our goal is to compute a subset of items such that each category is fairly represented, in addition to the total weight of the subset not exceeding the capacity, and the total value of the subset being maximized. We study various notions of group fairness, such as the number of items from each category, the total value of items from each category, and the total weight of items from each category. We give algorithms and hardness results for these problems.


 

Series: M.Tech (Research) Colloquium
Title: Fault Aware Read-Copy-Update

  • Speaker: Mr. Abhishek Dubey
                   M.Tech. (Research) Student
                   Dept. of CSA
  • Faculty Advisor: Prof. K Gopinath
  • Date and Time: Friday, January 24, 2020, 11:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Deferred freeing is the fundamental technique used in the Read-Copy-Update (RCU) synchronization mechanism, where reclamation of resources is deferred until the completion of all active RCU read-side critical sections. We observe that faults inside an RCU read-side critical section can indefinitely block writers that are waiting for the completion of RCU readers, and can also lead to system failures by preventing the reclamation of deferred resources. We show that the impact of such faults in the Linux kernel is global; a fault in one subsystem can propagate and exhaust critical resources in other, unrelated subsystems, opening a window of opportunity for DoS-based attacks. For example, a fault in a filesystem can exhaust the process ulimit, resulting in fork failures. Since guaranteeing the absence of faults is practically impossible, it is imperative to harden RCU to tolerate faults.

We first study the impact of mitigating lockups by terminating the faulty thread, as thread termination is the standard recovery strategy used by Linux. We demonstrate the impact of faults in RCU read-side critical sections and present RCU recovery techniques that use novel approaches to detect and isolate the effects of such faults. We also discuss system consistency once the fault is handled by our approach. Timely recovery results in a usable system, preserving the user application state and increasing the system's availability. Our evaluation in the Linux kernel shows that our solution can prevent resource exhaustion in the presence of faults, with no additional overhead in their absence.


 

Series: CSA Golden Jubilee Frontier Lecture Series
Title: Law and Algorithms: A Cryptographic Lens

  • Speaker: Professor Ran Canetti
                   IACR Fellow
                   Boston University
  • Date and Time: Thursday, January 23, 2020, 4:00 PM
  • Venue: Faculty Hall, Indian Institute of Science

Abstract
Computer Science has had a complex relationship with Law since its early days. On the one hand, the disciplines are similar in that they both have deep theoretical roots and at the same time are all-encompassing and very applied. On the other hand, one discipline is founded on mathematics, while the other is purely humanistic in nature. Traditionally, the main points of contact between the disciplines were centered around intellectual property for algorithms (software and hardware), and regulating the use and sale of products that include encryption algorithms. Recently, however, many more meeting points have emerged, including (but certainly not limited to) regulating the use of statistical risk-assessment and prediction algorithms; applying traditionally humanistic concepts such as privacy, bias, transparency, or individuality to algorithms; adjudicating and balancing the protection, sharing, and confinement of data; and determining algorithmic intent, awareness, and liability in automated contracts. Furthermore, cryptographic thinking and tools such as computation over encrypted data, multiparty computation, and zero-knowledge proofs emerge as game-changers in various realistic legal scenarios. This talk is a personal account of the emerging landscape of "Law and Algorithms", shaped by my interactions with Law scholars and fellow Computer Scientists in the past few years. Three classes co-taught to a mixed audience of CS and Law students, where we tried to build bridges between the disciplines, were particularly influential, and they also provide starting points for exciting new research. The classes were co-taught with Daniela Caruso, Aloni Cohen, Stacey Dogan, Cynthia Dwork, Ahmed Ghappour, Shafi Goldwasser, Martha Minow, Frank Partnoy, and Pat Williams.

Speaker Bio:
Ran Canetti is a professor of Computer Science and the director of the Center for Reliable Information Systems and Cyber Security at Boston University. He is also a Fellow of the International Association for Cryptologic Research and a recipient of the 2018 RSA Award in Mathematics. Canetti graduated from the Weizmann Institute of Science in 1996, was a researcher at the IBM Watson Research Center, and was a professor of Computer Science at Tel Aviv University. Canetti's research interests span multiple aspects of cryptography and information security, with emphasis on the design, analysis, and use of cryptographic protocols. Recently he has been interested in finding common ground for Law and Cryptography to collaborate towards enabling a better future for society.

Host Faculty: Prof. Shalabh Bhatnagar


 

Series: Theory Seminar Series
Title: Rethinking the role of optimization in learning.

  • Speaker: Dr. Suriya Gunasekar
                   Senior Researcher
                   ML & Optimization Group
                   Microsoft Research
                   Redmond
  • Date and Time: Thursday, January 23, 2020, 11:30 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
In this talk, I will give an overview of our recent progress towards understanding how we learn large-capacity machine learning models. In the modern practice of machine learning, especially deep learning, many successful models have far more trainable parameters than training examples. Consequently, the optimization objective for training such models has multiple minimizers that perfectly fit the training data. More problematically, while some of these minimizers generalize well to new examples, most minimizers will simply overfit or memorize the training data and will perform poorly on new examples. In practice, though, when such ill-posed objectives are minimized using local search algorithms like (stochastic) gradient descent ((S)GD), the "special" minimizers returned by these algorithms have remarkably good performance on new examples. In this talk, we will explore the role of optimization algorithms like (S)GD in learning overparameterized models in the simpler setting of learning linear predictors.
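One concrete instance of this "special minimizer" phenomenon, in the linear-predictor setting the talk mentions, is that gradient descent on an underdetermined least-squares problem, started from zero, converges to the minimum-Euclidean-norm interpolating solution. A small numerical sketch (the problem sizes, learning rate, and iteration count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 50                      # far more parameters than examples
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Plain gradient descent on the squared loss, initialized at zero.
w = np.zeros(d)
lr = 0.005
for _ in range(50_000):
    w -= lr * X.T @ (X @ w - y)

# Among the infinitely many zero-training-loss solutions, GD's iterates
# stay in the row space of X, so it converges to the minimum-norm
# interpolant, i.e. the pseudoinverse solution.
w_min_norm = np.linalg.pinv(X) @ y
```

Every zero-loss solution fits the data equally well, yet the algorithm's implicit bias picks out a particular one; characterizing that bias for richer models and losses is the theme of the talk.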

Speaker Bio:
Suriya Gunasekar is a senior researcher in the ML & Optimization group at Microsoft Research, Redmond. Previously, she was a research assistant professor at the Toyota Technological Institute at Chicago. Her research interests are broadly driven by statistical, algorithmic, and societal aspects of machine learning including topics of optimization, high dimensional learning, and algorithmic fairness.

Host Faculty: Dr. Anand Louis


 

Series: M.Tech (Research) Colloquium
Title: Model Extraction and Active Learning

  • Speaker: Mr. Aditya Shukla
                   M.Tech (Research) Student
                   Dept. of CSA
  • Faculty Advisor: Prof. Vinod Ganapathy
  • Date and Time: Wednesday, January 22, 2020, 10:00 AM
  • Venue: CSA Seminar Hall (Room No. 254, First Floor)

Abstract
Machine learning models trained on a confidential dataset are increasingly being deployed for profit. Machine Learning as a Service (MLaaS) has made such models easily accessible to end-users, who can use a model directly as a black-box module: query an input sample and get its corresponding prediction. Prior work has shown that it is possible to extract these models, by developing model extraction attacks that construct an approximation of the MLaaS model using black-box queries to it. However, none of these attacks satisfies all four criteria essential for practical model extraction: (i) the ability to extract deep learning models, (ii) no requirement of domain knowledge, (iii) the ability to work with a limited query budget, and (iv) no requirement of annotations. In this work, we propose a novel model extraction framework that makes use of existing active learning techniques and unannotated public data to satisfy all four. By using only 30% (30,000 samples) of the unannotated public data, our model extraction framework on average achieves a performance of 4.70x over a uniform-noise baseline.
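The basic query-then-fit loop behind such attacks can be sketched on a toy scale. Everything below is hypothetical: a linear-threshold "victim" stands in for the MLaaS model, an unannotated pool of random vectors stands in for public data, and a 1-nearest-neighbour surrogate stands in for the deep model plus active learning of the actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)
w_secret = rng.normal(size=5)

def victim(x):
    """Black-box MLaaS model: the attacker only sees predicted labels."""
    return int(x @ w_secret > 0)

# Extraction loop: spend a limited query budget on unannotated 'public'
# data, recording (input, victim-label) pairs to train a surrogate.
public_pool = rng.normal(size=(2000, 5))
queried_x = public_pool[:500]                  # limited query budget
queried_y = np.array([victim(x) for x in queried_x])

def surrogate(x):
    """Toy surrogate: predict the label of the nearest queried point."""
    i = np.argmin(np.linalg.norm(queried_x - x, axis=1))
    return queried_y[i]

# Agreement between surrogate and victim on held-out public data.
test = public_pool[500:1000]
agreement = np.mean([surrogate(x) == victim(x) for x in test])
```

Even this naive surrogate agrees with the victim on most held-out inputs; the framework in the abstract improves on exactly this loop by choosing which points to query (active learning) rather than taking the first ones from the pool.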

We further introduce an ensemble active learning technique by combining two existing state-of-the-art active learning techniques, i.e., DeepFool-based Active Learning (DFAL) and Coreset active learning. We empirically show that the ensemble active learning technique generally performs better than DFAL, and it is the winner in the majority of our experiments.

Finally, we show that our proposed model extraction attack cannot be detected by a state-of-the-art detection method, PRADA, that monitors the distribution of distances between queries for deviation from the normal distribution.

Video Archive Go Top

Copyright: CSA, IISc 2018      Phone: +91-80-22932368          Fax: +91-80-23602911