Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India

UPCOMING SEMINARS

Series: M.Tech (Research) Colloquium - ON-LINE
Title: A Trusted-Hardware Backed Secure Payments Platform for Android

  • Speaker: Mr. Rounak Agarwal
                   M.Tech (Research) student
                   Dept. of CSA
  • Faculty Advisor: Prof. Vinod Ganapathy
  • Date and Time: Friday, April 23, 2021, 3:00 PM
  • Venue: Microsoft Teams - ON-LINE: https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZWQ3YzBmODUtMzFlZS00NDgzLTk0NTEtZjAyOGFjNWUwNTMw%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%220432f3a0-d225-405c-b0f4-ff1ffaf4f1fd%22%7d

Abstract
Digital payments using personal electronic devices have been steadily gaining in popularity over the last few years. While digital payments using smartphones are very convenient, they are also more susceptible to security vulnerabilities. Unlike devices dedicated to payments (e.g., POS terminals), modern smartphones present a large attack surface, due to the many apps installed for various use cases and a complex, feature-rich smartphone OS. Because it is the most popular smartphone OS by a huge margin, Android is the primary target of attackers. Although the security guarantees provided by the Android platform have improved significantly with each new release, we still see new vulnerabilities being reported every month. Vulnerabilities in the underlying Linux kernel are particularly dangerous because of their severe impact on app security. To protect against a compromised kernel, some critical functions of the Android platform, such as cryptography and local user authentication, have been moved to a Trusted Execution Environment (TEE) in the last few releases. But the Android platform does not yet provide a way to protect a user's confidential input meant for a remote server from a compromised kernel. Our work aims to address this gap in Android's use of TEEs for app security. We have designed a framework that leverages a TEE to protect a user's confidential input, and we have shown how this framework can be used to improve the security of digital payments.

Microsoft Teams link:

https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZWQ3YzBmODUtMzFlZS00NDgzLTk0NTEtZjAyOGFjNWUwNTMw%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%220432f3a0-d225-405c-b0f4-ff1ffaf4f1fd%22%7d


 

PAST SEMINARS

Series: M.Tech (Research) Colloquium - ONLINE
Title: A Multi-Policy Reinforcement Learning Framework for Autonomous Navigation

  • Speaker: Mr. Rajarshi Banerjee
                   M.Tech (Research) Student
                   Dept. of CSA
  • Faculty Advisor: Prof. Ambedkar Dukkipati
  • Date and Time: Wednesday, April 07, 2021, 11:00 AM
  • Venue: Microsoft Teams - ONLINE

Abstract
Reinforcement Learning (RL) is the process of training an agent to take a sequence of actions with the prime objective of maximizing the rewards it obtains from an environment. Deep RL uses the same approach, with a deep neural network parameterizing the policy. Temporal abstraction in RL means learning useful and generalizable skills, which are often necessary for solving complex tasks in various environments of practical interest. One such domain is the longstanding problem of autonomous vehicle navigation. In this work, we focus on learning complex skills in such environments, where the agent has to learn a high-level policy by leveraging multiple skills inside an environment that presents various challenges.

Multi-policy reinforcement learning algorithms like the Option-Critic framework require an exorbitant amount of time for their policies to converge. Even when they do converge, the policy over options has a broad tendency to choose a single sub-policy exclusively, rendering the other policies moot. In contrast, our approach iteratively learns new policies that complement the previously learned ones.

To conduct the experiments, a custom simulated 3D navigation environment was developed where the agent is a vehicle that has to learn a policy by which it can avoid a collision. This is complicated because, in some scenarios, the agent needs to infer certain abstract meaning from the environment to make sense of it while learning from a reward signal that becomes increasingly sparse.

In this thesis, we introduce the `Stay Alive' approach, in which the agent's objective is to prolong the episode for as long as possible. Skills are learned sequentially and added to an overall set, without using an overarching hierarchical policy. The general idea behind our approach comes from the fact that both animals and human beings build meaningful skills on previously acquired skills to better adapt to their respective environments.
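As a toy illustration of the sequential skill-addition idea (this is not the thesis's actual training loop; `train_skill` and `evaluate` are hypothetical stand-ins for skill learning and episode-length evaluation):

```python
def stay_alive_training(train_skill, evaluate, target_len, max_skills=5):
    """Sequentially grow a skill set: train a new skill conditioned on the
    skills learned so far, and stop once episodes last long enough."""
    skills = []
    while len(skills) < max_skills and evaluate(skills) < target_len:
        skills.append(train_skill(skills))
    return skills

# Toy stand-ins: each learned skill adds 10 steps of survival.
toy_train = lambda skills: f"skill_{len(skills)}"
toy_eval = lambda skills: 10 * len(skills)

learned = stay_alive_training(toy_train, toy_eval, target_len=30)
```

Note that no policy-over-policies is trained: each new skill is simply learned in the context of the existing set.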

We compare and report our results on the navigation environment and the Atari Riverraid environment with state-of-the-art RL algorithms and show that our approach outperforms the prior methods.

Microsoft Teams Link: https://teams.microsoft.com/l/meetup-join/19%3ameeting_YTI5M2MzOWMtMDEwNS00MzU4LTgyN2MtNWZmNGYzMTk0YjQ0%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%22c90a12b8-df95-4e40-88fd-ee979f2b42ba%22%7d


 

Series: M.Tech (Research) Colloquium - ONLINE
Title: Approximation Algorithms for Geometric Packing Problems

  • Speaker: Mr. Eklavya Sharma
                   M.Tech (Research)
                   Dept. of CSA
  • Faculty Advisor: Dr. Arindam Khan
  • Date and Time: Tuesday, March 30, 2021, 4:00 PM
  • Venue: Microsoft Teams - ONLINE

Abstract
We study approximation algorithms for the geometric bin packing problem and its variants. In the two-dimensional geometric bin packing problem (2D GBP), we are given n rectangular items and we have to compute an axis-parallel non-overlapping packing of the items into the minimum number of square bins of side length 1. 2D GBP is an important problem in computer science and operations research arising in logistics, resource allocation, and scheduling.

We first study an extension of 2D GBP called the generalized multidimensional bin packing problem (GVBP). Here each item i additionally has d nonnegative weights v_1(i), v_2(i), …, v_d(i) associated with it. Our goal is to compute an axis-parallel non-overlapping packing of the items into bins so that for all j ∈ [d], the sum of the jth weight of items in each bin is at most 1. Despite being well studied in practice, surprisingly, approximation algorithms for this problem have rarely been explored. We first obtain two simple algorithms for GVBP having asymptotic approximation ratios (AARs) 6(d+1) and 3(1 + ln(d+1) + ε). We then extend the Round-and-Approx (R&A) framework [Bansal-Khan, SODA 14] to wider classes of algorithms, and show how it can be adapted to GVBP. Using more sophisticated techniques, we obtain algorithms for GVBP having an AAR of 2(1+ln((d+4)/2))+ε, which improves to 2.919+ε for the special case of d=1.
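For intuition about the d weight constraints in GVBP, here is a naive first-fit on the vector dimensions only, ignoring the geometric packing entirely; it is a far weaker heuristic than the algorithms in the talk, and is shown only to make the feasibility condition concrete:

```python
def vector_first_fit(items, d):
    """First-fit on the d weight dimensions (geometry ignored): for every
    j in [d], each bin's total j-th weight must stay at most 1."""
    bins = []  # each bin is a running weight vector of length d
    for weights in items:
        assert len(weights) == d
        for b in bins:
            if all(b[j] + weights[j] <= 1 for j in range(d)):
                for j in range(d):
                    b[j] += weights[j]
                break
        else:
            bins.append(list(weights))
    return len(bins)

# Three items with d = 2 weights each; the first two conflict in dim 0.
n_bins = vector_first_fit([(0.6, 0.2), (0.5, 0.5), (0.3, 0.3)], d=2)
```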

Next, we explore approximation algorithms for the d-dimensional geometric bin packing problem (dD GBP). Caprara (MOR 2008) gave a harmonic-based algorithm for dD GBP having an AAR of 1.69104^(d-1). However, their algorithm doesn't allow items to be rotated. This is in contrast to some common applications of dD GBP, like packing boxes into shipping containers. We give approximation algorithms for dD GBP when items can be orthogonally rotated about all or a subset of axes. We first give a fast and simple harmonic-based algorithm, called fullh_k, having an AAR of 1.69104^d. We next give a more sophisticated harmonic-based algorithm, which we call hgap_k, having an AAR of (1+ε)·1.69104^(d-1). This gives an AAR of roughly 2.860 + ε for 3D GBP with rotations, which improves upon the best-known AAR of 4.5. In addition, we study the multiple-choice bin packing problem that generalizes the rotational case. Here we are given n sets of d-dimensional cuboidal items and we have to choose exactly one item from each set and then pack the chosen items. Our algorithms fullh_k and hgap_k also work for the multiple-choice bin packing problem. We also give fast and simple approximation algorithms for the multiple-choice versions of dD strip packing and dD geometric knapsack. These algorithms have AARs of 1.69104^(d-1) and (1-ε)·3^(-d), respectively.

A rectangle is said to be δ-thin if it has width at most δ or height at most δ. When δ is a small constant (i.e., close to 0), we give an APTAS for 2D GBP when all rectangles are δ-thin. On the other hand, general 2D GBP is APX-hard. This shows that hard instances arise due to items that are large in both dimensions.

A packing of rectangles into a bin is said to be guillotine-separable iff we can use a sequence of end-to-end cuts to separate the items from each other. The asymptotic price of guillotinability (APoG) is the maximum value of opt_G(I)/opt(I) for large opt(I), where opt(I) and opt_G(I) are the minimum number of bins and the minimum number of guillotine-separable bins, respectively, needed to pack I. Computing lower and upper bounds on APoG is an important problem, since proving an upper bound smaller than 1.5 would beat the state-of-the-art algorithm for 2D GBP. The best-known upper bound is 1.69104 and the best-known lower bound is 4/3. We analyze this problem for the special case of δ-thin rectangles, where δ is a small constant (i.e., close to 0). We give a roughly 4/3-asymptotic-approximate algorithm for 2D GBP for this case, which proves an upper-bound of roughly 4/3 on APoG for δ-thin rectangles. We also prove a matching lower-bound of 4/3. This shows that hard examples for upper-bounding APoG include items that are large in both dimensions.

Microsoft Teams link:

https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZWE5NDlkMDgtNzBiNi00YzYyLWJjNzAtM2QxMzZiOTQ1Mzhi%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%22cd1ddf68-b75e-4337-87f3-ded65154fa20%22%7d


 

Series: M.Tech (Research) Colloquium
Title: GPM - Exploring GPUs with Persistent Memory

  • Speaker: Shweta Pandey
  • Faculty Advisor: Arkaprava Basu
  • Date and Time: Thursday, March 25, 2021, 3:00 PM
  • Venue: https://teams.microsoft.com/l/channel/19%3a13ca80f16f7948a2a87ac38e29410e0a%40thread.tacv2/General?groupId=e8b8b189-9c66-4a15-a4d0-2f2555b5724d&tenantId=6f15cd97-f6a7-41e3-b2c5-ad4193976476

Abstract
Non-volatile memory (NVM) technologies promise to blur the long-held distinction between memory and storage by enabling durability at latencies comparable to DRAM, at byte granularity. Persistent Memory (PM) is defined as NVM accessed via load/store instructions at a fine grain. Due to decade-long research into the CPU's software and hardware stack for PM, and with the recent commercialization of NVM under the aegis of Intel Optane, PM's promise of revolutionizing computing seems closer to reality than ever before. Unfortunately, while a significant portion of computation today happens on Graphics Processing Units (GPUs), they are deprived of leveraging PM. We find that there exist GPU-accelerated applications that could benefit from fine-grain persistence. Our key goal is to expose byte-grain persistent memory to GPU kernels. To this end, we propose GPM, a design that gives GPUs fine-grain access to PM by combining commercially available GPUs and NVM through software. We identify important use cases for GPM and create a workload suite called GPMBench, consisting of 11 GPU-accelerated workloads modified to leverage PM. Finally, we demonstrate the benefits of our proposed design, GPM, over conventional methods of persisting from the GPU.
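The CPU-side notion of byte-grain persistence that GPM extends to GPUs can be illustrated with an ordinary memory-mapped file. This is only an analogy: the file here is a stand-in for real NVM, and real PM code uses cache-line flush instructions (e.g., clwb + sfence) rather than `mmap.flush`:

```python
import mmap
import os
import tempfile

# Create a small file-backed region standing in for persistent memory.
path = os.path.join(tempfile.mkdtemp(), "pm_region")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    pm = mmap.mmap(f.fileno(), 4096)
    pm[0:5] = b"hello"   # byte-grain store, no block-level read-modify-write
    pm.flush()           # analogous to a persist barrier
    pm.close()

# After a "crash", the bytes are durably recoverable.
with open(path, "rb") as f:
    recovered = f.read(5)
```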

Speaker Bio:
Shweta Pandey is an M.Tech (Research) student in the Department of Computer Science and Automation at IISc, Bangalore, advised by Prof. Arkaprava Basu. Her research interests lie in the field of High-Performance Computing; she is currently looking into improving GPU hardware systems for emerging needs.

Host Faculty: Arkaprava Basu


 

Series: M.Tech (Research) Colloquium
Title: nuKSM: NUMA-aware Memory De-duplication for Multi-socket Servers

  • Speaker: Akash Panda
  • Faculty Advisor: Arkaprava Basu
  • Date and Time: Thursday, March 25, 2021, 2:00 PM
  • Venue: https://teams.microsoft.com/l/channel/19%3aee36253808914fffb930888369bed27e%40thread.tacv2/General?groupId=18b6fde4-c906-4143-b1ea-d5d66e19462d&tenantId=6f15cd97-f6a7-41e3-b2c5-ad4193976476

Abstract
Memory management is one of the most critical pieces in an operating system's design. It has several responsibilities ranging from ensuring quick access to data by applications to enabling memory consolidation. For example, judicious placement of pages in multi-socket NUMA (non-uniform memory access) servers could determine the access latencies experienced by an application. Similarly, memory de-duplication can play a pivotal role in memory consolidation and over-commitment.

Different responsibilities of memory management can conflict with each other. This often happens when different subsystems of an OS are responsible for different memory management goals, and each works in its own silo. In this work, we report one such conflict, between memory de-duplication and NUMA management. Linux's memory de-duplication subsystem, namely KSM, is NUMA unaware. We demonstrate that memory de-duplication can consequently have unintended consequences for the NUMA overheads experienced by applications running on multi-socket servers: while de-duplicating pages across NUMA nodes, KSM can place de-duplicated pages in a manner that leads to significant performance variations, unfairness, and subverted process priority.

We introduce NUMA-aware KSM, a.k.a., nuKSM, that makes judicious decisions about the placement of de-duplicated pages to reduce the impact of NUMA and unfairness in execution. nuKSM also enables users to avoid priority subversion. Finally, independent of the NUMA effect, we observed that KSM fails to scale well to large memory systems due to its centralized design. We thus extended nuKSM to adopt a de-centralized design to scale to larger memory.
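A minimal sketch of the kind of placement decision a NUMA-aware de-duplicator must make (the scoring rule here is hypothetical; nuKSM's actual policy is more involved):

```python
def place_dedup_page(access_counts, priorities=None):
    """Choose the NUMA node for a de-duplicated page: score each candidate
    node by the accesses it would serve locally, weighted by the priority
    of the processes accessing from that node (hypothetical scoring)."""
    priorities = priorities or {n: 1.0 for n in access_counts}
    return max(access_counts, key=lambda n: access_counts[n] * priorities[n])

# Unweighted: the page lands on the node generating the most accesses.
node = place_dedup_page({0: 120, 1: 400})
# A high-priority process on node 0 can flip the decision,
# avoiding priority subversion.
node_prio = place_dedup_page({0: 300, 1: 400}, priorities={0: 2.0, 1: 1.0})
```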

Speaker Bio:
Akash Panda is an M.Tech (Research) student in the Department of Computer Science and Automation at IISc, Bangalore. He is advised by Prof. Arkaprava Basu and is interested in memory management and Linux's virtual memory subsystem.

Host Faculty: Arkaprava Basu


 

Series: Ph.D (Engg.) Thesis Defence - ON-LINE
Title: Algorithms for Challenges to Practical Reinforcement Learning

  • Speaker: Ms. Sindhu P R
                   Ph.D (Engg.) Student
                   Dept. of CSA
  • Faculty Advisor: Prof. Shalabh Bhatnagar
  • Date and Time: Wednesday, March 24, 2021, 4:00 PM
  • Venue: Microsoft Teams - ON-LINE

Abstract
Reinforcement learning (RL) in real-world applications faces major hurdles, the foremost being the safety of the physical system controlled by the learning agent and the varying environment conditions in which the autonomous agent functions. An RL agent learns to control a system by exploring available actions. In some operating states, when the RL agent exercises an exploratory action, the system may enter unsafe operation, which can lead to safety hazards both for the system and for the humans supervising it. RL algorithms thus need to respect these safety constraints, and must do so with limited available information. Additionally, RL agents learn optimal decisions under the assumption of a stationary environment. However, this stationarity assumption is very restrictive: in many real-world problems like traffic signal control, robotic applications, etc., one often encounters non-stationary environments, and in these scenarios RL algorithms yield sub-optimal decisions.

We describe algorithmic solutions to the challenges of safety and non-stationary environmental conditions in RL. To handle safety restrictions and facilitate safe exploration during learning, we propose a cross-entropy-method-based, sample-efficient learning algorithm. This algorithm is developed within a constrained optimization framework and uses limited information to learn feasible policies. During the learning iterations, exploration is also guided in a manner that minimizes safety violations. The goal of the second algorithm is to find a good control policy when the latent model of the environment changes with time. To achieve this, the algorithm leverages a change-point detection algorithm to monitor changes in the statistics of the environment; its output is used to reset the learning of policies and efficiently control the underlying system.
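To illustrate the second idea, here is a generic CUSUM-style change-point monitor, a textbook detector rather than the specific statistical test used in the thesis, which flags when the statistics of an observation stream drift so that policy learning can be reset:

```python
def cusum_detect(stream, drift=0.5, threshold=5.0):
    """Return the index at which a one-sided CUSUM statistic on the
    observation stream first crosses the threshold (None if never)."""
    mean, g = stream[0], 0.0
    for t, x in enumerate(stream[1:], start=1):
        g = max(0.0, g + (x - mean) - drift)   # accumulate upward deviation
        if g > threshold:
            return t                           # signal: reset policy learning
        mean += (x - mean) / (t + 1)           # running mean of the stream
    return None

# The latent environment shifts at t = 20; the detector flags it shortly after.
stream = [0.0] * 20 + [3.0] * 10
change_at = cusum_detect(stream)
```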

In the second part of the talk, we describe the application of RL to the obstacle-avoidance problem for quadrotor UAVs. Obstacle avoidance in quadrotor navigation brings additional challenges compared to ground vehicles. Our proposed method utilizes the relevant temporal information available from the ambient surroundings and adapts attention-based deep Q-networks combined with generative adversarial networks for this application.

Microsoft Teams Link: https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZjIwYjI3YTEtNzU0OC00MDQxLTk1YjAtMzZiMTkzNDY5ZTgz%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%221b0586c2-1488-4f8f-ab3c-d2e61940254c%22%7d


 

Series: M.Tech (Research) Thesis Colloquium - ON-LINE
Title: Revisiting Statistical Techniques for Cardinality Estimation in RDBMS

  • Speaker: Mr. Dhrumilkumar Shah
                   M. Tech (Research) student
                   Department of Computer Science and Automation
  • Faculty Advisor: Prof. Jayant R. Haritsa
  • Date and Time: Monday, March 22, 2021, 2:00 PM
  • Venue: Microsoft Teams - ON-LINE

Abstract
Relational Database Management Systems (RDBMS) constitute the backbone of today's information-rich society, providing a congenial environment for handling enterprise data during its entire life cycle of generation, storage, maintenance, and processing. The Structured Query Language (SQL) is the standard interface to query the information present in RDBMS-based storage. Because of the declarative nature of SQL, the query optimizer inside the database engine needs to come up with an efficient execution plan for a given query. To do so, database query optimizers are critically dependent on accurate row-cardinality estimates of the intermediate results generated on the edges of the execution plan tree.

Unfortunately, the histogram and sampling-based techniques commonly used in industrial database engines for cardinality estimation are often woefully inaccurate in practice. As a result, query optimizers produce sub-optimal query execution plans, leading to inflated query response times. This lacuna has motivated a recent slew of papers advocating the use of machine-learning techniques for cardinality estimation. However, these new approaches have their own limitations regarding training overheads, output explainability, incorporating dynamic updates, handling of workload drift, and generalization to unseen queries.

In this work, we take a relook at the traditional techniques and investigate whether they can be made to work satisfactorily when augmented with light-weight data structures. Specifically, we present GridSam, which essentially combines histograms and sampling in a potent partnership incorporating both algorithmic and platform innovations.

From the algorithmic perspective, GridSam first creates a multi-dimensional grid overlay structure by partitioning the data-space on "critical" attributes (Histogram), and then performs dynamic sampling from query-specific regions of the grid to capture correlations (Sampling). A heuristic-based methodology is used to determine the critical grid dimensions. Further, insights from Index-based Join Sampling (IBJS) technique are leveraged to direct the sampling in multi-table queries. Finally, learned-indexes are incorporated to reduce the index-probing cost for join sampling during the estimation process.
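A stripped-down, single-attribute sketch of the grid-plus-sampling idea (helper names are hypothetical; GridSam itself is multi-dimensional, GPU-accelerated, and handles multi-table join sampling):

```python
import random

def build_grid(rows, key, k):
    """Partition rows into k equal-width buckets on a critical attribute."""
    lo = min(r[key] for r in rows)
    hi = max(r[key] for r in rows)
    width = (hi - lo) / k or 1          # guard against a degenerate range
    grid = [[] for _ in range(k)]
    for r in rows:
        idx = min(int((r[key] - lo) / width), k - 1)
        grid[idx].append(r)
    return grid, lo, width

def estimate(grid, lo, width, key, q_lo, q_hi, pred, sample_size=32, seed=0):
    """Sample only from grid cells overlapping [q_lo, q_hi], then scale
    each cell's hit rate up by the cell's population."""
    rng = random.Random(seed)
    first = max(0, int((q_lo - lo) / width))
    last = min(len(grid) - 1, int((q_hi - lo) / width))
    total = 0.0
    for cell in grid[first:last + 1]:
        if not cell:
            continue
        sample = rng.sample(cell, min(sample_size, len(cell)))
        hits = sum(1 for r in sample if q_lo <= r[key] <= q_hi and pred(r))
        total += hits / len(sample) * len(cell)
    return total

rows = [{"a": i} for i in range(100)]
grid, lo, width = build_grid(rows, "a", k=10)
# Estimate the cardinality of: SELECT * WHERE 0 <= a <= 19
est = estimate(grid, lo, width, "a", 0, 19, lambda r: True, sample_size=100)
```

The grid restricts sampling to query-relevant regions (as in a histogram), while the per-cell samples capture predicate correlations that summary statistics alone would miss.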

From the platform perspective, GridSam leverages the massive parallelism offered by current GPU architectures to provide fast grid setup times. This parallelism is also extended to the run-time estimation process.

A detailed performance study on benchmark environments indicates that GridSam computes cardinality estimates with accuracies competitive to contemporary learning-based techniques. Moreover, it does so while achieving orders-of-magnitude reduction in setup time. Further, the estimation time is in the same ballpark as both traditional and learning-based techniques. Finally, a collateral benefit of GridSam’s simple design is that, unlike learned estimators, it is natively amenable to dynamic data environments.

Microsoft Teams Link:

https://teams.microsoft.com/l/channel/19%3aff37417a25ea41889bb3521f22d917be%40thread.tacv2/General?groupId=cf375c71-892f-441a-ab57-27cbe3049dd4&tenantId=6f15cd97-f6a7-41e3-b2c5-ad4193976476


 

Series: Prof. V.V.S. Sarma Memorial Lecture (Second in the Series)
Title: Software Fault Tolerance via Environmental Diversity

  • Speaker: Prof. Kishor S. Trivedi
                   Duke University
                   North Carolina
                   USA
  • Date and Time: Friday, March 19, 2021, 7:00 PM
  • Venue: On-Line: Meeting Link: http://bit.do/vvssarma

Abstract
Time: 7:00 PM, with online networking at 6:30 PM. Meeting Link: http://bit.do/vvssarma

Complex systems in different domains contain a significant amount of software, and several studies have established that a significant fraction of system outages are due to software faults. Traditional methods of fault avoidance, fault removal based on extensive testing/debugging, and fault tolerance based on design/data diversity have been found inadequate to ensure high software dependability. The key challenge, then, is how to provide highly dependable software. We discuss a viewpoint of fault tolerance for software-based systems to ensure high dependability. We classify software faults into Bohr bugs and Mandel bugs, and identify aging-related bugs as a subtype of the latter. Traditional methods have been designed to deal with Bohr bugs. The next challenge is to develop mitigation methods for Mandel bugs in general and aging-related bugs in particular. We submit that mitigation methods for Mandel bugs utilize environmental diversity. Retrying the operation, restarting the application, failing over to an identical replica (hot, warm or cold), and rebooting the OS are examples of mitigation techniques that rely on environmental diversity. For aging-related bugs, it is also possible to utilize a proactive environmental-diversity technique known as software rejuvenation. We discuss environmental diversity from both experimental and analytic points of view and cite examples of real systems employing these techniques.
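The escalation ladder of environmental-diversity mitigations described above can be sketched generically; this driver is hypothetical, and in a real system the recovery callbacks would be actual retry/restart/failover/reboot actions:

```python
def run_with_environmental_diversity(op, recoveries):
    """Try the operation; after each failure, apply the next, successively
    heavier environmental change and try again. The code is never patched,
    only its execution environment is changed."""
    attempts = []
    for recover in [None] + list(recoveries):
        if recover is not None:
            recover()                # change the environment, not the code
        try:
            return op(), attempts
        except RuntimeError as err:
            attempts.append(str(err))
    raise RuntimeError("all environmental-diversity levels exhausted")

# Hypothetical flaky operation: a Mandel bug that disappears once the
# environment has changed twice.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("Mandel bug triggered")
    return "ok"

log = []
result, attempts = run_with_environmental_diversity(
    flaky,
    [lambda: log.append("retry"), lambda: log.append("restart application")])
```

This works precisely because Mandel-bug failures depend on the environment; a deterministic Bohr bug would fail at every level of the ladder.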

Speaker Bio:
Kishor S. Trivedi holds the Hudson Chair in the Department of Electrical and Computer Engineering at Duke University, Durham, NC. He has a B.Tech (EE, 1968) from IIT Mumbai, and an M.S. (CS, 1972) and PhD (CS, 1974) from the University of Illinois, Urbana-Champaign. He has been on the Duke faculty since 1975. He is the author of the well-known text Probability and Statistics with Reliability, Queuing and Computer Science Applications, first published by Prentice-Hall; a thoroughly revised second edition (including its Indian edition) has been published by John Wiley. He has authored several other books. He is a Life Fellow of the Institute of Electrical and Electronics Engineers and a Golden Core Member of the IEEE Computer Society. He has published over 600 articles and has supervised 48 Ph.D. dissertations; his h-index is 107. He is a recipient of the IEEE Computer Society Technical Achievement Award for his research on Software Aging and Rejuvenation, and of the IEEE Reliability Society's Lifetime Achievement Award. He has worked closely with industry in carrying out reliability/availability analysis, providing short courses on reliability, availability, and performability modelling, and in the development and dissemination of software packages such as SHARPE and SPNP.

Professor V.V.S. Sarma (May 1944 - January 2018): Professor Vallury Subrahmanya Sarma, an extraordinary teacher and researcher, passed away on 13th January 2018 in Bangalore. He was born on May 7, 1944 in Vijayawada. After graduating with a University gold medal in Mathematics, Physics and Chemistry from Andhra University in 1961, he obtained his BE, ME, and PhD degrees from IISc, Bangalore. He served IISc as faculty in various capacities from 1967, became a full professor in 1983, and continued his service until his retirement in 2006.

He was a visiting Professor at the University of Southwestern Louisiana, USA, between 1984-86, and at the Tata Research Development and Design Centre, Pune, between 1995-97. He was elected to the fellowships of the Indian Academy of Science, the Indian National Science Academy and the Indian National Academy of Engineering. Post retirement, he was an Honorary Professor in CSA and an INAE Distinguished Professor. Professor V.V.S. Sarma, fondly called VVS by his students and friends, initiated research at IISc in the then-emerging areas of reliability engineering, pattern recognition, artificial intelligence and machine learning, areas of utmost importance in industry today. His survey paper "Knowledge-based approaches for scheduling problems: A survey," published with some new material in a special issue on AI in management of the IEEE Transactions on Knowledge and Data Engineering, was widely cited. He guided a generation of researchers in these areas. His students were drawn from the CSA, ECE, Aerospace, Mathematics, and Metallurgy departments, and included engineers from organizations such as IAF, NAL, ISRO, DRDO, and BHEL under the external registration program. Many of his students are currently senior professors in universities or senior engineering researchers in Defence and ISRO across India, the USA and Canada. With his collaborators N. Viswanadham and M.G. Singh, he wrote the book "Reliability of Computer and Control Systems," published in the North-Holland Systems and Control series in 1987. He co-edited the book "Artificial Intelligence and Expert Systems in Indian Context" (Tata McGraw-Hill, 1990) jointly with N. Viswanadham, B.L. Deekshatulu, and B. Yegnanarayana. Prof. VVS Sarma was a very inspiring teacher, who used to enthuse and motivate his students to learn many topics of current research. As early as 1976, when the field was still in its infancy, he taught a course on Artificial Intelligence at IISc.

He was a very gentle person and was affectionate towards all his students. In the passing of Prof. VVS Sarma, the research community has lost a mentor, an influential researcher and an outstanding teacher, and his students have lost a father figure whom they will continue to look up to.

Host Faculty: Prof. Y Narahari & Prof. Chiranjib Bhattacharyya


 

Series: Golden Jubilee Women in Computing Lecture by Prof. Manuela Veloso
Title: AI in Finance: Scope and Examples

  • Speaker: Prof. Manuela Veloso
                   Managing Director
                   Head of AI Research, J.P. Morgan
  • Date and Time: Thursday, March 11, 2021, 7:30 PM
  • Venue: Zoom Webinar: https://zoom.us/j/98243007872

Abstract
AI enables principled representation of knowledge, complex strategy optimization, learning from data, and support to human decision making. I will present examples and discuss the scope of AI in our research in the finance domain.

Speaker Bio:
Manuela M. Veloso is the Head of J.P. Morgan AI Research, which pursues fundamental research in areas of core relevance to financial services, including data mining and cryptography, machine learning, explainability, and human-AI interaction. J.P. Morgan AI Research partners with applied data analytics teams across the firm as well as with leading academic institutions globally. Professor Veloso is on leave from Carnegie Mellon University, where she is the Herbert A. Simon University Professor in the School of Computer Science and past Head of the Machine Learning Department. With her students, she has led research in AI, with a focus on robotics and machine learning, having concretely researched and developed a variety of autonomous robots, including teams of soccer robots and mobile service robots. Her robot soccer teams have been RoboCup world champions several times, and the CoBot mobile robots have autonomously navigated for more than 1,000 km in university buildings. Professor Veloso is the Past President of AAAI (the Association for the Advancement of Artificial Intelligence), and the co-founder, Trustee, and Past President of RoboCup. She has been recognized with multiple honors, including being a Fellow of the ACM, IEEE, AAAS, and AAAI. She is the recipient of several best paper awards, the Einstein Chair of the Chinese Academy of Sciences, the ACM/SIGART Autonomous Agents Research Award, an NSF CAREER Award, and the Allen Newell Medal for Excellence in Research. Professor Veloso earned Bachelor and Master of Science degrees in Electrical and Computer Engineering from Instituto Superior Tecnico in Lisbon, Portugal, a Master of Arts in Computer Science from Boston University, and Master of Science and PhD degrees in Computer Science from Carnegie Mellon University. See www.cs.cmu.edu/~mmv/Veloso.html for her scientific publications.


 

Series: Ph.D (Engg.) Colloquium - ON-LINE
Title: Statistical Network Analysis: Community Structure, Fairness Constraints, and Emergent Behavior

  • Speaker: Mr. Shubham Gupta
                   Ph.D Student
                   Dept. of CSA
  • Faculty Advisor: Prof. Ambedkar Dukkipati
  • Date and Time: Wednesday, March 10, 2021, 4:00 PM
  • Venue: Microsoft Teams - ON-LINE

Abstract
Networks or graphs provide mathematical tools for describing and analyzing relational data. They are used in biology to model interactions between proteins, in economics to identify trade alliances among countries, in epidemiology to study the spread of diseases, and in computer science to rank webpages on a search engine, to name a few. Each application domain in this wide assortment encounters networks with diverse properties and imposes various constraints. For example, networks may be dynamic, heterogeneous, or attributed, and an application domain may require a fairness constraint on the communities. However, most existing research is concerned with the simplest type of networks with a fixed set of nodes and edges and focuses on the canonical forms of tasks like community detection and link prediction. This thesis aims at bridging this gap by proposing community detection and link prediction methods to analyze different types of networks from various perspectives.

Our first contribution is a spectral algorithm with theoretical guarantees that finds 'fair' clusters. We define a notion of individual fairness in communities using an auxiliary representation graph. Nodes are connected in this graph if they can represent each other's interests in various communities. Informally speaking, a node considers a community fair if an adequate number of its representatives belong to that community. The goal is to find communities that are considered fair by all nodes under the representation graph. We show that our proposed fairness criterion (i) generalizes the idea of statistical fairness and (ii) is also applicable in cases where the sensitive node attributes (like gender and race) are not observable but instead manifest themselves as intrinsic or latent features of a social network. We develop a fair spectral clustering algorithm and prove that it is weakly consistent (#mistakes = o(n) with probability 1 - o(1)) under a proposed variant of the stochastic block model.
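A toy check of the individual-fairness notion, using a simplified proportionality test with a hypothetical tolerance rather than the paper's exact criterion: a node deems a cluster fair if the share of its representatives in that cluster is close to the cluster's overall share of nodes.

```python
def fairness_violations(rep_graph, clusters, tol=0.2):
    """Count (node, cluster) pairs where the share of the node's
    representatives in the cluster deviates from the cluster's overall
    share of nodes by more than tol."""
    n = len(rep_graph)
    sizes, label = {}, {}
    for c, members in enumerate(clusters):
        sizes[c] = len(members)
        for v in members:
            label[v] = c
    bad = 0
    for u, reps in rep_graph.items():
        if not reps:
            continue
        for c in sizes:
            share = sum(1 for v in reps if label[v] == c) / len(reps)
            if abs(share - sizes[c] / n) > tol:
                bad += 1
    return bad

clusters = [[0, 1], [2, 3]]
# Every node has one representative in each cluster: fair.
fair_reps = {0: [1, 2], 1: [0, 3], 2: [3, 0], 3: [2, 1]}
# Every node's representatives sit in a single cluster: unfair.
skewed_reps = {0: [1], 1: [0], 2: [3], 3: [2]}
```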

Second, we propose a community-based statistical model for dynamic networks where edges appear and disappear over time. Many networks, such as social, citation, and contact networks, are dynamic in nature. Our model embeds the nodes and communities in a d-dimensional latent space and specifies a procedure for updating these embeddings over time to model the network's evolution. Given an observed dynamic network, we infer these latent quantities using variational inference and use them for link forecasting and community detection. Unlike existing approaches, our model supports the birth and death of communities. It also allows us to use powerful neural networks during inference. Experiments demonstrate that our model is better at link forecasting and community detection compared to existing approaches. Moreover, it discovers stable communities, as quantified by the normalized mutual information (NMI) score between communities discovered at successive time steps. This desirable quality is absent in methods that ignore the network dynamics.
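
The stability score mentioned above is standard: NMI between the community assignments at consecutive time steps (1 means identical partitions up to relabeling, 0 means independent). A minimal self-contained implementation, for readers who want to reproduce the metric:

```python
from math import log
from collections import Counter

def nmi(a, b):
    """Normalized mutual information between two partitions a and b
    (equal-length lists of community labels, one per node)."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pab = Counter(zip(a, b))
    mi = sum(c / n * log(n * c / (pa[x] * pb[y]))
             for (x, y), c in pab.items())
    ha = -sum(c / n * log(c / n) for c in pa.values())
    hb = -sum(c / n * log(c / n) for c in pb.values())
    if ha == 0 or hb == 0:        # degenerate one-community partition
        return 1.0 if a == b else 0.0
    return mi / ((ha * hb) ** 0.5)   # geometric-mean normalization
```

Averaging `nmi(labels[t], labels[t+1])` over t gives a single stability number for a method's discovered communities.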

Third, we propose a statistical model for heterogeneous dynamic networks where the nodes and relations additionally have a type associated with them (e.g., knowledge graphs). Besides the latent node attributes, this model also encodes a set of interaction matrices for each type of relation. These matrices specify the affinity between nodes based on their attribute values and can represent both homophilic (like attracts like) and heterophilic (opposites attract) relationships. We develop a scalable neural network-based inference procedure for this model and demonstrate that it outperforms existing state-of-the-art approaches on several homogeneous and heterogeneous dynamic network datasets, particularly on temporal knowledge graphs.

Fourth, we develop a model for networks with node covariates to bring explainability to community detection. This model integrates node covariates into a stochastic block model using restricted Boltzmann machines. We subscribe to the view that a community can be explained by identifying the defining covariates of its member nodes. Our model provides the relative importance of various covariates for each community, thereby explaining its decision to group the members. Existing approaches for modeling networks with covariates lack this property, especially the ones based on deep neural networks. We also derive an efficient inference procedure that runs in time linear in the number of nodes and edges. Experiments confirm that our model's community detection performance is comparable to recent deep neural network-based approaches. However, it additionally offers the advantage of explainability.

The discussion so far views communities as passive structures arising out of interactions between nodes. However, just as existing links in a network determine future links, communities also play a functional role in shaping the behavior of the nodes (for example, preference for a clothing brand). Our final contribution explores this functional view of communities and shows that they affect emergent communication in a networked multi-agent reinforcement learning setting.

Meeting Link:

https://teams.microsoft.com/l/meetup-join/19%3ameeting_Yjk2ZWViYWMtYjRhZi00MTdjLWJjNWYtNjAxMGY2MmU3MjEz%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%22c90a12b8-df95-4e40-88fd-ee979f2b42ba%22%7d


Series: Department Seminar
Title: Multi-Domain Coupling in Cyber-Physical Systems Design

  • Speaker: Dr. Debayan Roy
                   Technical University of Munich, Germany
  • Date and Time: Monday, March 08, 2021, 2:00 PM
  • Venue: https://teams.microsoft.com/l/meetup-join/19%3ameeting_NTUxNzgxOWUtZGIyNi00Y2NiLThiZjAtMGIxMTljMmJmZTFj%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%2282f39501-c5b2-4bfb-87c3-f17ca74c00b6%22%7d

Abstract
In a cyber-physical system (CPS), hardware and software components control a physical process. Hence, there is a strong interaction between the physical dynamics, the control law, and the software algorithms and hardware resources enabling computation, communication, sensing, actuation, and data storage. Such systems have become very common in several industry sectors such as automotive, avionics, healthcare, manufacturing, and energy. The state of practice in industry has been to follow a separation of concerns in CPS design: the control law is calculated and the hardware/software is developed in isolated stages, without sufficient knowledge of each other. Thus, the implementation might not preserve the performance guarantees obtained during control design. This often leads to a long integration, testing, and debugging phase, besides producing inferior systems. In many cases, it is also challenging to offer safety guarantees. Considering that many CPSs are safety-critical and cost-sensitive, e.g., modern cars, this talk will advocate the use of integrated modeling, design, and analysis approaches for CPSs. In essence, I will show how the models, metrics, and methods from different engineering domains can be coupled together in a comprehensive framework for the design of safe and cost-efficient CPSs. I will discuss a hybrid optimization technique that enables the co-design of controllers and their distributed software implementations on a realistic automotive hardware platform (i.e., multiple electronic control units connected by a FlexRay bus). I will further illustrate a toolchain, the first of its kind, that integrates the co-design scheme with industry-strength tools for control design and software development. This toolchain enables design automation for control software.
This talk will conclude with a discussion on some promising research directions for next-generation CPSs that need to be secure, adaptive, and autonomous, besides being safe and cost-efficient.



Teams link: https://teams.microsoft.com/l/meetup-join/19%3ameeting_NTUxNzgxOWUtZGIyNi00Y2NiLThiZjAtMGIxMTljMmJmZTFj%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%2282f39501-c5b2-4bfb-87c3-f17ca74c00b6%22%7d

Speaker Bio:
Debayan Roy is a postdoctoral researcher in Prof. Marco Caccamo’s group at the Technical University of Munich in Germany. He received his Ph.D. “summa cum laude” in electrical and computer engineering from the Technical University of Munich in 2020, under the supervision of Prof. Samarjit Chakraborty. His research interests are in the modeling, design, and analysis of cyber-physical systems. He has predominantly looked into co-design methodologies for cyber-physical systems to bridge the semantic gap between the design of controllers and their hardware/software implementations. He has also worked towards integrating the co-design schemes into industry-strength tools to enable design automation for cyber-physical systems. For his research, he won the Best Paper Award at RTCSA 2017 and received Best Paper nominations at DATE 2019 and DATE 2020. He also publishes regularly in top-tier conferences on design automation (i.e., DAC and ICCAD) and real-time systems (i.e., RTSS and RTAS). He has served on program committees (COINS and RTAS) and as a peer reviewer for several journals (e.g., ACM Transactions on Cyber-Physical Systems, IEEE Transactions on Computers, and Springer's Real-Time Systems journal). Webpage: https://www.ei.tum.de/rcs/persons/alumni/roy/

Host Faculty: Arkaprava Basu


Series: Golden Jubilee Women in Computing Event - Panel Discussion
Title: PhD in India: Opportunities and Challenges

  • Speaker: Panellists:
                    Dr Suparna Bhattacharya (HP)
                    Prof. Nutan Limaye (IIT Bombay)
                    Dr Geetha Manjunath (Niramai)
                    Prof. Ruta Mehta (UIUC)
                    Prof. Neeldhara Misra (IIT Gandhinagar)
  • Date and Time: Thursday, March 04, 2021, 5:00 PM
  • Venue: Zoom Webinar: https://zoom.us/j/95626106483

Abstract
In this panel discussion, our panellists, all of whom are leading researchers in their respective fields and have obtained their PhD in India, will discuss their research journey -- starting from being PhD students in India to their current positions as entrepreneurs, researchers and academicians, in India and globally.

Link for Panel Discussion: Zoom Webinar: https://zoom.us/j/95626106483


Series: Golden Jubilee Women in Computing Lecture by Vidhya Y & Dr Ranjita Bhagwan
Title: 1. The need for Inclusive STEM Education – ground realities and collaborative solutions 2. Using Data to Build Better Systems and Services

  • Speaker: 1. Vidhya Y (talk from 3:30 PM to 4:00 PM)
                   Vision Empower
                   
                   2. Dr Ranjita Bhagwan (talk from 4:00 PM to 5:00 PM)
                   Microsoft Research India
  • Date and Time: Thursday, March 04, 2021, 3:30 PM
  • Venue: Zoom Webinar: https://zoom.us/j/95626106483

Abstract
1. India produces one of the highest numbers of STEM graduates in the world. India also has the highest population of visually impaired persons. However, out of millions of visually impaired people, fewer than 50 students have studied STEM subjects beyond high school, due to a non-inclusive and largely inaccessible education system. As a result, people with visual impairments are deprived of the currently flourishing careers in science and computing. In this talk, Vidhya will share her experiences of studying STEM subjects as a visually impaired student and the role of technology in enabling independence in both her education and work. She will also share the various initiatives that she and the team at Vision Empower have undertaken over the past 3 years to make STEM subjects accessible to visually impaired children.

2. Today’s systems and services are large and complex, often supporting millions or even billions of users. Such systems are extremely dynamic as developers continuously commit code and introduce new features, fixes and, consequently, new bugs. Multiple problems crop up in such a dynamic environment, from misconfiguration of essential services and slow testing and deployment procedures to extended service disruptions when catastrophic bugs reach deployment. Nevertheless, with the advent of cloud-based services, new opportunities to use machine learning to alleviate such problems have emerged. Large-scale services generate petabytes of code, test, and usage-related data within just a few days. This data can potentially be harnessed to provide valuable insights to engineers on how to improve service performance, security and reliability. However, cherry-picking important information from such vast amounts of systems-related data proves to be a formidable challenge. Over the last few years, we have been working on leveraging code, test logs and telemetry as data to build several tools that help develop and deploy systems faster while maintaining and even improving system reliability. My talk will first describe the challenges that arise from using machine learning on such systems-related data and metadata. Next, I will do a deep dive into the design of a few tools that we built and that are being used by several of Microsoft’s services.

Speaker Bio:
1. Vidhya is the founder of “Vision Empower”, a not-for-profit enterprise that focuses on bringing education in science and mathematics to students with visual impairment, subjects that were mostly out of reach for many such children. Her work was recently featured in Forbes India and in Deccan Herald's Change Makers 20 in 20. Vidhya was also a Research Fellow at Microsoft Research India for 2 years. She has received a Best Paper Award for one of her papers at the CSCW conference and an Honorable Mention Award for another paper at the CHI conference, both published by ACM. As a student of the Masters in Digital Society program at IIITB, Vidhya graduated as the gold medalist of her batch in 2017. She has received numerous awards and scholarships for her work and academic excellence. She has given a TEDx talk and regularly gives motivational talks at institutes and corporates. Vidhya has been an RJ for a radio show and hosted 35 episodes on science and technology that benefited thousands of visually impaired listeners. She holds the distinction of being the first blind student to take up mathematics at higher secondary school in Karnataka, following it up by being the first to pursue Computer Science as a major in undergraduate studies at her university.

2. Dr. Ranjita Bhagwan is a Senior Principal Researcher at Microsoft Research India. Her research predominantly focuses on problems related to networked and distributed systems. Ranjita has worked for more than a decade on applying machine learning to improve system reliability, security and performance. She is the recipient of the 2020 ACM India Outstanding Contributions to Computing by a Woman Award. She has chaired multiple top conferences in the field of systems and networking. Ranjita received her PhD and MS in Computer Engineering from the University of California, San Diego and a BTech in Computer Science and Engineering from the Indian Institute of Technology, Kharagpur.
Zoom talk link: Zoom Webinar: https://zoom.us/j/95626106483


Series: Seminar on Blockchain and decentralization
Title: 1. Making Synchronous Consensus Protocols Practical: A Journey 2. Key Management and Zero Knowledge Credentials for Decentralized Identity Ecosystem 3. Technical Deep Dive on Hyperledger Fabric 4. Enterprise Blockchain - Applications and Industry Trends

  • Speaker: 1. Dr. Kartik Nayak (talk from 9:00 AM to 10:00 AM)
                   Department of Computer Science
                   Duke University
                   
                   2. Dr. Esha Ghosh (talk from 10:00 AM to 11:00 AM)
                   Senior Researcher
                   Cryptography and Privacy Group
                   Microsoft Research, Redmond
                   
                   3. Dr. Akshar Kaul (talk from 11:00 AM to 12:00 noon)
                   Advisory Research Software Engineer
                   IBM Research, Bangalore
                   
                   4. Dr. Pankaj Dayama (talk from 12:00 noon to 1:00 PM)
                   Senior Technical Staff Member and Master Inventor
                   IBM Research India
  • Date and Time: Friday, February 26, 2021, 9:00 AM
  • Venue: Zoom link: https://zoom.us/j/6280916420?pwd=VTYvbXpPaitUdnBuaHhURVU2Zk02Zz09

Abstract
1. Byzantine Fault Tolerant protocols in the synchronous setting have often been considered impractical due to the strong synchrony assumption. On the flip side, synchronous protocols can be used to tolerate up to one-half Byzantine faults. In this talk, I will explain my journey towards improving synchronous protocols, both in theory and practice.

2. The Decentralized Identity Foundation (DIF) is a collection of international organizations that focuses on building an open ecosystem for self-owned decentralized identity (DID). Microsoft is an important member of DIF and is working on building protocols, infrastructure and open-source libraries for the DID ecosystem. As a part of this effort, I worked on building a self-owned cryptographic key management scheme and a zero-knowledge credentials scheme. The key management library has already been open-sourced. For this talk, I will spend most of my time discussing the key management scheme and the wonderful collaborative effort behind it that brought together developers, cryptography and security researchers, and standards experts. Then I will briefly talk about the zero-knowledge credentials project, highlighting the new requirements and challenges that the DID ecosystem brings out in using traditional zero-knowledge credential schemes.

3. Blockchain is a shared, replicated, immutable transaction ledger maintained by a distributed network of nodes. The transactions in the ledger are grouped into blocks, each of which includes a hash that binds it to the preceding block, thus creating an immutable chain of blocks. Blockchain networks can be primarily categorized into permissionless and permissioned networks. In a permissionless blockchain, all the participants are anonymous and hence do not have trust in each other; the only source of trust is that the state of the blockchain, prior to a certain depth, is immutable. On the other hand, a permissioned blockchain operates amongst a set of known and identified participants operating under a governance model, which provides a certain degree of trust. This talk will provide a technical deep dive on Hyperledger Fabric, an enterprise-grade permissioned distributed ledger framework for developing solutions and applications. Hyperledger Fabric has a highly modular and configurable architecture, enabling innovation, versatility and optimization for a broad range of industry use cases including banking, finance, insurance and healthcare. Hyperledger Fabric is the first distributed ledger platform to support smart contracts authored in general-purpose programming languages such as Java, Go and Node.js, rather than constrained domain-specific languages (DSLs). Hyperledger Fabric introduces a new architecture for transactions, execute-order-validate, which addresses the resiliency, flexibility, scalability, performance and confidentiality challenges faced by the order-execute model. Hyperledger Fabric also takes a unique approach to consensus that enables performance and scalability while preserving privacy.

4. Blockchain technology provides greater transparency and security in carrying out business transactions by maintaining immutable transaction records within a distributed network of mutually untrusting entities. A secure distributed consensus protocol is used for maintaining the ledger, and blockchain has a framework for automatically executing smart contracts based on the state of the distributed ledger. Blockchain technology has been seen as very promising in the supply chain as well as the financial services industry. Applications related to product traceability, international trade finance, paperless trade, etc. are the initial ones that have gone into production. This talk will provide an overview of blockchain solutions we have developed for various industries. We will also discuss some of the recent trends and interesting research problems in this space.

Speaker Bio:
1. Kartik Nayak is an assistant professor in the Department of Computer Science at Duke University. He works in the areas of security, applied cryptography, distributed computing, and blockchains. Before joining Duke University, he spent a year as a postdoctoral researcher at VMware Research. Before that, he graduated from the University of Maryland, College Park. He has served on program committees of several top-tier conferences such as ACM CCS, PODC, Asiacrypt, and PoPETS. Kartik is a recipient of the 2016 Google Ph.D. Fellowship in Security.

2. Esha Ghosh is a Senior Researcher in the Cryptography and Privacy group at Microsoft Research, Redmond. Her research interests include end-to-end encrypted systems, decentralized identity management, secure computation and authenticated data structures. More recently, she has been interested in information leakage in ML deployments. Before joining MSR, Esha graduated from Brown University in 2018.

3. Akshar Kaul is an Advisory Research Software Engineer at IBM Research, Bangalore, India. He received a master's degree in Computer Science and Engineering from the Indian Institute of Science (IISc), Bangalore. His research focus is on computation on encrypted data, especially in the context of outsourced databases. He has also worked on various projects related to permissioned blockchain networks (Hyperledger Fabric).

4. Dr. Pankaj Dayama is a Senior Technical Staff Member and Master Inventor at IBM Research India. He currently leads the Blockchain Solutions group at IRL. His work spans different aspects of blockchain technology, including building innovative solutions in the supply chain space working directly with clients, and enabling privacy-preserving network collaboration on blockchain. He has published about 20 papers in peer-reviewed conferences and has more than 50 patents filed in the USPTO to his credit.
The workshop on Blockchain and decentralization (https://sites.google.com/view/blockchain-seminar) will resume with four talks on Friday, 26th February. Zoom link for the workshop talk: https://zoom.us/j/6280916420?pwd=VTYvbXpPaitUdnBuaHhURVU2Zk02Zz09

Host Faculty: Dr. Chaya Ganesh


Series: Seminar on Blockchain and decentralization
Title: Modern Consensus Protocols: The Synchronous, the Asynchronous, and the Partially Synchronous

  • Speaker: Prof. Ittai Abraham
                   VMWare Research
  • Date and Time: Tuesday, February 23, 2021, 2:00 PM
  • Venue: Zoom: https://zoom.us/j/6280916420?pwd=VTYvbXpPaitUdnBuaHhURVU2Zk02Zz09

Speaker Bio:
Ittai is a founding member of the VMware Research Group, the VMware Blockchain Project and the VMware Research Group in Israel. Prior to joining VMware, he was a researcher at Microsoft Research Silicon Valley. He holds a PhD in Computer Science from the Hebrew University. His work spans from the theory of algorithms through the foundations of distributed computing to practical aspects in industrial research, algorithm engineering, distributed systems and blockchain technology. This talk is part of the workshop on Blockchain and decentralization. Talk On-Line link: https://zoom.us/j/6280916420?pwd=VTYvbXpPaitUdnBuaHhURVU2Zk02Zz09

Host Faculty: Dr. Chaya Ganesh


Series: Ph.D (Engg.) Thesis Defence - ON-LINE
Title: Online Learning from Relative Subsetwise Preferences

  • Speaker: Ms. Aadirupa Saha
                   Ph.D student
                   Dept. of CSA
  • Faculty Advisor: Prof. Chiranjib Bhattacharyya & Prof. Aditya Gopalan
  • Date and Time: Tuesday, February 23, 2021, 10:00 AM
  • Venue: Microsoft Teams - https://tinyurl.com/zo46ntdz

Abstract
The elicitation and aggregation of preferences is often the key to making better decisions. Be it a perfume company wanting to relaunch their 5 most popular fragrances, a movie recommender system trying to rank the most favoured movies, or a pharmaceutical company testing the relative efficacies of a set of drugs, learning from preference feedback is a widely applicable problem. One can model the sequential version of this problem using the classical multi-armed bandit (MAB) framework (e.g., Auer, 2002) by representing each decision choice as one bandit arm, or more appropriately as a Dueling Bandit (DB) problem (Yue and Joachims, 2009). Although DB is similar to MAB in that it is an online decision-making framework, DB differs in that it specifically models learning from pairwise preferences. In practice, it is often much easier to elicit information through relative preferences, especially when humans are in the loop: 'Item A is better than item B' is easier to elicit than its absolute counterpart: 'Item A is worth 7 and B is worth 4'.

However, instead of pairwise preferences, a more general subset-wise preference model is more relevant in various practical scenarios, e.g. recommender systems, search engines, crowd-sourcing, e-learning platforms, design of surveys, and ranking in multiplayer games. Subset-wise preference elicitation is not only more budget-friendly, but also more flexible in conveying several types of feedback. For example, with subset-wise preferences, the learner could elicit the best item, a partial preference over the top 5 items, or even an entire rank ordering of a subset of items, whereas all of these boil down to the same feedback over pairs (subsets of size 2). The problem of how to learn adaptively with subset-wise preferences, however, remains largely unexplored; this is primarily due to the computational burden of maintaining a combinatorially large, O(n^k), amount of preference information in general.

We take a step in the above direction by proposing "Battling Bandits (BB)"---a new online learning framework to learn a set of optimal (good) items by sequentially, and adaptively, querying subsets of items of size up to k (k>=2). The preference feedback from a subset is assumed to arise from an underlying parametric discrete choice model, such as the well-known Plackett-Luce model, or more generally any random utility model (RUM). It is this structure that we leverage to design efficient algorithms for various problems of interest, e.g. identifying the best item, the set of top-k items, or a full ranking, in both the PAC and regret-minimization settings. We propose computationally efficient and (near-)optimal algorithms for the above objectives along with matching lower bound guarantees. Interestingly, this leads us to answers to some basic questions about the value of subset-wise preferences: Does playing a general k-set really help in faster information aggregation, i.e. is there a tradeoff between the subset size k and the learning rate? Under what types of feedback models? How do the performance limits (performance lower bounds) vary over different combinations of feedback and choice models? And above all, what more can we achieve through BB where DB fails?
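
The Plackett-Luce feedback mentioned above is easy to simulate. In the sketch below (the utility values and function names are illustrative, and this shows only the standard PL model, not the thesis's algorithms), item i wins a queried subset S with probability theta[i] / sum of theta over S, and a full ranking is sampled by drawing winners sequentially:

```python
import random

def pl_winner(theta, subset, rng=random):
    """Winner feedback under the Plackett-Luce choice model:
    item i in `subset` wins with probability
    theta[i] / sum(theta[j] for j in subset)."""
    items = list(subset)
    return rng.choices(items, weights=[theta[i] for i in items], k=1)[0]

def pl_ranking(theta, subset, rng=random):
    """Full-ranking feedback: repeatedly draw a PL winner from the
    remaining items (the standard sequential PL sampling procedure)."""
    remaining, ranking = list(subset), []
    while remaining:
        w = pl_winner(theta, remaining, rng)
        ranking.append(w)
        remaining.remove(w)
    return ranking
```

Winner-only, top-5, and full-ranking feedback are all obtained from this one generative model, which is why subset-wise elicitation is strictly more expressive than pairwise duels.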

We proceed to analyse the BB problem in the contextual scenario – this is relevant in settings where items have known attributes, and it allows for potentially infinite decision spaces. This is more general and of greater practical interest than the finite-arm case, but naturally more challenging. Moreover, none of the existing online learning algorithms extend straightforwardly to the continuous case, even for the simplest Dueling Bandit setup (i.e. when k=2). Towards this, we formulate the problem of "Contextual Battling Bandits (C-BB)" under utility-based subset-wise preference feedback, and design provably optimal algorithms for the regret minimization problem. Our regret bounds are accompanied by matching lower bound guarantees showing the optimality of our proposed methods. All our theoretical guarantees are corroborated with empirical evaluations.

Lastly, there are still many open threads to explore based on BB. These include studying different choice-feedback model combinations, performance objectives, or even extending BB to other useful frameworks like assortment selection, revenue maximization, budget-constrained bandits, etc. Towards the end, we will also discuss some interesting combinations of the BB framework with other well-known problems, e.g. Sleeping/Rotting Bandits, preference-based Reinforcement Learning, Learning on Graphs, and preferential Bandit Convex Optimization.

Microsoft Teams link: https://tinyurl.com/zo46ntdz


Series: M.Tech (Research) Colloquium- ON-LINE
Title: Constructing a TLB-based covert channel on GPUs

  • Speaker: Mr. Ajay Nayak
                   M.Tech (Research) student
                   Dept. of CSA
  • Faculty Advisor: Prof. Vinod Ganapathy and Prof. Arkaprava Basu
  • Date and Time: Monday, February 15, 2021, 12:00 PM
  • Venue: Microsoft Teams - ON-LINE

Abstract
GPUs are now commonly available in most modern computing platforms. They are increasingly being adopted in cloud platforms and data centers due to their immense computing capability. In response to this growth in usage, manufacturers are continuously trying to improve GPU hardware by adding new features. However, this increase in usage and the addition of utility-improving features can create new, unexpected attack channels. In this thesis, we show that two such features—unified virtual memory (UVM) and multi-process service (MPS)—primarily introduced to improve the programmability and efficiency of GPU kernels, have an unexpected consequence: that of creating a novel covert timing channel via the GPU’s translation lookaside buffer (TLB) hierarchy. To enable this covert channel, we first perform experiments to understand the characteristics of the TLBs present on a GPU. The use of UVM allows fine-grained management of translations, and helps us discover several idiosyncrasies of the TLB hierarchy, such as its three levels of TLB and coalesced entries. We use this newly-acquired understanding to demonstrate a novel covert channel via the shared TLB. We then leverage MPS to increase the bandwidth of this channel by 40×. Finally, we demonstrate the channel’s utility by leaking data from a GPU-accelerated database application.
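
The prime-and-probe idea behind such a covert channel can be illustrated with a toy software model. Everything below is a made-up stand-in (a real attack measures access latencies on the actual shared GPU TLB; the set count, page numbers, and threshold here are illustrative only):

```python
class ToyTLB:
    """A tiny direct-mapped 'TLB' model shared by sender and receiver."""
    def __init__(self, n_sets=4):
        self.sets = [None] * n_sets
    def access(self, page):
        s = page % len(self.sets)
        hit = self.sets[s] == page
        self.sets[s] = page          # fill the set on a miss
        return hit

def send_bit(tlb, bit):
    # Sender: to transmit 1, touch pages covering every set, evicting
    # the receiver's entries; to transmit 0, stay idle.
    if bit:
        for p in range(100, 100 + len(tlb.sets)):
            tlb.access(p)

def recv_bit(tlb):
    # Receiver: probe its primed pages; many misses => sender was active.
    misses = sum(not tlb.access(p) for p in range(len(tlb.sets)))
    return 1 if misses > len(tlb.sets) // 2 else 0

def transmit(bits):
    tlb, out = ToyTLB(), []
    for b in bits:
        for p in range(len(tlb.sets)):   # receiver primes the TLB
            tlb.access(p)
        send_bit(tlb, b)                 # sender's turn
        out.append(recv_bit(tlb))        # receiver probes
    return out
```

In the real attack the "hit/miss" signal is recovered from timing, and features like UVM (fine-grained control over translations) and MPS (co-residency of two processes on one GPU) are what make the priming and sharing steps possible.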

Microsoft Teams link:

https://teams.microsoft.com/l/meetup-join/19%3a7f81f3a291db4f6796a0d9cca7ffd68b%40thread.tacv2/1612856627978?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%229a1ad18c-768a-4322-8aa7-890013dcb721%22%7d


Series: Distinguished Speaker Series
Title: Quantum Computing: Current Status and Future Prospects - Part of the IBM Research Distinguished Speaker Series

  • Speaker: Prof. John Preskill
                   Richard P. Feynman Professor of Theoretical Physics, and Amazon Scholar
                   California Institute of Technology
  • Date and Time: Tuesday, February 09, 2021, 12:00 PM
  • Venue: ON-LINE (link in the speaker bio below)

Speaker Bio:
Dr. Preskill is the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology, and an Amazon Scholar. He is a pioneer in the fields of quantum information and quantum computing, with important contributions to cryptography, error-correction, topological phases of matter, and cosmological quantum information. Please join us at 12:00 p.m. ET on Tuesday, February 9 for an informative talk entitled “Quantum Computing: Current Status and Future Prospects.” Dr. Preskill will explore noisy intermediate-scale quantum (NISQ) computers, their potential for surpassing classical computers, and the future of quantum computing. This event will also include time for Q&A. In the meantime, you can read the full abstract and enroll here: https://event.on24.com/wcc/r/2985371/D346C15F6C83AA31924C4817B24FEC19

Host Faculty: Prof. K V Raghavan


Series: CSA Golden Jubilee Women in Computing Event
Title: Panel Discussion on Gender Inclusivity in Computer Science

  • Speaker: Dr Swami Manohar
                   Microsoft Research
                   
                   Prof. Hema Murthy
                   Indian Institute of Technology Madras
                   
                   Prof. Tal Rabin
                   University of Pennsylvania
                   
                   Prof. Omer Reingold
                   Stanford University
  • Date and Time: Friday, February 05, 2021, 9:00 PM
  • Venue: Zoom Webinar: https://zoom.us/j/92180298230

Speaker Bio:
Manohar Swaminathan: Manohar Swaminathan (Swami Manohar) is a Principal Researcher at Microsoft Research India, where he is part of the Technologies for Emerging Markets group. He is also a founding co-convener of the Center for Accessibility in the Global South at IIIT-Bangalore. Manohar is an academic-turned-technology-entrepreneur-turned-researcher with a driving passion to build and deploy technology for positive social impact. He has a PhD in CS from Brown University, was a Professor at the Indian Institute of Science, and has co-founded, managed, advised, and angel-funded several technology startups in India. He has guided over 40 graduate students and has more than 50 refereed publications. His research interests as a professor spanned graphics, virtual reality (he taught a graduate course in VR in 1995), and internet technologies. His current research focus combines this background in technology with his interest in accessibility in the global south.

Hema A. Murthy: Hema A. Murthy is a Professor of Computer Science at IIT Madras, where she has been a faculty member for more than 30 years. Her research interests are in signal processing and machine learning.

Tal Rabin: Tal Rabin is a Professor of Computer Science at the University of Pennsylvania (UPenn). Prior to joining the university, Tal was the head of the research group at Algorand Foundation, and before that she was at IBM Research for 23 years as a Distinguished Research Staff Member and manager of the Cryptographic Research team. Her research focuses on the general area of cryptography, and more specifically on secure multiparty computation and privacy-preserving computations. Rabin is an ACM Fellow, an IACR Fellow, and a member of the American Academy of Arts and Sciences. She was awarded the RSA Award for Excellence in Mathematics in 2019 and was named by Forbes as one of the Top 50 Women in Tech in the world in 2018. In 2014 she won the Anita Borg Women of Vision Award for Innovation. She has initiated and organizes the Women in Theory Workshop, a biennial event for graduate students in Theory of Computer Science.

Omer Reingold: Omer Reingold is the Rajeev Motwani Professor of Computer Science at Stanford University and the director of the Simons Collaboration on the Theory of Algorithmic Fairness. Past positions include Samsung Research America, the Weizmann Institute of Science, Microsoft Research, the Institute for Advanced Study in Princeton, NJ, and AT&T Labs. His research is in the foundations of computer science, most notably in computational complexity, cryptography, and the societal impact of computation. He is an ACM Fellow and a Simons Investigator. Among his distinctions are the 2005 Grace Murray Hopper Award and the 2009 Gödel Prize.

Video Archive Go Top

 

Series: M.Tech (Research) Colloquium- ONLINE
Title: Locally Reconstructable Non-Malleable Secret Sharing

  • Speaker: Ms. Jenit Tomy
                   M.Tech(Research) Student
                   Dept. of CSA
  • Faculty Advisor: Prof. Bhavana Kanukurthi
  • Date and Time: Monday, January 25, 2021, 3:00 PM
  • Venue: Microsoft Teams - ONLINE

Abstract
Non-malleable secret sharing (NMSS) schemes, introduced by Goyal and Kumar (STOC 2018), ensure that a secret m can be distributed into shares m1,...,mn (for some n) such that any t shares (for a threshold parameter t <= n) suffice to reconstruct the secret m, any t-1 shares leak no information about m, and even if the shares used for reconstruction are tampered with, the reconstruction is guaranteed to yield either the original m or something independent of m. Since their introduction, non-malleable secret sharing schemes have sparked an impressive line of research.
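As a minimal illustration of the threshold property described above (but not the non-malleability guarantee, which is the subject of the talk), the following sketch implements plain Shamir secret sharing over a prime field: any t shares reconstruct the secret, while fewer than t reveal nothing.

```python
# Toy Shamir t-out-of-n secret sharing over GF(P). Illustrates the threshold
# reconstruction property only; it is NOT non-malleable and NOT the scheme
# from the talk.
import random

P = 2**127 - 1  # a Mersenne prime; arithmetic is in the field GF(P)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them suffice to recover it."""
    # Random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the point (i, f(i)) for i = 1..n.
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for idx, (x_i, y_i) in enumerate(shares):
        num, den = 1, 1
        for jdx, (x_j, _) in enumerate(shares):
            if idx != jdx:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        # pow(den, P-2, P) is the modular inverse of den, since P is prime.
        secret = (secret + y_i * num * pow(den, P - 2, P)) % P
    return secret
```

For example, with `s = share(42, 3, 5)`, any three of the five shares (say `s[:3]` or `s[2:]`) reconstruct 42, while any two are statistically independent of the secret.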

In this talk, we present a new feature of local reconstructability in NMSS, which allows reconstruction of any portion of a secret by reading just a few locations of the shares. This is a useful feature, especially when the secret is long or when the shares are stored in a distributed manner on a communication network. We give a compiler that takes any non-malleable secret sharing scheme and compiles it into a locally reconstructable non-malleable secret sharing scheme. To secret share a message consisting of k blocks of length r each, our scheme requires reading only r + log k bits (plus a few more bits, whose quantity is independent of r and k) from each party's share (of a reconstruction set) to locally reconstruct a single block of the message.
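The locality idea can be sketched with a toy construction that shares each of the k blocks independently, so recovering one block reads only that block's position in each party's share. This hedged sketch uses simple n-out-of-n XOR sharing; it omits thresholds and the non-malleability guarantee that the compiler in the talk actually provides.

```python
# Toy illustration of local reconstruction: each block is shared
# independently with n-out-of-n XOR (additive) sharing, so reading block i
# of each party's share recovers message block i without touching the rest.
# This is NOT the compiler from the talk; it only illustrates locality.
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def share_blocks(blocks, n):
    """Share each block independently among n parties."""
    shares = [[] for _ in range(n)]  # shares[p][i] = party p's piece of block i
    for block in blocks:
        pads = [secrets.token_bytes(len(block)) for _ in range(n - 1)]
        last = block
        for p in pads:           # last = block XOR pad_1 XOR ... XOR pad_{n-1}
            last = xor(last, p)
        for party, piece in zip(shares, pads + [last]):
            party.append(piece)
    return shares

def reconstruct_block(shares, i):
    """Read only position i of each party's share to recover block i."""
    piece = shares[0][i]
    for party in shares[1:]:
        piece = xor(piece, party[i])
    return piece
```

Here reconstructing one r-bit block reads exactly r bits from each share, matching the flavor of the r + log k cost stated above (the log k term in the actual scheme pays for addressing and consistency information that this toy version ignores).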

Microsoft Teams Link:

https://teams.microsoft.com/l/meetup-join/19%3ameeting_OTg2OGMwOGQtOTgxYi00OGEyLWE3M2MtOTgzMDNkMGQ0ODUy%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%225f3273b8-8838-46b7-b675-b1e9eab4d8ef%22%7d

Copyright: CSA, IISc 2018      Phone: +91-80-22932368          Fax: +91-80-23602911