Seminars

UPCOMING SEMINARS 


PAST SEMINARS 

Series: M.Tech (Research) Colloquium (ONLINE)
Title: Locally Reconstructable Non-Malleable Secret Sharing
Speaker: Ms. Jenit Tomy, M.Tech (Research) Student, Dept. of CSA
Faculty Advisor: Prof. Bhavana Kanukurthi
Date and Time: Monday, January 25, 2021, 3:00 PM
Venue: Microsoft Teams (ONLINE)
Abstract: Non-malleable secret sharing (NMSS) schemes, introduced by Goyal and Kumar (STOC 2018), ensure that a secret m can be distributed into shares m1, ..., mn (for some n) such that any t shares (for a parameter t <= n) can be used to reconstruct the secret m, any t-1 shares leak no information about m, and even if the shares used for reconstruction are tampered with, the reconstruction is guaranteed to yield either the original m or something independent of m. Since their introduction, non-malleable secret sharing schemes have sparked a very impressive line of research.
In this talk, we present a new feature of local reconstructability in NMSS, which allows reconstruction of any portion of a secret by reading just a few locations of the shares. This is a useful feature, especially when the secret is long or when the shares are stored in a distributed manner on a communication network. We give a compiler that takes any non-malleable secret sharing scheme and compiles it into a locally reconstructable non-malleable secret sharing scheme. To secret share a message consisting of k blocks of length r each, our scheme requires reading only r + log k bits (plus a few more bits, whose quantity is independent of r and k) from each party's share (of a reconstruction set) to locally reconstruct a single block of the message.
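The t-out-of-n reconstruction and t-1 privacy properties described in the abstract are the classical threshold guarantee. As a point of reference only, textbook Shamir secret sharing over a prime field illustrates that guarantee (it is not the speaker's NMSS construction, and it is not non-malleable):

```python
# Plain t-out-of-n Shamir secret sharing over GF(p). Illustrates the
# threshold reconstruction property only; tampered shares are NOT handled.
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, n):
    """Split `secret` into n shares; any t of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the degree-(t-1) polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat).
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(42, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice -> 42
```

Any t-1 shares are consistent with every possible secret, which is the information-theoretic privacy the abstract refers to; non-malleability is the additional guarantee layered on top by NMSS compilers.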
Microsoft Teams Link:
https://teams.microsoft.com/l/meetupjoin/19%3ameeting_OTg2OGMwOGQtOTgxYi00OGEyLWE3M2MtOTgzMDNkMGQ0ODUy%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97f6a741e3b2c5ad4193976476%22%2c%22Oid%22%3a%225f3273b8883846b7b675b1e9eab4d8ef%22%7d
Series: Ph.D (Engg.) Thesis Colloquium (ONLINE)
Title: Deep Learning over Hypergraphs
Speaker: Mr. Naganand Yadati, Ph.D Student, Dept. of CSA
Faculty Advisor: Prof. Partha Pratim Talukdar
Date and Time: Monday, January 25, 2021, 11:00 AM
Venue: Microsoft Teams (ONLINE)
Abstract: Though graphs have been extensively used for modelling real-world relational datasets, they are restricted to pairwise relationships, i.e., each edge connects exactly two vertices. Many real-world relational datasets, such as academic networks, chemical reaction networks, and email communication networks, contain group-wise relationships that go beyond pairwise associations. Hypergraphs can flexibly model such datasets by relaxing the notion of an edge to connect an arbitrary number of vertices, providing a mathematical foundation for understanding and learning from large amounts of real-world heterogeneous data.
The state-of-the-art techniques for learning from graph data with pairwise relationships use graph-based deep learning models such as graph neural networks. A prominent observation that inspires this thesis is that deep neural networks are still under-explored for hypergraph data with group-wise relationships. Hypergraphs have been utilised as primary data structures in many machine learning tasks such as vertex classification, hypergraph link prediction, and knowledge base completion. However, the fundamental limitation of most existing non-neural techniques is that they cannot leverage high-dimensional features on vertices, especially those which are not present in relational data (e.g., text attributes of documents in academic networks). In this thesis, we propose novel deep learning-based methods for hypergraph data with high-dimensional vertex features.
1) Deep Learning for Hypergraph Vertex-level Predictions
In the first part of the thesis, we focus on addressing limitations of existing methods for vertex-level tasks over hypergraphs. In particular, we propose HyperGraph Convolutional Network (HyperGCN) for semi-supervised vertex classification over hypergraphs. Unlike existing methods, HyperGCN principally bridges tools from graph neural networks and spectral hypergraph theory.
2) Deep Learning for Hypergraph Link Prediction
In the second part, we focus on the task of predicting group-wise relationships (i.e., link prediction over hypergraphs). We propose Neural Hyperlink Predictor (NHP), a novel neural network-based method for link prediction over hypergraphs. NHP uses a novel scoring layer that principally enables us to predict group relationships on incomplete hypergraphs where hyperedges need not represent similarity.
3) Deep Learning for Multi-Relational and Recursive Hypergraphs
In the third and final part, we explore more complex structures such as multi-relational hypergraphs, in which each hyperedge is typed (i.e., belongs to a relation type), and recursive hypergraphs, in which hyperedges can act as vertices in other hyperedges. We first propose Generalised Message Passing Neural Network (GMPNN) for learning vertex representations on multi-relational ordered hypergraphs. GMPNN generalises existing MPNNs on graphs, hypergraphs, multi-relational graphs, heterogeneous graphs, and multilayer networks. We then propose MPNN-Recursive, a novel framework, to handle recursively structured data. Extensive experimentation on real-world hypergraphs shows the effectiveness of our proposed models.
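The basic object underlying all three parts can be made concrete. A minimal sketch of a hypergraph and its incidence matrix, the representation many hypergraph learning methods start from (this is illustrative only, not the HyperGCN/NHP models from the talk; the example vertices and hyperedges are made up):

```python
# A hypergraph is a vertex set plus hyperedges of arbitrary size.
# The incidence matrix H has H[v, e] = 1 iff vertex v is in hyperedge e.
import numpy as np

vertices = ["a", "b", "c", "d", "e"]
# Unlike graph edges, each hyperedge may connect any number of vertices.
hyperedges = [{"a", "b", "c"}, {"c", "d"}, {"a", "d", "e"}]

H = np.array([[1 if v in e else 0 for e in hyperedges] for v in vertices])

degrees = H.sum(axis=1)     # number of hyperedges containing each vertex
edge_sizes = H.sum(axis=0)  # number of vertices in each hyperedge
print(H.shape, edge_sizes)  # (5, 3) [3 2 3]
```

Collapsing each hyperedge into a pairwise clique would discard exactly the group-wise information (here, that a, b, c co-occur as one relation) that the abstract argues hypergraph-native models should exploit.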
Microsoft Teams link:
https://teams.microsoft.com/l/meetupjoin/19%3ameeting_MTg4MDZiODQtMWE0Zi00YTA1LWEzMmMtNzU3MzY3Y2E1Zjk5%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97f6a741e3b2c5ad4193976476%22%2c%22Oid%22%3a%2228f0419ce018464c85c03f1bd2ac7281%22%7d
Series: Golden Jubilee Women in Computing Lecture
Title: Kicking Butt in Computer Science: Women in Computing at Carnegie Mellon and Around the World
Speakers: Dr. Carol Frieze and Dr. Jeria Quesenberry, Carnegie Mellon University
Date and Time: Thursday, January 21, 2021, 8:00 PM
Venue: Zoom Webinar: https://zoom.us/j/92747420552
Abstract: In this talk we discuss our work on the participation of women in computer science in Carnegie Mellon's School of Computer Science. Carnegie Mellon (CMU) has been a national leader in paying attention to women's participation in computing. In the last few years CMU hit a landmark in reaching gender parity in the computer science major. Our book, "Kicking Butt in Computer Science: Women in Computing at Carnegie Mellon University", tells a positive story and illustrates the value of looking to cultural factors, not gender differences. More recently our work moved beyond CMU to ask what is happening with women in computing globally. In "Cracking the Digital Ceiling: Women in Computing Around the World" we brought together more than 20 academics to provide their perspectives on the issue from a wide range of countries and cultures. We will discuss the various obstacles and catalysts that help determine women's participation in the ever-growing and life-influencing fields of computing.
Speaker Bio: Carol Frieze, Ph.D.
Dr. Carol Frieze was Director of Women@SCS and SCS4ALL in Carnegie Mellon's School of Computer Science, working on diversity and inclusion for the past 20 years. Her publications, teaching, and research interests focus on the culture of computing, stereotypes and myths, unconscious bias, and broadening participation in computing fields. Carol recently retired but continues to teach part-time in the School of Computer Science.
http://www.cs.cmu.edu/~cfrieze/
Jeria Quesenberry, Ph.D.
Dr. Jeria Quesenberry is an Associate Teaching Professor of Information Systems in the Dietrich College of Humanities and Social Sciences at Carnegie Mellon University. Her research interests are directed at the study of cultural influences on information technology students and professionals, including topics of social inclusion, broadening participation, career values, organizational interventions, and worklife balance.
https://jeriaq.com/
Series: Department Seminar (ONLINE)
Title: Dismantling the deep neural network black box
Speaker: Dr. Chandrashekar Lakshminarayanan, Indian Institute of Technology Palakkad, Kerala
Date and Time: Thursday, January 21, 2021, 3:00 PM
Venue: Microsoft Teams (ONLINE)
Abstract: Deep neural networks (DNNs) have been quite successful in a variety of supervised learning tasks. A key reason attributed to the success of DNNs is their ability to automatically learn high-level representations of the data. The standard view is that low-level representations are learnt in the initial layers and, as one proceeds in depth, sophisticated high-level representations are learnt in the deeper layers. In this talk, we will focus on DNNs with rectified linear unit (ReLU) activations (ReLU-DNNs), a widely used subclass of DNNs. We will exploit the gating property of ReLU activations to build an alternative theory for representation learning in ReLU-DNNs. The highlights are:
1) We encode gating information in a novel neural path feature. We analytically show that the standalone role of gates is characterised by the associated neural path kernel (NPK).
2) We show via experiments (on standard datasets) that almost all useful information is stored in the gates, and that neural path features are learnt during training.
3) We show that the neural path kernel has a composite structure. In the case of fully connected DNNs, the NPK is a product of base kernels; in the case of residual networks with skip connections, the NPK has a sum-of-products (of base kernels) form; and in the case of convolutional nets, the NPK is rotationally invariant.
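The gating property the highlights rely on can be stated in one line: a ReLU unit's output equals a 0/1 gate times its pre-activation, so once the gates are fixed, the network is linear in its input. A toy sketch of this identity (my illustration of the standard property, not the speaker's neural path feature/kernel construction):

```python
# ReLU(z) == gate(z) * z elementwise, where gate(z) = 1 if z > 0 else 0.
# Fixing the gating pattern turns a ReLU network into a plain linear map.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))  # hidden layer weights
W2 = rng.standard_normal((1, 4))  # output layer weights
x = rng.standard_normal(3)

pre = W1 @ x                        # pre-activations of the hidden layer
gates = (pre > 0).astype(float)     # the 0/1 gating pattern
relu_out = np.maximum(pre, 0)

# The two views coincide.
assert np.allclose(relu_out, gates * pre)

# With the gates held fixed, the output is linear in x through the
# "open" paths only -- the gates alone decide which paths contribute.
y = W2 @ (gates * (W1 @ x))
assert np.allclose(y, W2 @ relu_out)
```

The talk's claim is that these gating patterns, collected across layers into neural path features, already carry almost all the useful learned information.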
Speaker Bio: Chandrashekar Lakshminarayanan obtained his PhD from the Department of Computer Science and Automation, Indian Institute of Science (2016), was a postdoctoral research fellow at the Department of Computing Science, University of Alberta (July 2016 - June 2017), and a research scientist at DeepMind, London (August 2017 - July 2018). Prior to his PhD, he was an analog design engineer at Cosmic Circuits, Bangalore for a period of 3 years. He joined IIT Palakkad as an assistant professor in July 2018. His research interests include deep learning, reinforcement learning, and stochastic approximation algorithms. He is also interested in machine learning for human-in-the-loop systems.
Microsoft Teams link:
https://teams.microsoft.com/l/meetupjoin/19%3ameeting_MDkzMmZhNWEtNmU2My00N2JmLWI3NGUtZTNlM2E0OTcxNzNh%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97f6a741e3b2c5ad4193976476%22%2c%22Oid%22%3a%22c747ccaaceaa4197b4cbce2f1d4694da%22%7d
Host Faculty: Prof. Shalabh Bhatnagar
Series: Golden Jubilee Women in Computing Lecture
Title: A Theoretical Computer Science Perspective on Consciousness
Speaker: Prof. Lenore Blum, Carnegie Mellon University
Date and Time: Monday, January 18, 2021, 8:30 PM
Venue: Microsoft Teams Meeting: https://tinyurl.com/y56jrn9b
Abstract: The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. This talk looks at consciousness from the perspective of theoretical computer science. It formalizes the Global Workspace Theory (GWT) originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, and others. Our major contribution lies in the precise formal definition of a Conscious Turing Machine (CTM), also called a Conscious AI. We define the CTM in the spirit of Alan Turing's simple yet powerful definition of a computer, the Turing Machine (TM). We are not looking for a complex model of the brain nor of cognition, but for a simple model of (the admittedly complex concept of) consciousness.
After formally defining CTM, we give a formal definition of consciousness in CTM. We then suggest why the CTM has the feeling of consciousness. The reasonableness of the definitions and explanations can be judged by how well they agree with commonly accepted intuitive concepts of human consciousness, the range of related concepts that the model explains easily and naturally, and the extent of its agreement with scientific evidence.
Joint work with Manuel Blum and Avrim Blum.
Speaker Bio: Lenore Blum (PhD, MIT) is Distinguished Career Professor Emeritus of Computer Science at Carnegie Mellon University (CMU) and Founding Director of Project Olympus, an innovation center that works with faculty and students to bridge the gap between cutting-edge university research/innovation and economy-promoting commercialization for the benefit of our communities. Project Olympus is a good example of Blum's determination to make a real difference in the academic community and the world beyond.
Lenore is internationally recognized for her work in increasing the participation of girls and women in Science, Technology, Engineering, and Math (STEM) fields. She was a founder of the Association for Women in Mathematics and founding co-Director of the Math/Science Network and its Expanding Your Horizons conferences for middle and high school girls. At CMU she founded the Women@SCS program and CS4HS, now sponsored worldwide by Google. In 2004 she received the US Presidential Award for Excellence in Science, Mathematics, and Engineering Mentoring. In 2009 she received the Carnegie Science Catalyst Award, recognizing her work targeting high-tech talent to promote economic growth in the Pittsburgh region and for increasing the participation of women in computer science. Currently half the computer science undergraduate majors at CMU are women.
Lenore has served the professional community in numerous capacities, including as President of the Association for Women in Mathematics, Vice President of the American Mathematical Society, and as a member of the MIT Mathematics Visiting Committee. She has taught at the University of California at Berkeley, was a Senior Researcher at the International Computer Science Institute and Deputy Director of the Mathematical Sciences Research Institute, both also in Berkeley. She is currently on the Advisory Board of the new free online WorldQuant University, built on the premise that while talent is universally distributed, opportunity is not.
Lenore's research, from her early work in model theory and differential fields (logic and algebra) to her more recent work developing a theory of computation and complexity over the real numbers (mathematics and computer science), has focused on merging seemingly unrelated areas. The latter work, founding a theory of computation and complexity over continuous domains (with Felipe Cucker, Mike Shub and Steve Smale), forms a theoretical basis for scientific computation. On the eve of Alan Turing's 100th birthday in June 2012, she was a plenary speaker at the Turing Centenary Celebration at the University of Cambridge, England, showing how a little-known (to logicians and computer scientists!) paper of Turing's is fundamental to this theory. She is currently working on an exciting research project with Manuel Blum and Avrim Blum developing a computer architecture for a conscious AI based on cognitive neuroscience.
Series: Ph.D (Engg.) Thesis Defence (ONLINE)
Title: On the Round Complexity Landscape of Secure Multiparty Computation
Speaker: Ms. Divya Ravi
Faculty Advisor: Dr. Arpita Patra
Date and Time: Friday, January 15, 2021, 3:00 PM
Venue: Microsoft Teams (ONLINE)
Abstract: In secure multiparty computation (MPC), n parties wish to jointly perform a computation on their private inputs in a secure way, so that no adversary corrupting a subset of the parties can learn more information than their outputs (privacy), nor affect the outputs of the computation other than by choosing their own inputs (correctness). The round complexity of MPC protocols is a fundamental question in the area of secure computation, and its study constitutes a phenomenal body of work in the MPC literature. The research goal of this thesis is to advance the state of the art by expanding this study of round complexity to various adversarial settings and network models. The main contributions of this thesis are organized into three broad categories:
MPC for small population ([1,2]). We begin with the study of round-optimal (more generally, round-efficient) MPC protocols for small populations, namely involving 3 (3PC) and 4 (4PC) parties tolerating a single active corruption (honest majority). On the theoretical side, we settle the exact round complexity of 3PC in the honest-majority setting for a range of security notions such as selective abort, unanimous abort, fairness, and guaranteed output delivery. On the practical side, we present efficient, constant-round 3PC and 4PC protocols with fairness and guaranteed output delivery, suitable for high-latency networks such as the Internet.
Beyond Traditional Adversaries ([3,4]). We extend the study of round complexity beyond the traditional adversarial settings. First, we move past the demarcation of the study of round complexity of MPC based on resilience (i.e., honest-majority or dishonest-majority settings) and investigate an interesting class of protocols called Best-of-Both-Worlds (BoBW) MPC, which simultaneously achieve fairness / guaranteed output delivery in the honest-majority setting and unanimous abort in the dishonest-majority setting. We nearly settle the question of the exact round complexity of BoBW protocols for several popular setups of MPC such as the plain model, the Common Reference String (CRS) model, and the PKI model.
Second, we move past the demarcation of the study of round complexity of MPC based on a single type of corruption, i.e., either purely passive or active. We consider a generalized adversarial setting where the adversary can simultaneously perform both kinds of corruption. We settle the question of the exact round complexity of MPC protocols achieving fairness and guaranteed output delivery against two such generalized and powerful adversaries, called the dynamic and boundary adversaries, in the CRS model.
Power of Hybrid Networks ([5]). A popular categorization of the study of MPC based on the network is the synchronous versus asynchronous setting. On one hand, asynchronous networks are more realistic; on the other, synchronous protocols are known to have better fault tolerance and properties compared to their asynchronous counterparts. With the goal of combining their best features, we explore a hybrid network that is asynchronous in nature and yet supports a few synchronous rounds at the onset of a protocol execution. We address fundamental questions that throw light on the minimal synchrony assumption needed to achieve the properties of fully synchronous protocols. We bridge the existing theoretical feasibility gap between perfectly secure synchronous and asynchronous VSS and MPC protocols, where verifiable secret sharing (VSS) constitutes a fundamental building block of MPC.
References
[1] Arpita Patra and Divya Ravi. On the exact round complexity of secure three-party computation. In CRYPTO, 2018.
[2] Megha Byali, Arun Joseph, Arpita Patra, and Divya Ravi. Fast secure computation for small population over the internet. In ACM Conference on Computer and Communications Security (CCS), 2018.
[3] Arpita Patra, Divya Ravi, and Swati Singla. On the exact round complexity of Best-of-Both-Worlds multiparty computation. In ASIACRYPT, 2020.
[4] Arpita Patra and Divya Ravi. Beyond honest majority: The round complexity of fair and robust multiparty computation. In ASIACRYPT, 2019.
[5] Arpita Patra and Divya Ravi. On the power of hybrid networks in multiparty computation. IEEE Trans. Inf. Theory, 64(6):4207-4227, 2018.
Microsoft Teams Link:
https://teams.microsoft.com/l/meetupjoin/19%3ameeting_NmY0ZGQzOTQtNWJhNS00ZjEyLTg1ODktMmRhOWY1NGNhM2Q1%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97f6a741e3b2c5ad4193976476%22%2c%22Oid%22%3a%22a787cc0157cc4fc1b7e14e9d51923f6d%22%7d
Series: Department Seminar
Title: Discourse-guided Approaches for Event Coreference Resolution
Speaker: Prafulla Kumar Choubey, Texas A&M University
 Date and Time: Friday, January 15, 2021, 11:30 AM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: Event coreference resolution aims to determine and cluster event mentions that refer to the same real-world event. A typical event coreference resolution system relies on scoring similarity between event-argument features of all event pairs in a document, followed by clustering. However, a text document follows the principle of language economy, where most of the events are mentioned only once (singletons). Consequently, only certain key events that connect other peripheral events are repeated, to organize the content and produce a coherent story. Additionally, events are rarely mentioned together with all of their arguments, which results in a sparse and scattered distribution of coreferential event mentions. In this talk, I will discuss some of my recent work that addresses sparsity in the coreferential event distribution by developing a holistic approach based on cues from the document content organization structures in news articles.
Talk Meeting Link: https://teams.microsoft.com/l/meetupjoin/19%3ameeting_OWVjMWI2YTAtNGI1Yy00NGM3LTlhZTEtZWZiMjEzMTY1Mjlj%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97f6a741e3b2c5ad4193976476%22%2c%22Oid%22%3a%224bcd3d56e4054b0699fb27742262f261%22%7d
Speaker Bio: Prafulla is currently a Ph.D. candidate in the Department of Computer Science and Engineering at Texas A&M University. He is working under the supervision of Prof. Ruihong Huang. In 2014, he got his bachelor's degree in Electrical Engineering from the Indian Institute of Technology (IIT) Roorkee. Before joining Texas A&M in fall 2016, he worked in the system software team at Samsung Research & Development India Bangalore. His research interests are broadly in natural language processing and machine learning.
Host Faculty: R. Govindarajan
Series: Department Seminar
Title: Data Science at Scale: Scaling Up by Scaling Down and Out (to Disk)
Speaker: Dr. Prashant Pandey, Lawrence Berkeley National Lab & University of California, Berkeley, USA
 Date and Time: Monday, January 11, 2021, 8:00 AM
 Venue: CSA Seminar Hall (Room No. 254, First Floor)
Abstract: The standard solution for scaling applications to massive data is scale-out, i.e., using more computers or more RAM. This talk presents my work on complementary techniques: scaling down, i.e., shrinking data to fit in RAM, and scaling to disk, i.e., organizing data on disk so that the application can still run fast. I will describe new compact and I/O-efficient data structures and their applications in stream processing, computational biology, and storage.
Concretely, I show how to bridge the gap between the worlds of external memory and stream processing to perform scalable and precise real-time event detection on massive streams. I show how to shrink genomic and transcriptomic indexes by a factor of two while accelerating queries by an order of magnitude compared to state-of-the-art tools. I show how to improve file-system random-write performance by an order of magnitude without sacrificing sequential read/write performance.
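The "scaling down" theme rests on compact data structures that trade exactness for space. A textbook Bloom filter illustrates the idea of an approximate membership structure (it is a generic example, not the counting quotient filter family from the speaker's work):

```python
# Textbook Bloom filter: a compact approximate set that answers
# "possibly present" or "definitely absent" using k hash probes into
# a bit array. False positives are possible; false negatives are not.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _probes(self, item):
        # Derive k probe positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._probes(item))

bf = BloomFilter()
for key in ["chr1:1042", "chr2:977"]:  # hypothetical genomic keys
    bf.add(key)
print("chr1:1042" in bf)  # True: no false negatives for inserted keys
```

Structures in this family use a few bits per item instead of storing the items themselves, which is what makes RAM-resident indexes over massive datasets feasible.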
Teams Meeting Link: https://teams.microsoft.com/l/meetupjoin/19%3ameeting_YmExNmNmZjMtODM1Zi00MDUxLWFkNmEtNjdmYThkZWIxNjkx%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97f6a741e3b2c5ad4193976476%22%2c%22Oid%22%3a%224bcd3d56e4054b0699fb27742262f261%22%7d
Speaker Bio: Dr. Prashant Pandey is a Postdoctoral Research Fellow at Lawrence Berkeley Lab and the University of California, Berkeley, working with Prof. Kathy Yelick and Prof. Aydin Buluc. Prior to that, he spent one year as a postdoc at Carnegie Mellon University (CMU) working with Prof. Carl Kingsford. He obtained his Ph.D. in Computer Science in 2018 at Stony Brook University, where he was co-advised by Prof. Michael Bender and Prof. Rob Johnson.
His research interests lie at the intersection of systems and algorithms. He designs and builds tools backed by theoretically well-founded data structures for large-scale data management problems across computational biology, stream processing, and storage. He is also the main contributor to and maintainer of multiple open-source software tools that are used by hundreds of users across academia and industry.
During his Ph.D. he interned at Intel Labs and Google. While interning at Intel Labs, he worked on an encrypted FUSE file system using Intel SGX. At Google, he designed and implemented an extension to the ext4 file system for cryptographically ensuring file integrity. While at Google, he also worked on the core data structures of Spanner, Google's geo-distributed big database.
Host Faculty: R. Govindarajan
Series: Ph.D (Engg.) Thesis Defence (ONLINE)
Title: Hypergraph Network Models: Learning, Prediction, and Representation in the Presence of Higher-Order Relations
Speaker: Mr. Govind Sharma, Ph.D Student, Dept. of CSA
Faculty Advisors: Prof. M Narasimha Murty / Prof. Shalabh Bhatnagar
Date and Time: Tuesday, December 08, 2020, 10:00 AM
Venue: Microsoft Teams (ONLINE)
Abstract: The very thought of relating objects makes us assume the relation would be pairwise, and not of a higher order, involving possibly more than two of them at a time. Yet in reality, higher-order relations do exist and are spread across multiple domains: medical science (e.g., co-existing diseases/symptoms), pharmacology (e.g., reacting chemicals), bibliometrics (e.g., collaborating researchers), the film industry (e.g., cast/crew), human resources (e.g., a team), social sciences (e.g., negotiating/conflicting nations), and so on. Since a collection of intersecting higher-order relations loses context when represented by a graph, "hypergraphs", graph-like structures that allow edges (called hyperedges/hyperlinks) spanning possibly more than two nodes, capture them better. In a quest to better understand such relations, in this thesis we focus on solving a few network-science oriented problems involving hypergraphs.
"Hypergraphs and Pairwise Links": In the first of three broad parts, we study the behavior of usual graphoriented networks that have an otherwiseignored hypergraph underpinning. We particularly establish the skewness a hypergraph introduces into its induced graphs, and the effect of these biases on the structure and evaluation of the wellknown problem of link prediction in networks. We find that an underlying hypergraph structure makes popular heuristics such as commonneighbors overestimate their ability to predict links. Gathering enough evidence  both theoretical and empirical  to support the need to reestablish the evaluations of link prediction algorithms on hypergraphderived networks, we propose adjustments that essentially undo the undesired effects of hypergraphs in performance scores. Motivated by this observation, we extend graphbased structural node similarity measures to cater to hypergraphs (although still, for similarity between pairs of nodes). To be specific, we first establish mathematical transformations that could transfer any graphstructurebased notion of similarity between node pairs to a hypergraphstructurebased one. Using exhaustive combinations of the newly established scores with the existing ones, we could show improvements in the performance of both structural as well as temporal link prediction.
"Predicting Higherorder Relations": For the second part of our thesis, we turn our attention towards a more central problem in hypergraphs  the hyperlink/hyperedge prediction problem. It simply refers to developing models to predict the occurrence of missing or future hyperedges. We first study the effect of negative sampling (sampling the negative class)  an exercise performed due to the extreme intractability of the set of all nonhyperlinks, also known as the class imbalance problem  on hyperlink prediction, which has never been studied in the past. Since we observe hyperlink prediction algorithms performing differently under different negative sampling techniques, our experiments help the seemingly unimportant procedure gain some significance. Moreover, we contribute towards two benchmark negative sampling algorithms that would help standardize the step. Moving on from the negative sampling problem to predicting hyperlinks themselves, we work on two different approaches: a cliqueclosure based approach, and a subhigherorder oriented one. While in the former, we develop and successfully test the cliqueclosure hypothesis  that hyperlinks mostly form from cliques or nearcliques  and are able to utilize it for hyperlink prediction via matrix completion (C3MM), the latter approach works on a novel information flow model in hypergraphs. More precisely, we introduce the concept of subhyperedges to capture the subhigherorder notion in relations, and utilize an attentionbased neural network model called SHONeN focusing on subhyperedges of a hyperedge. Owing to SHONeNs computational complexity, we propose a suboptimal heuristic that is able to perform better than its baselines on the downstream task of predicting hyperedges.
"Higherorder Bipartite Relations": The third and final part of our thesis is dedicated exclusively to "bipartite hypergraphs": structures that are used to capture higherorder relations between two disjoint node sets, e.g., a patients diagnosis (possibly multiple diseases and symptoms), a movie project (multiple actors and crew members), etc. We first capture the structure of realworld such networks using perfixed bipartite hypergraphs (those where the set of left and right hyperedges is fixed beforehand), and then focus on the bipartite hyperlink prediction problem. Since existing selfattention based approaches meant for usual hypergraphs do not work for bipartite hypergraphs  a fact that our experiments validate, we propose a crossattention model for the same, and use the notion of setmatching over collections of sets to solve for bipartite hyperlink prediction. As a result, we develop a neural network architecture called CATSETMAT that performs way better than any of the other approaches meant to solve the bipartite hyperlink prediction problem. Last but not least, we also explain how, owing to an observation we call the positivenegative dilemma, existing stateoftheart algorithms fail on bipartite hypergraphs.
Microsoft Teams Link:
https://teams.microsoft.com/l/meetupjoin/19%3ameeting_ZjBiZjg4YTgtMWY5Ny00MDNjLWE4MmItMTI0MWRlYzc3MjIz%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97f6a741e3b2c5ad4193976476%22%2c%22Oid%22%3a%227cc8bd44135e4281b03380d7f9df42fd%22%7d
Series: Ph.D (Engg.) Thesis Defence (ONLINE)
Title: Towards Secure and Efficient Realization of Pairing-Based Signatures from Static Assumptions
Speaker: Mr. R. Kabaleeshwaran, Ph.D (Engg.) Student, Dept. of CSA, IISc
Faculty Advisor: Prof. Sanjit Chatterjee
Date and Time: Thursday, December 03, 2020, 10:00 AM
Venue: Microsoft Teams (ONLINE)
Abstract: Bilinear pairings defined over elliptic curve groups were first used to design novel cryptosystems in 2000. Since then, a large number of cryptosystems have been proposed in pairing-based cryptography (PBC). The main tool for all such cryptosystems is the pairing map, which can be defined on either composite-order or prime-order groups. Security of a public key cryptosystem is typically proved under some computational hardness assumption. PBC has witnessed plenty of parameterized/interactive assumptions. However, it is well known that such assumptions have several limitations. In this thesis we explore the question of security and efficiency of pairing-based signature schemes based on static assumptions. We have investigated the efficacy of the following approaches towards this goal: (i) frameworks for converting cryptosystems from the composite-order to the prime-order bilinear pairing setting, (ii) the Déjà Q framework, for removing dependency on parameterized assumptions, and (iii) the dual-form signature (DFS) technique, for removing dependency on parameterized/interactive assumptions.
First, we focus on the conversion framework. In 2005, Boneh et al. introduced a novel homomorphic encryption scheme using the composite-order pairing setting. Since then, plenty of cryptosystems have been constructed in the composite-order pairing setting. However, it is well known that a composite-order pairing is significantly slower than its prime-order counterpart. This motivated Freeman to propose his conversion framework, which converts some cryptographic protocols to the prime-order pairing setting. He formally defined certain properties called projecting and canceling, which are used in the protocol construction and/or in the security argument. Since then several frameworks have been proposed for conversion purposes. We revisit all the projecting frameworks and establish that Freeman's framework is still optimal in the asymmetric pairing setting. We also present an alternative security proof for Seo-Cheon's projecting and canceling framework under the static symmetric external Diffie-Hellman (SXDH) assumption, instead of the original tailor-made assumption. Next, we formalize the full-decomposition notion in the existing projecting frameworks and show that this notion is sufficient instead of the so-called translating property. Then, we abstract an unbalanced projecting framework in the asymmetric pairing setting that allows the pairing source groups to have different orders. As applications, we convert the following schemes to the prime-order asymmetric pairing setting: the Shacham-Waters ring signature, the Boyen-Waters group signature and Meiklejohn et al.'s round-optimal blind signature. In their original constructions, security of the above schemes requires both the projecting and canceling properties in the composite-order symmetric pairing setting. We show that the framework for projecting and canceling is not necessary to instantiate these schemes.
Next, we focus on a set of parameterized assumptions called the BTA family. Such parameterized assumptions play a crucial role in the security of many novel pairing-based cryptosystems. However, they have some negative impact on efficiency at a concrete security level. A prominent approach to removing the dependency on parameterized assumptions is the Deja Q framework of Chase et al. The Deja Q framework aims to establish that certain parameterized assumptions are implied by the subgroup hiding assumption in the composite-order pairing setting. Applying the Deja Q framework to a family of assumptions is an important question, as it covers several parameterized assumptions which in turn cover more cryptographic protocols. However, the existing Deja Q framework could cover only Boyen's Uber assumption family. Recently, Ghadafi and Groth introduced the bilinear target assumption (BTA) family, which covers more parameterized assumptions, including the Uber assumption family. We show that the parameterized assumptions that belong to the BTA family are reducible from the subgroup hiding assumption. In the process, we first suitably extend a property called parameter-hiding and then adapt the Deja Q proof technique to the parameterized assumptions that belong to the BTA family.
Finally, we focus on the applicability of the dual-form signature (DFS) technique to some pairing-based signatures. The Deja Q framework does not address the question of how to remove the dependency on interactive assumptions. The DFS technique can be used for this purpose, and it is applied directly in the security argument of the protocol. We use the DFS technique to prove the security of Abe et al.'s structure-preserving signature, the Boyen-Waters group signature and the Pointcheval-Sanders re-randomizable signature (RRS) under some static assumptions. We also present an efficient construction of an RRS scheme in the prime-order setting. Then, we use the proposed RRS scheme as a building block to construct a sequential aggregate signature (SeqAS) scheme with a constant-size public key under the SXDH assumption. The performance of the proposed schemes is comparable to that of previous proposals based on some non-standard interactive assumptions.
Microsoft Teams link:
https://teams.microsoft.com/l/team/19%3ab2e4009d9ba64dd1a4608e1e4d71634f%40thread.tacv2/conversations?groupId=01bc59f0ab9344a28bab93177163507e&tenantId=6f15cd97f6a741e3b2c5ad4193976476
 Series: Ph.D (Engg.) (Colloquium) ONLINE Title: Algorithms for Challenges to Practical Reinforcement Learning  Speaker: Ms. Sindhu P R
Ph.D (Engg.) student
Dept. of CSA  Faculty Advisor: Prof. Shalabh Bhatnagar
 Date and Time: Tuesday, December 01, 2020, 2:30 PM
 Venue: Microsoft Teams: ONLINE
Abstract Reinforcement learning (RL) in real-world applications faces major hurdles, the foremost being the safety of the physical system controlled by the learning agent and the varying environment conditions in which the autonomous agent functions. An RL agent learns to control a system by exploring available actions. In some operating states, when the RL agent exercises an exploratory action, the system may enter unsafe operation, which can lead to safety hazards both for the system and for the humans supervising it. RL algorithms thus need to respect these safety constraints and must do so with limited available information. Additionally, RL agents learn optimal decisions in the presence of a stationary environment. However, the stationarity assumption on the environment is very restrictive. In many real-world problems like traffic signal control, robotic applications, etc., one often encounters non-stationary environments, and in these scenarios, RL algorithms yield suboptimal decisions.
This talk describes our algorithmic solutions to the challenges of safety and non-stationary environmental conditions in RL. In order to handle safety restrictions and facilitate safe exploration during learning, we propose a cross-entropy method based, sample-efficient learning algorithm. This algorithm is developed within a constrained optimization framework and utilizes limited information to learn feasible policies. Also, during the learning iterations, exploration is guided in a manner that minimizes safety violations.
The goal of the second algorithm is to find a good control policy when the latent model of the environment changes with time. To achieve this, the algorithm leverages a change-point detection procedure to monitor changes in the statistics of the environment. The results from this statistical procedure are used to reset the learning of policies and efficiently control the underlying system. Both proposed algorithms are tested numerically on benchmark problems in RL.
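The abstract does not specify which change-point detector is used. As a purely illustrative sketch (the CUSUM statistic, the class name, and all parameter values below are our assumptions, not the thesis's method), a detector of this kind can monitor a scalar statistic of the environment, such as the reward stream, and signal the learner to reset its policy when the statistics shift:

```python
import numpy as np

class CusumDetector:
    """CUSUM-style change-point monitor on a scalar statistic stream.

    `mean0` is the statistic's nominal mean, `drift` the slack, and
    `threshold` the detection level. Names and defaults are illustrative.
    """
    def __init__(self, mean0, drift=0.5, threshold=5.0):
        self.mean0, self.drift, self.threshold = mean0, drift, threshold
        self.g_pos = self.g_neg = 0.0

    def update(self, x):
        # Accumulate evidence of an upward or downward shift in the mean.
        self.g_pos = max(0.0, self.g_pos + (x - self.mean0) - self.drift)
        self.g_neg = max(0.0, self.g_neg - (x - self.mean0) - self.drift)
        if max(self.g_pos, self.g_neg) > self.threshold:
            self.g_pos = self.g_neg = 0.0   # re-arm after an alarm
            return True                      # signal: reset the learner
        return False
```

On an alarm, an RL loop would discard (or heavily discount) its current policy estimates and restart learning against the new environment statistics.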
The second part of this talk will focus on the application of RL to the obstacle avoidance problem for quadrotor UAVs. Obstacle avoidance in quadrotor aerial vehicle navigation brings additional challenges in comparison to ground vehicles. This is because an aerial vehicle needs to navigate across more types of obstacles: objects like decorative items, furnishings, ceiling fans, signboards, tree branches, etc., are also potential obstacles for a quadrotor aerial vehicle. Thus, methods of obstacle avoidance developed for ground robots are clearly inadequate for UAV navigation. Our proposed method utilizes the relevant temporal information available from the ambient surroundings, namely a sequence of monocular camera images collected by the quadrotor aerial vehicle. Our method adapts attention-based deep Q-networks combined with generative adversarial networks for this application. It improves the efficiency of learning by inferring quadrotor maneuver decisions from temporal information of the ambient surroundings.
Microsoft Teams Meeting Link:
https://teams.microsoft.com/l/meetupjoin/19%3ameeting_M2E3MmRlZjktY2Y0Mi00ZmZhLTljMDgtYTg4NThiYWJhZmU3%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97f6a741e3b2c5ad4193976476%22%2c%22Oid%22%3a%221b0586c214884f8fab3cd2e61940254c%22%7d
 Series: Theory Seminar Series: ONLINE Title: An O(m^{4/3+o(1)}) algorithm for exact unit-capacitated max flow  Speaker: Mr. Tarun Kathuria
Ph.D Student
University of California
Berkeley  Date and Time: Friday, November 13, 2020, 11:00 AM
 Venue: Microsoft Teams OnLine
Abstract In this talk, I will present an algorithm for computing s-t max flows in O(m^{4/3+o(1)}) time in unit-capacity graphs. The algorithm is inspired by potential-reduction interior point methods for linear programming. Instead of using scaled gradient/Newton steps of a potential function, we consider minimizing the potential function exactly and show how this allows us to directly find a centered point efficiently. Then, using the weighted central path framework of Madry and of Liu-Sidford, we show that our steps also benefit from maximizing weights carefully; this weight maximization can be solved efficiently using work of Kyng et al., which yields the desired runtime.
Link to Microsoft Teams meeting:
https://teams.microsoft.com/l/meetupjoin/19%3ameeting_MjY0NmJhNWMtMDdkNC00ZWIxLTgzMWMtMjExNjZiYTA3YmE2%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97f6a741e3b2c5ad4193976476%22%2c%22Oid%22%3a%22febce5bed427430fb9d50bab6673a9ed%22%7d
Host Faculty: Dr. Anand Louis
 Series: M.Tech (Research) Colloquium ONLINE Title: Robust Algorithms for Recovering Planted Structures in Semi-random Instances  Speaker: Mr. Yash Khanna
M.Tech (Research) Student
Dept. of CSA  Faculty Advisor: Dr. Anand Louis
 Date and Time: Monday, November 02, 2020, 4:00 PM
 Venue: Microsoft Teams  ONLINE
Abstract In this work, we design algorithms for two fundamentally important and classical graph problems in the planted setting. Both of these problems are NP-hard, and the bounds known from the algorithmic front are either not fully understood, or not much progress can be made because of tight lower bounds. Thus it is natural to consider semi-random models for these problems. These models are inspired by the seminal paper of Feige and Kilian [FK01] and have been studied in numerous follow-up works, the latest ones by Steinhardt and by McKenzie et al. [Ste17, MMT20]. The construction of our instances starts with an empty graph; then an arbitrary set of vertices S is chosen and either a dense graph or a clique (or an independent set) is planted on it, the subgraph on S x (V\S) is a random graph, while the subgraph on V\S may be random, arbitrary, or some special graph (depending on the model). Our algorithms are based on rounding semidefinite programs, and our primary focus is on recovering (completely or partially) the planted structure S with high probability (over the randomness of the input). We give algorithms that exploit the geometry of the corresponding vectors (from the SDP) and are easy to design/analyse.
The two problems which we study are:
 Densest k-Subgraph/Clique
Given an undirected graph G, the Densest k-Subgraph problem (DkS) asks to compute a set S subseteq V of cardinality k such that the weight of edges inside S is maximized. This is a fundamental NP-hard problem whose approximability, in spite of many decades of research, is yet to be settled. There is a significant gap between the best known worst-case approximation factor for this problem [BCC+10] and the hardness of approximation for it (assuming the Exponential Time Hypothesis) [Man17]. We ask: what are some easier instances of this problem? We propose some natural semi-random models of instances with a planted dense subgraph, and study approximation algorithms for computing the densest subgraph in them. Many such random and semi-random models have been studied in the literature [BCC+10, Ame15, HWX16, BA19, etc.].
 Independent Set in Hypergraphs
The independent set problem in graphs asks the following question: given a graph, find a subset of vertices such that no two vertices of the subset have an edge between them. The maximization version of this problem features among Karp's original twenty-one NP-complete problems ([Kar72] lists the clique problem, the complement of the independent set problem). The independent set problem is relatively well understood, and by the famous result of Håstad [Hås99], the lower and upper bounds for this problem are tight. Hypergraphs are a natural extension of graphs, where each edge is formed across a tuple of distinct vertices; in a graph, each tuple has size two. To the best of our knowledge, semi-random models on hypergraphs have not been studied so far. Studying classical problems like these on hypergraphs is naturally of theoretical as well as practical interest. We study the algorithmic problems considered in McKenzie et al. [MMT20] and develop algorithms for them in the case of hypergraphs.
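As a point of reference for the problem definition above (and emphatically not the SDP-rounding algorithms of this work), a minimal greedy baseline for independent set in an ordinary graph looks as follows; the function name and representation are our own choices:

```python
def greedy_independent_set(adj):
    """Greedy baseline: repeatedly pick a minimum-degree vertex and
    delete it together with its neighbourhood. Purely illustrative;
    the talk's algorithms instead round semidefinite programs.

    adj: dict mapping each vertex to the set of its neighbours.
    """
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    independent = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # lowest remaining degree
        independent.append(v)
        dead = adj[v] | {v}
        adj = {u: ns - dead for u, ns in adj.items() if u not in dead}
    return independent
```

On the path 0-1-2-3 this picks the endpoints' side first and returns a maximal independent set; no approximation guarantee is implied.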
Note: Both of these e-prints are available online on arXiv.
Speaker Bio: Yash Khanna is a third (and final) year M.Tech (Research) student in the Theory group of the Department of Computer Science and Automation. He previously graduated from BITS Pilani. His research interests include algorithms and complexity.
Link to Microsoft Teams:
https://teams.microsoft.com/l/meetupjoin/19%3ameeting_MjQ5ZmZkMTAtNDdhZi00ZDg5LWEzOTgtMmExNDZjMDY5Y2Jh%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97f6a741e3b2c5ad4193976476%22%2c%22Oid%22%3a%22a57dc80f813f4a35aa2434fdc7026ee7%22%7d
 Series: Ph.D (Engg.) Thesis Defence ONLINE Title: Representing Networks: Centrality, Node Embeddings, Community Outliers and Graph Representation  Speaker: Mr. Sambaran Bandyopadhyay
Ph.D (Engg.) ERP Student
Dept. of CSA  Faculty Advisor: Prof. M Narasimha Murty, Prof. Shalabh Bhatnagar, Dr. Ramasuri Narayanam (Organizational guide, IBM Research)
 Date and Time: Wednesday, October 28, 2020, 9:00 AM
 Venue: Microsoft Teams  ONLINE
Abstract Networks are ubiquitous. We start the technical work in this thesis by exploring the classical concept of node centrality (also known as influence measure) in information networks. Like clustering, node centrality is an ill-posed problem. There exist several heuristics and algorithms to compute the centrality of a node in a graph, but no formal definition of centrality is available in the network science literature. Lately, researchers have proposed axiomatic frameworks for the centrality of a node in a network. However, these existing formal frameworks are not generic enough to characterize the space of influence measures in complex networks. In this work, we propose a set of six axioms in order to capture most of the intrinsic properties that any influence measure should ideally satisfy. We also characterize existing measures of centrality with respect to this framework.
Next, we focus on representation learning on networks. Network embedding is required as real-life networks are large, extremely sparse and discrete in nature. We investigate the problem of unsupervised node representation in attributed networks through informative random walks. Edges are also useful for various downstream network mining tasks, but most of the existing homogeneous network representation learning approaches focus on embedding the nodes of a graph. So, we propose a novel unsupervised algorithm to embed the edges of a network through an application of the classical concept of the line graph of a network. The optimization framework of edge embedding connects to the concept of node centrality in the representation learning framework. We also conduct research on attributed hypergraphs, proposing a novel hypergraph neural network to represent and classify hypernodes.
Outlier analysis is another important problem in network science. All real-world networks contain outlier nodes to some extent. We show empirically that outliers can affect the quality of network embedding if not handled properly. So, we integrate the processes of network embedding and outlier detection into a single framework. In this research thread, we first propose a matrix factorization based approach which minimizes the effect of outlier nodes in the framework of attributed network embedding. Next, we propose two neural network architectures, based on L2 regularization and adversarial training respectively, to minimize the effect of outliers on the node embedding of an attributed network. Further, extending the concept of support vector data description, we propose a novel algorithm which integrates node embedding, community detection and outlier detection into a single optimization framework by exploiting the link structure of a graph.
In the last part of the thesis, we focus on graph-level representation and tasks. First, we propose a supervised graph neural network based algorithm with a hierarchical pooling strategy to classify a graph from a set of graphs. Next, we propose a novel GNN-based algorithm for the unsupervised representation of a graph from a set of graphs, so that similar graphs are represented closely in the embedding space while dissimilar graphs are well separated.
Link to Microsoft Teams:
https://teams.microsoft.com/l/channel/19%3a8c5f66cd10ea447eaa90e6c358e7e4d8%40thread.tacv2/General?groupId=3e16bdaee6414e8e83e33719a65cdff2&tenantId=6f15cd97f6a741e3b2c5ad4193976476
 Series: M.Tech (Research) Thesis Defence  ONLINE Title: Embedding Networks: Node and Graph Level Representations  Speaker: Ms. Manasvi Aggarwal
M.Tech (Research) Student
Dept. of CSA  Faculty Advisor: Prof. M Narasimha Murty & Prof. Shalabh Bhatnagar
 Date and Time: Wednesday, October 28, 2020, 11:00 AM
 Venue: Microsoft Teams  ONLINE
Abstract Network representation learning is essential to carry out various network analysis tasks. Graphs are the most suitable structures to represent real-world relational data such as social networks and molecular structures. In this thesis work, we focus on learning representations of nodes and of entire graphs. Graph neural networks have gained significant attention for graph representation and classification.
For graph classification, existing pooling approaches do not consider both the region-based and the graph-level importance of nodes together. We address this issue in the first part of the thesis by proposing a novel graph pooling layer, R2POOL, which retains the most informative nodes for the next coarser version of the graph. Further, we integrate R2POOL with our multi-level prediction and branch training strategies to learn graph representations and to further enhance the model's capability for graph classification.
Moreover, attention mechanisms on graphs improve the performance of graph neural networks. Typically, attention helps to identify a neighbor node which plays a more important role in determining the label of the node under consideration. But in real-world situations, a particular subset of nodes together may be significant. In the second part of the thesis, we address this problem and introduce the concept of subgraph attention for graphs. To show the efficiency of this scheme, we use subgraph attention for node classification.
Additionally, hierarchical graph pooling is promising in the graph literature. However, not all the graphs at different levels of the hierarchy play an equal role in graph classification. Towards this end, we propose a novel algorithm called SubGattPool, which jointly learns the subgraph attention and employs two different attention mechanisms to find the important nodes within a hierarchy and the important individual hierarchies in a graph, for embedding and classifying graphs given a collection of graphs. Improved performance over the state-of-the-art on both real-world and synthetic graphs for node and graph classification shows the efficiency of the proposed algorithms.
Link to Microsoft Teams:
https://teams.microsoft.com/l/team/19%3a4c60cfad6fdf4fbab81dd8df3991949a%40thread.tacv2/conversations?groupId=1684647f67d54b7b931361e8a6900b61&tenantId=6f15cd97f6a741e3b2c5ad4193976476
 Series: Ph.D (Engg.) Thesis Defence ONLINE Title: Algorithms for Stochastic Optimization, Statistical Estimation and Markov Decision Processes  Speaker: Mr. Chandramouli Kamanchi
Ph.D (Engg.) student
Dept. of CSA  Faculty Advisor: Prof. Shalabh Bhatnagar
 Date and Time: Tuesday, October 20, 2020, 11:00 AM
 Venue: Microsoft Teams  ONLINE
Abstract Stochastic approximation deals with the problem of finding zeros of a function expressed as an expectation of a random variable. In this thesis we study and analyze the convergence of stochastic approximation algorithms in the context of optimization under uncertainty, statistical estimation and reinforcement learning. Moreover, we also explore second-order methods in the context of Markov Decision Processes.
First, we consider the Stochastic Optimization (SO) problem, where we optimize an objective function in the presence of noise. A prominent algorithm in SO, namely Random Direction Kiefer-Wolfowitz (RDKW), solves the problem by obtaining a noisy gradient estimate through randomly perturbing all the parameters simultaneously. In this thesis, we characterize the class of deterministic perturbation sequences that can be utilized in the RDKW algorithm. Using our characterization, we propose a construction of a deterministic perturbation sequence that has the least possible cycle length among all deterministic perturbations. We establish the convergence of the RDKW algorithm for this generalized class of deterministic perturbations.
In statistical estimation, the mode is a popular measure of central tendency that can provide a better representation of, and more interesting insights into, the data than other measures such as the mean and the median. In the second part of our thesis, we provide a computationally effective, online iterative algorithm that estimates the mode of a unimodal smooth density given only samples generated from the density. We prove asymptotic convergence of the proposed algorithm using stochastic approximation techniques. We also prove the stability of the mode estimates by utilizing the concept of regularization.
In the third part of our thesis, we propose Successive Over-Relaxation (SOR) Q-learning. In a discounted reward Markov Decision Process (MDP), the objective is to find the optimal value function. We first derive a modified fixed point iteration for SOR Q-values and utilize stochastic approximation to derive a learning algorithm that computes the optimal value function and an optimal policy. We then prove the almost sure convergence of SOR Q-learning to the SOR Q-values. Finally, through numerical experiments, we demonstrate that SOR Q-learning is faster than the standard Q-learning algorithm in the literature.
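The modified fixed-point iteration above can be sketched as a tabular update. This is a minimal sketch based on our reading of the SOR Q-learning construction: the toy simulator, parameter defaults and function name are ours, and the admissible range of the relaxation factor w depends on the MDP's minimum self-loop probability (w = 1 recovers standard Q-learning).

```python
import numpy as np

def sor_q_learning(P, R, gamma=0.9, w=1.0, steps=20000, alpha=0.1,
                   eps=0.2, seed=0):
    """Tabular SOR Q-learning on a simulated MDP (illustrative sketch).

    P[s, a, s'] : transition probabilities; R[s, a] : expected rewards.
    The SOR target mixes the usual sampled Bellman target with a
    bootstrap on the *current* state's value, weighted by w.
    """
    rng = np.random.default_rng(seed)
    n_s, n_a = R.shape
    Q = np.zeros((n_s, n_a))
    s = 0
    for _ in range(steps):
        # Epsilon-greedy exploration.
        a = int(rng.integers(n_a)) if rng.random() < eps else int(Q[s].argmax())
        s_next = int(rng.choice(n_s, p=P[s, a]))
        # SOR target: w * (sampled Bellman target) + (1 - w) * max_b Q(s, b).
        target = w * (R[s, a] + gamma * Q[s_next].max()) + (1 - w) * Q[s].max()
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next
    return Q
```

With w chosen in its admissible range, the greedy policy extracted from the learned SOR Q-values coincides with an optimal policy of the original MDP, while larger w can speed up convergence on MDPs with substantial self-loop probabilities.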
In the fourth part of our thesis, we explore second-order methods in MDPs. We propose a second-order value iteration procedure obtained by applying the Newton-Raphson method to the successive relaxation value iteration scheme. We prove the global convergence of our algorithm to the optimal solution asymptotically and establish its second-order convergence. Through experiments, we show the effectiveness of our proposed approach.
Microsoft Teams Link:
https://teams.microsoft.com/l/meetupjoin/19%3ameeting_OWVlZTEwZTQtYjQ5OC00NGZlLWE2NzAtYTkxOGJkZjBjZjJi%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97f6a741e3b2c5ad4193976476%22%2c%22Oid%22%3a%22733c69c90556462689a4cd9760e97080%22%7d
 Series: CSA Golden Jubilee Frontier Lecture Series Title: Accelerator-level Parallelism  Speaker: Prof. Mark D. Hill
University of WisconsinMadison  Date and Time: Thursday, October 15, 2020, 6:30 PM
 Venue: MS Teams Live event: https://tinyurl.com/y2vzddcf
Abstract Computer system performance has improved by creatively using more transistors (Moore's Law) in parallel via bit-, instruction-, thread-, and data-level parallelism. With the slowing of technology scaling, a way to further improve computer system performance under energy constraints is to employ hardware accelerators. Each accelerator is a hardware component that executes a targeted computation class faster and usually with (much) less energy. Already today, many chips in mobile, edge and cloud computing concurrently employ multiple accelerators in what we call accelerator-level parallelism (ALP).
This talk develops our hypothesis that ALP will spread to computer systems more broadly. ALP is a promising way to dramatically improve power-performance to enable broad, future use of deep AI, virtual reality, self-driving cars, etc. To this end, we review past parallelism levels and the ALP already present in mobile systems on a chip (SoCs). We then aid understanding of ALP with the Gables model and charge computer science researchers to develop better ALP "best practices" for: targeting accelerators, managing accelerator concurrency, choreographing inter-accelerator communication, and productively programming accelerators. This is joint work with Vijay Janapa Reddi of Harvard. See also: https://www.sigarch.org/acceleratorlevelparallelism/.
Speaker Bio: Mark D. Hill (http://www.cs.wisc.edu/~markhill) is Gene M. Amdahl and John P. Morgridge Professor Emeritus in Computer Sciences at the University of Wisconsin-Madison. He has also long held a courtesy appointment in Electrical and Computer Engineering. His work targets computers with complex memory systems, multiple processing cores, and systems which do not yet exist and therefore have to be simulated. Over three decades, he has collaborated with over 160 co-authors, holds over 40 patents, and has held several visiting positions in the computer industry, most recently as a 2018 Google Intern. He serves as Chair Emeritus of a national computing think tank, the Computing Community Consortium, and was Wisconsin Computer Sciences Department Chair in 2014-2017. Mark won the highest award in computer hardware, the Eckert-Mauchly Award, in 2019. Mark is a fellow of the ACM and IEEE, and has a PhD from the University of California, Berkeley.
Host Faculty: Arkaprava Basu
 Series: Ph.D (Engg.) Thesis Defence ONLINE Title: Decision Making under Uncertainty: Reinforcement Learning Algorithms and Applications in Cloud Computing, Crowdsourcing and Predictive Analytics  Speaker: Ms. Indu John
Ph.D (Engg.) Student
Dept. of CSA  Faculty Advisor: Prof. Shalabh Bhatnagar
 Date and Time: Wednesday, October 14, 2020, 11:00 AM
 Venue: Microsoft Teams  ONLINE
Abstract In this thesis, we study both theoretical and practical aspects of decision making, with a focus on reinforcement learning based methods. Reinforcement learning (RL) is a form of semi-supervised learning in which the agent learns a decision making strategy by interacting with its environment. We develop novel reinforcement learning algorithms and study decision problems in the domains of cloud computing, crowdsourcing and predictive analytics.
In the first part of the thesis, we develop a model-free reinforcement learning algorithm with faster convergence, named Generalized Speedy Q-learning, and analyze its finite-time performance. This algorithm integrates ideas from the well-known Speedy Q-learning algorithm and the generalized Bellman equation to derive a simple and efficient update rule such that its finite-time bound is better than that of Speedy Q-learning for MDPs with a special structure. Further, we extend our algorithm to deal with large state and action spaces by using function approximation.
Extending the idea of the above algorithm, we develop a novel deep reinforcement learning algorithm by combining the technique of successive over-relaxation with deep Q-networks. The new algorithm, named SOR-DQN, uses modified targets in the DQN framework with the aim of accelerating training. We study the application of SOR-DQN to the problem of auto-scaling resources for cloud applications, for which existing algorithms suffer from issues such as slow convergence, poor performance during the training phase and non-scalability.
Next, we consider an interesting research problem in the domain of crowdsourcing: that of efficiently allocating a fixed budget among a set of tasks with varying difficulty levels. Further, the assignment of tasks to workers with different skill levels is tackled. This problem is modeled in the RL framework, and an approximate solution is proposed to deal with the exploding state space.
We also study the following problem in predictive analytics: predicting the future values of system parameters well in advance for a large-scale software or industrial system, which is important for avoiding disruptions. An equally challenging and useful exercise is to identify the 'important' parameters and optimize them in order to attain good system performance. In addition to devising an end-to-end solution for the problem, we present a case study on a large-scale enterprise system to validate the effectiveness of the proposed approach.
Link to the Online Thesis Defense:
https://teams.microsoft.com/l/meetupjoin/19%3a6e490d72f0b34eac8e690f8daac4a00a%40thread.tacv2/1602060940579?context=%7b%22Tid%22%3a%226f15cd97f6a741e3b2c5ad4193976476%22%2c%22Oid%22%3a%2286976192e64f4e50ae9f8a79b451c8f8%22%7d
 Series: Department Seminar Title: Statistics, computation and adaptation in offline and online learning  Speaker: Avishek Ghosh
 Date and Time: Monday, October 12, 2020, 6:30 PM
 Venue: https://meet.google.com/bfrqciwpkr
Abstract In this talk, I will address the statistical, computational and adaptive aspects of learning theory, both in offline (batch) and in online settings. The first one-third of the talk deals with a computationally efficient and statistically sound Alternating Minimization (AM) algorithm (often called hard EM), typically used to solve non-convex problems. In particular, we apply AM to a classical non-convex problem, namely max-affine regression. Max-affine regression can be thought of as a generalization of the (real) phase retrieval problem, and closely resembles the canonical problem of convex regression in non-parametric statistics. In the next segment of the talk, I characterize the (exact) convergence speed of the AM algorithm. In particular, super-linear convergence of AM is (theoretically) proved, resolving a long-standing (1995) conjecture of Lei Xu and Michael I. Jordan. The final part of the talk deals with adaptation in a non-trivial online (bandit) setting. I will talk about my recent works on model selection in contextual bandits, which partially solve an open problem from COLT 2020.
Speaker Bio: I am currently a final-year PhD student in the Electrical Engineering and Computer Sciences (EECS) department of UC Berkeley, working with Prof. Kannan Ramchandran and Prof. Aditya Guntuboyina (and collaborating with Prof. Arya Mazumdar of UMass, currently at Amazon Berkeley). My research interests are broadly in theoretical machine learning, distributed and robust learning, bandits and RL. I spent the summer of 2020 working in the Supply Chain Optimization team of Amazon Research in New York, with Alexander (Sasha) Rakhlin and Dean Foster. Before coming to Berkeley, I completed my master's degree (in ECE) at IISc Bangalore, working with Prof. Anurag Kumar and with Prof. Aditya Gopalan. Before IISc, I spent four years at Jadavpur University, where I completed my bachelor's degree in the Electronics and Telecommunication Department and worked with Prof. Amit Konar. I am a recipient of the Excellency Award from the EECS Department at Berkeley and a gold medal from Jadavpur University, among other accolades.
Host Faculty: Gugan Thoppe
 Series: Ph.D (Engg.) Thesis Defence ONLINE Title: Algorithms for Fair Decision Making: Provable Guarantees and Applications  Speaker: Ms. Arpita Biswas
Ph.D (Engg.) Student
Dept. of CSA  Faculty Advisor: Prof. Y Narahari & Dr. Siddharth Barman
 Date and Time: Monday, October 12, 2020, 3:00 PM
 Venue: Microsoft Teams  ONLINE
Abstract The topic of fair allocation of indivisible items has received significant attention because of its applicability in several real-world settings. This has led to a vast body of work focusing on defining appropriate fairness notions, providing existence guarantees, and handling computational issues in obtaining such fair allocations. This thesis addresses open questions in the space of fair allocation of indivisible items. We study constrained versions of the fair allocation problem, namely under cardinality constraints and matroid constraints. These constrained settings generalize the unconstrained formulation, and we are the first to study these constructs. We establish the existence of well-studied fairness notions (such as EF1 and MMS) under the constrained settings, and we design methods that provide an algorithmic anchor to these existence results. Moreover, we define strictly stronger notions of fairness and provide algorithms for obtaining these stronger fairness guarantees. Finally, we investigate fairness in diverse application scenarios, such as recommendation systems and classification problems. The key novelty involves providing solutions to such applications through the lens of fair allocation.
Fair Allocation under Cardinality Constraints
We investigate the problem of fairly allocating goods under cardinality constraints and additive valuations. In this setting, the goods are categorized, and an upper limit is imposed on the number of goods allocated to any agent from a particular category. The objective is to find an allocation that satisfies the given cardinality constraints as well as a fairness constraint. We design an efficient algorithm that computes an envy-free up to one good (EF1) allocation. Additionally, this algorithm outputs an exact maximin share (MMS) allocation when the valuations are binary. We also show that the constrained fair allocation problem with additive valuations reduces to an unconstrained fair allocation problem with submodular valuations. This allows us to guarantee 1/3-approximate maximin share (1/3-MMS) allocations under cardinality constraints.
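Both requirements in this setting, EF1 and the category caps, can be verified directly from their definitions. The sketch below (illustrative helper names, not the thesis's algorithm) checks each for a given allocation under additive valuations:

```python
from collections import Counter

def is_ef1(valuations, allocation):
    """Check envy-freeness up to one good (EF1) under additive valuations:
    every agent i, comparing to every other agent j, stops envying j once
    some single good is removed from j's bundle."""
    n = len(allocation)
    for i in range(n):
        own = sum(valuations[i][g] for g in allocation[i])
        for j in range(n):
            if i == j:
                continue
            other = sum(valuations[i][g] for g in allocation[j])
            # Removing the good i values most in j's bundle must kill the envy.
            best = max((valuations[i][g] for g in allocation[j]), default=0)
            if own < other - best:
                return False
    return True

def respects_caps(allocation, category, caps):
    """Check the cardinality constraint: no bundle contains more than
    caps[c] goods from category c."""
    return all(
        all(count <= caps[c] for c, count in
            Counter(category[g] for g in bundle).items())
        for bundle in allocation
    )
```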
Fair Allocation under Matroid Constraints
We consider the fair allocation problem under more general constraints. Here, each allocated bundle, in addition to the fairness criterion, must satisfy the independence criterion specified by a matroid. We establish that MMS allocations exist in this setting when the valuations are identical. We also provide an algorithm that efficiently computes an EF1 allocation under identical laminar matroid constraints. The algorithm initializes an allocation by computing a matroid-feasible partition of the goods (using a method proposed by Gabow and Westermann) and then iteratively reallocates goods between the bundles until an EF1 allocation is obtained. Our reallocation strategy maintains matroid feasibility at each iteration (using an extension of the strong basis exchange lemma) and also ensures polynomial-time convergence.
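A laminar matroid can be described by a nested family of good-sets, each with a capacity; a bundle is matroid-feasible exactly when it respects every capacity. A minimal independence-oracle sketch (an illustrative representation, not the thesis's data structures):

```python
def laminar_independent(bundle, family):
    """Independence oracle for a laminar matroid.

    family: list of (subset_of_goods, capacity) pairs whose subsets form
    a laminar family (any two are disjoint or nested). A bundle is
    independent iff it places at most `capacity` goods in each subset."""
    chosen = set(bundle)
    return all(len(chosen & set(subset)) <= cap for subset, cap in family)
```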
Stronger Notions of Fairness
We define two novel fairness notions, namely envy-free up to one less-preferred good (EFL) and groupwise maximin share (GMMS). We show that these fairness notions are better, in terms of social welfare, than EF1 and MMS, respectively. We provide a scale of fairness that establishes how these new notions fit into the hierarchy of existing ones. We provide an algorithm that outputs an allocation that is both EFL and 1/2-GMMS in the unconstrained setting. We also show that exact GMMS allocations are guaranteed to exist when the valuations of the agents are either binary or identical. We empirically show that GMMS allocations exist when the valuations are drawn from Gaussian and uniform distributions. These results highlight that, for unconstrained settings, strengthening the existing fairness notions does not cost us generic existence results.
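To make the maximin-share family of notions concrete: an agent's MMS value is the best worst-bundle value she could secure by partitioning all goods herself, and GMMS imposes the analogous bound within every subgroup of agents. A brute-force sketch of the MMS value under additive valuations (exponential, for tiny instances only):

```python
from itertools import product

def mms_value(values, n_bundles):
    """Brute-force an agent's maximin share under additive valuations:
    maximize, over all partitions of the goods into n_bundles bundles,
    the total value of the worst bundle."""
    best = float("-inf")
    # Enumerate every assignment of goods to bundles.
    for assignment in product(range(n_bundles), repeat=len(values)):
        bundle_totals = [0] * n_bundles
        for good, bundle in enumerate(assignment):
            bundle_totals[bundle] += values[good]
        best = max(best, min(bundle_totals))
    return best
```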
Application to TwoSided Fair Recommendation Systems
We investigate the problem of fair recommendation in two-sided online platforms, such as Amazon, Netflix, and Spotify, consisting of customers on one side and producers on the other. These services have typically focused only on maximizing customer satisfaction by tailoring recommendations to the preferences of individual customers, which can be detrimental to the producers. We consider fairness issues that span both customers and producers. Our approach involves a mapping of the fair recommendation problem to a constrained version of the fair allocation problem. Our proposed algorithm guarantees at least MMS exposure for most of the producers and EF1 fairness for every customer. We establish theoretical guarantees and provide empirical evidence through extensive evaluations on real-world datasets.
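On the producer side, the exposure guarantee can be made concrete by counting each producer's recommendation slots and comparing against a uniform split of all slots. The sketch below is a simplified illustration with hypothetical helper names; the thesis's actual guarantee is MMS exposure for most (not all) producers:

```python
from collections import Counter
from math import floor

def producer_exposures(rec_lists):
    """Count recommendation slots per producer, given each customer's
    recommendation list (producer ids)."""
    return Counter(p for recs in rec_lists.values() for p in recs)

def meets_uniform_floor(rec_lists, n_producers):
    """Check whether every producer receives at least the floor of a
    uniform split of all recommendation slots."""
    total_slots = sum(len(recs) for recs in rec_lists.values())
    share = floor(total_slots / n_producers)
    exposures = producer_exposures(rec_lists)
    return all(exposures.get(p, 0) >= share for p in range(n_producers))
```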
Application to Classification Problems under Prior Probability Shifts
We consider the problem of fair classification under prior probability shifts, a kind of distributional change between the training and test datasets. Such shifts can be observed in the yearly records of several real-world datasets, such as COMPAS. If unaccounted for, such shifts can cause the predictions to become unfair towards specific population subgroups. We define a fairness notion, called proportional equality (PE), which is motivated by solution concepts from the fair allocation literature and accounts for prior probability shifts. We develop an algorithm, CAPE, that uses prevalence estimation techniques, sampling, and an ensemble of classifiers to ensure fair predictions. We evaluate the performance of CAPE on real-world datasets and compare it with state-of-the-art fair algorithms. Our findings indicate that CAPE ensures PE-fair predictions, with low compromise on other performance metrics.
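CAPE itself combines prevalence estimation, sampling, and an ensemble; as background, the standard correction for a pure prior probability shift rescales a classifier's training-time posteriors by the ratio of test to training class priors (a textbook adjustment in the style of Saerens et al., not the CAPE algorithm):

```python
import numpy as np

def adjust_for_prior_shift(posteriors, prior_train, prior_test):
    """Convert p_train(y|x) into p_test(y|x) under a pure prior shift:
    multiply each class column by prior_test[y] / prior_train[y],
    then renormalize each row to sum to one."""
    weights = np.asarray(prior_test) / np.asarray(prior_train)
    adjusted = np.asarray(posteriors) * weights
    return adjusted / adjusted.sum(axis=1, keepdims=True)
```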
Microsoft Teams link:
https://teams.microsoft.com/l/team/19%3a55c4dc3c4baa4347bff91658e32050ff%40thread.tacv2/conversations?groupId=9c4443356a844c988a99b93aa0744052&tenantId=6f15cd97f6a741e3b2c5ad4193976476

